The Centre for Effective Altruism’s posts and comments on the Effective Altruism Forum. Stefan Schubert: Moral Aspirations and Psychological Limitations <p><em>In this talk from EAGx Nordics 2019, Stefan Schubert explores the psychological obstacles that stop us from maximizing our moral impact, and suggests strategies to overcome them.</em></p><p><em>A transcript of Stefan&#x27;s talk is below, which CEA has lightly edited for clarity. You can also watch this talk on <a href="">YouTube</a> or read the original transcript on <a href="">Stefan&#x27;s website</a>.</em></p><h1>The Talk</h1><p>Effective altruism is of course about doing the most good. The standard way in which effective altruism is applied is by taking what I call the external perspective. On this perspective, you look out into the world, and you try to find the most high-impact problems which you can solve. Then you try to solve them in the most effective way. These can be problems like artificial intelligence risks or global poverty.</p><p>But there is another perspective, which is the one that I want to take today: what I call the internal perspective. Here you instead look inside yourself, and you think about your own psychological obstacles to doing the most good. These could be obstacles like selfishness and poor epistemics. Together with Lucius, I’m studying these obstacles experimentally at Oxford, but today I won’t talk so much about that experimental work. Instead, I will try to provide a theoretical framework for thinking about this internal perspective. </p><p>Ever since the start of the effective altruism movement, people have talked about these psychological obstacles. However, the internal perspective hasn’t been worked out, I would say, in as much detail as the external perspective has. 
So in this talk I want to give a systematic account of the internal perspective to effective altruism.</p><br/><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><br/><p>The structure of this talk is as follows. First, I’ll talk about psychological obstacles to doing the most good. Then, I will talk about how to do the most good you can, given that you have these psychological obstacles. There will also be sub-sections to both of these main sections.</p><br/><span><figure><img src="" class="draft-image left" style="width:100%" /></figure></span><br/><p>The first of these sub-sections concerns three particular obstacles to doing the most good. To do the most good, you need to, first, allocate sufficient resources to moral goals. Second, you need to work effectively towards those moral goals. And third, you need to have the right moral goals. There are psychological obstacles to doing the most good associated with all of these three factors. I will walk you through all of them in turn.</p><br/><span><figure><img src="" class="draft-image left" style="width:100%" /></figure></span><br/><p>The first factor is the resources we allocate to moral goals. A bit of a simplified picture (I will add some nuance to it later) is that you can allocate some of your resources (like money and time) towards non-moral goals—typically selfish goals—and other resources towards moral goals. (I should also point out that the quantities and numbers during the first part of the talk are just examples—they are not to be taken literally.)</p><span><figure><img src="" class="draft-image left" style="width:100%" /></figure></span><p>Now we are in a position to see our first obstacle: that we allocate insufficient resources to others. Here we see the dashed red line—that’s how much you ideally should allocate to others. We see that you actually allocate less than that—for instance, because you are selfish. 
So you allocate insufficient resources to others; that’s the first obstacle.</p><p>And now we can define what I call the altruism ratio: the amount of resources that you actually allocate to others, divided by the amount of resources that you potentially or ideally should allocate to others. We see that the altruism ratio is 1/2. Again, this is just an example.</p><span><figure><img src="" class="draft-image left" style="width:100%" /></figure></span><p>The second factor is effectiveness: how good you are at translating your moral resources into impact. We know, of course, that people are often very ineffective when they are trying to do good; to help others. That’s part of the reason why the effective altruism movement was set up in the first place.</p><p>We can also define an effectiveness ratio analogous to the altruism ratio. This is just your actual effectiveness divided by the potential maximum effectiveness of the most effective intervention.</p><span><figure><img src="" class="draft-image left" style="width:100%" /></figure></span><p>But what ultimately counts is your impact on a &quot;correct&quot; moral goal, and your moral goal may be flawed. Historically, people’s moral goals were often flawed, meaning that we have reason to believe that our moral goals may be flawed as well. (I should point out that when I talk about “the correct moral goals” it might seem as if I am suggesting that moral realism is true—that there are objective moral truths. However, I think that one can talk in this way even if one thinks that moral anti-realism is true. But I won’t go into details on that question here.)</p><p>Rather, let’s notice that it’s important that your moral goal and the correct goal are aligned. That will often not be the case. There will be goal misalignment, when you have a flawed moral goal and your method of reaching it doesn’t help much with the correct moral goal. 
For instance, suppose that your goal is to maximize welfare in Sweden over the short run, and that the correct moral goal is to maximize welfare impartially over the long run. Then your actions in service of your moral goal might not help very much with the correct goal.</p><p>----</p><p>Now we can define an alignment ratio, which is the effectiveness of your work towards the correct goal divided by the effectiveness of your work towards your goal. This ratio will often be much less than one.</p><p>(I should also say, as a side note, that this definition only works in a special case: when maximally effective interventions towards the two goals are equally effective. Otherwise, you need to have a slightly more complex definition. However, I won’t go into those details in order not to get bogged down by math. The spirit of this definition will remain the same.)</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>Let’s move on to a formula for calculating impact loss. We want a formula for calculating how much impact we lose because of these three psychological obstacles. That will be useful to calculate the benefits of overcoming the three obstacles. Therefore we need a definition of the impact ratio, which is associated with impact loss, and a formula for calculating the impact ratio as a function of the other three ratios.</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>The impact ratio is simple: it is just your actual total impact divided by your potential total impact. We see that in this example the vast majority of the potential impact is lost. The actual impact is just a tiny fraction of the potential impact.</p><p>Now we want to calculate the impact ratio as a function of the other three ratios. 
I came up with a somewhat simplified formula (which I think is not quite correct but is a good first stab), which is that the impact ratio is the product of the other three ratios. So if the altruism ratio is 1/2 and the effectiveness ratio is 1/3 and the alignment ratio is 1/4 then the impact ratio is 1/24, which of course is a very small number.</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><br/><p>Some implications of this formula:</p><ul><li>First, the impact ratio is smaller than the other ratios, because the other ratios will at most be one and often less. </li><li>Also, the impact ratio is typically very small, meaning that one has vast opportunities to increase one&#x27;s impact. In this example, the impact ratio was 1/24; thus, one could increase one’s impact 24 times. </li><li>And lastly, the potential benefits of improving on small ratios are larger than the potential benefits of improving on large ratios. For example, you can potentially double your impact if your ratio is 1/2, but triple it if your ratio is 1/3. (Of course, it might be harder to improve on the small ratios, but in principle you can improve your impact more by focusing on them.)</li></ul><p>----</p><p>Let’s move on to the underlying causes of these obstacles. Here I will again give a simplified account, focusing on the causes which we have reason to believe that we can change. </p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>The first cause is that you don’t know what you should do: you have false beliefs or incorrect values. </p><p>Second, you might know what you should do, but you don’t actually do it: you suffer from what psychologists call intention-behavior gaps. A classic example of an intention-behavior gap is that you want to quit smoking but nevertheless continue to smoke. 
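Stefan's simplified formula above (impact ratio = altruism ratio × effectiveness ratio × alignment ratio) can be sketched in a few lines of Python. This is just an illustrative check of the arithmetic; the function name is mine, and the numbers are the example values from the talk:

```python
def impact_ratio(altruism: float, effectiveness: float, alignment: float) -> float:
    """Simplified formula from the talk: the impact ratio is the
    product of the altruism, effectiveness, and alignment ratios."""
    return altruism * effectiveness * alignment

# The talk's example numbers: 1/2 * 1/3 * 1/4 = 1/24
ratio = impact_ratio(1 / 2, 1 / 3, 1 / 4)
print(f"impact ratio: {ratio:.4f}")               # about 0.0417, i.e. 1/24
print(f"potential multiplier: {1 / ratio:.0f}x")  # impact could grow ~24x
```

Because the formula is multiplicative, shrinking any one ratio shrinks the whole product, which is why the smallest ratio offers the largest proportional room for improvement.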
You fail to behave in line with your considered intentions or values.</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>On false beliefs: </p><ul><li>First, we could have false beliefs about morality or incorrect moral values. </li><ul><li>We might underestimate our moral obligations, which might lead to the first obstacle: that we invest insufficiently in others. </li><li>We might have false beliefs about what moral goals we ought to have, which can lead to the third obstacle: that our moral goals are misaligned with the correct moral goal.</li></ul><li>Second, we could have false empirical beliefs. </li><ul><li>For example, we might have false beliefs about the effectiveness of different interventions, which can lead to the second obstacle: ineffective interventions.</li><li>These false beliefs may in turn be due to poor epistemics. We know from psychological research that humans are often poor at statistical inference, for instance. But there are also epistemic failures which are specific to moral issues. For instance, there is political bias—that people have a tendency to acquire empirical beliefs just because those beliefs support their political views. Also, they tend to use intuition-based moral thinking—they acquire moral beliefs not because of good reasons but just because of intuition.</li></ul></ul><p>----</p><p>Moving on to intention-behavior gaps: one kind of gap is when you have a moral intention, but you behave selfishly: you fail to resist selfishness. That can lead to the first obstacle. That’s obviously well-known.</p><p>But there is another kind of intention-behavior gap which is, I think, less widely discussed. That’s when you have moral intentions, and you do behave morally, but you behave morally in another way, as it were. You intend to effectively pursue a certain moral goal, but you actually choose interventions which you know are ineffective, or you pursue another moral goal. 
This can lead to the second or the third obstacle. Here you fail to behave in line with your considered moral views. Rather, you behave in line with other moral impulses.</p><p>For instance, you might passionately support a specific cause. You might know that indulging in that passion is not the way to do the most good, but you might fail to resist this passion. Similarly, you might have a tribal impulse to engage in animated discussions with your political opponents. You might know that this is not an effective way of doing the most good, but you fail to resist this impulse.</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>We see that there are multiple psychological obstacles to doing the most good. There is a lot of relevant moral-psychological research on these topics, by people like Jonathan Haidt, Paul Bloom, and many others. They have demonstrated that in the moral domain, we’re often very emotion-driven: our actions are quite haphazard, our epistemics are not too good, etc.</p><p>Much of this literature is highly relevant for effective altruism, but one thing that’s mostly missing from it is an account of what kind of mindset we should have instead of the default moral mindset. The authors I mentioned focus on describing the default moral mindset and say that it&#x27;s not so good, but they don’t develop a systematic theory of what alternative moral mindset we should have.</p><p>Here I think that the effective altruist toolbox for thinking could help. In the remainder of this talk, I’ll use this toolbox to think about how to do the most good you can, given these obstacles. I think that we should honestly admit that we have psychological limitations. Then we should think carefully about which obstacles are most important to overcome, and how we can do that. 
</p><p>(I should also point out that my hypotheses of how to do this are just tentative, and subject to change.)</p><p>----</p><p>First, let me talk about the benefits and costs of overcoming the three obstacles. When we think about this, we need to have a reference person in mind to calculate the benefits of overcoming the obstacles. The hypothetical reference person here is a person who has just found out about effective altruism, but hasn’t actually changed their behavior much.</p><p>Let’s now focus on the benefits and costs of overcoming the three particular obstacles. First, increasing altruism. The altruism ratio varies a lot with the kind of resource. It’s perhaps easiest to calculate with regard to donations. If your donations are 2% of your income and your ideal donations are 20% of your income, then the altruism ratio is 10%. Of course, some moral theories might be more demanding, in which case the ratio would be different.</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>It is a bit more difficult to calculate the altruism ratio regarding direct work, but in general I would say that for many people the benefits of increasing the altruism ratio can be considerable. However, it may be psychologically difficult to increase the altruism ratio beyond a certain point. We know from historical experience that people have a hard time being extremely self-sacrificial.</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>Let’s move on to increasing effectiveness. The effectiveness ratio may often be very low. Lucius and I are studying the views of experts on the effectiveness of different global poverty charities. We find that the experts think that the most effective global poverty charities are 100 times more effective than the average global poverty charity. If that’s true, there are vast benefits to be made from finding the very best charities. 
And, even if that number happened to be a bit lower, there would be large benefits to finding the best charities.</p><p>At the same time, the costs of switching to the best charities seem to be relatively low. First, you need to do research about what interventions are best. That seems quite low-cost. Bridging your intention-behavior gap might be somewhat higher-cost, because you might feel quite strongly about a specific intervention, but that psychological cost still seems to me to often be lower than the cost associated with bridging intention-behavior gaps due to selfish impulses. I would think that most people feel more strongly about giving up a significant portion of their income than they feel about ceasing to support a specific charity.</p><p>I should also point out that people seem to think a bit differently about lost impact which is due to too little altruism, compared to lost impact which is due to too little effectiveness. </p><p>If someone were to decrease their impact by 99% because they decreased their donations by 99%, they would probably feel guilty about that. But we don’t tend to feel similarly guilty if we decrease our impact by 99% through decreased effectiveness. Thus, it seems that we don’t have quite the right intuitions when we think about effectiveness. This is another reason to really focus on effectiveness and really try to go for the most effective interventions.</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>Let’s move on to acquiring the right moral goals. </p><p>The alignment ratio (the extent to which working toward your personal goals leads to working toward correct moral goals) may be very low. One reason for that is that, as we already saw, you have some reason to believe that your moral views are wrong, because people have often been wrong in the past. 
And if your moral views are wrong, and you are working towards incorrect moral goals, then you may have a small or negative impact on the correct moral goal. It would be very lucky indeed if it just so happened that our work towards one moral goal were also effective towards a very different moral goal. We should not count on being lucky in that way.</p><p>This means that the benefits of finding the right moral goal might be very high. Of course, we might not actually find it if we try, but the expected benefits might still be high. And the psychological cost of changing moral goals may be quite small, for the same reasons as the psychological costs of increasing effectiveness may be small.</p><p>----</p><p>At this point, let me look at an additional way that we can increase our impact. We’ve focused on how to expand and use our moral resources. We haven’t discussed our non-moral or selfish resources. An additional way of increasing impact is through coordinating between our moral and selfish selves. This can be seen as an aspect of effectiveness, but it’s different from the kind of effectiveness that we’ve focused on so far, which has been about how to use your moral resources most effectively.</p><p>For instance, you might find the most effective option that you could choose intolerable from a selfish perspective. It might be some job that you could take but which you just don’t like from a selfish point of view. Then you should try to find the most effective compromise: such as a job which is still high-impact (if not quite as high-impact as the highest-impact job) and which is better from a selfish perspective.</p><p>Similarly, you may consider which particular selfish obstacles to focus on overcoming. For instance, your donations might be too small, or so you feel. Or you might be employing self-serving reasoning because you don’t want to admit that you were wrong about something. 
Thereby you nudge other effective altruists slightly in the wrong direction, and decrease their impact. Or you might have a disagreeable personality, leading people who know you not to want to get involved in the effective altruism movement, which lowers your impact. When you are considering what selfish obstacle to overcome, you should be altruistic first and foremost where doing so has the highest impact and the smallest selfish costs.</p><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>Let’s move on to the last section, which is on how to address the underlying causes. First, we should correct false beliefs. We should learn what it is that we should do, by improving our epistemic rationality. And we should bridge our intention-behavior gaps. We should actually do what we know that we should do. We should improve our instrumental rationality.</p><p>Let’s walk through these in turn. </p><p>First: correcting false beliefs and improving our epistemics.</p><p>Some aspects to improve:</p><ul><li>We should search for knowledge systematically, including knowledge about how to acquire knowledge—what is called epistemology. This seems to be something that we don’t naturally do: we don’t have a knowledge-oriented mindset in the moral domain. We should change that.</li><li>We should also overcome motivated reasoning and tribal tendencies. We should develop epistemic virtues such as intellectual curiosity and intellectual honesty. </li><ul><li>One thing that arguably could help here is to make good epistemics part of your identity. Similarly, developing a community culture emphasizing good epistemics could help. Both of these things are things that effective altruists often try to do.</li></ul><li>We should also bridge our intention-behaviour gaps. To do that, we should develop instrumental rationality virtues, such as having an impact-focus. 
Here, again, changing our identity might be useful: to make doing the most good a part of our identity.</li><ul><li>However, I think that making doing the most good part of our identity might help more with overcoming passions for specific causes than with overcoming selfishness—which might, in line with what I said before, be a stronger urge which is harder to overcome.</li></ul></ul><span><figure><img src="" class="draft-image center" style="width:100%" /></figure></span><p>Conclusions: </p><ul><li>We could greatly increase our impact through overcoming obstacles to doing the most good. </li><li>Obstacles to acting altruistically, pursuing the most effective interventions, or acting on the correct moral goal can be very important to overcome. </li></ul><p>To overcome these obstacles, we should acquire more knowledge and develop better epistemics: we should learn what it is that we should do. We should bridge our intention-behavior gaps: we should actually do what we know that we should do. And lastly, we should replace the default moral mindset, which we saw is quite emotion-driven and characterized by poor epistemics and haphazard decision-making, with an impact-maximizing mindset and impact-maximizing virtues.</p> the-centre-for-effective-altruism 4jEhHFqWH5z2zovws 2019-05-23T00:32:34.098Z Fireside Chat with Rachel Glennerster <p><em>Rachel Glennerster is the Chief Economist of DFID, the UK's Department for International Development. In this conversation with Nathan Labenz, she discusses the most important lessons she's learned about development and what it really means for a study's result to "generalize".</em></p> <p><em>A transcript of the conversation between Rachel and Nathan, which we have lightly edited for clarity, is below. You can also watch this talk on <a href="">YouTube</a>, or read its transcript on <a href=""></a>.</em></p> <h2>The Conversation</h2> <p><em>Nathan</em>: Thank you for being here. 
I'm very excited to have this conversation. I'm going to do my best Rob Wiblin impression and start with what is his traditional first question, which is, what are you working on at the moment and why do you think it's especially important?</p> <p><em>Rachel</em>: I'm doing lots of different work at DFID, but let me talk a little bit about some research work I'm doing, which is evaluating a mass media campaign, a radio campaign, run by Development Media International, which is an NGO here in the UK. They're doing a family planning program on radio in Burkina Faso. I think it's really important to look at radio and mass media, because it's a very cheap way to reach a large number of people and you can make sure that the message is accurate and consistent. The problem is that it's very hard to evaluate radio for exactly those reasons: because one radio program reaches millions of people at the same time, it's very hard to randomize.</p> <p>Now, it happens that Burkina Faso is kind of one of the few places in the world where one can evaluate this effectively. It's also true that at DFID, we're worrying a lot about this area, which hasn't had a lot of UK interest until recently. Also, family planning is a hugely important issue, because if you get the demographic transition right, it can have incredible benefits for women, for economic development, and for the health of children.</p> <p>--</p> <p><em>Nathan</em>: Why is it that it's more testable in Burkina Faso? Is that a language group issue?</p> <p><em>Rachel</em>: It's a complex series of factors that means you have a lot of different radio stations across Burkina Faso, which indeed have different languages, and so there's less spillover. So you can randomize at the level of the radio station. You also have people who are so poor that they can't afford radios. We're randomly handing out radios to women who don't have radios. 
We have kind of two levels of randomization, both at the radio station and within a radio area. Some women already have radios, some women are given radios, and some aren't. It's kind of a conglomeration of things that allows you to be able to test.</p> <p>--</p> <p><em>Nathan</em>: Well, we'll probably come back to that in a minute, when we get to the section on randomized controlled trials in general. We're going to try to cover a variety of areas here, including your views on careers and policy, advice for audience members who are interested in policy careers, and the role of evidence in general in aid work. But I thought we might also go back to the beginning of your career, and you could give us a sketch of how your career has developed. You studied economics as an undergraduate, and then began your career at the Treasury, and then it's taken a lot of different turns from there. Tell us all about that.</p> <p><em>Rachel</em>: I think there's some benefit in telling my story. It sounds very self-centered to talk about my story, but I think it's useful to show that careers don't always have to be one-directional, and they can take many turns. I think it's interesting to be doing this in an 80,000 Hours podcast, where people are thinking really seriously about what's the best thing to do, because in some ways, my career has been a bit random. But I think it's both, right? We have to think seriously about what our next step is and also realize that stuff happens in life.</p> <p>As an undergraduate, I was very interested in development economics, and in thinking about how I could contribute to addressing the issues of global poverty. And I spent a summer traveling around Kenya, talking to lots of people in the aid sector, and got really depressed and thought: A, lots of things don't seem to be working, and B, what do I know? What honestly could I bring to these complicated issues? I'm 21, I don't know anything about Kenya. 
I decided instead to go into the Government Economic Service, which is a special training position within the government that trains you to use analytical skills to address policy issues.</p> <p>I did that at the Treasury, I worked on domestic policy, I worked on reform of the health service, trade policy, monetary policy, all sorts of different policies. And I got fantastic training in how to use analytical skills to help us think more rationally about policy decisions. And then I went to get more education. I went to Harvard, and met my future husband, and my life got thrown up in the air because I was convinced I was going to stay in the Government Economics Service for the rest of my life. I loved it.</p> <p>Michael, my future husband, was American. After failing to persuade him to move to the UK, I had to look for something to do in the US. Then I went back to my old love of development, and having, I think, gained some skills, I worked to represent the UK at the IMF and World Bank. I learned a lot about international institutions and worked on financial regulation in Russia as a way to move to Boston, where Michael was. That all went up in smoke, because a big scandal hit Harvard and its work in Russia. I went back to the IMF and I realized I needed more economics knowledge to really make a difference.</p> <p>I went back and did a PhD, but I did it part-time. I was constantly on the border between academia and policy work. I think that's a really interesting and important nexus, where, when I was within government, I was taking academic work and explaining and translating it into policy needs: How can we use this academic work to do our policy work better? Then I moved to J-PAL, where I was on the other side of the fence. I was in academia, but I was helping academia translate what it was doing into the policy world.</p> <p>I helped found J-PAL MIT, which promoted the use of randomized trials. 
I think the key thing that J-PAL did was to work really hard on this policy/academic nexus and say: "How do we make sure that this academic work is being translated into policy change?" That's the thing that's been common throughout my work. And so yet again, I've jumped back onto the other side of the bridge, and I'm now working at the Department for International Development back in the Government Economic Service that I started in all that time ago, as chief economist. And again, I'm helping to bring academic insights and research insights into decisions in government. There's so much need for that translation across this border.</p> <p>--</p> <p><em>Nathan</em>: I think that's interesting, as you said, for an 80,000 Hours podcast audience that's thinking a lot about their careers. But also, as we sit here at EA Global London, I think a lot of attendees are trying to figure out how they can shape their own careers to have the most impact. It strikes me that one of the central questions people are asking themselves often is, should I go and do direct work now? Or should I focus on upskilling myself and becoming a more powerful person in whatever domain? It seems like you have climbed those two ladders at different points in your career; is that a model that you would recommend to others?</p> <p><em>Rachel</em>: First of all, let me just say that upskilling doesn't only happen in academia. You don't just do it in graduate school. I think the place where I learned the most, as I said, was probably those first two years in the Government Economic Service. I was never trained to write until someone literally went line by line through my work in government to help me write more clearly, and that's been incredibly important in conveying ideas. 
You learn technical things when you go to college and graduate school, and later you learn other skills, like how to influence people, how to get things done, and how to run effective organizations.</p> <p>You need all of those skills if you want to make a difference in the world. Different people will prefer skills that are more on the technical side or more on the "influencing people" side. Think about where your best skills are. I wrote a <a href="">blog post</a> a while ago about whether economists should go into policy or academia, and I listed the skills that you need in those different areas. I didn't say one is better than the other or one is harder than the other; it's just different. Think about building on your natural skills and acquiring new skills, both within institutions and through more formal training.</p> <p>I do think that people often underestimate the number of skills you learn from being in an effective organization. A lot of people want to start their own organization, and that's great. But I have seen people start organizations without having really <em>experienced</em> an effective organization. When I arrived at J-PAL, we were three people, and when I left there were 350 people. My deputy and I had both come from the civil service. We had really strong views about how you run an effective organization, and I think that was absolutely critical in building J-PAL.</p> <p>--</p> <p><em>Nathan</em>: Yeah, I think that's fascinating and very apt. My background is in the software startup world, which is quite different. But often young people will ask me, how can they start their own company or whatever. And I always say, "Work for a great company, before you even think about trying to start one of your own, because there's a lot that you don't want to have to reinvent from scratch or try to derive from first principles." You mentioned randomized controlled trials, and that's definitely a big subject in your work. 
Before we dive into the current state of that debate, here's another historical question.</p> <p>You started out in an era when effectiveness in aid was much less of a concern. People cared more about just giving and feeling good about it and hoping for the best, or maybe not even really worrying about what happened downstream. How would you sketch the kind of intellectual trajectory of aid and development work over the last three decades?</p> <p><em>Rachel</em>: Great. Big questions. I think it's worth looking at two different trajectories. There's the trajectory of the aid sector, the development sector, and then there's the trajectory of academic research on development, and those are a bit different. One of the nice things about RCTs is that they've brought those two really closely together. I think the trajectory of the aid sector has been one in which, I would say, there was not enough emphasis on understanding really rigorously what works. There were a lot of different theories and views about what we should be doing, and you see these big swings and fads -- for example, the view that development was mainly about investment in physical capital.</p> <p>Under this theory, the reason we needed aid was that countries didn't have enough investment, which meant that we should be building stuff. Places like India had five-year plans, and they built steel factories, and then there was a big swing toward a new popular view: "No, we need to worry about human capital and not just steel." But people aren't benefiting from human capital programs like job training; they're still malnourished, they're not learning. And you saw these big swings in interest about what we should be doing for development, but not a lot of it was especially data-driven.
There has been a really big change in the last 10 years or so, especially within DFID and within the World Bank, to really seriously think about the evidence behind the decisions that we're making.</p> <p>One of the reasons I moved to DFID is that it has been one of the agencies that I think has changed the most, constantly examining itself. At the moment, we're going through the process of looking at what we're doing and saying, "Is it really evidence-based? What does the new research say? What should we stop doing, because the new evidence is saying we shouldn't be doing it?" And that's an important sea change in the aid world.</p> <p>One of the biggest changes in research, in academia and economics, is just that there are now a lot more people thinking about the developing world, and that's great. A lot of them are doing RCTs, but if you look at the data, actually, there is just as much non-RCT work as there was before. Within economics, development used to be a bit off to the side; most economists didn't think about it. But people have realized that the questions in development are actually very similar to the questions in other bits of economics, and we should be learning from each other.</p> <p>--</p> <p><em>Nathan</em>: A lot of behavioral economics lessons came from development, and they are now being taken up and used and learned from in rich countries. I think that it's been really important to see lessons going in both directions; development has helped us understand that there are a lot of similarities between people, and there's a lot we can learn from each other.</p> <p>Moving to RCTs in general, and the state of debate around how much we should rely on them: You mentioned that it's kind of a 50/50 split right now, in today's work. Do you think that's the appropriate split? Do you think that it should be all RCTs?
What do you think is the right balance as we try to figure out what is obviously a very complicated world?</p> <p><em>Rachel</em>: I think it's really important to say that all of us who have worked on randomized trials have never suggested that this is the only methodology that you should use. Sometimes it's held up as a straw person that we go around saying: "This is the only methodology." But nobody who's done RCTs has ever thought that they are the only right approach. I think the right way to see things is that you have a toolbox of ways to answer questions. The right tool depends on the question that you're asking.</p> <p>I think we need good descriptive work to understand what the problems are. A lot of development programs fail because they're trying to solve a problem that doesn't exist. They're just solving their own problem. The first really important thing you've got to do is to understand what the issue is in any given area. If we're worried about girls not going to school because of menstruation, let's start by finding out whether they actually don't go to school more when they're menstruating! That's a really basic, obvious thing, but we actually need more of that kind of thinking; understanding the problem is a really important first step.</p> <p>RCTs are useful for answering really specific questions, but I think the best RCTs are the ones that test a theory. They test something that's more generalizable than just "does this program work?" They ask a question about human beings.</p> <p>Here's an example. I did a project looking at how to improve immunization rates in India. Only 3% of kids in a certain part of India were getting fully immunized. Given that immunization is one of the most effective things that you can do, that rate is just appallingly low. There were a number of theories about why that could be. A lot of people said, "Well, people here don't trust the doctors."
That is, not doctors, because you rarely get doctors in rural India, but nurses and clinics. They don't trust their formal health system.</p> <p>There were also other theories. The clinics were often closed, so is that the problem? Is nurse absenteeism the problem?</p> <p>We had read all this behavioral economics literature, and behavioral economics tells us that people are happy to get their kids immunized, but they'd rather do it tomorrow. We set up an RCT with one arm that provided just good service, making sure that without fail, there was someone to immunize your child when you reached the clinic. The other arm did the same thing, but also provided a small incentive.</p> <p>Yes, we were testing a program, but we were also asking a more fundamental question, which is "why don't people get their kids immunized?" And what we saw in the data is that a lot of people got their kid immunized once, but failed to keep coming back until the end of the immunization schedule. Fixing the supply problem increased the number of people getting the first shot and the second shot. But it failed to fix this persistence problem. However, the incentive worked to help people persist until the end.</p> <p>By the way, that project was completely impossible to scale. Our "incentive" program involved handing out lentils in the middle of Rajasthan. Nobody showed up, which shows how good economists are at designing logistics. It was a disaster; we learned a lot, but you would never want to actually do a program like this.</p> <p>A colleague of mine did something similar with another program in Rajasthan, where we ended up improving teachers' attendance by putting cameras into their classrooms. Again, the logistics were a nightmare, but the project tested a theory.
And so once you have the test done, you can think about other questions, like "how do we implement this at scale?"</p> <p>--</p> <p><em>Nathan</em>: I noticed that the illustrations on your website show you in the field as well as in a more academic setting, which is a clear signal that you believe in actually going to places and handing out the lentils (as it were). It seems like you have an on-the-ground, intuitive, firsthand understanding which allows you to generate a number of theories, and then you can test one of those theories and find an effect and then think about, "Okay, now how can we scale this up in such a way that we can simplify the logistics and not have to handle all the lentils personally?"</p> <p><em>Rachel</em>: That seems pretty sound.</p> <p><em>Nathan</em>: I think people worry about how that then transfers to another context. Could we take that result and say "the incentive will work for persistence", and take that to another culture or another place and expect similar results? Speak to that a little bit: How can you take your experience when you're pretty confident that it works in one place and try to generalize it to other places?</p> <p><em>Rachel</em>: I think the discussion around generalizability is really confused. Let me try and explain how I think about it.</p> <p>People often ask, "Does this result generalize?" To which I respond, "What result? What aspect of this are you asking about?"</p> <p>I think there are three main ways in which you need to think about whether something generalizes. The first: Is the problem the same in other places? An intervention won't generalize to a place that doesn't have the same problem. In the case of India, the problem was that people were getting the first immunization, but not persisting to the end of the schedule.</p> <p>And that's really easy, that's something that you can test.
You can look at data and say: "Well, in this country, people are willing to get the first immunization, but they don't manage to persist to the end. And in other places, they're not getting the first immunization at all."</p> <p>The second question: "If people have the same problem, do they respond to solutions in a similar way? Does a small incentive help people in a different context to persist in something that they want to do, but don't manage to do consistently?"</p> <p>In this case, there is tons of evidence that incentives work well. It's not all from lentils and vaccines; it's from across many different kinds of programs. The general finding that small nudges are useful to fight procrastination is very generalizable. Another example: If you charge people for preventative health care, even a nominal amount, you will see a big decline in takeup; that has been consistently found for different kinds of preventative care in different countries.</p> <p>The last question: "Can I implement something similar elsewhere?" And that is not something that automatically generalizes. We worked with an extremely good NGO in Rajasthan. Their people would absolutely turn up. They could get the lentils there. The lentils were not stolen. None of this would necessarily be the case if we worked with another organization or government. Also, we wouldn't want to use lentils if we did the program in New York, right? Lentils wouldn't be seen as a particularly exciting incentive in other places.</p> <p>People confuse these three things and lump them into "this has generalized". Well, basic fundamental principles of human behavior generalize, but logistics don't. So that is where you need to spend a lot of time understanding the local context and how to run logistics locally.</p> <p>Coming back to the beginning of your question: I spend less time on the ground now than I would like.
But in the past, I spent a lot of time, which is absolutely critical to understanding local context for anyone thinking about working in development.</p> <p>Even more importantly, you need to partner with people who really understand the area. Nearly always, when I work in these countries, I am working with a partner who has worked there for many years. And then we build a relationship where they trust me because I'm willing to spend time and effort to understand their problems, and I trust them because I've spent enough time to know that they really know the local context. I would also say that it's a lot easier to spend time on the ground when you're younger and less established, so get out there!</p> <p>--</p> <p><em>Nathan</em>: How long do you typically spend in a place? I'm reminded of... I don't know if it's a joke, or a proverb, but the idea that a guy who goes to China for a week thinks he knows a little bit about China. After a month, he thinks he knows a lot about China. And then, after a year, he realizes he knows nothing about China. How long do you feel you have to be in a place to build the right relationships and have the level of trust that you need to actually be effective?</p> <p><em>Rachel</em>: Again, part of it is based on partnerships. I don't always have to be there all the time if I'm building relationships and talking to people regularly. I haven't done what I would advise other people to do, because I didn't go into development initially. I didn't spend a year or two on the ground as a student in my early 20s, which I wish I had done.</p> <p>I've been working in Sierra Leone since 2004. I have worked with colleagues there who have been there all their lives. It's that kind of repeated interaction that builds knowledge over time.
It helps to start building connections in person, but by checking in with those connections regularly over the phone or on short visits, you can rely on them without having to spend a lot of additional time on the ground.</p> <p>In the end, the answer is: "A lot of time over the years." This is not something that can be accomplished quickly at all.</p> <p>But then, some of that translates when you go to other countries. I think there's an understanding that sets in when you've learned about any one developing country quite well. There's a lot of really interesting work in behavioral economics now about how the pressure of poverty changes how people think and make decisions, and about the constraints that they face. It's just very hard for us to understand just how different people's lives are, and just how constrained the environment is in which they have to make decisions.</p> <p>And in some ways, once you've got that, it really helps with the next context. Now, that doesn't help you with, "Oh, my God, you should never do your slides in green, because that's the color of one of the political parties, and they'll think that you're linked with that political party." I mean, there are local nuances that you will not get, that are not transferable from one country to another. But I think some of the most basic things that you get after you spend some time is this more fundamental understanding: "Okay, I get why people are making these decisions." When you haven't had much sleep, or when you haven't had enough to eat, you just make decisions differently.</p> <p>It's not great to be asking people to spend a lot of time going to community meetings when they have so much on their plate. Keep in mind all the things that are done for us in a rich society: we have chlorine put in our water, we get reminders, we can't send our kids to school unless they've been immunized. All of this stuff, all of these decisions, they're just made for us.
We need to understand that really poor people in poor environments have to make those decisions and do those things for themselves.</p> <p>In Sierra Leone, they have to mend the roads themselves. They have to do all the local public services that we get, like trash removal. All these things that are just done for us, they have to do for themselves. A lot of those things are common across many different societies. So invest in one place, and some of what you learn will translate. But stay open and aware that you need to learn from others about, as I say, the nuances of the local context, like which colors not to use. Remember that you have to be careful of that sort of thing; ask locally before you put your foot in your mouth.</p> <p>--</p> <p><em>Nathan</em>: That discussion of poverty and its effects reconciles a couple of things I've been wrestling with. You seem to be sort of advocating for a pretty universalist view of human nature. At the same time, I'm well aware of the <a href="">WEIRD phenomenon</a>, and at least through my filter bubble, it seems that I should be very careful about generalizing from studies or results conducted in, say, Ivy League classrooms. First of all, maybe you reject that notion of the WEIRD phenomenon, but if it is real, maybe poverty is the key common factor among the people you work with?</p> <p><em>Rachel</em>: You have to go up a level of abstraction, and then things generalize more. For procrastination, we procrastinate over different things in different contexts, but we all procrastinate; that's one example. There are differences in the decisions we have to make on a day-to-day basis, but some of the ways in which we as human beings respond to those decisions are very common. The difference is that things are just harder for the poor.</p> <p>All the things that we fail at, all the ways we're not very good at making decisions, they're still much easier for us, because we've had a good night's sleep and we have enough food.
It's very well-established now that people do more short-term thinking when they're hungry. Even the same farmers in Kenya will make different decisions before and after the harvest. They'll be more short-termist and fall into more behavioral economics traps before, when they haven't had enough to eat.</p> <p>We have to be aware of those pressures on people. Why aren't people saving money? Why aren't they taking the good investment option? Well, if you understand all the other constraints they face, that becomes more understandable. In my mind, it's completely consistent that we as humans are very similar across very different contexts. But the poor just have it harder.</p> <p>Now, obviously, there are other really important differences that you have to take into account. For example, one of the things that is different across contexts, which you really have to think about carefully, is gender. Constraints on women in different places are very different, and you have to take that into account when you're designing programs.</p> <p>I do a lot of work in Bangladesh, where mobility is highly constricted for women and for adolescent girls. When you're designing a program, you have to understand that women won't just be able to walk to get somewhere, because it will be very hard for them to be allowed outside their household on their own and it's hard to persuade other people to take them. There are different practical constraints that you have to take into account in any local context.</p> <p>But that's not about humans being different! That comes back to logistics again. As I said, there are three levels: What's the problem? What's the underlying human mechanism? And what are the local constraints around logistics? Let's keep those three boxes separate and think about them separately. 
Our program designs will be much more practical if we do that.</p> <p>--</p> <p><em>Nathan</em>: I think that's a great transition to the next set of questions that I have on the importance of policy reform and a growth-oriented agenda. Tyler Cowen, who was a <a href="">recent guest</a> on the 80,000 Hours podcast, just published a book in which he basically argues that our number-one focus should be on economic growth, because that's where almost all of the good that we enjoy comes from, subject to some constraints around general human rights.</p> <p>It seems like you would probably agree with the emphasis on growth. At the same time, some have argued that the focus on RCTs and sort of what has been called the "aid effectiveness craze" is focusing our attention on small issues that may be distracting us from the bigger questions of broad economic growth and societal progress. Do you think that is a valid worry? How do you trade off between small-scale and society-wide capability-building?</p> <p><em>Rachel</em>: I think there are a number of different things going on here.</p> <p>First, I need to object to the characterization of RCTs as "aid effectiveness". Most RCT work is not focused on aid. Most of the money that goes into poverty relief is money spent by people in developing countries, both governments and individuals. And actually, if you look at most of the people doing RCTs, they don't think that their audience is aid donors. Their audience is the government of India or the government of Brazil, and to some extent, big companies or other groups of individuals in those countries. Because that's where the money is, to be honest.</p> <p>Let's remember: There's aid and there's development, and aid is only ever one small part of development. I agree that improving the policies of developing-country governments is a hugely important way to impact global poverty. 
The RCT craze is not about aid effectiveness; it's about government effectiveness, poverty effectiveness. So that's one slight quibble.</p> <p>Then there's the heart of your question, which is policy versus working on small questions. And then there's the question: "Do RCTs only work on small questions?" And also: "How do we think about long-term development versus working on improving someone's health or education right now?"</p> <p>Again, I think those are two different questions. As I was explaining before, I actually think that RCTs should not be seen as testing specific programs. They should be seen as testing big questions that can then influence policy. You might test a specific project on education, but in doing so, you would aim to learn something more general.</p> <p>For example, a lot of work on education has suggested that the most effective thing we can do is to focus on the learning within the classroom. It's not about more money or more textbooks, even though that's what governments spend their money on. They spend it on teachers and textbooks, mainly teachers. But having more of these things doesn't actually improve learning. Instead, RCTs within the Indian education system have suggested that the most important problem is that the Indian curriculum is too difficult for most students.</p> <p>If you just look at the descriptive data, you'll see that in an average Indian ninth-grade classroom, none of the kids are even close to the ninth-grade curriculum. They're testing at somewhere between second grade and sixth grade. No wonder they're not learning very much, because the only thing that the teacher has to do by law in India is complete the curriculum, even if the kids have no idea what the teacher is talking about.</p> <p>When RCT testing was done on very specific interventions, all of the ones that worked were those that taught material at a level that the kids could actually understand.
The lesson for the Indian government, if they were ever to agree to this, is "change your curriculum". Yes, you're testing little things, but you're coming out with big answers. And that's what people like <a href="">Angus Deaton</a>, who came up with some of those critiques, don't seem to understand.</p> <p>Now, the final part, and I think the hardest part, is economic development versus, say, working on health and education. At DFID, we have shifted a lot of emphasis relatively recently into trying to do more on economic transformation, under the recognition that the biggest reductions in poverty, as you say, have come from transforming a country's economic policy. For example, the big opening up of India and China towards more market-oriented economies -- and I'm not saying markets solve everything, they absolutely don't -- but when you've got a system as screwed-up as Communist China, markets can move you a long way, and can really help transform the economy. The same happened in India, and you saw massive reductions in poverty thanks to a move towards a slightly more sensible economic policy.</p> <p>When I was recently doing my kind of ranking of the most effective things that DFID could do, we were saying, "Well, if there were cases of countries that are as screwed up as China..."</p> <p>Where things are that screwed up, helping countries move toward effective economic management will be the most effective thing that we can do for poverty. You can't easily do that as an outside donor. I'd say that Ethiopia at the moment is going through tremendous reform, and we really ought to be focusing attention on helping Ethiopia through that transition. There's tremendous potential for growth, and they're fundamentally changing policy there in ways that could be really beneficial to the poor.
So jump on those opportunities when you see them; we can't make them happen, that's something the developing country has to decide to do themselves, but we should help them as much as we can.</p> <p>What do you do to promote economic development in countries that are going through this type of fundamental reform process? Sometimes you can nudge them a bit in the right direction, help improve trade policy, reduce trade barriers, and so on. But to be honest, in a lot of countries, it's not entirely obvious what you can do to promote economic development.</p> <p>We need a lot more research, a lot more understanding about how to do that, because I absolutely agree that it's fundamental. But we don't always have all the tools that we need to make economic transformation happen. And now, think about our own countries: It's not like we only ever worry about economics and development. We also worry about health and education. Because we don't grow in order to have more money, we grow so that we can have better lives. We want to make sure that more money translates into actually better lives.</p> <p>We need to take opportunities for economic growth where they arise. But we also need to be working on health and education, not least because we know that those things are really important for economic development, right? We know that there are big productivity improvements if kids are given the right nutrition early on, and there's about a 10% return to investing in education. To some extent, you can't have economic transformation without the building blocks of human capital. In the classic economic growth model, there's human capital and physical capital. If you want growth, you need to be working on both of those things.</p> <p>--</p> <p><em>Nathan</em>: I wouldn't be doing my job if I didn't get to the famous section of these podcasts where we do the "Overrated/Underrated" list.
I'll give you a number of prompts, and you can respond with overrated or underrated. And of course, you're free to pass on any of them if you don't have a strong view, or would rather just avoid the topic. And then maybe we'll circle back to a little bit more career advice for the audience, as we close.</p> <p>The first item: Charter cities as a means of promoting the sort of growth that we're talking about.</p> <p><em>Rachel</em>: I'm not a fan of charter cities, but I don't think anyone else is either, apart from one Nobel Prize winner.</p> <p><em>Nathan</em>: How about going along to get along with your colleagues?</p> <p><em>Rachel</em>: I think it's really important to learn how to influence and how to get along with your colleagues if you're going to make change, so: underrated.</p> <p><em>Nathan</em>: Starting a business in the developing world.</p> <p><em>Rachel</em>: Probably underrated. Social entrepreneurship, overrated. Business, underrated.</p> <p><em>Nathan</em>: And how would you draw a line between those?</p> <p><em>Rachel</em>: Social entrepreneurship is... I don't want to get in trouble for sort of dumping on some specific things. But many of those businesses don't take off in a big way, partly because potential buyers don't have a lot of money. I think you can have a much bigger impact by working in big organizations. There's a lot of evidence that businesses in the developing world are really badly managed, and that there are a lot of improvements that could be made. And basically, people want jobs. They don't want money to create their own businesses; they want jobs. So getting effective private-sector businesses working in these countries is really important. I know people who set up businesses after many years working in development, and I think that's great.</p> <p><em>Nathan</em>: How about cash transfers?</p> <p><em>Rachel</em>: Cash transfers, I think we rate very high -- appropriately. 
People have been a little bit down on them recently, because of some recent work saying the long-term impacts of one transfer weren't as good as people had hoped. But when you look at the literature as a whole, I think cash is very positive, including long-term benefits. And even if the control group eventually catches up, getting people out of poverty earlier is still really beneficial.</p> <p><em>Nathan</em>: Okay, how about gene drives for mosquitos and other disease-carrying insects?</p> <p><em>Rachel</em>: Okay, I'm going to pass on that. I don't know enough about that.</p> <p><em>Nathan</em>: Genetically modified and CRISPR crops?</p> <p><em>Rachel</em>: I don't know about CRISPR crops, but I'm a big fan of GM crops, particularly improved agricultural varieties in the developing world. Those are hugely beneficial. Some of that you can get without GM, but I think we're probably a little bit paranoid about GM.</p> <p><em>Nathan</em>: How about cracking down on tax havens or other sources of illicit financial flows?</p> <p><em>Rachel</em>: Underrated. We should do more of that.</p> <p><em>Nathan</em>: What's the mechanism by which that benefits everyone?</p> <p><em>Rachel</em>: A huge amount of money flows out of developing countries into tax havens. That's a big problem in terms of fueling corruption. There's a big opportunity to expose the bad deals that are done with bad governments in developing countries, which are often arranged in the developed world. We ought to be doing more to stop it, and I'm pleased to say that DFID is working in that area.</p> <p><em>Nathan</em>: Micronutrient supplementation?</p> <p><em>Rachel</em>: Micronutrients? Underrated. Supplementation? We still need more work on that. Because the way we're putting micronutrients out at the moment doesn't seem to be working very well. Anemia is probably underrated as a huge, huge problem. It really affects productivity and cognition.
We haven't quite figured out how to address it, though.</p> <p><em>Nathan</em>: Improving developing countries' macroeconomic policy. We've kind of covered that.</p> <p><em>Rachel</em>: Yeah, macroeconomic policy is really important, but we've actually kind of figured it out. If you look at inflation, it used to be a major problem. When I was doing development economics, half of the course was about how to deal with hyperinflation. Virtually nobody has hyperinflation anymore. That's a really major success that we don't talk about enough.</p> <p><em>Nathan</em>: Okay, a couple more: how about preregistration?</p> <p><em>Rachel</em>: I think that's overrated. And that's a little funny coming from me, because I wrote one of the papers saying that we ought to do more of it in economics, and now I'm finding some of the downsides. Yeah, there's a big move to register in advance what your analysis is going to be. But sometimes tying your hands is not actually a good idea, so we need to be a bit careful about preregistration. Preanalysis plans, which say "this is exactly how I'm going to analyze my data when it comes out", can be a problem, because when you look at the data and new circumstances have arisen, it may be really important to change how you're doing the analysis. Plus, I've found that journal referees hate it.</p> <p><em>Nathan</em>: Do you think that people are not doing that extended analysis, or that it's being unfairly dismissed as a result of the preregistration?</p> <p><em>Rachel</em>: I understand that people worry that you run a trial, and then test your results on 50 different outcomes and promote the one that had a positive effect. Most academic work doesn't work quite like that, because your referees force you to show 50 robustness checks, and you don't get past them if only one of the checks had a positive effect. I think we need to rely a bit more on theory.
Theory tells you which things should go together; I think theory can be an effective way of looking at the data and pulling out patterns, while also somewhat tying your hands. It might even be a more effective way of tying your hands than preanalysis plans. I'm not saying you should never make those plans, but they're not the simple answer that people thought they were.</p> <p><em>Nathan</em>: Okay, two more overrated/underrated questions as we begin to transition to your career advice to close things out. How about reading the news or the newspaper?</p> <p><em>Rachel</em>: Reading news of the kind you already agree with, we should be doing less of. Reading things that shock you, or that come from a different perspective, we don't do enough of. I try to read about African politics, for example. It's not on our news normally, but it's really interesting. I find it really helpful to read news from serious people who cover African politics; it brings a different perspective. But I think that often, when we read the news, we read things that confirm what we already knew, and that's not very helpful.</p> <p><em>Nathan</em>: Last one for overrated/underrated. How about postgraduate degrees?</p> <p><em>Rachel</em>: I think it just depends on what you're trying to do with your life.</p> <p>--</p> <p><em>Nathan</em>: Let's get a bit more practical for the last couple of minutes, as we try to give some useful advice to the audience. At DFID, what are the skill sets that you find to be in short supply, and wish more young people were developing today?</p> <p><em>Rachel</em>: This is less true of DFID, but I think the development sector in general could really benefit from the skills of the EA community, in the sense of good hard analysis, linked with a passion to make change. The UK's Government Economic Service and Government Statistical Service produce exactly that kind of hard analysis. 
And there's actually a big demand for them; government in general is desperate for more people to go into the Government Economic Service.</p> <p>Anyone interested in economics and policy and putting economics to good use should definitely be looking at the Government Economic Service, especially DFID. That ability to take a hard look at numbers, and think about how they could be used to answer the questions people have, is desperately needed within policy. It's a really important part of global poverty work. NGOs too: you rarely find good implementing NGOs who are also very good at that kind of analysis.</p> <p>Take a look, for example, at the work that <a href="">Caitlin Tulloch is doing with the IRC</a>: taking the data and figuring out the cost structure of what they're doing and what's driving variance in costs. NGOs produce lots of data, and they don't know what to do with most of it. But it helps to have someone really good analytically who works with them to help them understand and use that data effectively.</p> <p>--</p> <p><em>Nathan</em>: Two last questions to close things out. I don't know how well you know the EA movement, but based on your knowledge, do you have a sense for what EAs most misunderstand or most often get wrong about global health and development?</p> <p><em>Rachel</em>: The old saying that genius is 1% inspiration and 99% perspiration is so true in development. Success comes from refining the logistics of making something work. Even once we figure out what the problem is, and we figure out there's a solution that has generalized, actually making it work at scale on the ground requires infinite amounts of testing and figuring it out and testing it again, which doesn't necessarily mean randomized trials. Testing can mean that we tried an information blitz and put up posters, and the next morning none of the posters were there. That's testing. 
The hard sweat and toil of making something work in dysfunctional societies should not be underestimated.</p> <p>One of the things that I meant to say in your earlier question about the evolution of the RCT movement: I think one of the biggest things it's done in academia is bring academics out into the world. People working on RCTs are basically running large implementing organizations in developing countries. We're hiring tons of people, we're trying to get things from A to B, we're building stuff -- and the amount of insight you get from trying to run a big organization in a dysfunctional country is unbelievable, and it helps generate new questions. I think that, more than anything, is what people misunderstand. There's not a lot of discussion out there about the blood, sweat and tears of making the trains run on time.</p> <p>--</p> <p><em>Nathan</em>: Okay, last question for today. What are the best decisions that you've made in your career? And you've scattered advice throughout this conversation, but what are the top recommendations you would give to EAs who want to make a difference in your line of work, perhaps as civil servants?</p> <p><em>Rachel</em>: I'm pretty proud of the work that I did at J-PAL. It was kind of a crazy thing to give up my job at the IMF and go start this organization. As I said, we started with three people, and we had $300,000 from MIT when we started. But the lesson there was that it really was possible to connect what's coming out of research with practical policy. That work isn't for everyone, but I think it's an extremely important area that people should at least consider whether they're interested in.</p> <p>Another thing that happens within policy is that people remember what they did at university, but then don't keep up on the latest literature, and get further and further away from what we know now. 
And I feel one of the things that's great about the EA community is that you're endlessly curious. You will keep trying to get up to speed on the latest thinking and the latest evidence. And so if you are in policy, you will be constantly wanting to improve things, and constantly willing to reach out and go the extra mile and read the extra paper and find out what the evidence is saying, and that is so desperately needed in policy.</p> the-centre-for-effective-altruism YivqwF4zNCTRSfk73 2019-05-16T11:17:13.926Z Jade Leung: Why Companies Should be Leading on AI Governance <p><em>Are companies better-suited than governments to solve collective action problems around artificial intelligence? Do they have the right incentives to do so in a prosocial way? In this talk, Jade Leung argues that the answer to both questions is "yes".</em></p> <p><em>A transcript of Jade's talk is below, which CEA has lightly edited for clarity. You can also watch this talk on <a href="">YouTube</a>, or read its transcript on <a href=""></a>.</em></p> <h2>The Talk</h2> <p>In the year 2018, you can walk into a conference room and see something like this: There's a group of people milling around. They all look kind of frantic, a little bit lost, a little bit stressed. Over here, you've got some people talking about GDPR and data. Over there, you've got people with their heads in their hands being like, "What do we do about China?" Over there, you've got people throwing shade at Zuckerberg. And that's when you know you're in an AI governance conference.</p> <p>The thing with governance is that it's the kind of word that you throw around, and it feels kind of warm and fuzzy because it's the thing that will help us navigate through all of these kind of big and confusing questions. 
There's just a little bit of a problem with the word "governance," in that a lot of us don't really know what we mean when we say we want to be governing artificial intelligence.</p> <p>So what do we actually mean when we say "governance?" I asked Google. Google gave me a lot of aesthetically pleasing, symmetrical, meaningless management consulting infographics, which wasn't very helpful.</p> <p><img src="//" alt="jade slide 1"></p> <p>And then I asked it, "What is AI governance?" and then all the humans became bright blue glowing humans, which didn't help. And then the cats started to appear. This is actually what comes up when you search for AI governance. And that's when I just gave up, and I was like, "I'm done. I need a career change. This is not good."</p> <p><img src="//" alt="jade slide 2"></p> <p>So it seems like no one really knows what we mean when we say, "AI governance." So I'm going to spend a really quick minute laying some groundwork for what we mean, and then I'll move on into the main substantive argument, which was, "Who should actually be leading on this thing called governance?"</p> <p><img src="//" alt="jade slide 3"></p> <p>So governance, global governance, is a set of norms, processes, and institutions that channel the behavior of a set of actors towards solving a collective action problem at a global or transnational scale. And you normally want your governance regime to steer you towards a set of outcomes. When we talk about AI governance, our outcome is something like the robust, safe, and beneficial development and deployment of advanced artificial intelligence systems.</p> <p><img src="//" alt="jade slide 4"></p> <p>Now that outcome needs a lot of work, because we don't actually really know what that means either. We don't really know how safe is safe. We don't really know what benefits we're talking about, and how they should be distributed. 
Answering those questions and adding granularity to that outcome is going to take us a while.</p> <p>So you can also put in something like a placeholder governance outcome, which is the intermediate outcome that you want. So the process of getting to the point of answering these questions can include things like being able to avoid premature lock-in, so that we don't have a rigid governance regime that can't adapt to new information. It could also include things like ensuring that there are enough stakeholder voices around the table, so that you are getting all of the relevant opinions in. So those are examples of intermediate governance outcomes that your regime can lead you towards.</p> <p><img src="//" alt="jade slide 5"></p> <p>And then in governance you also have a set of functions. These are the things that you want your regime to do so that you get to the set of outcomes that you want. Common functions would be things like setting rules: what do we do? How do we operate in this governance regime? Setting context: creating common information and knowledge, establishing common benchmarks and measurements. You also have implementation, which is both issuing and soliciting commitments from actors to do certain things, and allocating resources so that people can actually do those things. And then finally, you've got enforcement and compliance, which is something like making sure that people are actually doing the thing that they said they would do.</p> <p>So these are examples of functions. 
And the governance regime is something like these norms, processes, and institutions that get you towards that outcome by performing some of these functions.</p> <p>So the critical question today is something like: how do we think about who should be taking the lead on doing this thing called AI governance?</p> <p>I have three propositions for you.</p> <p><img src="//" alt="jade slide 6"></p> <p>One: states are ill-equipped to lead in the formative stages of developing an AI governance regime. Two: private AI labs are better, if not best-placed, to lead in AI governance. And three: private AI labs can be, and already to some extent are, incentivized to do this AI governance thing in a prosocial way.</p> <p>I'll spend some time making a case for each one of these propositions.</p> <h3>States are Ill-Equipped</h3> <p>When we normally think about governance, you consider states as the main actors sitting around the table. You think about something like the UN: everyone sitting under a flag, and state heads doing this governance thing.</p> <p><img src="//" alt="jade slide 7"></p> <p>You normally think that for three different reasons. One is the conceptual argument: states are the only legitimate political authorities we have in this world, so they're the only ones who should be doing this governance thing. Two is the functional argument: states are the only ones who can pass legislation and design regulation, so if you're going to think about governance as regulation and legislation, then states have to be the ones performing that function. And three, you've got something like the incentives argument: states are set up to deliver on public goods that no one else is otherwise going to care about. So states are the only ones that have the explicit mandate and the explicit incentive structure to deliver on these collective action problems. 
Otherwise, none of this mess would get cleaned up.</p> <p>Now, all of those things are true. But there are trends, and certain characteristics of a technology governance problem, that mean states are increasingly undermined in their ability to do governance effectively, despite all of those things being true.</p> <p><img src="//" alt="jade slide 8"></p> <p>The first is that states are no longer the sole source of governance capacity. And this is a general statement that isn't specific to technology governance. You've got forces like globalization, for example, creating collective action problems at a scale which states have no mandate or control over. And so states are increasingly unable to do this governance thing effectively within the scope of the jurisdiction that they have.</p> <p>You also have non-state actors emerging on the scene; most notably, civil society and multinational corporations operate at a scale that supersedes states. And they are increasingly demonstrating that they have some authority, some control, and some capacity to exercise governance functions. Now, their authority doesn't come from being voted in. The authority of a company, for example, plausibly comes from something like its market power and its influence on public opinion. And you can argue about how legitimate that authority is, but it is exercised, and it does actually influence action. So states are no longer the sole source of where this governance stuff can come from.</p> <p>Specifically for technology problems, you have the problem that technology moves really fast, and states don't. States use regulatory and legislative frameworks that hold technology static as a concept. And technology is anything but static: it progresses rapidly and often discontinuously, and that means that your regulatory and legislative frameworks get out of date very quickly. 
And so if states are going to use those as their main mechanisms for governance, then they are very often using irrelevant mechanisms.</p> <p>The third is that emerging technologies specifically pose a challenge. Emerging technologies have huge error bars of uncertainty around the way that they're going to go. And to be able to effectively govern things that are uncertain, you need to understand the nature of that uncertainty. In the case of AI, for example, you need deep in-house expertise to understand the nature of these technology trajectories. And I don't know how to say this kindly, but governments are not the most technology-literate institutions around, which means that they don't have the ability to grapple with that uncertainty in a nuanced way. So you see one of two things: either preemptive clampdown out of fear, or too little too late.</p> <p>So states are no longer the sole source of governance capacity. And for technology problems that move fast and are uncertain, states are particularly ill-equipped.</p> <h3>Private Labs are Better Placed</h3> <p>Which leads me to proposition two: instead of states, private AI labs are far better-placed, if not the best-placed, actors to do this governance thing, or at least to form the initial stages of a governance regime.</p> <p><img src="//" alt="jade slide 9"></p> <p>Now, this proposition is premised on an understanding that private AI labs are the ones at the forefront of developing this technology. Major AI breakthroughs have come from private companies, privately funded nonprofits, or academic AI labs with very strong industrial links.</p> <p>Why does that make them well-equipped to do this governance thing? Very simply, it means that they don't face the same problems that states do. They don't face the pacing problem. 
They have in-house expertise and access to information in real time, which means that they can garner unique insights very quickly about the way that this technology is going to go.</p> <p><img src="//" alt="jade slide 10"></p> <p>So of all the actors, they are the most likely to be able, at least slightly preemptively, to see the trajectories that are most plausible, and to design governance mechanisms that are nuanced and adaptive to those trajectories. No other actor in this space has the ability to do that except those at the forefront of leading this technology development.</p> <p><img src="//" alt="jade slide 11"></p> <p>Secondly, they also don't face the scale mismatch problem, where you've got a massive global collective action problem and states which are very nationally scaled. Multinational corporations, from the get-go, are forced to be designed globally, because they have global supply chains, global talent pools, global markets. The technology they are developing proliferates globally. And so, necessarily, they have to operate at the scale of global markets, and they have experience, and devote resources, to navigating multiple scales in order to make their operations work. So you see a lot of companies operating at local, national, regional, and transnational levels, and they navigate those scales somewhat effortlessly, certainly compared to a lot of other actors in this space. And so, for that reason, they don't face the same scale mismatch problem that a lot of states have.</p> <p>So you've got private companies that have both the expertise and the scale to be able to do this governance thing.</p> <p>Now, you're probably sitting there thinking, "This chick has drunk some private sector Kool-Aid if she thinks that the private sector, just because it has the capacity, is going to do this governance thing. 
Both in terms of wanting to do it, but also in terms of being able to do it well, in a way that we would actually want to see it pan out."</p> <h3>Private Labs are Incentivized to Lead</h3> <p>Which leads me to proposition three: private labs already are, and can be more, incentivized to lead on AI governance in a way that is prosocial. And when I say "prosocial" I mean good: the way that we want it to go, generally, as an altruistic community.</p> <p>Now, I'm not going to stand up here and make a case for why companies are actually a lot kinder than you think they are. I don't think that. I think companies are what companies are: they're structured to be incentivized by the bottom line, and they're structured to care about profit.</p> <p><img src="//" alt="jade slide 12"></p> <p>All that you need to believe in order for my third proposition to fly is that companies optimize for their bottom line. And what I'm going to claim is that that can be synonymous with them driving towards prosocial outcomes.</p> <p>Why do I think that? Firstly, it's quite evidently in a firm's self-interest to lead on shaping the governance regime that is going to govern the way that its products and services are developed and deployed, because it costs a lot if it doesn't.</p> <p><img src="//" alt="jade slide 13"></p> <p>How does that cost them? Poor regulation (and when I say "poor", I mean costly for firms to engage with) has imposed a lot of costs on firms across a lot of technology domains. The history of technology policy showcases a lot of examples where firms haven't engaged preemptively with regulation and governance, and so have ended up facing a lot of costs. Take the U.S. (and I'll point to the U.S. even though it's not the worst example of this): there's a lot of poor regulation in place, particularly when it comes to things like biotechnology. 
In biotechnology, you've got blanket bans on certain types of products, and you also have things like export controls, which have caused a lot of lost profit for these firms. You also have a lot of examples of litigation, across a number of different technology domains, where firms have had to battle with regulation that has been put in place.</p> <p>Now, it wasn't in the firms' interests to incur those costs. In hindsight, the most cost-effective path would have been for these firms to engage preemptively, helping to shape the regulation and governance as it formed.</p> <p>But just because it's costly doesn't mean that it's going to go in a good way. Why is firms' preemptive engagement likely to lead to prosocial regulation? Two reasons. One: the rationale for a firm would be something like, "We should be doing the thing that government will want us to do, so that they don't then go in and put in regulation that is not good for us." And if you assume that government has that incentive structure to deliver on public goods, then firms, at the very least, will converge on the idea that they should be mitigating their externalities and delivering on prosocial outcomes, in the same way that state regulation probably would.</p> <p>The more salient reason, in the case of AI, is that public opinion actually plays a fairly large role in dictating what firms treat as prosocial. You've seen a lot of examples of this in recent months, where Google, Amazon, and Microsoft have faced backlash from the public and from employees when they've developed and deployed AI technologies that grate against public values. And you've seen these firms respond to that backlash as well. It's concrete because it actually affects their bottom line: they lose consumers, users, and employees. And that, again, ties back to their incentive structure. 
And so, if we can shore up the power of public opinion so that it translates into incentive structures, then there are reasons to believe that firms will engage preemptively in shaping things more in line with what public opinion on these issues would be.</p> <p>The second reason is that firms already do a lot of governance stuff. We just don't really see it, or we don't really think about it as governance. So I'm not making a wacky case here: business as usual already involves firms doing some governance activity.</p> <p>I'll give you a couple of examples, because when we think about governance, we maybe home in on the idea that governance means regulation. And there are a lot of other forms of governance that are private sector-led, which perform governance functions but aren't really called "governance" in the traditional sense.</p> <p>So here are some examples. When you think about the governance function of implementing commitments, there are two ways of thinking about the private sector leading on governance. One is establishing practices along the technology supply chain that govern for outcomes like safety.</p> <p>Again, in biotechnology, you've got an example of this: DNA synthesis companies voluntarily self-initiated schemes for screening customer orders, checking whether customers were ordering for malicious use purposes. The state eventually caught up: a couple of years after most DNA synthesis companies in the U.S. had been doing this, it became U.S. state policy. But that was a private sector-led initiative.</p> <p>Product standards are another really good example, where private firms have consistently led at the start in figuring out what a good product looks like when it's on the market.</p> <p>The first wave of cryptographic products is a really good example of this. 
You had firms like IBM and, in particular, a firm called RSA Security Inc. do a lot of early-stage R&amp;D to ensure that strong encryption protocols made it onto the market and took up a fair amount of global market share. In large part, those ended up becoming the American standards for cryptographic products, which then scaled across global markets.</p> <p>So those are two of many examples of ways in which private firms can lead on the implementation of governance mechanisms.</p> <p><img src="//" alt="jade slide 14"></p> <p>The second really salient function that they play is in compliance: making sure that companies are doing what they said they would do. There are a lot of examples of this in the climate change space in particular, where firms have either sponsored or directly started initiatives to disclose what they're doing to ensure they are in line with commitments made at the international scale. Whether that's things like divestment, or disclosing climate risk, or carbon footprints, or various rating and standards agencies, there is a long list of ways in which the private sector is delivering on this compliance function voluntarily, without necessarily needing regulation or legislation.</p> <p>So firms already do this governance thing. All we have to think about is how they can lead on it and shape it more preemptively.</p> <p>And the third reason to think that firms could do this voluntarily is that, at the end of the day, particularly in transformative artificial intelligence scenarios, firms rely on the world existing. They rely on markets functioning. They rely on stable sociopolitical systems. And if those don't end up being what we get, because we didn't put in robust governance mechanisms, then firms have all the more reason to want us to avoid those futures. 
And so, for an aspirationally long-term-thinking firm, this is the kind of incentive that would lead it to want to lead preemptively on some of these things.</p> <p>So these are all reasons to be hopeful, or at least to think, that firms can be incentivized to lead on AI governance.</p> <p><img src="//" alt="jade slide 6"></p> <p>So here are the three propositions again. You've got states, who are ill-equipped to lead on AI governance. You've got private AI labs, who have the capacity to lead. And finally, you've got reasons to believe that private AI labs can lead in a way that is prosocial.</p> <p>Now, am I saying that private actors are all that is necessary and sufficient? It wouldn't be an academic talk if I didn't give you a caveat, and the caveat is that I'm not saying that. I'm only saying that they need to lead. There are very many reasons why the private sector is not sufficient, and ways its incentive structures can diverge from prosocial outcomes.</p> <p>More than that, there are some governance functions which you actually need non-private-sector actors to play. Firms can't pass legislation, and you often need a third-party civil society organization to do things like monitoring compliance well. And the list goes on of things the private sector can't do on its own.</p> <p>So they are insufficient, but they don't need to be sufficient. 
The clarion call here is for the private sector to recognize that it is in a position to lead by demonstrating what governing artificial intelligence can look like: governance that tracks technological progress in a nuanced, adaptive, flexible way; that happens at a global scale and scales across jurisdictions easily; and that avoids costly conflict between states and firms, which tends to precede the imposition of a lot of costly, ineffective governance mechanisms.</p> <p>So firms and private AI labs can demonstrate how you can lead on artificial intelligence governance in a way that achieves these kinds of outcomes. The argument is that others will follow. And what we can look forward to is shaping the formative stages of an AI governance regime that is private sector-led, but publicly engaged and publicly accountable.</p> <p><img src="//" alt="jade slide 15"></p> <p>Thank you.</p> <h2>Questions</h2> <p><em>Question</em>: Last time you spoke at EA Global, which was just a few months ago, it was just after the Google engineers' open letter came out saying, "We don't want to sell AI to the government", or something along those lines. Since then, Google has said they won't do it; Microsoft has said they will. It's a little weird that rank-and-file engineers are setting so much of this policy, and also that two of the Big Five tech companies have gone such different ways so quickly. How do you think about that?</p> <p><em>Jade</em>: Yeah. It's so unclear to me how optimistic to be about these very few data points that we have. When we discussed it last time, I was pretty skeptical about how effective research communities and technical researchers within companies can be at affecting company strategy.</p> <p>I think it's not surprising that different companies are making different decisions with respect to how to engage with the government. 
You've historically seen this a lot: some technology companies are slightly more sensitive to the way the public thinks about them, and so they make certain decisions; other companies go entirely under the radar, engage with things like defense and security contracts all the time as part of their business model, and operate in the same sector.</p> <p>So I don't think the private sector will operate in one uniform fashion with respect to how it engages with some of these more difficult questions around safety and ethics. The case here is that you have some companies that plausibly care a lot about this stuff, and some companies that really just don't. And they can get away with it, is the point.</p> <p>So assuming that there are going to be some leading companies, and some that just kind of ride the wave if it becomes necessary, is probably the way to think about it, or how I would interpret some of these events.</p> <p><em>Question</em>: That relates directly, I think, to a question about the role of small companies. Facebook, obviously, is under a microscope, with a pretty bright spotlight on it all the time, and they've made plenty of missteps. But they generally face a lot of the incentives that you're talking about. In contrast, Cambridge Analytica just folded when their activity came to light. How do you think about small companies in this framework?</p> <p><em>Jade</em>: Yeah. That's a really, really good point.</p> <p>I think small companies are in a difficult but plausibly really influential position. As you said, they don't have the same lobbying power, basically. 
And if you characterize a firm's power as coming from its size, its influence on the public, and its influence on the government, then small companies, by definition, just have far less of that power.</p> <p>But there's a dynamic where a subset of really promising startups or up-and-coming small companies can form some kind of critical mass that influences larger actors, who, in a functional, transactional sense, would be the ones acquiring them. For example, DeepMind had a pretty significant influence on the way that safety was perceived within Google as a result of being a very lucrative acquisition opportunity, on a very cynical framing.</p> <p>And so I think there are ways in which you can get really important smaller companies using their bargaining chips with larger firms to exercise their influence. I would be far more skeptical of small companies being influential on government and policy makers. Historically, it's always been large industry alliances or big companies that get summoned to congressional hearings and get the kind of voice that they want. But certainly, within the remit of the private sector, I think small or at least medium-sized companies can be pretty important, particularly in verticals where you don't have such dominant actors.</p> <p><em>Question</em>: There have been a lot of well-publicized cases of biases creeping into algorithmic systems, creating essentially racist or otherwise discriminatory algorithms based on data sets that nobody fully understood as they were fed into the system. That problem seems far from solved, far from corrected. Given that, how much confidence should we have that these companies are going to get these even more challenging macro questions right?</p> <p><em>Jade</em>: Yeah. 
Whoever you are in the audience, I'm not sure if you meant that these questions are not naturally incentivized to be solved within firms. Hence, why can we hope that they're going to get solved at the macro level? I'm going to assume that's what the question was.</p> <p>Yeah, that's a very good observation: unless you have the right configuration of pressure points on a company, there are some problems which maybe haven't had that configuration and so aren't currently being solved. So put aside the fact that maybe that's a technically challenging problem to solve, and that you may not have the data sets available, etc. If you assume that they have the capacity to solve that problem internally but they're not solving it, why is that the case? And then why does that mean that they would solve bigger problems?</p> <p>The model of private sector-led governance requires, as I alluded to, public-facing pressure points that the company faces. And with the right exertion of those pressure points, and with enough of them translating into effects on the bottom line, that would hopefully incentivize problems like this one, and larger ones, to be solved.</p> <p>In this particular case, in terms of why algorithmic bias in particular hasn't faced enough pressure points, I'm not certain what the answer is. Although, I think you do see a fair amount more civil society action and whatnot popping up around that, and a lot more explicit critique about it.</p> <p>One comment I'll make is that it's pretty hard to define and measure when it's gone wrong. So there's a lot of debate in the academic community, for example, and the ProPublica debate comes to mind too, where you've got debates literally about what it means for this thing to have been done fairly or not.
And so that points to the importance of a thing like governance, where you've got to have common context, common knowledge, and common information simply about your definitions, and your benchmarks and your metrics for what it means for a thing to be prosocial, in order for you to then converge on making sure that these pressure points are exercised well.</p> <p>And so I think a lot of the work ahead of us is going to be something like getting more granularity around what prosocial behavior looks like, for firms to take action on. And then if you know basically what you're aiming for, you can start to actually converge more on the kind of pressure points that you want to exercise.</p> <p><em>Question</em>: I think that connects very directly to another question from somebody who said, basically, that they agree with everything you said, but still have a very deep concern that AI labs are not democratic institutions, they're not representative institutions. And so will their sense of what is right and wrong match the broader public's or society's?</p> <p><em>Jade</em>: I don't know, people. I don't know. It's a hard one.</p> <p>There are different ways of answering this question. One is that it's consistently a trade-off game in terms of figuring out how governance is going to pan out or get started in the right way. And so one version of how you can interpret my argument is something like: look, companies aren't democratic and you can't vote for the decisions that they make, but there are many other reasons why they are better. And so if you were to trade off the set of characteristics that you would want in an ideal leading governance institution, then you could plausibly choose, as I have made the case for, to trade off some of the democratic elements of what you would want in an institution, because companies are just going to move faster and design better mechanisms.
That's one way of answering that question.</p> <p>In terms of ways of aligning some of these companies or AI labs, aside from the external pressure point argument: if I were critiquing myself on that argument, there are many ways in which pressure points sometimes don't work, and it kind of relies on companies caring enough, and on those pressure points actually concretizing into bottom-line effects, for that whole argument to work.</p> <p>But particularly in the case of AI, there are a handful of AI labs that I think are very, very important, and then there are many, many more companies that I think are not critically important. And so the fact that you can identify a small group of AI labs makes it an easier task to identify, almost at the individual founder level, where some of these common views about what good decisions are can be lobbied.</p> <p>And I think it's also the case that with a number of AI labs, we're not entirely sure how founders or certain decision makers think. But there are a couple who have been very public, have gone on record, and have been pretty consistent, actually, in articulating the way that they think about some of these issues. And I think there is some hope that at least some of the most important labs are thinking in quite aligned ways.</p> <p>That doesn't quite answer the question of how you design some way of recourse if they don't go the way that you want. And that's a problem that I haven't figured out how to solve. If you've got a solution, please come tell me.</p> <p>Yeah, I think as a starting point, there's a small set of actors that you need to be able to pin down and get to articulate what their mindset is around that.
And also that there is an identifiable set of people who really need to buy in, particularly to get transformative AI scenarios right.</p> the-centre-for-effective-altruism fniRhiPYw8b6FETsn 2019-05-15T23:37:37.066Z What role might insects play as part of the future of food? <p><em>What role, if any, should insects play in the future of agriculture? In this talk from EA Global 2018: London, Nicole Rawling of the Good Food Institute, Nick Rousseau of the Woven Network, and Kyle Fish of Tufts University offer their varying perspectives.</em></p> <p><em>A transcript of this talk is below, which CEA has lightly edited for clarity. You can also read this transcript on <a href=""></a>, or watch it on <a href="">YouTube</a>.</em></p> <h2>Nicole's Talk</h2> <p>I work with the Good Food Institute, which is a nonprofit that works internationally to remove as many animal-based products from the market as possible and replace them with plant-based alternatives. We have operations in the United States, India, China, Brazil, and Israel, and we're looking to hire in Europe, so if you like what we do, come talk to me afterwards. We're all here for the same reason, right? We want to reduce animal agriculture's impact on the world. I think everyone at EA Global understands the problems behind animal agriculture, so I'll go through them quickly.</p> <p><img src="//" alt="1400 Nicole Rawling v2"></p> <p>First, animal welfare. Over 56 billion farmed animals are killed every year for human consumption, and that doesn't even include aquaculture, where trillions of animals are killed. The other issue is global poverty. Animal agriculture is vastly inefficient at producing food for human consumption. You have to feed plants to animals to get the calories for humans. One example is that it takes nine plant-based calories to make one calorie of chicken. That is a huge amount of food waste, and when we're trying to feed the world population, what we care about is calories.
Why feed them to a living, breathing animal that has to grow and live before we eat it? Let's just take those calories and give them directly to humans.</p> <p>The same with human health. I'm sure most of you know that antibiotics are fed to farmed animals, either to keep them from getting sick or to keep them growing. Over 80% of antibiotics are fed to farmed animals. That's causing a massive health crisis: we are going to have superbugs that affect humans, and we're not going to be able to cure them, because antibiotics won't work. And finally, environmental degradation. The UN has said that animal agriculture contributes to some of the world's most pressing environmental issues, including deforestation, loss of biodiversity, and water and air pollution.</p> <p>So at the Good Food Institute, we have a theory of change. All of us want to change the food system, but we aren't doing it by talking to consumers. We're doing it by changing the marketplace. We believe that if people have access to products that taste the same as or better than animal products, are around the same price, and are convenient, people will buy them, because people really want good-tasting, convenient food.</p> <p><img src="//" alt="1400 Nicole Rawling v2 (1)"></p> <p>So, we're working with governments, academic institutions, entrepreneurs, existing companies, and scientists to try and develop new products. As you might know, the current products on the market tend to be soy- or wheat-based. That's really the old technology, so now companies are looking into things like pea protein or mung bean protein. Scientists are examining a lot of different ways that we can use plants to mimic these animal alternatives.</p> <p><img src="//" alt="1400 Nicole Rawling v2 (2)"></p> <p>We haven't actually dealt much with insects before, but I can tell you what we're currently thinking.
We think that plant-based and clean meat are more direct, rapid solutions to the global food problems caused by animal agriculture than insect protein. In order to make the most change as soon as possible, we think we should go with plant-based and clean meat alternatives. And honestly, we worry about the insects as well. I mean, we're talking about trillions of living beings. We don't really know if they're sentient or not, but imagine if they are. We would be causing massive, massive amounts of suffering that go far beyond what exists in the current animal agriculture system.</p> <p><img src="//" alt="1400 Nicole Rawling v2 (3)"></p> <p>So, all of us are working towards this goal: how can we effectively reduce meat consumption? These are all plant-based products, and I've had them all, and they're delicious. And omnivores like them. Our entire goal is to produce products that omnivores would like to eat. So yes, I am vegan. I don't care about vegans. I don't care about vegetarians. They're choosing these products anyway. I mean, most of you probably have been vegetarian and vegan for a very long time. You've eaten, excuse my language, really crappy products, right? Because you're going to eat them, you're not going to eat the meat. But it's going to take products that really mimic traditional animal products for us to get omnivores to switch, and right now, those products are on the market.</p> <p>I had the Moving Mountains Burger for the first time here. Has anyone had that yet? No? Oh my goodness, you have to try it. It's made from mushrooms. I love mushrooms. This burger has a real mushroom taste, but it's a burger. It's available in over 500 locations in the UK, and absolutely delicious. So, I went out with an omnivore for dinner to have it, and he was so skeptical, and he really didn't want to hear about what I did, and he was talking about his work. And I was like, "Oh, try a little bit."
And really, he was shocked. Like, he really was shocked, because they don't expect that we can mimic the taste and create products that they really enjoy.</p> <p><img src="//" alt="1400 Nicole Rawling v2 (4)"></p> <p>So, we strongly believe there needs to be direct substitution. We don't think that people are going to eat insects in place of a burger, or a sausage, or a piece of fish. People are going to continue to eat those products, right? If they are going to eat insect products, those tend to be snacks, or a novelty food that's kind of fun: "Oh, let's go eat some crickets." It's not, "Okay, let's go out for a burger. You know, oh, I think I'll have a scoop of worms instead." Right now, there isn't that direct substitution, and we've seen that in the plant-based market as well. If you cannot replicate the animal products that people are used to eating, then they aren't going to buy them, so we strongly believe there needs to be direct substitution.</p> <p><img src="//" alt="1400 Nicole Rawling v2 (5)"></p> <p>So right now, with insects, it does tend to be a lot of indirect substitution: snacks, or protein bars, or protein powders. Now, those might be great for the market. There might be a huge market for them, but for us, it's not solving the problem that we're looking to solve. We want to create substitutes for traditional animal products. Nick might know more about this: I have heard that there was a cricket burger that just didn't seem to be very tasty, and people weren't big fans of it. There is a bolognese sauce that Nick brought that's really tasty, but our position is still that all of us are effective altruists. We want to spend our money in the most effective way. We want to spend our time in the most effective way. Plant-based products and clean meat products are already proven concepts.
Clean meat isn't on the market yet, but plant-based products are on the market and successful, and for clean meat, we've proven the concept, and it will be on the market soon. So why put money and resources into something that hasn't been proven?</p> <p><img src="//" alt="1400 Nicole Rawling v2 (6)"></p> <p>So again, all of us are trying to reduce the climate impact of animal agriculture. How do we do that? So, we all know about food waste, right? There are massive amounts of food waste. Right now, insect farming can actually alleviate some of that issue. Some of the insect companies will take our food waste and feed it to insects. But there are a couple of issues once you start to scale. Again, our goal is to reduce meat consumption. When we start to reduce meat consumption, we also start to reduce agricultural waste. That agricultural waste is a lot of what's going into feeding insects right now, so there will be less of it, which means there'll be less feed for insects. So part of the issue is that we're reducing the supply of waste anyway, and a lot of the insects that we're using for insect protein can't always survive on waste from agriculture; they really do need more consistent sources of food than whatever we're throwing away.</p> <p><img src="//" alt="1400 Nicole Rawling v2 (7)"></p> <p>Then the question is how we feed the world population. Animal agriculture is inefficient, so we should be feeding plants directly to humans. We have a couple of concerns with insect farming as an alternative. Number one, what happens if the insects get out? I'm from the United States. We just had massive hurricanes in the South, and pigs escaped from agricultural facilities and were left to roam wild, because the hurricanes took down those facilities. Now, pigs are big. You can catch pigs. They're also on the ground. You can round them up.
What happens if there is some sort of natural disaster, which will continue to occur with climate change, and the insects get out? That's a real concern, not just because these insects would be out and continuing to breed, potentially in areas where they're not native; they could also destroy nearby agricultural systems.</p> <p><img src="//" alt="1400 Nicole Rawling v2 (8)"></p> <p>Our other worry is changes in animals from selective breeding. These insects are going to be bred very quickly. They obviously have short lives. This is what's happened to chickens without genetic modification. I don't know if you can see very well, but going to market in 1957, chickens were 905 grams. In 2005, they were 4,202 grams, and producers even cut the amount of time it took for them to grow to that size. This wasn't through genetic modification. This was just through selective breeding. These are very different animals. Now, I'm not saying that this necessarily will occur with insects, but when you understand the agriculture system, people want to make the most profit. That's the way all businesses work. It wouldn't surprise me if this also happened in the insect world.</p> <p>Then how can we create a more humane food system? I think this is a really important question, especially for all of us who think about these philosophical issues. Like, do insects suffer? We don't actually have a lot of information on that right now, but is it really safe to assume that they don't? And considering the impact, when we're talking about trillions of animals, do we want to make the wrong decision here? If they can suffer, we really are causing massive, massive amounts of suffering. So, I don't know if any of you know Lewis Bollard from the Open Philanthropy Project. He's a huge supporter of a lot of EA causes.
So he said, "I'd be hard-pressed to assign less than a 10% probability to insects being conscious, and even at that level of 10%, we really should be concerned."</p> <p><img src="//" alt="1400 Nicole Rawling v2 (9)"></p> <p>Thank you so much.</p> <h2>Nick's Talk</h2> <p>I've got some products which you can come have a look at afterwards if you're interested, and some details about my network. I won't say too much about myself, because time is limited, but I guess the main thing to say is that I personally have a commitment to and interest in sustainable food systems. I formed an organic food-growing cooperative, and that's what first got me interested in the excessive amounts of food material that go to waste and could be better used. I then joined a thing in Sheffield called The Junk Food Project, which again was about reducing food waste. And that connection between the need to feed the community globally and the amount of waste is what brought me to insects as an area of interest.</p> <p>So I formed the Woven Network. It's there to learn about what opportunities there are, and to stimulate and explore them. My argument isn't that insects are going to feed the world, but that they can have a role to play. They can bring some extra dimension which may not be possible through a purely plant-based approach. I imagine that many of you will have seen headlines and news articles about insects, typically with someone about to put a cricket in their mouth.</p> <p><img src="//" alt="1400 Nick Rousseau"></p> <p>It's very sensationalized, and the media, you know, has a tendency to kind of dramatize everything.
But it's a growing and changing landscape, and I want to give you a bit of a sense of where things stand at the moment, so that you can make your own judgment about your engagement with it now and in the future.</p> <p><img src="//" alt="1400 Nick Rousseau (1)"></p> <p>It started in 2013, with a report from the UN's Food and Agriculture Organization that drew on a lot of research looking at our global food challenges, particularly around access to protein (many of the points that Nicole made), and suggested that insects could have a role, significantly influenced by the fact that insects have been consumed through the centuries in many different cultures around the world, though obviously they're not currently part of Western, developed-country diets.</p> <p>Again, I think we'd all agree with Nicole's point about the unsustainability of the current food system, and that the vegan lifestyle and diet is recognized as the most sustainable option, because it cuts down on carbon emissions, land use, water use, and a lot of the other negative impacts of livestock farming.</p> <p><img src="//" alt="1400 Nick Rousseau (2)"></p> <p>This is my family. This is my wife. There is no way on God's Earth that she's going to become a vegan, I'm afraid. So my argument is that you'll need to have a number of alternatives, and I think many of them will be plant-based, and I'm very interested in that, and I think that's a good thing. But food choices are shaped by a whole range of things. I had the opportunity to go to California recently, and there's an exhibition in the San Diego Museum about the way in which people's perception of animals and creatures changes as they become wild, domesticated, pets, or food on their plates. It's a complex area, but the point I'm making is that choices are partly about availability and convenience, what you can get in your stores, and partly about price. Certainly that's a big issue.
Also about flavor, the experience of eating it, what you like. Religious and social issues can also have a bearing on your food choices.</p> <p>Issues of right and wrong, I think, are increasingly on people's minds. I find it a real struggle when I go into a supermarket or a shop now to determine, "Am I going for food miles, organic, fair trade?" There are so many different things which are seen as right and wrong. It's a really complex area, and then you've got the science, and being presented with hard facts about the nutritional components that go into food. And I think there's an interesting thing, again stepping back a bit, about people's reaction to science-based messages around food, because again, coming back to this, food is eaten in a social context. I would love to go out to more vegan restaurants and things, but it's very difficult when your partner doesn't share that view, so we're very keen to see more restaurants that offer a range of different products, and I think that need for variety is critical.</p> <p><img src="//" alt="1400 Nick Rousseau (3)"></p> <p>I want to make a further point that builds, again, on Nicole's point about meat consumption increasing. What you can see here is the massive increase in meat consumption. This is most pronounced in countries like China and India, and it isn't because they've discovered that they really like the flavor of meat. It's because they want to move to a Western lifestyle, and they associate eating meat and having access to meat with being affluent, being wealthy, being successful. It's got a lot of connotations in people's minds. Sadly, from my point of view, and to some extent, I think, from those countries' point of view, they are moving away from their more traditional diets, which often include harvested insects, and which have much better nutritional profiles than going to McDonald's and having a burger.
But that is the shift that we're seeing.</p> <p>So I think there's a challenge back to the plant-based product developers: how do you create the kind of social association that a very expensive steak has? Actually making things expensive is sometimes important, which is an interesting one. We've been quite interested in how sushi has come into Western diets, having been seen as a sort of weird Japanese thing involving raw fish, and I think it's because it seems cool. It's associated with a modern lifestyle nowadays.</p> <p><img src="//" alt="1400 Nick Rousseau (4)"></p> <p>So a bit about insects then. The focus of this is around humans eating insects in food products, but insects and food have a much more complex interaction. As I mentioned, about 1,800 species of insects are edible, and across the world, people have harvested insects and secured a lot of nutritional value from them, just by harvesting them in the wild. But equally, if you're trying to plant and grow crops, insects are a real pest, so the killing of insects is a big part of plant food production, and that's a dynamic which is quite challenging. And again, the associations that insects have for people, with poverty and with being pests, are a challenge for us.</p> <p>So in Thailand, we also have insect farming, and I'll come on to more about that. That's now developing and emerging as an interaction with insects, and that's for human consumption. Then you've got the scenario where insects are bred and farmed for feeding to livestock, and this is a way of trying to reduce livestock's carbon footprint and increase their sustainability. So the message is that it's a complex web, and each of these could be a talk in its own right. I haven't got time for that.</p> <p>So I touched on the fact that insects are killed in the growing of plant products.
Soy is typically a product that is grown a lot for livestock feed, but it's also consumed by vegans, and yet huge numbers of insects and other creatures die in its farming. I guess I want to make the point that you're not going to get away from killing insects somewhere along the line, and they may well be less sentient than other animals, but so are mice, and so are other creatures that suffer through farming. I think to some extent, the human population is the problem. I don't have a solution for that, I'm afraid.</p> <p>So, a little bit about where we are with insect products. As I said, people traditionally have just eaten them in their natural form, cooking them up, often frying them. We've had this gimmick coming through big time, and it's still very much part of the market, part of the most successful businesses, you know: "Do you dare to eat this insect? Do you dare to put it in your mouth?" Lollipops with an insect in them, things like that. There's certainly good money to be made from that. I don't find it at all helpful, for a range of reasons. Sadly, on the GCSE curriculum now, you learn about insects as a potential part of the future of food, and schools often say, "Okay, how can we help our kids to experience this?" So they write to Crunchy Critters, and they get a box of crazy things to try and pop in their mouths, which reinforces the view that insects are just a weird novelty.</p> <p><img src="//" alt="1400 Nick Rousseau (5)"></p> <p>This is a chef who has produced insect-based burgers and other very delicious products, and they're developing those as we speak. And I think that whole area of insect dishes in restaurants, certainly it's very big in California and elsewhere, so I think there's a growing interest in how you create real dishes which contain insects and are demanded by customers. I've brought a range of products that are produced by our members, and they take many different forms.
A lot of them, again, I wouldn't disagree, are sort of snacks. They're gimmicks. They're part of that sort of protein bar lifestyle. They're not going to stop people eating meat. And these guys are producing the material that goes into One Hop bolognese sauce, and they discovered a way of taking insect powder and producing something which is more of a paste, which is much more versatile, although it's an insect product. So I guess part of my argument, again, is that we haven't seen the end of what insects can be like. What's going on?</p> <p><img src="//" alt="1400 Nick Rousseau (6)"></p> <p>Insects have a range of nutritional components which can be harder to replicate in plants, and I think there are particular things, such as the omega-3s and the amino acids, which, because they're not plant-based, are especially suitable and useful for humans. I'm not a biochemist. I'm not going to claim that plant products can't include these as well, but I think they have a role in that debate about how you offer people the nutrition that they need. And I should say, of course, I did mention there are 1,800 different edible insects, so it's oversimplified to say we're just talking about crickets, you know? There's a whole bunch of other insects out there. And also, insects have many other components which can be quite valuable in terms of their market value. The compounds in the outer casing of insects can have a lot more medical value than their protein content.</p> <p><img src="//" alt="1400 Nick Rousseau (7)"></p> <p>The reference has already been made to insects as waste converters, and this is what first got me onto this. And I think this is still very much an untapped area.
Currently, regulation, certainly in the European Union, is very restrictive about what you can feed insects if you're then going to use those insects to create livestock feed, because of the risk-averseness of European regulators, and I understand that. But I would argue that there's a lot of value in understanding this better, and if we do have huge food waste mountains, and insects can convert those into something that's valuable, that could be good. I wouldn't necessarily advocate what's going on in Durban, where I think 40,000 homes are converting their human fecal waste into Black Soldier Fly that is fed to chickens, funded by the Gates Foundation. I think that illustrates that you can go quite a long way down that road. The challenge, of course, is that when you then try to sell that to consumers, it's a difficult sell, you know? Here's a tasty burger that was fed on something which you don't really want to think about. So not easy, some of this stuff.</p> <p>Another thing that I think has been particularly exciting: mention was made earlier of antimicrobial resistance as a real problem facing humanity, facing our world collectively. The use of antibiotics, which are essentially artificial ways of stimulating a creature's resistance to these diseases, could potentially be replaced by AMPs, antimicrobial peptides. Again, I'm not a biochemist, but studies have found that if you stimulate insects in the correct way, they create AMPs, which, when fed to chickens, make them resistant to campylobacter, E. coli, and other things. So they're using a system that exists within nature to build resistance to disease rather than stimulating it artificially.
So that, I think, could have a role in the future.</p> <p><img src="//" alt="1400 Nick Rousseau (8)"></p> <p>And finally, in Southern Africa, the mopane caterpillar is being harvested to extinction in some areas, so actually farming it, and encouraging people to farm it, is a way of creating economic opportunity and then putting the creatures back into the habitat. So trying to protect natural populations could be another benefit.</p> <p>Insect farming is a changing area. Again, as Nicole's pointed out, this is going to evolve. It's going to be increasingly intensified. One of the big challenges with insects is that they are rather more expensive, because it's quite a manual process to rear them, so there's pressure on cost, and big investment is going into this now, which is going to try to drive the price down. So I absolutely understand the pressures that are on the food system, and they will apply here as much as anywhere else.</p> <p><img src="//" alt="1400 Nick Rousseau (9)"></p> <p>Here are some challenging areas. I think we don't know enough about the suffering question. The Dutch have introduced legislation for insect farming, because it's actually quite a big business, quite a big sector there, based around the five freedoms. So they're seeking to create an understanding and some standards around insect farming which would recognize the need to maintain the welfare of insects. Then there's resource use. Typically, the argument against meat production is that it uses more land, more water, and more of other resources. What the latest studies of insects suggest, and this also applies to greenhouse gas emissions and, yes, consumer acceptance, is that it kind of depends on what you feed the insects.
And again, this comes back to the point that if we could feed them on waste, then, you know, it's a win-win.</p> <p>If we're feeding them on cabbages that have been grown in the fields, then again, we come back to the point that it's sort of a wasteful thing to be doing. So I think there are opportunities to make better use of insects. It requires more research to go forward, but if we believe that there are benefits in having insects as part of that mix, then I think we need to be putting more research into how we can optimize that resource use. And again, as has been referred to, if you feed insects different things, you get different nutritional value coming out the other end, so there's a lot of science going into that as well. Greenhouse gas emissions, again, are much lower than the methane you get from cattle, but again, it depends what you feed the insects on. Consumer acceptance is clearly a challenge. Is it going to be a meat replacement? I'm not sure it is in the long run, but who knows? And we have these regulatory challenges around food safety risks, which are, again, a different kind of thing than in the plant-based area. And it's more expensive.</p> <p>So that's me. It's an interesting area. I hope this has been useful in terms of your understanding a bit more about the pros and cons of it. I still think it's something that's worth exploring, and I think the next speaker is going to emphasize that. Thank you.</p> <h2>Kyle's Talk</h2> <p>My name is Kyle Fish, and currently, I'm working as a researcher at Tufts University, focusing on food system innovation.
So far, we've heard some arguments for and against the use of whole insect farming as a tool for pursuing food sustainability, but now I want to broaden the conversation a little bit and consider another type of insect farming, specifically insect cell farming, the basic idea here being that we might be able to produce massive amounts of insect cells without growing whole animals, and then use those cells themselves to produce different food products. To understand what's going on here, it's important to know a little bit about the general framework of cellular agriculture.</p> <p><img src="//" alt="1400 Kyle Fish v2"></p> <p>So you can see in this diagram, we're starting with a cow, but what we're doing is taking a small sample of cells from the cow, primarily muscle and fat cells, and then putting them into a big tank, known as a bioreactor, and getting them to multiply. Then, once we've grown lots and lots of these cells, we can collect them out of the bioreactor and form them into 3D tissues, to produce meat products that are identical to the ones that are traditionally obtained from slaughtered animals. The main motivation of this is obviously to reduce the demand for factory farms and reduce the demand for other problematic meat production methods. Since most meat products are made up primarily of muscle and fat cells, doing this allows us to make only the parts of the animal that people are actually interested in eating, without the suffering, and environmental issues, and other concerns associated with whole animal farming.</p> <p>Theoretically, this process could be used to produce any type of meat that's normally consumed. The cow here doesn't have to be a cow. You could start with a pig, or a chicken, or a turkey, and take cells from those animals, and then use the same process to create any of the food products that are normally generated from those animals. 
So one of the things that the group that I work on is interested in is pushing this a little bit further, and instead of using a cow here, actually starting with insects as the source. The idea is instead of taking muscle and fat cells from a cow, we can get them from insects, and then use the same process to produce insect cells, that can then function as either a protein and nutrient supplement in plant-based or other cultured products, or it could potentially be used to create standalone insect meats, or mimic existing products in other ways.</p> <p><img src="//" alt="1400 Kyle Fish v2 (1)"></p> <p>However, the question remains, "Why would we want to do this?" We know that cells from cows and chickens taste really good, and there's potential for being able to do this process with those, so one could reasonably argue that it's not worth exploring insect cells in this way. But to answer this question, it's worth knowing a little bit about the technical challenges facing clean meat development today, clean meat being meat that's produced through this process of cellular agriculture. So some of those challenges, first of all, most cell cultures require a really expensive liquid serum that's derived from fetal calves, and using this serum in cellular agriculture is out of the question, both for economic reasons and for ethical reasons. It's still an animal product, so even if we're just growing cells, there's still an animal cost if this is part of the equation.</p> <p>Also, most cells have to be grown in single layers on flat surfaces, and this vastly restricts the possibilities to scale up production. If you're having to grow all of your cells on a flat layer, it's really difficult to grow the amount of them that you would need in order to create some sort of food product. Ideally, you want to be able to grow them in what's known as suspension culture, which is where the cells are floating in a liquid solution, where they can be grown at much higher density. 
Also, lots of cells have to be grown in very specific environmental conditions. Lots of cell types have to be grown at right about 37° Celsius with 5% CO2 in their atmosphere. Otherwise, they will die, or at the very least, they'll stop growing in the same ways.</p> <p>So all of these are really serious challenges that cellular agriculture companies today are working to address, but if we start to look at some of the characteristics of insect cells, these challenges start to seem a little bit less intimidating. For example, with insect cells, it's relatively easy to adapt them to serum-free media. You don't need that animal-based serum in the same way that you do with a lot of other cell types. Also, it's relatively straightforward to get them to grow in suspension culture, so with insect cells, you can grow them in a liquid, floating around, instead of having to keep them attached to these flat surfaces. Also, insect cells can tolerate lots of different environmental conditions. They don't care a whole lot what temperature they're grown at. You can even change the pH or the carbon dioxide concentration, and the insect cells don't really care. They'll keep growing in more or less the same way.</p> <p><img src="//" alt="1400 Kyle Fish v2 (2)"></p> <p>And all of those characteristics indicate that insect cells are worth exploring within the cellular agriculture paradigm. But to really understand whether or not this is a viable option, we need some data from the lab, and we need to explore how these insect cells actually behave, and whether or not the cell types that we're interested in really display these characteristics. So the team that I work on has been looking at five different goals related to insect cellular agriculture, the first two being to adapt insect muscle stem cells to grow in serum-free media, and then also to get them to grow in single-cell suspension. 
Then, we also wanted to look at regulating how these cells grow, to make sure that we can grow them quickly and efficiently, and then also making sure that we can get them to turn into the specific kinds of cells that we're interested in, from a food perspective.</p> <p>We're also interested in growing complex 3D tissues. Having lots of cells is great. They're very nutritious, but it's only really valuable if we can somehow assemble them into a format that people are actually interested in consuming, whether that's growing 3D insect tissues on their own or incorporating them into other products. Lastly, we're interested in improving the nutrition of the cells, and looking at different ways to modify these cells, or modify the conditions that they're grown in, in order to improve the nutrition that they offer to humans. On the right, here, you can see a fruit fly, which is the cell source that we've been working with up until now. We've gotten our cells from these animals, and then are turning them into insect muscle.</p> <p>And in the bottom, all of the little green dots that you can see are muscle stem cells, so they're cells that are still growing, but haven't yet turned into actual insect muscle. Then it's a little bit hard to see with the light, but there are some red fibers also running through the slide, and those red fibers are insect muscle that has started to form from these cells that were growing in culture, and those muscle pieces are what we're really interested in using for food.</p> <p>I don't have time to take you through all of our experiments, but we've found some really exciting results in each of them.</p> <p>We've also been working with a really interesting material known as chitosan. This is a totally edible material, and we're able to process it in a way that produces these really nice, aligned structures that mimic the striations in meat. 
Then we can actually take these insect muscle cells and get them to grow on this material to create different meat-like tissues. And this is really promising. All of these findings support our hypothesis that insect cells are worth exploring in cellular agriculture.</p> <p><img src="//" alt="1400 Kyle Fish v2 (3)"></p> <p>However, there are still some challenges and open questions when looking to use this technology. For one, not a lot of work has been done to date with insect muscles. There isn't the same body of background research about insect muscle growth and development that we have for cows, chickens, or obviously humans, and understanding these processes is really essential for being able to grow and then use these cells in some sort of food system. Also, taste is a pretty open question. People have eaten whole insects, I'm sure Nick could tell you a lot about what those taste like, but nobody has really eaten products just derived from insect cells, so it's uncertain how much room we'll have to play around with different flavors and textures, to see what sorts of products we can turn these into or how we can incorporate them to bolster other products that are currently being generated.</p> <p>Consumer acceptance is another potential concern. Consumer acceptance is a challenge in cellular agriculture in general. It's still up for debate whether or not people will actually be willing to eat meat products that have been grown in some sort of lab or factory, and as if that wasn't difficult enough, doing this with insects may add a whole other layer of complication on top of that. However, we also think that there is potential that people will evaluate this favorably relative to the idea of eating whole insects, so there are some possibilities in terms of marketing along those lines.</p> <p>But even despite these challenges and uncertainties, we think that this is a very promising area of investigation for a number of reasons. 
One of them is the possibility of creating new food products that people will find interesting and nutritious, and that will help to relieve some of the burden currently created by existing food systems. However, there are lots of other ways that this could be valuable. For example, with the characteristics that insect cells have, there's a lot of potential for us to learn from these cells and derive lessons from working with them that we can then apply to other cell types. If we can figure out why it is that insect cells are fine growing in serum-free media, and why it's relatively easy to get them to grow in suspension, then we can apply those lessons to work more effectively with cow cells, chicken cells, or turkey cells to make products that people are more familiar with, and do some sort of direct substitution.</p> <p>We can also look at this from a perspective of global health and food security. Insect cells are relatively simple and easy to grow relative to lots of other systems that have been proposed, which means that this technology might offer a way for cellular agriculture to play a role in reducing the burden of global poverty and food insecurity in resource-constrained environments around the world. This system would be a lot easier to implement in areas that don't have the same sort of scientific and industrial infrastructure as places in the Western world.</p> <p>So that basically sums up the case for insect cell farming. Again, we think that this is a very valuable technology. One thing to note in terms of whole insect farming: there is potential, at least in the short term, for those efforts to contribute to the development of this technology. 
In the case of aquaculture, a lot of the valuable research about fish muscle growth and development, that has been used in the fish cellular agriculture industry, was initially done as part of aquaculture programs, so the same thing could be said here, that if insect farming programs are helping to contribute to this sort of body of knowledge that can eventually be exploited to do insect cellular agriculture in really impactful ways, then there's a chance that that could be valuable.</p> <p><img src="//" alt="1400 Kyle Fish v2 (4)"></p> <p>So I think soon here, we're going to open it up for questions, but before we do, I want to give a thanks to the rest of the cellular agriculture team at Tufts University, especially to Natalie Rubio, who has been leading these insect projects, and David Kaplan, the director of our lab, and also a thank you to our funding partners who have helped to make this work possible. Thank you.</p> <h2>Questions</h2> <p><em>Question</em>: Where do you see the timelines on what you see as the ideal situation, or your sort of various areas of interest? What do you see the timelines on those? Where do you see timelines in terms of maybe insects becoming mainstream, or in terms of, yeah, the developments of your various areas?</p> <p><em>Nick</em>: It's not an easy question to answer, because there are quite a number of challenges. I think there's obviously work going on to create new types of product based on insects, and that's very much a live thing at the moment. I think one of the things that is interesting is, the products at the moment are companies that are saying, "I want to sell insects," or, "I want to sell burgers that are made with plants." I think there's a scope in the future to see these things becoming more blurred, potentially, to get the best of different worlds. 
I'm sure the purist vegan wouldn't agree with that, but there could be a market for products which do have a nutritionally perfect composition for different markets, different audiences, different types of person, different flavor compositions, and which maybe draw on insect cell farming and other approaches. I'm just seeing so many different things coming through.</p> <p>I think the regulatory thing is a big hurdle for us in the insect area, because the European Union has a novel food regulation, which means that for anything that is deemed to be a novel food, and insects now are (ironically, they weren't until the beginning of this year), people have to prove that the insect products they're developing are safe and don't have any risks. And that's affecting the market quite significantly. It means that the bigger companies have got an opportunity to go forward, and smaller ones less so. So that's something that's going to change very slowly, I suspect, in practice.</p> <p>Then the other challenge we've got is the whole insect farming side: the cost of it, the cost of the raw material that you get as a result, and therefore the proportion. A lot of these products have got relatively small amounts of insect material in them, because otherwise the costs would be prohibitive. So that's another pressure that's a bit of a challenge, in terms of farming technology, and how to manage that within welfare constraints. So there's a lot of things. I don't have a time scale, I'm afraid.</p> <p><em>Kyle</em>: I think that in terms of insect cellular agriculture, it'll be quite a while before there's any sort of product available that's derived exclusively from that technology, but I don't think it would take too long to find ways to incorporate insect cells as a protein or nutrient supplement in plant-based products or other cultured products. 
And also, on our team, we're working with a variety of other cell types as well, and have already started to see ways in which our work with insect cells can inform and improve the work that we're doing in other areas, so even if it's a while before a product comes out of this technology, it's already helping to speed up development in other areas.</p> <p><em>Nicole</em>: At least, I mean, there's a lot of plant-based products on the market right now. There are companies that claim they can get clean meat products not based on insect cells on the market this year. It'll probably be a mixture, so not 100% clean meat. Most clean meat companies say two years to be on the market and five to 10 to be at price parity.</p> <p><em>Question</em>: Will insect cell culture reduce the risk of disease outbreak versus what we see in mammal cell culture?</p> <p><em>Kyle</em>: Relative to mammalian cell culture, I don't think so. There are already pretty strict controls on mammalian cell culture, and pretty robust technologies for determining if there are potential pathogens in some sort of culture, and we're able to use those same technologies with insect cells. So as with any sort of cell culture, there is a risk of contamination, but we're able to identify that pretty quickly, and it wouldn't be a problem in terms of putting an actual product on the market.</p> <p><em>Question</em>: What about using yeast or bacterial cells? What's the advantage of insect cells over, say, yeast or bacteria?</p> <p><em>Kyle</em>: So insect cells are sort of a happy medium between yeast or bacteria and mammalian cells. They have a lot of the complexity that mammalian cells have, in terms of being able to turn them into different cell types and create complex tissues that you can't get with bacteria and with yeast, and yet they offer some of the same growth simplicity that you see with yeast or bacteria. 
So they're a lot easier to grow, while also maintaining some of the complexity, which is one of the reasons that they're really interesting.</p> <p><em>Nick</em>: There's an Israeli company that's developing protein alternatives from yeast. There's lots of interesting things being developed.</p> <p><em>Question</em>: I think you touched on, in your talk earlier, about selective breeding that we've seen in chickens. What would be bad about that if we saw that in insect farming, if you think that would be bad?</p> <p><em>Nicole</em>: Well, I think there's a lot more information that we need right now, because the industry isn't very developed. It's just concerns that we have, that we hope the industry would actually take these into account. I mean, there's really no way to protect from ever having some sort of massive exodus of insects from a facility. It's bad enough if you have billions of black flies getting out into an area where they shouldn't be. What if those black flies are different, right? That they have been bred in a certain way to maybe have more protein or meat than a normal black fly? How will that have an effect on the environment? We just don't know.</p> <p><em>Question</em>: What is used for insect cell growth instead of animal serum?</p> <p><em>Kyle</em>: There are a lot of different formulations. 
Most of the ones that we've experimented with are commercially available, proprietary serum-free media formulations. Typically, the growth factor profiles for insects are a little bit simpler, so they're growth media that just contain those factors, instead of having them in a serum mixture.</p> <p><em>Question</em>: If you guys could have one thing on your wishlist in terms of public perception, what's the one thing that you would perhaps potentially change?</p> <p><em>Nicole</em>: Let's eat plants.</p> <p><em>Nick</em>: I guess I'd like to see things like the bolognese sauce coming forward as something that people are more conscious of, rather than just insects on a stick.</p> <p><em>Kyle</em>: And I think just raising awareness for the field of cellular agriculture more generally. A lot of companies and a lot of academic groups are doing really valuable work in this, and public support would go a long way towards directing additional funding, additional talent, and other resources to help accelerate this technology more generally.</p> the-centre-for-effective-altruism HPQJu5C5o583rMZ78 2019-04-05T15:31:43.903Z Eric Drexler: Reframing Superintelligence <p><em>When people first began to discuss advanced artificial intelligence, existing AI was rudimentary at best, and we had to rely on ideas about human thinking and extrapolate. Now, however, we've developed many different advanced AI systems, some of which outperform human thinking on certain tasks. In this talk from EA Global 2018: London, Eric Drexler argues that we should use this new data to rethink our models for how superintelligent AI is likely to emerge and function.</em></p> <p><em>A transcript of Eric's talk is below, which CEA has lightly edited for clarity. You can also watch this talk on <a href="">YouTube</a>, or read the transcript on <a href=""></a>.</em></p> <h2>The Talk</h2> <p>I've been working in this area for quite a while. 
The chairman of my doctoral committee was one Marvin Minsky. We had some discussions on AI safety around 1990. He said I should write them up. I finally got around to writing up some developed versions of those ideas just very recently, so that's some fairly serious procrastination. Decades of procrastination on something important.</p> <p>For years, one couldn't talk about advanced AI. One could talk about nanotechnology. Now it's the other way around. You can talk about advanced AI, but not about advanced nanotechnology. So this is how the Overton window moves around.</p> <p>What I would like to do is to give a very brief presentation which is pretty closely aligned with talks I've given at OpenAI, DeepMind, FHI, and Bay Area Rationalists. Usually I give this presentation to a somewhat smaller number of people, and structure it more around discussion. But what I would like to do, still, is to give a short talk, put up points for discussion, and encourage something between Q&amp;A and discussion points from the audience.</p> <p>Okay so, when I say "Reframing Superintelligence," what I mean is thinking about the context of emerging AI technologies as a process rolling forward from what we see today. And asking, "What does that say about likely paths forward?" Such that whatever it is that you're imagining needs to emerge from that context or make sense in that context. Which I think reframes a lot of the classic questions. Most of the questions don't go away, but the context in which they arise, the tools available for addressing problems, look different. That's what we'll be getting into.</p> <p>Once upon a time, when we thought about advanced AI, we didn't really know what AI systems were likely to look like. It was very unknown. People thought in terms of developments in logic and other kinds of machine learning, different from the deep learning that we now see moving forward with astounding speed. And people reached for an abstract model of intelligent systems. 
And what intelligent systems do we know? Well, actors in the world like ourselves. We abstract from that very heavily and you end up with rational, utility-directed agents.</p> <p>Today, however, we have another source of information beyond that abstract reasoning, which applies to a certain class of systems. And information that we have comes from the world around us. We can look at what's actually happening now, and how AI systems are developing. And so we can ask questions like, "Where do AI systems come from?" Well, today they come from research and development processes. We can ask, "What do AI systems do today?" Well, broadly speaking, they perform tasks. Which I think of, or will describe, as "performing services." They do some approximation or they do something that someone supposedly wants in bounded time with bounded resources. What will they be able to do? Well, if we take AI seriously, AI systems will be able to automate asymptotically all human tasks, and more, at a piecemeal and asymptotically general superintelligent level. So we said AI systems come from research and development. Well, what is research and development? Well, it's a bunch of tasks to automate. And, in particular, they're relatively narrow technical tasks which are, I think, uncontroversially automate-able on the path to advanced AI.</p> <p>So the picture is of AI development moving forward broadly along the lines that we're seeing. Higher-level capabilities. More and more automation of the AI R&amp;D process itself, which is an ongoing process that's moving quite rapidly. AI-enabled automation and also classical software techniques for automating AI research and development. And that, of course, leads to acceleration. Where does that lead? It leads to something like recursive improvement, but not the classic recursive improvement of an agent that is striving to be a more intelligent, more capable agent. 
But, instead, recursive improvement where an AI technology base is being advanced at AI speed. And that's a development that can happen incrementally. We see it happening now as we take steps toward advanced AI that is applicable to increasingly general and fast learning. Well, those are techniques that will inevitably be folded into the ongoing AI R&amp;D process. Developers, given some advance in algorithms and learning techniques, and a conceptualization of how to address more and more general tasks, will pounce on those, and incorporate them into a broader and broader range of AI services.</p> <p>So where that leads is to asymptotically comprehensive AI services. Which, crucially, includes the service of developing new services. So increasingly capable, increasingly broad, increasingly piecemeal and comprehensively superintelligent systems that can work with people, and interact with people in many different ways to provide the service of developing new services. And that's a kind of generality. That is a general kind of artificial intelligence. So a key point here is that the C in CAIS, C in Comprehensive AI Services does the work of the G in AGI. Why is it a different term? To avoid the implication... when people say AGI they mean AGI agent. And we can discuss the role of agents in the context of this picture. But I think it's clear that a technology base is not inherently in itself an agent. In this picture agents are not central, they are products. They are useful products of diverse kinds for providing diverse services. And so with that, I would like to (as I said, the formal part here will be short) point to a set of topics.</p> <p>They kind of break into two categories. One is about short paths to superintelligence, and I'll argue that this is the short path. The topic of AI services and agents, including agent services, versus the concept of "The AI" which looms very large in people's concepts of future AI. 
I think we should look at that a little bit more closely. Superintelligence as something distinct from agents: superintelligent non-agents. And the distinction between general learning and universal competence. People have, I think, misconstrued what intelligence means, and I'll take a moment on that. If you look at definitions, from I. J. Good's "ultraintelligence" in the 1960s to the more recent work of Bostrom and so on (I work across the hall from Nick) on superintelligence, the definition is something like "a system able to outperform any person in any task whatsoever." Well, that implies general competence, at least as ordinarily read. But there's some ambiguity over what we mean by the word "intelligence" more generally. We call children intelligent and we call senior experts intelligent. We call a child intelligent because the child can learn, not because the child can perform at a high level in any particular area. And we call an expert who can perform at a high level intelligent not because the expert can learn (in principle, you could turn off learning capacity in the brain) but because the expert can solve difficult problems at a high level.</p> <p>So learning and competence are dissociable components of intelligence. They are in fact quite distinct in machine learning. There is a learning process and then there is an application of the software. And when you see discussion of intelligent systems that does not distinguish between learning and practice, and treats action as entailing learning directly, there's a confusion there. There's a confusion about what intelligence means, and that's, I think, very fundamental. In any event, looking toward safety-related concerns, there are things to be said about predictive models of human concerns. AI-enabled solutions to AI-control problems. How this reframes questions of technical AI safety. Issues of addictive services and adversarial services: services include services you don't want. 
Taking superintelligent services seriously. And a question of whether faster development is better.</p> <p>And, with that, I would like to open for questions, discussion, comment. I would like to have people come away with some shared sense of what the questions and comments are. Some common knowledge of thinking in this community in the context of thinking about questions this way.</p> <h2>Discussion</h2> <p><em>Question</em>: Is your model compatible with end-to-end reinforcement learning?</p> <p><em>Eric</em>: Yes.</p> <p>To say a little bit more. By the way, I've been working on a collection of documents for the last two years. It's now very large, and it will be an FHI technical report soon. It's 30,000 words, structured to be very skimmable: top-down, hierarchical, declarative sentences expanding into longer ones, expanding into summaries, expanding into fine-grained topical discussion. So you can look at the top level and say, hopefully, "Yes, yes, yes, yes, yes. What about this?" and not have to read anything like 30,000 words. So, what I would say is that reinforcement learning is a technique for AI system development. You have a reinforcement learning system. Through a reinforcement learning process, which is a way of manipulating the learning of behaviors, it produces systems that are shaped by that mechanism. So it's a development mechanism for producing systems that provide some service. Now, what if you turned reinforcement learning loose in the world with open-ended, read-write access to the internet, as a money-maximizer, and did not have checks in place against that? There are some nasty scenarios. So basically it's a development technique, but it could also be turned loose to produce some real problems. "Creative systems trying to manipulate the world in bad ways" scenarios are another sector of reinforcement learning. 
So not a problem per se, but one can create problems using that technique.</p> <p><em>Question</em>: What does asymptotic improvement of AI services mean?</p> <p><em>Eric</em>: I think I'm abusing the term asymptotic. What I mean is increasing scope and increasing level of capability in any particular task to some arbitrary limit. Comprehensive is sort of like saying infinite, but moving toward comprehensive and superintelligent level services. What it's intended to say is, ongoing process going that direction. If someone has a better word than asymptotic to describe that I'd be very happy.</p> <p><em>Question</em>: Can the tech giants like Facebook and Google be trusted to get alignment right?</p> <p><em>Eric</em>: Google more than Facebook. We have that differential. I think that questions of alignment look different here. I think more in terms of questions of application. What are the people who wield AI capabilities trying to accomplish? So there's a picture which, just background to the framing of that question, and a lot of these questions I think I'll be stepping back and asking about framing. As you might think from the title of the talk. So picture a rising set of AI capabilities: image recognition, language understanding, planning, tactical management in battle, strategic planning for patterns of action in the world to accomplish some goals in the world. Rising levels of capability in those tasks. Those capabilities could be exploited by human decision makers or could, in principle, be exploited by a very high-level AI system. I think we should be focusing more, not exclusively, but more on human decision makers using those capabilities than on high-level AI systems. In part because human decision makers, I think, are going to have broad strategic understanding more rapidly. They'll know how to get away with things without falling afoul of what nobody had seen before, which is intelligence agencies watching and seeing what you're doing. 
It's very hard for a reinforcement learner to learn that kind of thing.</p> <p>So I tend to worry not so much about the organizations making aligned AI as about whether the organizations themselves are aligned with general goals.</p> <p><em>Question</em>: Could you describe the path to superintelligent services with current technology, using more concrete examples?</p> <p><em>Eric</em>: Well, we have a lot of piecemeal examples of superintelligence. AlphaZero is superintelligent in the narrow domain of Go. There are systems that outperform human beings at playing very different kinds of games, like Atari games. Face recognition has surpassed human performance, and machines recently surpassed the human ability to map from speech to transcribed words. Just more and more areas, piecemeal. A key area that I find impressive and important is the design of the neural networks at the core of modern deep learning systems: designing the networks and their hyperparameters, and learning to use them appropriately. As of a couple of years ago, if you wanted a new neural network, a convolutional network for vision or some recurrent network (though recently people are going for convolutional networks for language understanding and translation too), that was a hand-crafted process: human judgment, people building these networks. A couple of years ago, people started getting superhuman performance at designing neural networks by automated, AI-flavored means, for example, reinforcement learning systems. This is not AI in general, but it's a chunk that a lot of attention has gone into: developing reinforcement learning systems that learn to put together the building blocks to make a network, and that outperform human designers at that process. So we now have AI systems that are designing a core part of AI systems at a superhuman level.
And while this is not revolutionizing the world, that threshold has been crossed in that area.</p> <p>Similarly, consider the automation of another labor-intensive task, one that I was told very recently, by a senior person at DeepMind, would require human judgment. My response was, "Do you take AI seriously or not?" And, out of DeepMind itself, there was then a paper that showed how to outperform human beings at hyperparameter selection. So those are a few examples. The way one gets to an accelerating path is to have more and more, faster and faster implementation of human insights into AI architectures, training methods, and so on. Less and less human labor required. Higher- and higher-level human insights being turned into applications throughout the existing pool of resources. And, eventually, fewer and fewer human insights being necessary.</p> <p><em>Question</em>: So what are the consequences of this reframing of superintelligence for technical AI safety research?</p> <p><em>Eric</em>: Well, it re-contextualizes it. If in fact one can have superintelligent systems that are not inherently dangerous, then one can ask how one can leverage high-level AI. A lot of the classic scenarios of misaligned, powerful AI involve AI systems taking actions that are blatantly undesirable. And, as Shane Legg said when I was presenting this at DeepMind last fall, "There's an assumption that we have superintelligence without common sense." And that's a little strange. Stuart Russell has pointed out that machines can learn not only from experience but from reading, and, one can add, from watching video and from interacting with people through questions and answers in parallel over the internet. And we see in AI that a major class of systems is predictive models: given some input, you predict what the next thing will be. In this case, given a description of a situation or an action, you try to predict what people will think of it. Is it something that they care about or not?
And, if they do care about it, is there widespread consensus that it would be a bad result? Widespread consensus that it would be a good result? Or strongly mixed opinion?</p> <p>Note that this is a predictive model trained on many examples; it's not an agent. It is an oracle that could, in principle, operate with reasoning behind the prediction, operate at a superintelligent level, and have common sense about what people care about. Now think about having AI systems that you intend to be aligned with human concerns, where this oracle is available to a system that's planning action. The system can ask, "Well, if such and such happened, what would people think of it?" and get a very high-quality response. That's a resource that I think one should take account of in technical AI safety. We're very unlikely to get high-level AI without having this kind of resource. People are very interested in predicting human desires and concerns, if only because they want to sell you products or brainwash you in politics or something, and that's the same underlying AI technology base. So I would expect that we will have predictive models of human concerns. That's an example of a resource that would reframe some important aspects of technical AI safety.</p> <p><em>Question</em>: So, making AI services more general and powerful involves giving them higher-level goals. At what point of complexity and generality do these services then become agents?</p> <p><em>Eric</em>: Well, many services are agent-services. It's a chronic question: people will be at FHI or DeepMind and someone will say, "Well, what is an agent anyway?" And everybody will say, "Well, there is no sharp definition. But over here we're talking about agents, and over here we're clearly not talking about agents."
So I would be inclined to say that if a system is best thought of as directed toward goals, and it's doing some kind of planning and interacting with the world, I'm inclined to call it an agent. And, by that definition, there are many, many services we want, starting with autonomous vehicles, autonomous cars and such, that are agents. They have to make decisions and plan. So there's a spectrum from there up to higher- and higher-level abilities to do means-ends analysis and planning and to implement actions. Let's imagine that your goal is to have a system that is useful in military action: you would like the ability to execute tactics with AI speed, flexibility, and intelligence, and to have strategic plans for using those tactics that are at a superintelligent level.</p> <p>Well, those are all services. They're doing something in bounded time with bounded resources. And I would argue that that set of systems would include many systems we would call agents, but they would be pursuing bounded tasks with bounded goals. The higher levels of planning would naturally be structured as systems that give options to the top-level decision makers. These decision makers would not want to give up their power; they don't want a system guessing what they want. At a strategic level they have a chance to select, since strategy unfolds relatively slowly. So there would be opportunities to say, "Well, don't guess, but here's the trade-off I'm willing to make between having this kind of impact on opposition forces, with this kind of lethality to civilians, and this kind of impact on international opinion. I would like options that show me different trade-offs, all very high quality, but within that trade-off space." And here I'm deliberately choosing an example which is about AI resources being used for projecting power in the world.
I think that's a challenging case, so it's a good place to go.</p> <p>I'd like to say just a little bit about the opposite end, briefly: superintelligent non-agents. Here's what I think is a good paradigmatic example of superintelligence and non-agency. Right now we have systems that do natural language translation. You put in sentences, or, with a somewhat smarter system that dealt with more context, books, and out comes text in a different language. Well, to do that, I would like to have systems that know a lot. You do better translations if you understand more about history, about chemistry if it's a chemistry book, about human motivations. You'd like to have a system that knows everything about the world and everything about human beings, to give better-quality translations. But what is the system? Well, it's a product of R&amp;D, and it is a mathematical function of type character string to character string. You put in a character string, things happen, and out comes a translation. You do this again and again and again. Is that an agent? I think not. Is it operating at a superintelligent level with general knowledge of the world? Yes. So I think that one's conceptual model of what high-level AI is about should have room in it for that system and for many systems that are analogous.</p> <p><em>Question</em>: Would a service that combines general learning with universal competence not be more useful or competitive than a system that displays either alone? So does this not suggest that agents might be more useful?</p> <p><em>Eric</em>: Well, as I said, agents are great. The question is what kind and for what scope. So, as I was saying, the distinction between general learning and universal competence is an important one. I think it is very plausible that we will have general learning algorithms.
And general learning algorithms may be algorithms that are very good at selecting algorithms for learning a particular task, and at inventing new algorithms. Now, given an algorithm for learning, there's a question of what you're training it to do. On what information? What competencies are being developed? And the concept of a system being trained on, and learning about, everything in the world with some single objective function: I don't think that's a coherent idea. Let's say you have a reinforcement learner. You're reinforcing the system to do what? Here's the world, and the system is supposed to be gaining competence in organic chemistry, and ancient Greek, and, I don't know, control of the motion of tennis-playing robots, and on and on. What's the reward function, and why do we think of that as one task?</p> <p>I don't think we think of it as one task. I think we think of it as a bunch of tasks which we can construe as services, including the service of interacting with you and learning what you want, with nuances: what you are assumed to want and assumed not to want as a person, more about your life and experience, and being very good at interpreting your gestures. And it can go out in the world and, subject to the constraints of law, and consulting an oracle on what other people are likely to object to, implement plans that serve your purposes. And if the actions are important and have a lot of impact, within the law presumably, what you want is for that system to give you options before it goes out and takes action. Some of those actions would involve what are clearly agents. So that's the picture I would like to paint, which I think reframes the context of that question.</p> <p><em>Question</em>: So, on that, is it fair to say that the value-alignment problem still exists within your framework?
Since, in order to train a model to build an agent that is aligned with our values, we must still specify our values.</p> <p><em>Eric</em>: Well, what do you mean by "train an agent to be aligned with our values"? See, the classic picture says you have "The AI," and "The AI" gets to decide what the future of the universe looks like, so it had better understand what we want, or would want, or should want, or something like that. And then we're off into deep philosophy. And my card says philosophy on it, so I guess I'm officially a philosopher or something, according to Oxford. I was a little surprised. "It says philosophy on it. Cool!" I do what I think of as philosophy. In a services model, the question would instead be, "What do you want to do?" Give me some task that is completed in bounded time with bounded resources, and we can consider how to avoid making plans that stupidly cause damage that I don't want: plans that, by default, automatically do what I could be assumed to want, and that pursue goals in some creative way that is bounded, in the sense that it's not about reshaping the world; other forces would presumably try to stop you. And I'm not quite sure what value alignment means in that context. I think it's something much more narrow and particular.</p> <p>By the way, if you think of an AI system that takes over the world, keep in mind that a sub-task of that is to overthrow the government of China, and, presumably, to succeed the first time, because otherwise they're going to come after you once you've made a credible attempt. And that's in the presence of whatever unknown surveillance capabilities and unknown AI China has. So if you have a system that might formulate plans to try to take over the world, well, I think an intelligent system wouldn't recommend that, because it's a bad idea: very risky, very unlikely to succeed. Not an objective that an intelligent system would suggest or attempt to pursue.
So you're in a very small part of a scenario space where that attempt is made by a high-level AI system. And it's a very small part of scenario space because it's an even smaller part of scenario space where there is substantial success. I think it's worth thinking about this. I think it's worth worrying about it. But it's not the dominant concern. It's a concern in a framework where I think we're facing an explosive growth of capabilities that can amplify many different purposes, including the purposes of bad actors. And we're seeing that already and that's what scares me.</p> <p><em>Question</em>: So I guess, in that vein, could the superintelligent services be used to take over the world by a state actor? Just the services?</p> <p><em>Eric</em>: Well, you know, services include tactical execution of plans and strategic planning. So could there be a way for a state actor to do that using AI systems in the context of other actors with, presumably, a comparable level of technology? Maybe so. It's obviously a very risky thing to do. One aspect of powerful AI is an enormous expansion of productive capacity. Partly through, for example, high-level, high quality automation. More realistically, physics-limited production technology, which is outside today's sphere of discourse or Overton window.</p> <p>Security systems, I will assert, could someday be both benign and effective, and therefore stabilizing. So the argument is that, eventually it will be visibly the case that we'll have superintelligent level, very broad AI, enormous productive capacity, and the ability to have strategic stability, if we take the right measures beforehand to develop appropriate systems, or to be prepared to do that, and to have aligned goals among many actors. 
So if we distribute the much higher productive capacity well, we can have an approximately strongly Pareto-preferred world, a world that looks pretty damn good to pretty much everyone.</p> <p><em>Note: for a more thorough presentation on this topic, see Eric Drexler's <a href="">other talk</a> from this same conference.</em></p> <p><em>Question</em>: What do you think the greatest AI threat to society in the next 10 to 20 years would be?</p> <p><em>Eric</em>: I think the greatest threat is instability. One form is organic instability from AI technologies being diffused: more and more of the economic relationships and other information-flow relationships among people being transformed in directions that increase entropy, generate conflict, and destabilize political institutions. Who knows? With the internet, and people putting out AI-enabled propaganda, it's conceivable that you could move elections in crazy directions in the interest of either good actors or bad actors. Well, which will it be? I think we will see efforts made to do that. What kinds of counter-pressures could be applied to bad actors using linguistically and politically competent AI systems to do messaging? And, of course, there's the perennial worry of states engaging in an arms race, which could tip into some unstable situation and lead to a war, including the long-postponed nuclear war that people have been waiting for and that might, in fact, turn up some day. So I primarily worry about instability. Some of the modes of instability arise because some actor decides to do something like turning loose a competent hacking, reinforcement-learning system that goes out there and does horrible things to global computational infrastructure, things that either do or don't serve the intentions of the parties that released it. Take a world that's increasingly dependent on computational infrastructure and just slice through it, in some horribly destabilizing way.
So those are some of the scenarios I worry about most.</p> <p><em>Question</em>: And then maybe longer term than 10 to 20 years? If the world isn't over by then?</p> <p><em>Eric</em>: Well, I think all of our thinking should be conditioned on that. If one is thinking about the longer term, one should assume that we are going to have superintelligent-level general AI capabilities. Let's define that as the longer term in this context. And, if we're concerned with what to do with them, that means we will have gotten through the process of getting there. So there are two questions. One is, "What do we need to do to survive, or to have an outcome that's a workable context for solving more problems?" And the other is what to do then. So, if we're concerned with what to do, we need to assume solutions to the preceding problems. That means high-level superintelligent services. It probably means mechanisms for stabilizing competition. There's a domain there that involves turning surveillance into something that's actually attractive and benign. The downstream problems, therefore, one hopes to have largely solved, at least the classic large ones. The problems that arise then are problems of, "What is the world about, anyway?" We're human beings in a world of superintelligent systems. Is trans-humanism the direction? Uploading? Developing moral patients, superintelligent-level entities that really aren't just services and are instead the moral equivalent of people? What do you do with the cosmos? It's an enormously complex problem. And, from the point of view of having good outcomes, what can I say? There are problems.</p> <p><em>Question</em>: So what can we do to improve diversity in the AI sector? And what are the likely risks of not doing so?</p> <p><em>Eric</em>: Well, I don't know. My sense is that what is most important is having the interests of a wide range of groups be well represented.
To some extent, obviously, that's helped if you have, in the development process and in the corporations, people who have these diverse concerns. To some extent it's a matter of politics, regulation, cultural norms, and so on. I think that's a direction we need to push in. To put this in the Paretotopian framework, your aim is to have objectives and goals that really are aligned: possible futures that are strongly goal-aligning for many different groups. Many of those groups we won't fully understand from a distance. So we need some joint process that produces an integrated, adjusted picture of, for example, how do we have EAs be happy and have billionaires maintain their relative position? Because if you don't do that, they may well oppose what you're doing, and the point is to avoid serious opposition. And also have the government of China be happy. And I would like to see the poor in rural Africa be much better off, too. Billionaires might be way up here, competing to build not orbital vehicles but starships, while the poor in rural Africa of today merely have orbital space capabilities convenient for families, because they're "poor." Nearly everyone much, much better off.</p> the-centre-for-effective-altruism ART6fx3oL3YmeFvDD 2019-04-01T15:49:58.795Z Hiski Haukkala: Policy Makers Love Their Children Too <p><em>The political scientist Hiski Haukkala used to believe that as traditional power structures weakened, we would be able to change the world by creating totally new structures. Now, however, he thinks that some work within existing systems, like national governments or international organizations, will be necessary. In this talk from EA Global 2018: London, Hiski encourages effective altruists to become involved in policymaking and the political process more broadly.</em></p> <p><em>A transcript of Hiski's talk is below, which CEA has lightly edited for clarity.
You can also watch this talk on <a href="">YouTube</a>, or read it on <a href=""></a></em></p> <h2>The Talk</h2> <p>Good day, everyone. First of all, I would like to express my appreciation for the invitation to attend this EA Global. It's my second event, and it's really been great to see the growth of this movement and the people that it brings together. I'm really happy, honored, and privileged to be addressing you here today.</p> <p><img src="//" alt="1530 Hiski Haukkala"></p> <p>However, I'm afraid I have to start with a confession of sorts, maybe even two. The first one is that the title of my presentation is a ruse. It's a ploy. I'm not going to give you tips or hands-on advice on how to be amazingly effective in the world of policy making. I think that advice is probably context-specific anyway, and I'm happy to discuss it later. But it's a ruse for a good purpose. I have bigger fish to fry, and I hope, and almost semi-promise, that I will get to the point you were probably expecting at the very end of my talk.</p> <p>The second confession is this: it's not aspirational, like "I have a dream." It's more of a letdown, like "I used to believe." But I think it's still powerful and important, and something that deserves our consideration. Because what I used to believe is that we can bypass the problems of state-centric governance and devise working solutions to the pressing problems of our age, including those of the future.</p> <p><img src="//" alt="1530 Hiski Haukkala (1)"></p> <p>I was an optimist for quite some time. I thought, and I think I was seeing this through my own scholarship and reading a lot of literature, that power was diffusing. It was moving away from states and formal, old structures. But it wasn't clearly going anywhere in particular, at least at first. This gave me the hope that we could build new forms of regulation, spontaneous orders, and social movements like effective altruism to generate the positive outcomes that we need.
For me, for quite some time, the world seemed awash with feasible alternatives. But unfortunately, I no longer fully believe this to be the case. The reason is that the change we so desperately require simply isn't coming fast enough. We see, to a certain extent, the paralysis of existing forms of governance, and I will say a few words about that, but we are not seeing sufficiently effective new forms of governance emerging instead. The clock is, unfortunately, ticking.</p> <p>Yet quitting is not an option. The stakes are simply too high. We have been talking about new issues, new problems, new forms of threats: x-risks, catastrophic risks. We have been talking about climate change, and so on and so forth. These are all things that desperately require our attention and creative, working solutions. The conclusion I have drawn from all of this, from my past five years working for the government, witnessing international cooperation firsthand, and seeing the attempts, however feeble and at times fallible, at developing international governance, is that we actually have a massive number of existing structures and mechanisms in place that we need to improve.</p> <p>So basically, even though I used to believe something aspirational, which I think is still worth our consideration and worth working towards, I'm more convinced, at least for the time being, that we also have to steer some of our energies into getting things done here and now, using the mechanisms and levers we already have in place. This means engaging with many of the structures that perhaps you, through your own work, feel a certain level of aversion or distrust towards. What I want to do here today is make the case that some of these things deserve a second thought, and, more importantly, deserve the kind of input that only people like you can give.</p> <p><img src="//" alt="1530 Hiski Haukkala (2)"></p> <p>What ails us?
What is the problem, what is the issue, that is bringing us down and keeping us from achieving the results that we so badly need? Well, here is the bit of good news that I have. The leaders I have met, and I have actually met quite a few during my career, are on average both intelligent and of a high ethical level. Not all of them, without naming any names, absolutely not naming any names, but intelligent and ethical. I'm fully convinced that they genuinely want to make the world a better place. This is the feeling I've gotten from watching leaders in action, from basically all corners of the earth. These people have the basic capabilities and aspirations which motivated them to enter the game of politics and become policy makers or even political leaders.</p> <p>Yet they keep making decisions that are detrimental to our long-term well-being. This is a clear paradox. Why are we not getting the kind of results that we would like to have, and need to have, even though, in principle at least, the people are of the right caliber and quality, and have the best interests of their nations, but also of humanity, at heart? Well, this is a 20-minute talk, and what I'm going to go through next could fill a seminar series lasting six months, but I'll try to give a quick run through some of the problems of, in particular, our western liberal democracies. These leaders are, to a large degree, constrained by vested interests. There are all kinds of interest groups, industries, trade unions: all kinds of people with specific needs, and there are vested interests working very hard, and at cross purposes, to retain or enhance these sector-specific interests.
By doing so, they in fact create policy paralysis, incompatible agendas, and very difficult situations for political leaders, who in the final analysis are always looking toward re-election, which is another factor that challenges this system.</p> <p>This results in short-termism and an overall erosion of our democratic systems, which is visible in many of our countries today. We are living in a world that equates leadership with populism, where leaders are usually putting out their antennae and trying to listen to the signals from the electorate. What does the electorate want? What are the vested special interests that they should cater to? They focus on this, instead of providing us with long-term thinking and solutions. This is compounded by information overload, which, as a former civil servant, I can testify is massive in policy-making circles, and which further encourages presentist thinking and easy and/or quick solutions.</p> <p>This is really a big set of problems facing anyone who is interested in making a positive difference in the world of policy making. It is something that anyone working in this world will encounter, one way or another. On top of this there's an additional factor which I think is very important, and which should perhaps force us to reconsider our own position concerning politics and our political systems. I think we as citizens are letting our politicians and political systems down. We are expecting less and less. We are demanding less and less in terms of good outcomes from these systems, and as a consequence we are getting less and less in terms of good outcomes from them.
This erosion of trust in our institutions is richly deserved to a degree, but it is also becoming a self-fulfilling prophecy that is eroding our capacity for national decision making and national politics, and for sensible international politics at the same time.</p> <p>So we have entered an area of diminishing returns, which is feeding back into further disillusionment on the part of the population, and also feeding into populist politics, which seem to offer easy sound bites and easy victories to people who are fed up with business as usual. I will get back to this notion of business as usual in a little while.</p> <p>What I'm advocating here today is that politics and policy making would actually benefit from interaction with, and more participation from, individuals like effective altruists. I will make the case on the very final slide for why I think this is so, although I'm quite sure this is not a message that most of you are necessarily very happy or excited about. But you don't have to go all the way in order to make a difference. You can devise different strategies for making or effecting change in existing policies, if you are interested in affecting this political game and the making of policies at the national or international level.</p> <p><img src="//" alt="1530 Hiski Haukkala (3)"></p> <p>The first strategy I call "mix with the animals," and you can guess who these animals might be. I think the world is full of opportunities for a person who is interested in effecting change from within the system. In this respect there is good news, because the leaders I've been describing are always on the lookout for the next big idea. They are intellectually curious, and they are looking for solutions to pre-existing problems.
They are also looking for new framings of problems and issues, and for solutions that they are not yet aware of.</p> <p>So this actually opens up a pretty lucrative and promising market for knowledgeable and talented individuals to act as advisors or external consultants to governments. This is a position I have held over my career, and I have found it fruitful and rewarding. I'm happy to share my experiences later on, perhaps during my office hours: how to go about providing and offering these services, and what this kind of work entails.</p> <p>But it is not the only way to go about these things. Another important group of individuals in policy making, who are not policy decision makers per se, is civil servants, who at times play a key role in planning and executing policies. There is also the opportunity of becoming one yourself. Think about entering national bureaucracies, through whatever the national processes might be, or international bureaucracies (there are a lot of those as well) that develop policy responses with a forward-oriented leaning in their work. This will give you a lot of opportunities to produce the policy outcomes you are interested in.</p> <p>But there is an important caveat. Please be aware that if you play this kind of "mix with the animals" strategy, you are always acting in a role subservient to power, and this always has its limits. In the final analysis, the major change we need and the major decisions to be taken will not be taken by bureaucrats or advisors or consultants; they are actually taken by politicians, elected officials, and so on and so forth: the leaders who, in the final analysis, weigh the different options, different opportunities, and different costs. I say this even though I think this strategy is number one, and it is the one I have played myself, for probably very good reasons.
I think I would be an awful politician, to be honest.</p> <p>But I've also come to see the limits of these kinds of roles through exercising that kind of power, even at the fairly senior level at which I have been working during my own career. The question of seniority is indeed a worthy consideration for young experts and professionals such as yourselves, because I think the bad news about following this route is this: 80,000 hours is a very long time. Climbing the greasy pole to become a very effective and very senior civil servant or advisor probably takes a very long time in most cases. This runs up against the question of time, which I referred to earlier. So thinking that we could somehow make the radical change we require take place through this route alone is probably not correct, because we most probably will not have the time to make sure that we have all the right and sensible people in all the right and crucial positions to start effecting the change that we need. So although I think this is a worthy route to follow, and it is very important and probably very impactful in many respects, it is probably not sufficient for achieving the kind of change that we need.</p> <p><img src="//" alt="1530 Hiski Haukkala (4)"></p> <p>This has led me to the following thinking, and I would be happy to hear your views on what you make of it. The starting point is basically what I have already been saying: the change we need is unlikely to materialize from within the current political structures and modus operandi, be it our national political systems or the systems of international governance. I have already alluded to the problems. To my mind, the imperative thing that needs to change is our very political culture. Less short-term thinking, more long-term thinking.
Less egotistical values, nationalism, and narrow-minded thinking; more cosmopolitan thinking; more appreciating and accepting the fact that we are increasingly starting to operate on the level of humanity with our policies, our effects, and our unintended consequences, without actually having the political institutions necessary to deal with many of these issues yet, and probably without being able to build those institutions very quickly or uncontroversially.</p> <p>So we need a change in our political culture, and I think it is a change that can only come from the bottom up, because the existing political culture and our existing political leaders will most probably not be able to deliver this kind of change. The good news is that to my mind the world, or at least significant parts of it, is ready for quick political changes. We have seen this for better or for ill. We saw the rapid rise of Mr. Trump from basically an unelectable candidate to the President of the United States of America, who has also been able to take over the Republican Party. I'm not saying this is a particularly positive or happy example, necessarily, but it is an example of how quick political changes are possible even in massively big and well-stratified political systems such as the United States.</p> <p>Another, probably more hopeful and positive, example is the President of France, Macron, who came from similar obscurity very quickly, creating a political platform and sweeping the whole French political system in the process. So I think the potential is there. People are fed up with business as usual. They are looking for alternatives. My concern is that right now many of these alternatives are not coming from a particularly happy or optimistic place. 
They are coming from xenophobic, populist, narrow-minded circles in many places, which want to turn the clock back at exactly the time when we need new thinking, new forms of cooperation, new forms of governance. So not only are they potentially squandering our ability to move forward, they are actually aggressively and very determinedly trying to turn the clock back to yesteryear. This is clearly something that we simply cannot afford.</p> <p>So what we need is people, new ideas, new movements that can steer the energies that are there in a direction conducive to positive change in the world; that can articulate aspirational and positive visions that people will want to flock to, vote for, and work for, in order to achieve the change that we need. I simply cannot think of many more positive signposts for humanity than the EA movement. I mean, your principles, your thinking, are exactly the recipe for the day. This is something that all of us interested in the future well-being of humanity, human beings, and life on this planet will have to work very hard to project, to help grow, to get new followers for, and to make an actionable program in our lives and in our political systems.</p> <p>I'm at the end of my time, so it's probably worth clarifying what I'm not proposing here today. I'm not proposing turning effective altruism into a political movement. I think you are perfect the way you are, and I am not saying that you should change in this way and take that kind of route. But I do think that effective altruism is a particularly wonderful and exciting piece of software. We have a very robust, very powerful, and effective piece of hardware, which is our states, our national bureaucracies, our forms of international governance, and so on. What I would like to see, and what I think needs to take place, is for this particular piece of software to be inserted into this particular piece of hardware. 
I don't see any a priori mismatch or incompatibility here. No need for hostility or aversion between these two worlds. On the contrary, I think this particular piece of hardware would benefit immensely from interacting much more with the software that you have to offer.</p> <p>So, to answer my question of how to be more effective in the world of policy making: I think some effective altruists, or at least some people leaning in this particular intellectual direction, will have to become politicians and political leaders, and I think this is called for. So I thank you for your patience, and for this invitation. Thank you.</p> the-centre-for-effective-altruism itchYSrcsjT2aH7J3 2019-03-29T15:33:13.010Z Stefan Schubert: Psychology of Existential Risk and Long-Termism <p><em>Consider three scenarios: scenario A, where humanity continues to exist as we currently do; scenario B, where 99% of us die; and scenario C, where everyone dies. Clearly option A is better than option B, and option B is better than option C. But exactly how much better is B than C? In this talk from EA Global 2018: London, Stefan Schubert describes his experiments examining public opinion on this question, and how best to encourage a more comprehensive view of extinction's harms.</em></p> <p><em>A transcript of Stefan's talk is below, including questions from the audience, which CEA has lightly edited for clarity. 
You can also read this talk on <a href=""></a>, and watch it on <a href="">YouTube</a>.</em></p> <h2>The Talk</h2> <p>Here is a graph of economic growth over the last two millennia.</p> <p><img src="//" alt="1430 Stefan Schubert (1)"></p> <p>As you can see, for a very long time, there was very little growth, but then it gradually started to pick up during the 1700s, and then in the 20th century, it really skyrocketed.</p> <p>So now the question is, what can we tell about future growth, on the basis of this picture of past growth?</p> <p><img src="//" alt="1430 Stefan Schubert (2)"></p> <p>Here is one possibility, perhaps the one closest at hand: that growth will continue into the future, and hopefully into the long-term future. And that will mean not only greater wealth, but also better health, extended life span, more scientific discoveries, and more human flourishing in all kinds of other ways. So, a much better long-term future in all kinds of ways.</p> <p>But, unfortunately, that's not the only possibility. Experts worry that it could be that growth continues for some time, but then civilization collapses.</p> <p><img src="//" alt="1430 Stefan Schubert (3)"></p> <p>For instance, civilization could collapse because of a nuclear war between great powers or an accident involving powerful AI systems. Experts worry that civilization wouldn't recover from such a collapse.</p> <p><img src="//" alt="1430 Stefan Schubert (4)"></p> <p>The philosopher Nick Bostrom, at Oxford, has called these kinds of collapses or catastrophes "existential catastrophes." One kind of existential catastrophe is human extinction. In that case, the human species goes extinct and no humans ever live again. That will be my sole focus here. But he also defines another kind of existential catastrophe, which is that humanity doesn't go extinct but its potential is permanently and drastically curtailed. 
I won't talk about such existential catastrophes here.</p> <p><img src="//" alt="1430 Stefan Schubert (5)"></p> <p>So, together with another Oxford philosopher, the late Derek Parfit, Bostrom has argued that human extinction would be uniquely bad, much worse than non-existential catastrophes. And that is because extinction would forever deprive humanity of a potentially grand future. We saw that grand future on one of the preceding slides.</p> <p>So, in order to make this intuition clear and vivid, Derek Parfit created the following thought experiment where he asked us to consider three outcomes: First, peace; second, a nuclear war that kills 99% of the human population; and then third, a nuclear war that kills 100% of the human population.</p> <p>Parfit's ranking of these outcomes, from best to worst, was as follows: peace is the best, near extinction is number two, and then extinction is the worst. So, no surprises so far. But then he asked a more interesting question: "Which difference, in terms of badness, is greater?" Is it the First Difference, as we call it, between peace and 99% dead? Or the Second Difference, between 99% dead and 100% dead? This is terminology that I will use throughout this talk, the First Difference and the Second Difference, so it will be good to remember it.</p> <p><img src="//" alt="1430 Stefan Schubert (7)"></p> <p>So which difference do you find greater? That depends on what key value you have. If your key value is the badness of individual deaths or the individuals that suffer, then you're gonna think that the First Difference is greater, because the First Difference is greater in terms of individual deaths. But there is also another key value which one might have, and that is that extinction and the lost future that it entails is very bad. 
And of course, only the third of these outcomes, a 100% death rate, means extinction and a lost future.</p> <p>So only the Second Difference involves a comparison between an extinction and a non-extinction outcome. This means that if you focus on the badness of extinction and the lost future it entails, then you are gonna think that the Second Difference is greater.</p> <p>Parfit hypothesized that most people would find the First Difference to be greater because they're gonna focus on the individual deaths and all the individuals that suffer. So this is, in effect, a psychological hypothesis. But his own ethical view was that the Second Difference is greater. In effect, that means that extinction is uniquely bad, and much worse than a non-existential catastrophe. And that is because Parfit's key value was the lost future that human extinction would entail.</p> <p>So then, together with my colleagues, Lucius Caviola and Nadira Faber, at the University of Oxford, we wanted to test this psychological hypothesis of Parfit's, namely that people don't find extinction uniquely bad.</p> <p><img src="//" alt="1430 Stefan Schubert (8)"></p> <p>We did this using a slightly tweaked version of Parfit's thought experiment. We again asked people on different online platforms to consider three outcomes. But the first outcome wasn't peace, because we found that people had certain positive emotional associations with the word "peace" and we didn't want that to confound the results. Instead we just said there's no catastrophe.</p> <p><img src="//" alt="1430 Stefan Schubert (9)"></p> <p>With regard to the second outcome, we made two changes: we replaced nuclear war with a generic catastrophe, because we weren't specifically interested in nuclear war, and we reduced the number of deaths from 99% to 80%, because we wanted people to believe that it's likely that we could recover from this catastrophe. 
And then the third outcome was that 100% of people died.</p> <p>We first asked people to rank the three outcomes. Our hypothesis was that most people would rank these outcomes as Parfit thought one should, namely that no catastrophe is the best, near extinction is second, and extinction is the worst.</p> <p><img src="//" alt="1430 Stefan Schubert (10)"></p> <p>This was indeed the case: 90% gave this ranking, and all other rankings got only 10% between them. But then we went on to another question, which we gave only to those participants who had given the predicted ranking. The other 10% were excluded from the rest of the study.</p> <p><img src="//" alt="1430 Stefan Schubert (11)"></p> <p>We asked, "In terms of badness, which difference is greater: the First Difference, between no catastrophe and near extinction, or the Second Difference, between near extinction and extinction?" As you'll recall, Parfit's hypothesis was that most people would find the First Difference to be greater; finding the Second Difference greater means that extinction is uniquely bad.</p> <p><img src="//" alt="1430 Stefan Schubert (12)"></p> <p>And then we found that Parfit was, indeed, right. A clear majority found the First Difference to be greater, and only a minority found extinction to be uniquely bad.</p> <p>So then we wanted to know "Why is it that people don't find extinction uniquely bad?" Is it because they focus very strongly on the first key value, they really focus on the badness of individual deaths and individual suffering? Or is it because they focus only weakly on the other key value, on the badness of extinction and the lost future which it entails?</p> <p>So we included a series of manipulations in our study to test these hypotheses. Some of these decreased the badness of individual suffering, and others emphasized or increased the badness of a lost future, so they latched on to either the first or the second of these hypotheses. 
So the condition whose results I've just shown you acted as a control condition, and then we had a number of experimental conditions, or manipulations.</p> <p><img src="//" alt="1430 Stefan Schubert (13)"></p> <p>In total, we had more than twelve hundred participants in the British sample, making it a fairly large study. We also ran an identical study on a US sample, which yielded similar results, but I will focus here on the larger British sample.</p> <p><img src="//" alt="1430 Stefan Schubert (14)"></p> <p>Our first manipulation involved zebras. Here we had exactly the same three outcomes as in the control condition, except that we replaced humans with zebras. Our reasoning was that people likely empathize less with individual zebras; they don't feel as strongly about an individual zebra that dies as they do about an individual human that dies. Therefore, there would be less focus on individual suffering, the first key value; whereas people might still care pretty strongly about the zebra species, we thought, so extinction would still be bad.</p> <p><img src="//" alt="1430 Stefan Schubert (15)"></p> <p>So overall this would mean that more people would find extinction uniquely bad when it comes to zebras. That was our hypothesis, and it was borne out. A significantly larger proportion of people found extinction uniquely bad when it comes to zebras: 44% versus 23%.</p> <p><img src="//" alt="1430 Stefan Schubert (16)"></p> <p>In our second manipulation we went back to humans, but we changed the scenario so that the humans were no longer getting killed; instead, they couldn't have any children. And of course, if no one can have children, then humanity will eventually go extinct.</p> <p><img src="//" alt="1430 Stefan Schubert (17)"></p> <p>Here, again, we thought that people would feel less strongly about sterilization than about death. 
So then there would be less of a focus on the first key value, individual suffering, whereas extinction and the lost future that it entails would be as bad as in the control condition.</p> <p><img src="//" alt="1430 Stefan Schubert (18)"></p> <p>So overall this should make more people find extinction uniquely bad when it comes to sterilization. This hypothesis was also borne out: 47% said extinction was uniquely bad in this condition. Again, that was a significant difference compared to the control condition.</p> <p><img src="//" alt="1430 Stefan Schubert (19)"></p> <p>And then our third manipulation was somewhat different. Here we had, again, the three outcomes from the control condition, but after that we added the following text: "Please remember to consider the long term consequences each scenario will have for humanity. If humanity does not go extinct, it could go on to a long future. This is true even if many, but not all, humans die in a catastrophe, since that leaves open the possibility of recovery. However, if humanity goes extinct, there would be no future for humanity."</p> <p><img src="//" alt="1430 Stefan Schubert (20)"></p> <p>So the manipulation makes it clear that extinction means no future, while non-extinction may mean a long future. It emphasizes the badness of extinction and losing the future, and so affects that key value, whereas the other key value, the badness of individual suffering, isn't really affected.</p> <p><img src="//" alt="1430 Stefan Schubert (22)"></p> <p>So overall it should, again, make more people find extinction uniquely bad. Here we found a similar effect as before: 50% now found extinction to be uniquely bad. 
So in the salience manipulation, we didn't really add any new information; we just highlighted certain inferences which one, in principle, could have made even in the control condition.</p> <p><img src="//" alt="1430 Stefan Schubert (23)"></p> <p>But we also wanted to include one condition where we actually added new information. We called this the good future manipulation. Here, in the first outcome, we said not only that there is no catastrophe, but also that humanity goes on to live for a very long time in a future which is better than today in every conceivable way. There are no longer any wars, any crimes, any people experiencing depression or sadness, and so on.</p> <p><img src="//" alt="1430 Stefan Schubert (24)"></p> <p>So, really a utopia. The second outcome was very similar: here, of course, there was a catastrophe, but we recover from it and then go on to the same utopia. And the third outcome was the same as before, but we also really emphasized that extinction means that no humans will ever live again and all of human knowledge and culture will be lost forever.</p> <p><img src="//" alt="1430 Stefan Schubert (25)"></p> <p>So this was really a very strong manipulation. We hammered home the extreme difference between these three outcomes, which I think should be remembered when we look at the results, because here we found quite a striking difference.</p> <p>This manipulation, then, says that the future will be very good if humanity survives, and that we would recover from a non-extinction catastrophe. The manipulation thus makes it worse to lose the future, so it affects that key value. But the other key value, the badness of individual suffering, is not affected.</p> <p><img src="//" alt="1430 Stefan Schubert (26)"></p> <p>So, overall, this should make more people find extinction uniquely bad, and that's really what we found here. 
77% found extinction uniquely bad, given that we would lose this very good future.</p> <p>So let's sum up, then: what have we learned from these four experimental conditions about why people don't find extinction uniquely bad in the control condition? One hypothesis we had was that this was because people focus strongly on the badness of people dying from the catastrophes. And this is something that we find is true, because when we reduce the badness of individual suffering, as we did in the zebra manipulation and the sterilization manipulation, then we do find that more people find extinction to be uniquely bad.</p> <p>Our second hypothesis was that people don't feel that strongly about the other key value, the lost future; and we found some support for that hypothesis as well, because one reason why people don't feel as strongly about that key value is that they don't consider the long-term consequences that much. And we know this because when we highlight the long-term consequences, as we did in the salience manipulation, then more people find extinction uniquely bad.</p> <p><img src="//" alt="1430 Stefan Schubert (27)"></p> <p>Another reason why people focus weakly on the lost future is that they have certain empirical beliefs which reduce the value of the future. So they may believe that the future will not be that good if humanity survives. And they may believe that we won't recover if 80% die. And we know this because when we said that the future will be good if humanity survives, and that we will recover if 80% die, as we did in the good future manipulation, then more people found extinction uniquely bad.</p> <p><img src="//" alt="1430 Stefan Schubert (28)"></p> <p>More briefly, I should also present another study that we ran involving an "x-risk reducer sample." 
So this is a sample of people focused on reducing existential risk. We recruited this sample via the EA Newsletter and social media; some of you may actually have taken this test, and if so, I should thank you for helping our research.</p> <p>Here we had only two conditions: the control condition and the good future condition. We hypothesized that nearly all participants would find the Second Difference greater both in the control condition and in the good future condition. So nearly all participants would find extinction uniquely bad.</p> <p><img src="//" alt="1430 Stefan Schubert (29)"></p> <p>And this was, indeed, what we found. That's a quite striking difference compared to laypeople, where we found a big difference between the good future condition and the control condition. Among the x-risk reducers, we find that they find extinction uniquely bad even in the absence of information about how good the future is gonna be.</p> <p>So that sums up what I had to say about this specific study. Let me now zoom out a little bit and say some words about the psychology of existential risk and the long-term future in general. We think that this is just one example of a study that one could run, and that there could be many valuable studies in this area.</p> <p><img src="//" alt="1430 Stefan Schubert (30)"></p> <p>And one reason why we think that is that it seems that there is a general fact about human psychology, which is that we think quite differently about different domains. 
So one example of this, which should be close to mind for many effective altruists, is that we think, or people in general think, very differently about charitable donations and individual consumption; it seems that most people think much more about what they get for their money when it comes to individual consumption than when it comes to charity.</p> <p>Similarly, it may be that we think quite differently when we think about the long-term future compared to when we think about shorter time frames. It may be, for instance, that when we think about the long-term future we have a more collaborative mindset, because we realize that, in the long term, we're all in the same boat.</p> <p>I don't know whether that's the case. I'm speculating a bit here, but I think we have some prior, which should be quite high, that we do have a unique way of thinking about the long-term future. And it's important to learn whether that's the case and how we do think about the long-term future.</p> <p>Because, ultimately, we want to use that knowledge. We don't just want to gather it for its own sake. We want to use it in the project that many of you, thankfully, are a part of: to help create a good long-term future.</p> <h2>Questions</h2> <p><em>Question</em>: What about those 10% of people who kind of missed the boat, and got the whole thing flipped? Was there any follow-up on those folks? And what are they thinking?</p> <p><em>Stefan</em>: I think that Scott Alexander at some point had a <a href="">blog post</a> about how you always get this sort of thing on online platforms. You always get some responses which are difficult to understand, so you very rarely get 100% agreement on anything. I wouldn't read too much into those 10%.</p> <p><em>Question</em>: Were these studies done on Mechanical Turk?</p> <p><em>Stefan</em>: Yeah, we run many studies on Mechanical Turk, but the main study that I presented here was actually run on a competitor to Mechanical Turk called Prolific. 
And then we recruited British participants; on Mechanical Turk we typically recruit American participants. As I said at one point, we also ran a study on American participants, essentially replicating the study that I just presented. But what I presented concerned British participants on Prolific.</p> <p><em>Question</em>: Were there any differences in zebra affinity between Americans and Britons?</p> <p><em>Question</em>: Did you consider a suffering-focused manipulation, to increase the salience of the First Difference?</p> <p><em>Stefan</em>: That's an interesting question. No, we have not considered such a hypothesis.</p> <p>I guess in the sterilization manipulation there is substantially less suffering involved. What we say is that 80% can't have children and the remaining 20% can, so there seems to be much less suffering going on in that scenario compared with the other scenarios. I haven't thought through all the implications of that, but it is certainly something to consider.</p> <p><em>Question</em>: Where do we go from here with this research?</p> <p><em>Stefan</em>: Yeah, I mean, one thing I found interesting was, as I said, that the good future manipulation is very strong, so it's not obvious that quite as many would find extinction uniquely bad if we made it a bit weaker.</p> <p>But that said, we have some converging evidence for those conclusions. We actually had one other pre-study where we asked people more directly, "How good do you think the future will be if humanity survives?" And we found that they thought the future is gonna be slightly worse than the present. That seems somewhat unlikely on the basis of the first graph that I showed, which suggests the world has arguably become better. 
Some people don't agree with that, but that would be my view.</p> <p>So, in general, one thing that stood out to me was that people are probably fairly pessimistic about the long-term future, and that may be one key reason why they don't consider human extinction so important.</p> <p>And, in a sense, I find that that's good news, because this is just something that you can inform people about. It might not be super easy, but it seems somewhat tractable. Whereas if people had some sort of deeply held moral conviction that extinction isn't that important, then it might have been harder to change people's minds.</p> <p><em>Question</em>: Do you know of other ways to shift or nudge people in the direction of intuitively, naturally taking into account the possibility that the future represents?</p> <p><em>Stefan</em>: Yeah, there was actually someone else who ran more informal studies where they mapped out the argument for the long-term future. It was broadly similar to the salience manipulation, but with much more information, and ethical arguments as well.</p> <p>And then, as I recall, that seemed to have a fairly strong effect. So that falls broadly in the same ballpark. Basically, you can inform people more comprehensively.</p> the-centre-for-effective-altruism MzC74SFwQNW4XCoQm 2019-03-25T14:45:23.357Z Vicky Bond: How Corporate Reform is Changing the Landscape for Animals <p><em>By applying pressure to meat-producing companies, it’s possible to score huge wins for farm animal welfare. Publicly visible protests, directly contacting corporations en masse, and passing legislation are all good methods, and they work even better in concert. In this talk from EA Global 2018: London, Vicky Bond provides an overview of strategies to achieve corporate reforms for animals.</em></p> <p><em>A transcript of Vicky's talk is below, which CEA has lightly edited for clarity. 
You can also watch this talk on <a href="">YouTube</a>, or read the transcript at <a href=""></a>.</em></p> <h2>The Talk</h2> <p>Today, I'm gonna be presenting on how corporate reform is changing the landscape for animals. But before I do, I'm just gonna give you a little bit of background about The Humane League. The Humane League was founded back in 2005 in Philadelphia, and started off very much as a grassroots activist organization, just a couple of people. We've now grown to over 80 staff.</p> <p><img src="//" alt="Bond 1"></p> <p>We are now in other countries. In Mexico, we have a team. We have a team here in the UK; we've been here for a couple of years now, and we recently registered as a charity. We're also based in Japan, which is the hub for many food businesses in Asia. Four or five years ago now, The Humane League homed in on getting hens out of cages.</p> <p>It has been work with corporations, getting corporate commitments, that's been helping make change for hens. Other organizations have been doing more positive outreach. We took the approach that we would have discussions with companies, but if those discussions didn't result in any kind of commitment, we would launch campaigns against these companies to get them to end cages for hens by 2025. We, along with a number of organizations, such as Mercy For Animals, Animal Equality, and HSUS, to name a few in the US, have been working on this. In doing so, we've got over 300 companies in the US to commit to go cage-free by 2025. A similar approach has been taken here in Europe. A number of organizations are working on this here, too, like Open Cages, L214, and ourselves. Compassion in World Farming had been working a lot in the UK prior to us coming in 2016, and have been making progress in getting cage-free commitments.</p> <p><img src="//" alt="Bond 2"></p> <p>Now, here in the UK, all major companies have committed to go cage-free. This is the battery cage. 
This is what we're talking about. Here in Europe, we banned this in 2012. However, in other countries, pretty much around the world, this is the system in which hens will spend their lives. They'll be in these cages for a year and a half, with no more space than an iPad for each hen to live in. They can't spread their wings. They are only given food and water. When we banned this in Europe in 2012, we moved to the enriched cage. Now, the enriched cage was meant to be a step up and, in some ways, there is a degree of improvement. The birds have a small area where they can lay their eggs. You can see the orange flaps; that's where they can go when they want to lay their eggs, because they prefer to be in a dark, enclosed space. That's not quite what that is, but it is better than the barren cage.</p> <p>They want to perch up high at night to feel safe. The enriched cage does have some perches. Unfortunately, those perches are very close to the floor of the cage, and the birds are on wire for their entire lives. There's a small scratching area, but again, they can't really dust-bathe or forage in it properly. That's why we pushed for cage-free. Now, the system pictured here is more intensive than these systems typically are. These barn systems do provide more for the hens. They're free to move around the barn. They have litter on the floor in which they can dust-bathe and forage. They have perches on which they can perch up high at night. You can't see it here, but inside there are nest boxes, which are dark, so the hens have a secluded place to lay their eggs. In terms of the welfare potential of systems, free-range would of course have the highest, allowing birds outside to forage even more naturally, to have natural light, et cetera.</p> <p><img src="//" alt="Bond 3"></p> <p>Through these approaches, we've been working with other organizations to get major companies to make commitments. 
That includes Compass, which is a major food service company; manufacturers like Kellogg's; all the major fast food chains; and also the supermarkets. These commitments aren't just in one country; they span Europe and, in some cases, the globe. How do we know this approach is working? Well, this is Chad Gregory. Chad is president of UEP, United Egg Producers.</p> <p><img src="//" alt="Bond 4"></p> <p>He helpfully told us: "They're the ones that are driving this. There is no question about it. Chaos, market disruption, and just complete lack of control." There we are. We can also look at the figures. If you look back to 2010, before this corporate campaigning happened, the numbers were really stagnant, going up very little every year. In 2010, we had 4.4%. By 2017, we were at 15.6% of hens in cage-free systems.</p> <p><img src="//" alt="Bond 5"></p> <p>Actually, with the USDA reporting month on month, we were up to 17.9% as of last month. So we're beginning to see change happening right now, in real time. Here, this is the UK.</p> <p><img src="//" alt="Bond 6"></p> <p>There was a lag period between 2012 and 2016, where there really wasn't a huge amount of change in getting hens out of cages. Come 2016, we started getting the rest of those major companies to commit. We worked very much on getting Noble Foods, the largest egg producer in the UK and one of the largest in Europe, to commit to go cage-free as well. We now see free-range moving up and the enriched cage moving down. Before that, it had been pretty much plateaued for a while. We talk about numbers: hens suffer in very large numbers, and they also suffer for a prolonged period.</p> <p>These birds will be in cages for all their lives, and they'll be slaughtered at around 70 to 90 weeks of age. The suffering would be prolonged even if there weren't vast numbers. 
And yet the numbers are very large: 38 million in the UK, 320 million in the US, nearly 400 million in Europe, and 7.6 billion worldwide.</p> <p><img src="//" alt="Bond 7"></p> <p>The numbers are big, and the suffering is prolonged. That's why we have the Open Wing Alliance. We need to tackle these 7.6 billion hens, the majority of which are in cages, and get them out. The Humane League initiated the Open Wing Alliance, bringing together members from around the globe to form a unified front to get hens out of cages. As organizations, we share campaign strategies, tactics, and resources around the world. We have 59 organizations in 57 countries now. You can see here, in black, where we are.</p> <p><img src="//" alt="Bond 8"></p> <p>Month on month, new organizations are joining. We give grants to these organizations in areas where they might not otherwise get monetary support. This allows activists on the ground to begin work in their countries, where there just hasn't been the monetary support to allow them to do corporate campaigning or any real farm animal campaigning. We also know that the industry is paying attention. In New Zealand, this is from the Weekly Times.</p> <p><img src="//" alt="Bond 9"></p> <p>In New Zealand, they banned the barren battery cage and required the enriched cage, as we did here in Europe. But the industry said, "Don't bother. Let's learn from what's happened in Europe. These European producers are now having to go into cage-free already. They've only just converted to enriched cages. Really, the global shift is happening. It's come to New Zealand. It will come, and we might as well go cage-free."</p> <p>South Africa has also put out in the industry magazine: "The cage-free revolution is moving rapidly through the world. South African egg industry should make sure they're prepared to accommodate that change."
As this has been going on worldwide, organizations are now working on cage-free in countries like the United States, and here in Europe, in the United Kingdom and Sweden, for instance.</p> <p>We're now shifting to look at chickens raised for meat, or broiler chickens, as they're known by the industry. The number of these animals is vast: one billion in the UK (we're the second largest producer in Europe, second only to Poland), 8.5 billion in Europe, around the same in the US, and over 65 billion worldwide.</p> <p><img src="//" alt="Bond 10"></p> <p>These birds account for nearly 95% of all land animals being produced for food. While the length of time that they're on this planet, six to seven weeks, is pretty short, the numbers are vast.</p> <p><img src="//" alt="Bond 11"></p> <p>Standard, intensive chicken rearing very much looks like this. It's a barren barn. The chickens are on litter. When they're younger, they have a bit more space because they're smaller, but there are tens of thousands of birds in one single shed. They start like this. As they grow, they have a lot less space. They're in this until they're slaughtered at six to seven weeks. In that time, they'll suffer from conditions like painful leg problems and metabolic diseases. Because their legs are painful, they don't want to move around so much. They sit down on the litter a lot. This can lead to blisters from the ammonia in the litter that they're on the whole time. In fact, they grow six times faster than they would have back in the 1950s. They've been selectively bred to produce a much larger breast muscle.</p> <p>It becomes very evident when you look at them that the industrial breeds of broiler chickens are much worse off. The stature of the birds has changed: no longer standing really upright, but having to widen their stance to accommodate the larger breast. This change in stature means that the birds tilt forward.
It shifts their center of gravity and makes it harder for them to walk. Their skeleton is under a lot of pressure because it has to grow so rapidly.</p> <p><img src="//" alt="Bond 12"></p> <p>Here's a research project going on in the US at Purdue, which shows the lethargy these birds suffer. The red birds, which are a welfare breed, are moving around. There's lots of activity. But if you look at the white birds, the majority are sitting down. If they're not sitting down, they're at the food or the water.</p> <p>Of course, these birds grow so quickly that they need to be eating continuously. When they're not eating, all they're doing is resting. Now, these birds still have the same mental capacities as the other birds. It's just that, unfortunately, they're trapped in their own bodies and unable to behave how they would choose. At six to seven weeks of age, they will be taken to the slaughterhouse. Typically, that's a waterbath system. This means the birds are hung upside down by their legs. They go through a tank of water with an electric current that runs from their heads to their feet.</p> <p><img src="//" alt="Bond 13"></p> <p>That should give them an electric shock to make them unconscious before their throats are cut. However, unfortunately, the system was made for speed and not for welfare. We're talking about 140 to 180 birds going through this process every minute. The system often doesn't give these birds a strong enough electric shock to make them unconscious. In fact, many of them are still conscious when their necks are cut. Those whose necks are cut poorly may also make it through to the scalding tank.</p> <p>This is to remove the feathers, but if they're still alive, they will obviously experience the scalding tank fully or partly conscious. In the US, that's estimated to happen to around 2% to 3% of birds.
Actually, the Trump administration has also just allowed line speeds to increase even further. There is an alternative, controlled-atmospheric stunning, which uses either carbon dioxide or inert gases to induce unconsciousness; the birds die before their throats are cut. This means that they don't have to be handled in the same way as in the waterbath system, where they're hung up by their legs, which is painful. The legs are weak, as we were saying, and because they don't have a diaphragm, their insides crush their lungs. By keeping them in their crates instead of handling them, they don't have to experience this. With controlled-atmospheric stunning, they become unconscious and are then killed, versus a stun which may actually just be an electric shock before they experience their neck being cut.</p> <p>There are a lot of things that we can do to improve the welfare of broilers. In the US and in Europe, animal welfare specialists from different animal protection organizations came together and listed criteria to improve the welfare of meat chickens. These include changing to a higher-welfare breed, so that the birds' legs are healthier and they don't suffer from so many metabolic diseases, for instance. We also want to increase space. These birds want to move around, and they need space in the shed, so we ask for lower stocking density and for enrichment in the shed, such as pecking materials like straw bales to keep the birds active. We ask for improved lighting, with natural light here in Europe. And we insist on controlled-atmosphere killing, which means the birds won't have to go through the waterbath system. We also want to make sure that companies actually do this.
For that reason, we're asking for third-party auditing, and for the companies to report year on year on what progress they're making towards this.</p> <p><img src="//" alt="Bond 14"></p> <p>Over in the US, they've been doing this for a year or so now. In doing so, they've got many major companies, over 95 now, to commit to this standard. They're committed to improving the welfare of the chickens in their supply chains by 2024. Here in Europe, we're just beginning with this, in the UK, France, and Germany, for instance. In doing so, we've got Elior Group and Nestle to commit to the European Chicken Commitment, and that's expanding over the whole of Europe. Some of these other ones are UK brands that I'm sure you recognize. Now, the most recent target for us, as animal protection organizations, is McDonald's. McDonald's is the world's largest restaurant chain by revenue.</p> <p>They serve 70 million people every day, so we know that this is a brand that people recognize. It's a brand that needs to change. In the US, a coalition of organizations has come together: Animal Equality, HSUS, Compassion Over Killing, Compassion in World Farming, and us. We have launched a campaign over there. Here in the UK, as you can see, we've started campaigning as well. That includes protests, handing in petitions, engaging people in the streets with the issue, and also doing protests outside of restaurants. In the US, they've also got advertising in Times Square.</p> <p>Now, volunteers do take to the streets, but we also have another way of trying to get the attention of these companies: through their social media, and by making contact with the companies themselves.</p> <p><img src="//" alt="Bond 15"></p> <p>We have something called a Fast Action Network. Other organizations have something similar that you can join.
You'll get emails asking you to take a couple of minutes of your time to take action against the companies. If you would like to join up, you can just go to the website and find the Fast Action Network. This works by having a high volume of people contacting the companies. It really engages the companies and keeps reminding them that they need to be working on this. Of course, they worry about their brand, so it's important that we make sure we're heard.</p> <p>Now, while getting these welfare standards through institutional change is making a big impact, we also know that replacing animal products with plant-based products, or maybe one day cultured meat, is the future: one where we can be sure animals don't suffer. In reality, it makes sense for companies as well. There are far fewer issues with plant-based products when you think about antibiotic problems, or greenhouse gases, or carbon footprint, for instance. With these cage-free commitments, as an example, companies are actually seeing how difficult it is to go down their supply chains and find out whether they're actually using cage-free eggs. So instead, they're asking: how can we take this out completely and replace it with something plant-based?</p> <p>It's much simpler for them. We know that this is working, and that the plant-based products are much higher welfare. They're also gonna be helping reduce suffering year on year. It's not just these commitments from companies that we're gonna be working on. We really need to follow through and make sure these commitments come to fruition. But there's also another way we can do that, and that's through legislation.</p> <p><img src="//" alt="Bond 16"></p> <p>In the US, they've been working on state legislation. They have Prop 2, and they have Prop 12 in California, which will be voted on just a few days from now. That one ends cages for hens, for pigs, et cetera.
It covers not just what's happening in the state, the production, but also imports coming into the state.</p> <p>California makes up 10% of the US population, so it's not a small thing. It will impact the whole industry. Over here in Europe, we have something called the European Citizens' Initiative that's being run by Compassion in World Farming. In fact, they've got organizations in all the European countries to work on this and collect signatures to end cages for hens, ducks, quails, rabbits, pigs, and calves. That work is gonna take a year of gaining a million signatures, but should then be pushed through to the European Parliament. Really, this approach brings together corporate campaigning, plant-based alternatives, and finally, legislation prohibiting these systems completely.</p> <p>We're now really beginning to see the landscape for farmed animals changing. We're already seeing hens coming out of cages. We're already beginning to be able to push through legislation. Soon, we'll be able to say we are actually beginning to reduce the suffering of billions of animals every year.</p> <h2>Questions</h2> <p><em>Question</em>: To what extent are companies making changes because they feel public pressure is really pushing them, versus legislation in their country changing?</p> <p><em>Vicky</em>: Yeah, so there are companies that are savvy and realize that actually this makes sense, and that they can be the first. They can shout about it and use it in their favor to say that they care about welfare. But for many companies, unfortunately, that's not the case. That's why we have to launch these campaigns: highlighting what companies are doing, the cruelty in their supply chains, which they don't want to be associated with. It's a mixture of both.
There are some companies that can see it, and there are the companies, the majority, unfortunately, that need some kind of public pressure and some kind of awareness before they'll make the change.</p> <p><em>Question</em>: You said that for some companies, switching to plant-based is actually cost-saving in the long term. Do you have any idea how long it takes for them to use an alternative to chicken products profitably?</p> <p><em>Vicky</em>: I don't know. There are companies looking to take out a proportion, like 20 or 30%, and replace it with plant-based, so that they can reduce the amount of meat that they're using. With the eggs, it's probably proven quicker for them to do it. Some supermarkets will have 2,000 or 3,000 ranges with ingredients that include eggs. That's an incredible number of lines to be going through. For them, it's easier if they can just say to these companies, "Actually, I want you to replace it with this," but I don't have any strict timelines for what that will look like, unfortunately.</p> <p><em>Question</em>: What makes a corporate campaign more likely to succeed? Are there some basic principles across countries? Or does it really depend on where you are?</p> <p><em>Vicky</em>: That's a really great question. It does depend on where you are. In America, legislation is such that you can do quite a lot that you couldn't do here in Europe. You can do more things that maybe the public doesn't see; you can campaign almost internally to the companies. Here in Europe and within countries in general, it depends. We found, for instance, that in the UK it's been really effective for us to make them see us day in, day out outside their headquarters. There's a finance manager or whatever who's feeling bad because they're involved with this company that is actually enabling cruelty. They probably haven't thought about it to that extent before.
But you're standing outside their headquarters and saying, "This is the headquarters of cruelty," which is what we did for Noble Foods. That bothers them. They begin to think about it. Every country has a different approach based on what's effective for them, but certainly, getting out there and doing a kind of silent protest, being noticed by the companies day in, day out, makes a big difference.</p> <p><em>Question</em>: You focused on chickens' welfare in your presentation. Can you speak a little bit more about similar campaigns for other farmed animals? I'd be particularly interested in fish, if there's a campaign in that area.</p> <p><em>Vicky</em>: Sadly, there isn't really. There's World Fish-Free Day, I think it's called, or something like that. That's happening in March next year. That will be a big thing. I think most organizations will talk about that. But I know that Compassion in World Farming are beginning to start campaigning on the humane slaughter of fish. That's up and coming, which is really exciting, and it'll probably start expanding in Europe first.</p> <p><em>Question</em>: As a final question, if someone wants to go into corporate campaigning, what's a good way for them to get involved?</p> <p><em>Vicky</em>: Oh, great. Well, putting yourself out there as an activist, starting to volunteer, and learning the tactics that we use is really helpful, because then you can also go to the companies, say you were involved in those tactics, and they'll know exactly what's gonna happen. Also, if you've got a sales background or that kind of thing, that attitude can work really well when going and talking to corporations.</p> the-centre-for-effective-altruism s4hXvWRakRMXdkuhx 2019-03-22T14:56:46.924Z Comment by The Centre for Effective Altruism on Carolyn Henry: Eliminating Parasitic Worm Infections <p>Thank you for catching the typo!
Would that it were true.</p> the-centre-for-effective-altruism o6RHydMjA6trZrrPQ 2019-03-20T02:07:29.695Z Carolyn Henry: Eliminating Parasitic Worm Infections <p><em>Schistosomiasis affects about a quarter of a billion people worldwide, particularly people living in some of the world’s poorest countries. In this talk from EA Global 2018: London, Carolyn Henry of the Schistosomiasis Control Initiative talks about SCI’s work against the disease, the value of buy-in from the recipients of their aid, and the importance of mentoring local government officials.</em></p> <p><em>A transcript of Carolyn's talk is below, including questions from the audience, which CEA has lightly edited for clarity. You can also read the transcript on <a href=""></a>, or watch the talk on <a href="">YouTube</a>.</em></p> <h2>The Talk</h2> <p>Eight years ago, I started working for Médecins Sans Frontières, and I was placed in Nigeria. I was super excited to go out for the first time, to feel like I was doing good firsthand as a nurse in a very remote clinic in Zamfara State, which is near the border with Niger. The project was for children under five, and there was a great need there, because at the time there had been an environmental disaster that had resulted in lead poisoning, causing thousands of children to die and a lot more to be developmentally challenged. On top of that, there was the usual malaria, with its massive problems, and also outbreaks of cholera and annual outbreaks of meningitis. So there was a huge burden of poor health and mortality in the area. I turned up to manage the local hospital and the outreach clinic there, and was expecting people to be as excited to receive the treatment as I was to give it. But it was actually quite difficult to get people to come to the clinics. They were a bit suspicious even of the world-class meds that had been specially produced for this lead poisoning.
They were quite suspicious of that, and also obviously suspicious of us coming in from the outside to their very rural community.</p> <p><img src="//" alt="1600 Carolyn Henry"></p> <p>And this came to the forefront of my mind when I was sent to an outreach clinic, where we were looking to see if we could expand the scope of the project. We went to see the local health center, which as you can imagine was very resource-poor. It was not providing a high-quality service to that community. But when we were there that day, there was also a traditional medicine man who was traveling through the village. People were queuing up to pay quite a large proportion of their income for herbal teas, which they believed would cure them, rather than take the evidenced, effective medicines that we could give them for free. This really struck me: how important it was not just for me to know the evidence and the effectiveness of the medical care, but for that community to know it too, and for us to empower them to ask for that treatment themselves, and to get the medical care they deserve.</p> <p>Fast forward a few years, and I now work for the Schistosomiasis Control Initiative. I'm a Senior Program Advisor, mostly working in Ethiopia and Tanzania. SCI has just gone through a strategy change. We've got a new strategy out this year. And now we are really focusing on that exact point: how do we empower local communities? Not just delivering our signature cost-effective treatment program, but also articulating more about how we do that, and how we can empower not just national governments to take ownership of their projects, but local communities as well.</p> <p>I'm going to just talk a bit about our strategy and our challenges.
We'd love some feedback from the EA community, and input to make our projects as good as possible.</p> <p><img src="//" alt="1600 Carolyn Henry (1)"></p> <p>For those who are not familiar with SCI, we work in 15 Sub-Saharan African countries. We deliver treatments for parasitic worm infections, mostly focusing on schistosomiasis and soil-transmitted helminths, or schisto and STH for short. We deliver those programs through national governments, so we don't directly implement, but rather support national governments to deliver their own programs through their existing health systems. In our new strategy, we're really trying to better articulate our approach: how we do things, not just what we're able to achieve at the end of treatment.</p> <p><img src="//" alt="1600 Carolyn Henry (2)"></p> <p>One of the big things we want to emphasize, though we've maintained them for a long time, is our partnerships: collaborations with other sectors like water, sanitation, and hygiene, and with overlapping sectors like education and nutrition. But we also really want to show how we're working through partnerships with local governments to make our work as effective as it can be. We are also looking at our own processes and procedures, making them as effective as possible internally, but also making them accessible to the countries and governments with which we work, so they can have their own embedded knowledge management systems, adopt some of the approaches that we use as standards, and learn procedures they might not otherwise have known about. So we're really keen to put some innovation into each country's programs.</p> <p>We're also trying to keep a sustainability element. Schistosomiasis in particular is not going to go away very quickly. We're going to have to work for many years at delivering the mass drug administration programs.
And then after that, there still needs to be a health system in place that can sustain the surveillance, so that these diseases don't just come back again. It's not good enough to treat until the rate goes down enough, because it will just come back up without proper water and sanitation, education, and behavior change. We need to make sure that local health systems are strong and robust enough to sustain the programs now, but also to do the necessary surveillance in the future.</p> <p>Finally, we are always evidence-based. We're not only gathering evidence about the treatments for parasitic worm infections; we also want to evidence the approach that we use, to prove that it is a very effective method, and to understand more about how we're connecting to existing health systems and strengthening them from the inside. We want to understand that, and to be able to articulate it better.</p> <p>Here's a case study from Ethiopia, where SCI mostly works. What happens at the beginning of the year is that the WHO, through a drug donation program, delivers the drugs so that the country teams can distribute them all over the country. We also get funding for the program from a variety of sources; SCI is just one of those sources for the Ethiopian government. This year they've received hundreds of millions of tablets in total, and the schisto and STH programs alone cost over $6 million.</p> <p><img src="//" alt="1600 Carolyn Henry (3)"></p> <p>So what we do at SCI is support the government at the national level. Thinking about the size of that program, there's only one person in the Ministry of Health who is responsible for the schisto and STH programs. You can imagine what it takes for him to try to initiate, sustain, and maintain a program of that scale on his own. It's a lot. And he's been assigned to that position, rather than selected for his knowledge of Neglected Tropical Diseases (NTDs).
So we are there to support, mentor, and coach in all areas of the program, including leadership, where we're talking about advocacy and how to mobilize staff at lower levels. We're training people, and thinking about how they can train others in turn. We're doing drug distribution and procurement: making sure they have a good inventory process, where the drugs arrive safely and on time. We make sure that the drugs are well stored for the people who need them, and figure out how to mobilize the community to make sure they come, and are aware of the drug distribution on that particular day.</p> <p>And lastly, monitoring and evaluation. We're really making sure that our beneficiaries understand the impact of the processes and of delivering those drugs. And then that relies on a cascade. In Ethiopia, that one person at the national level will train the nine regional NTD coordinators. They cover all the neglected tropical diseases (eight are endemic in Ethiopia), plus they'll have to do malaria programs and TB programs. So you can imagine it's a huge volume of work they have to do. We need to make sure that the packages are very easily accessible and understandable, so they can pick them up and deliver a very good program that's safe. They will then train the district-level coordinators. We are working in over 550 districts in Ethiopia, and those coordinators will then train the health extension workers.</p> <p>So this year we're doing over 22,000 health extension worker trainings, as well as training schoolteachers for awareness so that they can support the health extension workers. That's all to deliver over 8 million treatments for schistosomiasis this year, and 15 million for soil-transmitted helminths. And then we need to make sure that they can monitor and evaluate, though actually the impact of that monitoring and evaluation is more about accountability.
So they understand that they can go to the funders, including us at SCI, and report back to show what they have managed to achieve, and also really take charge and ownership of that program, because they feel that buy-in when they see that things have actually improved. So we use this program cycle to help ensure there's ownership and accountability.</p> <p><img src="//" alt="1600 Carolyn Henry (4)"></p> <p>So, of course, our ultimate aim is the elimination of parasitic worm infections. But it's not as simple as just sticking to the WHO coverage target of 75% for school-age children. We actually need to go above and beyond. So we're looking now at how we can reach those hard-to-reach children, the ones that don't go to school and so won't receive the medicine through the school-based platform. We want to make sure that the children who are in refugee camps, who are nomadic, or who maybe have to work from a very young age and are out doing agricultural field work, are still able to get those tablets every year on the de-worming date. But alongside disease elimination, we really want to make sure that there are strong health systems along the way. We don't want to interrupt the health system by taking over or interjecting. We want to make sure that all the teams are learning alongside us through the whole journey, so that institutional learning will persist in these national health systems. And we need to make sure that they're robust and resilient.</p> <p>These countries often face so many disasters. For example, Liberia had to stop their program for a time because of Ebola. We managed to quickly engage them again, but we need to make sure that if there's flooding, natural disasters, or other disease epidemics, countries can continue their de-worming programs from now on. We also need to make sure that it's not always a standalone, externally funded program.
We need to make sure that we can keep health programs at a size where they are manageable and economically viable, so that the programs can then transition into the mainstream health system. That's more likely to be the case when we perform a smaller proportion of treatments annually, and definitely when we get into the surveillance stage, because the health system will need to maintain that surveillance level on its own. So this leads us to some of our challenges. We're really hoping that governments will think about their investment in NTD programs in general, and in schistosomiasis and soil-transmitted helminths specifically.</p> <p><img src="//" alt="1600 Carolyn Henry (5)"></p> <p>We want to make sure that there's the political will to engage with these programs, that people understand the complexity of the health burden on young people, and the impact they can have by taking very simple measures. So we want to create the political will, but we also want to make sure that it comes not only from us externally, but also from the communities themselves. We want to make sure that those communities will be able to create demand on their own. If a train is late in the UK, people get on Twitter; they demand a refund. That's a strong public demand for quality service. But in the communities where we work, they don't even have any expectation of a health service, let alone a quality health service.</p> <p>So we want to make sure that the community knows what quality is, and what they deserve from a health system. And then we want to make sure that their health systems have the capacity not only to deliver the de-worming program now, but also as circumstances change over the years. We may need reassessments, we'll need to evaluate the data, and we'll need to make sure that we work alongside other NTD-control efforts, and also alongside the water and sanitation sectors.
So we need health service teams to have the capacity to do that.</p> <p>Now, our challenge is: how do we transition from being a very cost-effective organization that's horizontally scaled to deliver huge numbers of treatments with quite a small team, but with a big impact, to an organization that's embedded vertically in the health system, making sure we reach all those hard-to-reach children and eliminate the disease? How do we make sure that the health system is strong enough, and that we've helped build the necessary capacity? So our challenge now is to think about how we can effectively measure, and also articulate, the value add of our approaches and our processes.</p> <h2>Questions</h2> <p><em>Question</em>: What is schistosomiasis? What happens when you get it?</p> <p><em>Carolyn</em>: Yeah, so it's a parasitic worm. When children have the disease, it can be either in the bowel or the bladder. There are two different types, but they're both treated with the same medicine. The way that it's transmitted is through the life cycle of that particular worm, which passes through freshwater lakes. If a person goes into the lake, there might be some snails in the lake, and they produce what we call cercariae. The little cercariae are so small that they can just slip in through the skin, and then they go through the system and enter either the bowel or the bladder. The burden of disease is seen mostly in school-aged children, because they're smaller, so they feel the effects more strongly. It can produce a variety of different conditions.</p> <p>Some of them you might not see initially. That's one of our challenges: people sometimes don't know that they're ill, or don't immediately see the consequences of the disease, so they're not thinking, "I want to go and take medicine for that." The impact of the disease can actually lie dormant for quite some time.
And the way it goes back into the lake is through open defecation, or through being passed from either the urine or the stool back into the lake. The snail is what we call an intermediate host. Then it produces the cercariae, and the cercariae go back into the skin. So things like access to water make schistosomiasis very different from other parasitic worms. It's not as easy as saying, "get some toilets," for example, because if the cercariae are in the water, and you've got nowhere else to wash yourself or your clothes, or if your livelihood is fishing, then you're always going to be contaminated by that water.</p> <p>So it's quite a challenging problem in terms of the causality, but also the burden of disease, which is not seen immediately, and so won't initially cause people to go and get access to medical treatment.</p> <p><em>Question</em>: So ultimately, neither medicine nor sanitation alone can really solve the problem?</p> <p><em>Carolyn</em>: Yeah. So we sort of think about it like a three-legged stool. There's a part about these snails as vectors, and thinking about intermediate host control. So the question is whether there's anything we can do in terms of the lakes and understanding the snail as that vector.</p> <p>And then there's also the part about water and sanitation, but it's not as simple as just having a toilet or a little bit of water. It would have to be on such a scale that people wouldn't need to have contact with the water at all, or they would be able, for example, if they're fishermen, to protect themselves against that risk and understand the risks well enough to wear certain boots to protect themselves from the water.</p> <p>And then there's mass drug administration, which is the most cost-effective way to control the disease as a public health problem. But when we're moving toward elimination of the disease, yes, you're right.
You have to think about everything together.</p> <p><em>Question</em>: I was reading a little bit on Wikipedia just as you were talking, and this affects 250 million people annually. This is with your organization working on it, and presumably others?</p> <p><em>Carolyn</em>: Yeah, exactly. And they're all in the poorest countries. So at the moment we're aiming for control of the disease.</p> <p>So people are still affected, but they wouldn't have the burden of the disease, because every year they'll get treatments through their schools. So we are bringing down that burden of disease. But in our new strategy, obviously, we know that ultimately it's not enough just to control; we're now pushing toward elimination. So that's where you have to think of that extra mile: how do you go beyond control? The WHO recommends 75% coverage of all school-aged children to control the burden of disease, but when we're thinking about elimination goals, it shoots up to above 90% coverage. So that means we need to be thinking of those hard-to-reach children. And actually, we don't really know where they all are; they're not in a census. So we don't know exactly how many we're talking about, or where those children are. That's a lot of the research we're doing at the moment.</p> <p><em>Question</em>: Has 75% coverage been accomplished? Is it the reality in a lot of the places where you're working?</p> <p><em>Carolyn</em>: That's what the WHO and all the evidence shows: that 75% coverage is necessary for control of the disease. However, like any evidence, you could ask, "How do you know it's 75%?", since maybe you don't have an accurate census to make sure your denominator is right. So it's like any research; you can find ways to argue the point. But definitely, the overwhelming pool of evidence out there shows that there's a great impact on people's lives from reaching 75% coverage.</p> <p><em>Question</em>: What does the team look like?
What are you doing day to day? If we were to observe your work, what would we see, and what sort of skills do you feel are missing from your team that you would love to add to extend your work?</p> <p><em>Carolyn</em>: Yeah, that's interesting. At the moment we have four main teams. I'm in the programs team, and we are program advisors. We each have two or three countries that we go out to regularly. For example, I sometimes go to Ethiopia for about a week once a month, or I go to help with training, or at specific times when the drugs are being given out. So day to day, we're either in the office catching up with the rest of the teams, or we are out in our countries; it's maybe about 30% travel time for most of us. Then there's the monitoring, evaluation, and research team, which has biostatisticians, social scientists, and an economic advisor, a value-for-money officer. They support the country programs mostly through us, the program advisors, and they help with the statistical analysis of the data that we bring back, particularly for impact reports and coverage validation. But they don't just do it for us; they also travel to the countries to help us train in-country teams, so that eventually those people can do it themselves and build that skill. And then we have a finance team to support us, and also a communications team to help articulate our work. So we're at about 25 people total.</p> <p>In terms of more skills that we would like: I think we're at a really interesting point where we could scale a lot, and that's evidenced when GiveWell does their assessments; GiveWell ranks us second among its most cost-effective nonprofits. When we're looking at roles, I think we could go in so many directions, but we're at a tipping point: we've got the core team of roles and people that we need, but at different points we are thinking more about the social science aspect.
So we've got one social scientist, but we could do so much in that field. It's also thinking about whether we collaborate with or bring in consultancy from WASH (water, sanitation, and hygiene) experts, to help us think about that a little more. But mostly I think we're in a really nice position, where the community that works on neglected tropical diseases is very good at collaborating. So often we do get skills just through collaboration with researchers or different organizations.</p> <p><em>Question</em>: If I understood correctly, as an organization, you're trying to make a shift from a pretty narrow, well-defined set of programs that you're supporting to an institution-building type of challenge. Is that right?</p> <p><em>Carolyn</em>: I think that rather than a shift, it's more like an organic movement that's probably been happening from the beginning. It was something we were always known for: the program advisors, including people before me, have had very good relationships with the governments that we work with. So actually, I guess rather than a shift, it's more just thinking, "Actually, this is what we do, and it works really well. So how do we evidence it? How do we articulate it, and how do we measure it, to really encapsulate what the whole program is?" Yeah.</p> the-centre-for-effective-altruism xHtAGt6uEZ9aqkrFP 2019-03-18T14:57:47.792Z Eric Drexler: Paretotopian Goal Alignment <p><em>What if humanity suddenly had a thousand times as many resources at our disposal? It might make fighting over them seem like a silly idea, since cooperating would be a safer way to be reliably better off. In this talk from EA Global 2018: London, Eric Drexler argues that when emerging technologies make our productive capacity skyrocket, we should be able to make the world much better for everyone.</em></p> <p><em>A transcript of Eric's talk is below, which CEA has lightly edited for clarity.
You can also watch this talk on <a href="">YouTube</a>, or read its transcript on <a href=""></a>.</em></p> <h2>The Talk</h2> <p>This talk is going to be a bit condensed and crowded. I will plow forward at a brisk pace, but this is a marvelous group of people whom I trust to keep up wonderfully.</p> <p><img src="//" alt="Paretotopia Slide 1"></p> <p>For Paretotopian Goal Alignment, a key concept is Pareto-preferred futures, meaning futures that would be strongly preferred by more or less everyone. If futures like that are part of the agenda, are being seriously discussed, and people are planning for them, then perhaps we can get there. Strong goal alignment can make possible a lot of outcomes that would not work in the middle of conflict.</p> <p>So, when does goal alignment matter? It could matter for changing perceptions. There's the concept of an Overton window: the range of what can be discussed within a given community at a given time. What can be discussed, taken seriously, and regarded as reasonable changes over time. Overton windows also vary by community. The Overton window for discourse in the EA community is different from that in, for example, the public sphere in Country X.</p> <p><img src="//" alt="Paretotopia Slide 2"></p> <p>AI seems likely to play a pivotal role. Today we can ask questions about AI that we couldn't ask before, back when AI was a very abstract concept. We can ask questions like, "Where do AI systems come from?", because they're now being developed. They come from research and development processes. What do they do? Broadly speaking, they provide services; they perform tasks in bounded time with bounded resources. What will they be able to do? Well, if you take AI seriously, you expect AI systems to be able to automate more or less any human task. And more than that.</p> <p>So now we ask, what is research and development? Well, it's a bunch of tasks to automate.
Increasingly, we see AI research and development being automated, using software and AI tools. And where that leads is toward what one can call recursive technology improvement. There's a classic view of AI systems building better AI systems. This view has been associated with agents. What we see emerging is recursive technological improvement in association with a technology base. There's an ongoing input of human insights, but human insights are leveraged more and more, and become higher and higher level. Perhaps they also become less and less necessary. So at some point, we have AI development with AI speed.</p> <p>Where that leads is toward what I describe here as comprehensive AI services. Expanding the range of services, increasing their level toward this asymptotic notion of "comprehensive." What does "comprehensive" mean? Well, it includes the service of developing new services, and that's where generality comes from.</p> <p>So, I just note that the C in CAIS does the work of G in AGI. So if you ask, "But in the CAIS model, can you do X? Do you need AGI to do X?" Then I say, "What part of 'comprehensive' do you not understand?" I won't be quite that rude, but if you ask, the answer is "Well, it's comprehensive. What is it you want to do? Let's talk about that."</p> <p><img src="//" alt="Paretotopia Slide 3"></p> <p>For this talk, I think there are some key considerations for forward-looking EA strategy. This set of considerations is anchored in AI, in an important sense. Someday, I think it's reasonable to expect that AI will be visibly, to a bunch of relevant communities, poised to be on the verge of explosive growth. That it will be sliding into the Overton window of powerful decision-makers.</p> <p>Not today, but increasingly and very strongly at some point downstream, when big changes are happening before their eyes and more and more experts are saying, "Look at what's happening." 
As a consequence of that, we will be on the verge of enormous expansion in productive capacity. That's one of the applications of AI: fast, highly effective automation.</p> <p>Also, this is a harder story to tell, but it follows: if the right groundwork has been laid, we could have systems - security systems, military systems, domestic security systems, et cetera - that are benign in a strong sense, as viewed by almost all parties, and effective with respect to x-risk, military conflict, and so on.</p> <p>A final key consideration is that these facts are outside the Overton window of policy discourse. One cannot have serious policy discussions based on these assumptions. The other facts make possible an approximately strongly Pareto-preferred world. And the final fact constrains strategies by which we might actually move in that direction and get there.</p> <p>And that conflict is essential to the latter part of the talk, but first I would like to talk about resource competition, because that's often seen as the "hard question." Resources are bounded at any particular time, and people compete over them. Isn't that a reason why things look like a zero-sum game? And resource competition does not align goals, but instead makes goals oppose each other.</p> <p>So, here's a graph called "quantity of stuff that party A has," vertically, "quantity of stuff that B has," horizontally. There's a constraint; there's one unit of "stuff," and so the trade-off curve here is a straight line, and changes are on that line, and goals are opposed. 
Zero-sum game.</p> <p><img src="//" alt="Paretotopia Slide 4"></p> <p>In fact, resources increase over time, but the notion of increasing by a moderate number of percent per year is what people have in mind, and the time horizon in which you have a 50% increase is considered very large.</p> <p>But even with a 50% increase, shown here, if either A or B takes a very large proportion during the shift, like 90%, the other one is significantly worse off than where they started.</p> <p>Ordinarily, when we're thinking about utility, we don't regard utility as linear in resources, but as something more like the logarithm of resources. We'll adopt that for illustrative purposes here. If we plot the same curves on a log scale, the lines become curved. So there's the same unit constraint. Here's the 50% expansion plotted logarithmically.</p> <p><img src="//" alt="Paretotopia Slide 5"></p> <p>Qualitatively, it looks rather similar. The topological relationships are the same; it's just re-plotting the same lines on a log scale. But on a log scale, we can now represent large expansions, and have the geometry reflect utility in a direct visual sense. So there's the same diagram, with current holdings and 50% expansion. And here's what a thousandfold expansion looks like.</p> <p><img src="//" alt="Paretotopia Slide 6"></p> <p>Taking all the gains and taking 90% of the total have now switched positions on the diagram. Someone could actually take a large share of resources and everyone would still be way ahead. What matters is that there be some reasonable division of gains. That's a different situation from the 50% increase, where one party taking the vast majority was actively bad for the other.</p> <p>The key point here is that the difference between taking everything and having a reasonable share is not very large in terms of utility. So it's a very different situation from the standard zero-sum game over resources.
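<p><em>The arithmetic behind these diagrams is easy to check numerically. The following is a minimal sketch, assuming log utility and an equal initial split of one unit of resources; the 50% and thousandfold expansions are the figures from the talk, while the 90% grab is an illustrative assumption:</em></p>

```python
import math

def log_utility(resources):
    """Log utility: each doubling of resources adds the same increment of utility."""
    return math.log(resources)

# Two parties start by splitting one unit of "stuff" equally.
start = log_utility(0.5)

# 50% expansion: if party A grabs 90% of the new total, party B ends up
# with 0.15 units -- worse off than at the start.
b_after_small_expansion = log_utility(0.1 * 1.5)

# Thousandfold expansion: even if A again grabs 90%, B ends up with
# 100 units -- far better off than at the start.
b_after_big_expansion = log_utility(0.1 * 1000)

# A's utility premium for grabbing 90% rather than an equal share is
# small: log(900) - log(500) = log(1.8), about 0.59.
a_grab = log_utility(0.9 * 1000)
a_fair = log_utility(0.5 * 1000)

print(b_after_small_expansion < start)  # True: the losing party is worse off
print(b_after_big_expansion > start)    # True: even the "loser" is way ahead
print(round(a_grab - a_fair, 2))        # 0.59
```

<p><em>With a 50% expansion, a lopsided split leaves one party below its starting utility; with a thousandfold expansion, every split in view is a large gain for everyone, and grabbing nearly everything buys only a small utility premium.</em></p>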
The reason for this is that we're looking forward to a decision time horizon that spans this large change, which is historically not what we have seen.</p> <p>So, let's consider the case where some party tries to take everything, or tries to take 90%. How far do you get? Well, greed brings risk: attempting that is going to create conflict that aiming for a reasonable share would not. So the argument here is that not only is there a small increment of gain if you succeed, but, allowing for risk, the gains from attempting to grab everything are negative. Risk-adjusted utility is bad. Your optimum is in fact to aim for some outcome that looks at least reasonably fair to all of the other parties who are in this game, in this process of mutually adjusting policies.</p> <p><img src="//" alt="Paretotopia Slide 7"></p> <p>And so this region, labeled "Paretotopia", is a region of outcomes (just looking at resources, although there are many other considerations) in which all parties see very large gains. So, that's a different kind of future to aim at. It's a strongly goal-aligning future, if one can make various other considerations work. The problem is, of course, that this is not inside the window of discussion that one can have in the serious world today.</p> <p>The first thing to consider is what one can do with resources plus strong AI. One could eliminate poverty while preserving relative wealth. The billionaires would remain on top, and build starships. The global poor remain on the bottom; they only have orbital spacecraft. And I'm serious about that, if you have good productive capability. One could expand total wealth while rolling back environmental harms. That's something one can work through, just by starting to look at the engineering and what one can do with expanded productive capabilities.</p> <p>A more challenging task is preserving relative status positions while also mitigating oppression. Why do we object to others having a whole lot of resources and security?
Because those tend to be used at our expense. But one can describe situations in which oppression is mitigated in a stable way.</p> <p>Structure transparency is a concept I will not delve into here, but is related to being able to have inherently defensive systems that circumvent the security dilemma, "security dilemma" being the pattern where two parties develop "defensive" weapons that seem aggressive to each other, and so you have an arms race. But if one were able to build truly effective, <em>genuinely</em> defensive systems, it would provide an exit door from that arms race process.</p> <p>Again, these opportunities are outside the Overton window of current policy discourse. So, where are we today? Well, for technological perceptions, on the one hand we have "credible technologies," and on the other we have "realistic technologies," given what engineering and physics tell us is possible. The problem is that these sets do not overlap. "Credible" and "realistic" are disjoint sets. It's a little hard to plan for the future and get people aligned toward the future in that situation. So, that's a problem. How can one attempt to address it? Well, first we note that at present we have what are called "track one policies," or "business-as-usual policies." What is realistic is not even in the sphere of what is discussed.</p> <p><img src="//" alt="Paretotopia Slide 8"></p> <p>Now, I would argue that we, in this community, are in a position to discuss realistic possibilities more. We are, in fact, taking advanced AI seriously. People also take seriously the concept of the "cosmic endowment." So, we're willing to look at this. 
But how can we make progress in bridging between the world of what is credible, in "track one," and what's realistic?</p> <p><img src="//" alt="Paretotopia Slide 9"></p> <p>I think, by finding technologies that are plausible, that are within the Overton window in the sense that discussing contingencies and possible futures like that is considered reasonable. The concepts are not exotic, they're simply beyond what we're familiar with, maybe in directions that people are starting to expect because of AI. And so if this plausible range of technologies corresponds to realistic technologies, the same kinds of opportunities, the same kinds of risks, therefore the same kinds of policies, and also corresponds to what is within the sphere of discourse today... like expanding automation, high production... well, that's known to be a problem and an opportunity today. And so on.</p> <p>Then, perhaps, one can have a discussion that amounts to what's called "track two," where we have a community that is discussing exploring potential goals and policies, with an eye on what's realistic. Explicit discussion of policies that are both in the "plausible" range and the "realistic" range. Having the plausible policies, the plausible preconditions, be forward in discussion. So, now you have some toehold in the world of what serious people are willing to consider. And increasingly move these kinds of policies, which will tend to be aligned policies that we're exploring, into the range of contingency planning, for nations, for institutions, where people will say, "Well, we're focusing of course on the real world and what we expect, but if this crazy stuff happens, who knows."</p> <p>They'll say, "People are thinking AI might be a big deal, you folks are telling us that AI will expand resources, will make possible change in the security environment, and so on. Well, that's nice. You think about that. And if it happens, maybe we'll take your advice. 
We'll see."</p> <p>So, in this endeavor, one has to work on assumptions and policies that are both plausible and would, if implemented, be broadly attractive. So, that's a bunch of intellectual work. I'll get into the strategic context next, but I'll spend a couple of moments on working within the Overton window of plausibility first.</p> <p><img src="//" alt="Paretotopia Slide 10"></p> <p>So, realistic: superintelligent-level AI services. Credible: extensive applications of high-end AI. People are talking about that. Physics-limited production. Truly science fiction in quality. Well, a lot of the same issues arise from strong scalable automation, of the sort that people are already worried about in the context of jobs. Solar system scale energy, 10^26 watts. Well, how about breaking constraints on terrestrial energy problems by having really inexpensive solar energy? It can expand power output, decrease environmental footprint, and actually do direct carbon capture, if you have that amount of energy. Solar system scale resources, kind of off the table, but people are beginning to talk about asteroid mining. Resource efficiency, and one can argue that resources are not binding on economic growth in the near term, and that's enough to break out of some of the zero-sum mentality. Absolute defensive stability is realistic but not something that is credible, but moving toward greater defensive stability is.</p> <p>And note, it's okay to be on the right side of this slide. You don't necessarily, here in this room, have to take seriously superintelligent-level AI services, solar system scale resources, and so on, to be playing the game of working within the range of what is plausible in the more general community, and working through questions that would constitute "Paretotopian goal-aligning policies" in that framework. So the argument is that eventually, reality will give a hard shove.
Business-as-usual scenarios, at least the assumptions behind them, will be discredited and, if we've done our work properly, so will the policies that are based on those assumptions. The policies built on the idea that maybe we should be fighting over resources in the South China Sea will just look absurd, because everyone knows that in a future of great abundance, fighting over resources like that is pointless.</p> <p><img src="//" alt="Paretotopia Slide 11"></p> <p>So, if the right intellectual groundwork has been done, then, when there's a hard shove from reality toward a future that has Paretotopian potential, there will be a policy picture that is coherent across many different institutions, with everyone knowing that everyone else knows that it would be good to move in this direction. Draft agreements worked out in track two diplomacy, scenario planning that suggests it would be really stupid to pursue business as usual in arms races. If that kind of work is in place, then with a hard shove from reality, we might see a shift. Track one policies are discredited, and so people ask, "What should we do? What do we do? The world is changing."</p> <p><img src="//" alt="Paretotopia Slide 12"></p> <p>Well, we could try these new Paretotopian policies. They look good. If you fight over stuff, you probably lose. And if you fight over it, you don't get much if you win, so why not go along with the better option, which has been thought through in some depth and looks attractive?</p> <p>So that is the basic Paretotopian strategic idea. We look at these great advances, back off to plausible assumptions that can be discussed in that framework, work through interactions with many, many different groups, reflecting diverse concerns that, in many cases, will seem opposed but can be reconciled given greater resources and the ability to make agreements that couldn't be made in the absence of, for example, strong AI implementation abilities.
And so, finally, we end up in a different sort of world.</p> <p><img src="//" alt="Paretotopia Slide 13"></p> <p>Now, this says "robust." Robust against what? All of the capabilities that are not within the range of discussion, or that are simply surprising. "Compatible." Well, Paretotopian policies aren't about imposing one pattern on the world; they really mean many different policies that are compatible, in the sense that the outcomes are stable and attractive.</p> <p>And with that, the task at hand, at least in one of the many directions that the EA community can push, and a set of considerations that I think are useful background and context for many other EA activities, is formulating and pursuing Paretotopian meta-strategies, and the framework for thinking about those strategies. That means understanding realistic and credible capabilities, and then bridging the two. There's a bunch of work on understanding both what's realistic and what's credible, and the relationships between these. There's work on understanding and accommodating diverse concerns. One would like to have policies that seem institutionally acceptable to the U.S. military, and the Communist Party of China, and to billionaires, and that also make the rural poor well-off, and so on, and have those be compatible goals. And to really understand the concerns of those communities, in their own conceptual and idiomatic languages. That's a key direction to pursue. And that means deepening and expanding the circle of discourse that I'm outlining.</p> <p><img src="//" alt="Paretotopia Slide 14"></p> <p>And so, this is a lot of hard intellectual work and, downstream, increasing organizational work. I think that pretty much everything good one might want to pursue in the world fits broadly within this framework, and can perhaps be better oriented with some attention to this meta-strategic framework for thinking about goal alignment.
And so, thank you.</p> the-centre-for-effective-altruism fg6RrvtSJ2kxe9Ens 2019-03-15T14:51:56.296Z Brian Tse: Risks from Great Power Conflicts <p><em>War between the world’s great powers sharply increases the risk of a global catastrophe: nuclear weapon use becomes more likely, as does the development of other unsafe technology. In this talk from EA Global 2018: London, Brian Tse explores the prevention of great power conflict as a potential cause area.</em></p> <p><em>Below is a transcript of Brian's talk, which CEA has lightly edited for clarity. You can also read this talk on the <a href=""></a>, or watch it on <a href="">YouTube</a>.</em></p> <h2>The Talk</h2> <p>Many people believe that we are living in the most peaceful period of human history. John Lewis Gaddis proclaimed that we live in a Long Peace period, beginning at the end of the Second World War.</p> <p><img src="//" alt="1400 Brian Tse (1)"></p> <p>Steven Pinker further popularized the idea of the Long Peace in his book, <em>The Better Angels of our Nature</em>, and explained the period by pointing to the pacifying forces of trade, democracy, and international society.</p> <p><img src="//" alt="1400 Brian Tse (2)"></p> <p>This graph shows the percentage of time when great powers have been at war with each other. 500 years ago, the great powers were almost always fighting each other. However, the frequency has declined steadily.</p> <p><img src="//" alt="1400 Brian Tse (3)"></p> <p>This graph, however, shows the deadliness of war, and here the trend goes in the opposite direction. Although great powers go to war with each other less often, the wars that do happen are more damaging.</p> <p><img src="//" alt="1400 Brian Tse (4)"></p> <p>The deadliness trend did an about-face after the Second World War. For the first time in modern human history, great power conflicts were fewer in number, shorter in duration, and less deadly.
Steven Pinker expects the trend to continue.</p> <p><img src="//" alt="1400 Brian Tse (5)"></p> <p>Not everyone agrees with this optimistic picture. Nassim Taleb believes that great power conflict on the scale of 10 million casualties only happens once every century. The Long Peace period only covers 70 years, so what appears to be a decline in violent conflict could merely be a gap between major wars. In his paper on the statistical properties and tail risk of violent conflict, Taleb concludes that no statistical trend can be asserted. The idea is that extrapolating on the basis of historical data assumes that there is no qualitative change to the nature of the system producing that data, whereas many people believe that nuclear weapons constitute a major change to the data-generating process.</p> <p><img src="//" alt="1400 Brian Tse (6)"></p> <p>Some other experts seem to share a more sober picture than Pinker. In 2015, there was a poll of 50 international relations experts from around the world. 60% of them believed that the risk had increased in the last decade. 52% believed that the risk of nuclear great power conflict would increase in the next 10 years. Overall, the experts gave a median 5% chance of a nuclear great power conflict killing at least 80 million people in the next 20 years. And then there are some international relations theories which suggest a lower bound on the risk.</p> <p><img src="//" alt="1400 Brian Tse (7)"></p> <p><em>The Tragedy of Great Power Politics</em> proposes the theory of offensive realism. This theory says that great powers always seek to achieve regional hegemony, maximize wealth, and achieve nuclear superiority. On this view, great power conflicts will never see an end.
Another book, <em>The Clash of Civilizations</em>, suggests that the conflicts between ideologies during the Cold War era are now being replaced by conflicts between ancient civilizations.</p> <p><img src="//" alt="1400 Brian Tse (8)"></p> <p>In the 21st century, the rise of non-Western societies presents plausible scenarios of conflict. And then, there's some emerging discourse on the Thucydides Trap, which points to the structural pattern of stress when a rising power challenges a ruling one. In analyzing the Peloponnesian War that devastated Ancient Greece, the historian Thucydides explained that it was the rise of Athens, and the fear that this instilled in Sparta, that made war inevitable.</p> <p><img src="//" alt="1400 Brian Tse (9)"></p> <p>In Graham Allison's recent book, <em>Destined for War</em>, he points out that this lens is crucial for understanding China-US relations in the 21st century.</p> <p><img src="//" alt="1400 Brian Tse (10)"></p> <p>So, these perspectives suggest that we should be reasonably alert to potential risks of great power conflict. But how bad would these conflicts be?</p> <p>For the purpose of my talk, I first define the contemporary great powers. They are the US, the UK, France, Russia, and China. These are the five countries that have permanent seats and veto power on the UN Security Council. They are also the only five countries that are formally recognized as nuclear weapon states. Collectively, they account for more than half of global military spending.</p> <p><img src="//" alt="1400 Brian Tse (11)"></p> <p>We should expect conflict between great powers to be quite tragic. During the Second World War, 50 to 80 million people died. By some models, such wars cost on the order of national GDPs, and future wars are likely to be several times more expensive.
They also present a direct extinction risk.</p> <p><img src="//" alt="1400 Brian Tse (12)"></p> <p>At a Global Catastrophic Risk Conference hosted by the University of Oxford, academics predicted that there is a 1% chance of nuclear extinction risk in the 21st century. The climatic effects of nuclear wars are not very well understood, so nuclear winter presents a plausible scenario of extinction risk, although it's also important to take model uncertainty into account in any risk analysis.</p> <p><img src="//" alt="1400 Brian Tse (13)"></p> <p>One way to think about great power conflict is as a risk factor, in the same way that tobacco use is a risk factor for the global burden of diseases. Tobacco use can lead to a wide range of scenarios of death, including lung cancer. Similarly, great power conflicts can lead to a wide range of different extinction scenarios. One example is nuclear winter, followed by mass starvation.</p> <p><img src="//" alt="1400 Brian Tse (14)"></p> <p>Others are less obvious, and could arise due to failures of global coordination. Let's consider the development of advanced AI as an example. Wars typically cause faster technological development, often enhanced by public investment. Countries become more willing to take risks in order to develop technology first. One example was the development of a nuclear weapons program by India after going to war with China in 1962.</p> <p><img src="//" alt="1400 Brian Tse (15)"></p> <p>Repeating the same competitive dynamic in the area of advanced AI is likely to be catastrophic. Actors may trade off safety research and implementation in the process, and that might present an extinction risk, as discussed in the book <em>Superintelligence</em>.</p> <p><img src="//" alt="1400 Brian Tse (16)"></p> <p>Now, how neglected is the problem? 
I developed a framework to help evaluate this question.</p> <p><img src="//" alt="1400 Brian Tse (17)"></p> <p>First, I make a distinction between broad versus specific interventions. By broad interventions I roughly mean promoting international cooperation and peace, for instance by improving diplomacy and conflict resolution. Within specific interventions, there are two categories: conventional risks and emerging risks. I define conventional risks as those studied by international relations experts and national security professionals: chemical, biological, radiological, and nuclear risks, collectively known as CBRN in the community.</p> <p><img src="//" alt="1400 Brian Tse (18)"></p> <p>And then there are some novel concerns arising from emerging technologies, such as the development and deployment of geoengineering. Now, let's go back to the framework that I used to compare existential risk to the global burden of diseases. A lower tobacco tax can lead to an increased rate of smoking. Similarly, the development of emerging technologies such as geoengineering can lead to greater conflict between great powers, or lead to wars in the first place. Now, in the upcoming decades, I think that it's plausible to see the following scenarios.</p> <p><img src="//" alt="1400 Brian Tse (19)"></p> <p>Private industry players are already setting their sights on space mining; major space-faring countries in the future may compete for the available resources on the moon and asteroids. Military applications of molecular nanotechnology could be even more destabilizing than nuclear weapons. Such technology would allow for targeted destruction during an attack, and also allow for greater uncertainty about the capabilities of an adversary.</p> <p>With geoengineering, every technologically advanced nation could change the temperature of the planet. Any unilateral action taken by countries could lead to disagreement and conflict between them. 
Gene-editing will allow for large-scale eugenics programs, which could lead to a bio-ethical panic in the rest of the world. Other countries might be worried about their national security interests, because of the uneven distribution of human capital and power. Now, it seems that these emerging sources of risk are likely to be quite neglected, but what about broad interventions and conventional risks?</p> <p><img src="//" alt="1400 Brian Tse (20)"></p> <p>It seems that political attention and resources have been devoted to the problem. There are anti-war and peace movements around the world advocating for diplomacy and the support of anti-war political candidates. There are also some academic disciplines, such as international relations and security studies, that are helpful for making progress on the issue. Governments also have an interest in maintaining peace.</p> <p><img src="//" alt="1400 Brian Tse (21)"></p> <p>The US government has tens of billions in the budget for nuclear security issues, and presumably a fraction of it is dedicated to the safety, control, and detection of nuclear risk. Then, there are also some inter-governmental organizations that put aside funding for improving nuclear security. One example is the International Atomic Energy Agency.</p> <p><img src="//" alt="1400 Brian Tse (22)"></p> <p>But it seems plausible to me that there are still some neglected niches. According to a report on nuclear weapons policy by the Open Philanthropy Project, some of the biggest gaps in the space are outside of the US and US-based advocacy. A report that comprehensively studies US-China relations and their charter diplomacy programs concludes that some relevant think tanks are actually constrained by the lack of a committed source of funding from foundations interested in the area. 
Since most research on nuclear weapons policy is done on behalf of governments, and thus could be tied to national interests, it seems more useful to focus on the public interest from a philanthropic and nonprofit perspective. One example is the Stockholm International Peace Research Institute. From that perspective, the space could be more neglected than it appears.</p> <p><img src="//" alt="1400 Brian Tse (23)"></p> <p>Now, let's turn to an assessment of solvability. This is the variable that I'm most uncertain about, so what I'm going to say is pretty speculative. From reviewing the literature, it seems that there are some levers that could be used to promote peace and reduce the risk of great power conflicts.</p> <p><img src="//" alt="1400 Brian Tse (24)"></p> <p>Let's begin with broad interventions. First, you can promote international dialogue and conflict resolution. One case study: during the Cold War, five great powers, including Japan, France, Germany, the UK, and the US, decided that a state of peace was desirable. After the Cuban Missile Crisis, they basically resolved disputes in the United Nations and other international forums for discussion. However, one could argue that promoting dialogue is unlikely to be useful if there is no pre-alignment of interests.</p> <p><img src="//" alt="1400 Brian Tse (25)"></p> <p>Another lever is promoting international trade. The book <em>Economic Interdependence and War</em> suggests the theory of trade expectations for predicting whether increased trade could reduce the risk of war. If state leaders have positive expectations about the future, then they will believe in the benefits of peace, and see the high cost of war. However, if they fear economic decline and the potential loss of foreign trade and investment, then they might believe that war now is actually better than submission later. 
So it is probably mistaken to believe that promoting trade is robustly useful in general; it only helps under specific circumstances.</p> <p><img src="//" alt="1400 Brian Tse (26)"></p> <p>Within specific and conventional risks, it seems that work on international arms control may improve stability. Recently, the nonprofit International Campaign to Abolish Nuclear Weapons brought about a treaty on the prohibition of nuclear weapons. They were awarded the Nobel Peace Prize in 2017.</p> <p><img src="//" alt="1400 Brian Tse (27)"></p> <p>Recently, there's also been a campaign to take nuclear weapons off hair-trigger alert. However, the campaign and the treaty have not been in place for very long, so the impacts of these initiatives are yet to be seen. With the emerging sources of risk, it seems that the space is heavily bottlenecked by under-defined and entangled research questions. It's possible to make progress on this issue just by finding out what the most important questions in the space are, and what the structure of the space is like.</p> <p><img src="//" alt="1400 Brian Tse (28)"></p> <p>Now, what are the implications for the effective altruism community?</p> <p><img src="//" alt="1400 Brian Tse (29)"></p> <p>Many people in the community believe that improving the long-term future of civilization is one of the best ways to make a huge, positive impact.</p> <p><img src="//" alt="1400 Brian Tse (30)"></p> <p>Both the Open Philanthropy Project and 80,000 Hours have expressed the view that reducing great power conflicts, and improving international peace, could be promising areas to look into.</p> <p><img src="//" alt="1400 Brian Tse (31)"></p> <p>Throughout the talk I expressed my view through the following arguments:</p> <ol> <li>It seems that the idea of the Long Peace is overly optimistic, as suggested by diverse perspectives from statistical analysis, expert forecasting, and international relations theories.</li> <li>Great power conflicts can be understood as a risk 
factor that could lead to human extinction either directly, such as through nuclear winter, or indirectly, through a wide range of scenarios.</li> <li>It seems that there are some neglected niches that arise from the development of novel emerging technologies. I gave the examples of molecular nanotechnology, gene-editing, and space mining.</li> <li>I've expressed significant uncertainty about the solvability of the issue; however, my best guess is that doing some disentanglement research is likely to be somewhat useful.</li> </ol> <p><img src="//" alt="1400 Brian Tse (32)"></p> <p>Additionally, it seems that there are comparative advantages for the EA community in working on this problem. A lot of people in the community share strong cosmopolitan values, which could be useful for fostering international collaboration rather than being attached to national interests and national identities. The community can also bring the culture of explicit prioritization and long-termist perspectives to the field. And some people in the community are also familiar with concepts such as the Unilateralist's Curse, information hazards, and differential technological progress, which could be useful for analyzing emerging technologies and their associated risks.</p> <p><img src="//" alt="1400 Brian Tse (33)"></p> <p>All things considered, it seems to me that risks from great power conflicts really could be the Cause X that William MacAskill talks about. In this case, it wouldn't be a moral problem that we have not discovered. Instead, it would be something that we're aware of today, but that, for bad reasons, we have deprioritized. Now, my main recommendation here is that a whole lot more research should be done, so this is a small list of potential research questions.</p> <p><img src="//" alt="1400 Brian Tse (34)"></p> <p>I hope this talk can serve as a starting point for more conversations and research on the topic. Thank you.</p> <h2>Questions</h2> <p><em>Nathan</em>: Well, that's scary! 
How much do you pay attention to current news, like 2018, versus the much more zoomed-out picture of the century timeline that you showed?</p> <p><em>Brian</em>: I don't think I pay that much attention to current news, but I also don't look at this problem just on a century-timeline perspective. I guess from the presentation, it would be something that is possible in the next two to three decades. I think that more research should be done on emerging technologies. With space mining and geoengineering, these are possible in the next 10 to 15 years, but I'm not sure whether paying attention to everyday political trends would be the most effective use of effective altruists' time in terms of analyzing long-term trends.</p> <p><em>Nathan</em>: Yeah. It seems also that a lot of the scenarios that you're talking about remain risks even if the relationships between great powers are superficially quite good, because the majority of the risk is not even in direct hot conflict, but in other things going wrong via rivalry and escalation. Is that how you see it as well?</p> <p><em>Brian</em>: Yeah, I think so. I think the reason why I said that it seems like there is some neglected niche in the issue is that most international relations experts and scholars are not paying attention to these emerging technologies. And these technologies could really change the structure and the incentives of countries. So even if China-US relations appear to be... well, that's a pretty bad example because now it's not going that well, but suppose in a few years some international relations appear to be pretty positive, the development of powerful technologies could still just change dynamics from that state.</p> <p><em>Nathan</em>: Have there been a lot of near misses? We know about a few of the nuclear near misses. Have there been other kinds of near misses where great powers nearly entered into conflict, but didn't?</p> <p><em>Brian</em>: Yeah. 
I think one paper shows that there were almost 40 near misses; I think that was put out by the Future of Life Institute, so people can look up that paper. In general, it seems that experts agree that some of the biggest nuclear risks would come from accidental use, rather than deliberate and malicious use between countries. That might be something that people should look into: improving the detection systems, improving the technical robustness of the reporting, and so forth.</p> <p><em>Nathan</em>: It seems like one fairly obvious career path that might come out of this analysis would be to go into the civil service and try to be a good steward of the government apparatus. What do you think of that, and are there other career paths that you have identified that you think people should be considering as they worry about the same things you're worrying about?</p> <p><em>Brian</em>: Yeah. I think apart from the civil service, working at think tanks also seems plausible, and if you are particularly interested in the development of emerging technologies like the examples I have given, then it seems that there are some relevant EA organizations that would be interested. FHI would be one example, and I think doing some independent research could also be somewhat useful, especially if we are still at a stage of disentangling the space. It would be good to find out what some of the most promising topics are to focus on.</p> <p><em>Question</em>: What effect do you think climate change has on the risk of great power conflicts?</p> <p><em>Brian</em>: I think that one scenario that I'm worried about would be geoengineering. Geoengineering is like a plan B for dealing with climate change, and I think that there is a decent chance that the world won't be able to deal with climate change in time otherwise. In that case, we would need to figure out a mechanism by which countries can cooperate and govern the deployment of geoengineering. 
One example would be China and India, which are geographically very close: if one of them decided to deploy geoengineering technologies, that would also affect the climatic interests of the other. So, disagreement and conflict between these two countries could be quite catastrophic.</p> <p><em>Nathan</em>: What do you think the role in the future will be for international organizations like the UN? Are they too slow to be effective, or do you think they have an important role to play?</p> <p><em>Brian</em>: I am a little bit skeptical about the roles of these international organizations, for two reasons. One is that these emerging technologies are being developed very quickly, and so if you look at AI, I think that nonprofits, civil society initiatives, and firms will be able to respond to these changes much more quickly, instead of going through all the bureaucracy of the UN. Also, it seems that historically the development of nuclear weapons and bio-weapons was mostly driven by countries, but with AI, and possibly with space mining, and perhaps with gene-editing, private firms are going to play a significant role. I think I would be keen to explore other models, such as multi-stakeholder models, firm-to-firm or lab-to-lab collaboration, and also possibly the role of epistemic communities of researchers in different countries: just get them in the same room to agree on a set of principles. The Asilomar conference helped regulate biotechnology decades ago, and now we have a converging discourse and consensus around the Asilomar Conference on AI, so I think people should export these conference models in the future as well.</p> <p><em>Nathan</em>: A seemingly important factor in the European peace since World War II has been a sense of European identity, and a shared commitment to that. 
Do you think that it is possible or desirable to create a global sense of identity that everyone can belong to?</p> <p><em>Brian</em>: Yeah, this is quite complicated. I think that there are two pieces to it. First, the creation of a global governance model may exacerbate the risk of global permanent totalitarianism, so that's a downside that people should be aware of. But at the same time, there are benefits of global governance in terms of better cooperation and security that seem to be really necessary for regulating the development of synthetic biology. So, a more widespread use of surveillance might be necessary in the future, and people should not disregard this possibility. I'm pretty uncertain about what the trade-off is there, but people should be aware of this trade-off and keep doing research on it.</p> <p><em>Nathan</em>: What is your vision for success? That is to say, what's the most likely scenario in which global great power conflict is avoided? Is the hope just to manage the current status quo effectively, or do we really need a sort of new paradigm or a new world order to take shape?</p> <p><em>Brian</em>: I guess I am hopeful for cooperation based on a consensus on the future as a world of abundance. I think that a lot of the framing that went into my presentation was around regulating and minimizing the downside risks, but I think it's possible to foster international cooperation around a positive future. Just look at how much good we can create with safe and beneficial AI. We can potentially have universal basic income. If we cooperate on space mining, then we can go to space and have access to amazing resources in the cosmos. 
I think that if people have an emerging view on the huge benefits of cooperation, and the irrationality of conflict, then it's possible to see a pretty bright future.</p> the-centre-for-effective-altruism sZnSTvadnPBcauxa5 2019-03-11T15:02:38.322Z Amanda Askell: AI Safety Needs Social Scientists <p><em>When an AI wins a game against a human, that AI has usually trained by playing that game against itself millions of times. When an AI recognizes that an image contains a cat, it’s probably been trained on thousands of cat photos. So if we want to teach an AI about human preferences, we’ll probably need lots of data to train it. And who is most qualified to provide data about human preferences? Social scientists! In this talk from EA Global 2018: London, Amanda Askell explores ways that social science might help us steer advanced AI in the right direction.</em></p> <p><em>A transcript of Amanda's talk is below, which CEA has lightly edited for clarity. You can also read this talk on <a href=""></a>, or watch it on <a href="">YouTube</a>.</em></p> <h2>The Talk</h2> <p><img src="//" alt="1000 Amanda Askell"></p> <p>Here's an overview of what I'm going to be talking about today. First, I'm going to talk a little bit about why learning human values is difficult for AI systems. Then I'm going to explain to you the safety via debate method, which is one of the methods that OpenAI's currently exploring for helping AI to robustly do what humans want. And then I'm going to talk a little bit more about why I think this is relevant to social scientists, and why I think social scientists - in particular, people like Experimental Psychologists and Behavioral Scientists - can really help with this project. And I will give you a bit more details about how they can help, towards the end of the talk.</p> <p><img src="//" alt="1000 Amanda Askell (1)"></p> <p>Learning human values is difficult. We want to train AI systems to robustly do what humans want. 
And in the first instance, we can just imagine this being what one person wants. And then ideally we can expand it to doing what most people would consider good and valuable. But human values are very difficult to specify, especially with the kind of precision that is required of something like a machine learning system. And I think it's really important to emphasize that this is true even in cases where there's moral consensus, or consensus about what people want in a given instance.</p> <p>So, take a principle like "do not harm someone needlessly." I think we can be really tempted to think something like: "I've got a computer, and so I can just write into the computer, 'do not harm someone needlessly'". But this is a really underspecified principle. Most humans know exactly what it means; they know exactly when harming someone is needless. So, if you're shaking someone's hand, and you push them over, we think this is needless harm. But if you see someone in the street who's about to be hit by a car, and you push them to the ground, we think that's not an instance of needless harm.</p> <p>Humans have a pretty good way of knowing when this principle applies and when it doesn't. But for a formal system, there are going to be a lot of questions about precisely what's going on here. So, one question this system may ask is: how do I recognize when someone is being harmed? It's very easy for us to see things like stop signs, but when we're building self-driving cars, we don't just program in something like "stop at stop sign". We instead have to train them to be able to recognize an instance of a stop sign.</p> <p>And the principle that says that you shouldn't harm someone needlessly assumes that we understand when harm is and isn't appropriate, whereas there are a lot of questions under the surface, like: when is harm justified? What is the rule for all plausible scenarios in which I might find myself? 
These are things that you need to specify if you want your system to work in all of the cases that you want it to work in.</p> <p>I think that this is an important point to internalize. It's easy for humans to identify, and to pick up, say, a glass. But training an ML system to perform the same task requires a lot of data. And this is true of a lot of tasks that humans might intuitively think are easy, and we shouldn't just transfer that intuition to the case of machine learning systems. And so when we're trying to teach human values to any AI system, it's not that we're just looking at edge cases, like trolley problems. We're really looking at core cases of making sure that our ML systems understand what humans want to do, in the everyday sense.</p> <p>There are many approaches to training an AI to do what humans want. One way is through human feedback. You might think that humans could, say, demonstrate a desired behavior for an AI to replicate. But there are some behaviors that are just too difficult for humans to demonstrate. So you might think that instead a human can say whether they approve or disapprove of a given behavior, but this might not work too well, either. Learning what humans want this way, we have a reward function as predicted by the human. So on this graph, we have that and AI strength. And when AI strength reaches the superhuman level, it becomes really hard for humans to give the right reward function.</p> <p><img src="//" alt="1000 Amanda Askell (2)"></p> <p>As AI capabilities surpass the human level, the decisions and behavior of the AI system just might be too complex for the human to judge. So imagine agents that control, say, a large set of industrial robots. 
It may just be extremely difficult for me to evaluate whether those robots are doing a good job overall.</p> <p>And so the concern is that when behavior becomes much more complex and much more large-scale, it becomes really hard for humans to judge whether an AI agent is doing a good job. And that's why you may expect this drop-off. So this is a kind of scalability worry about human feedback. What ideally needs to happen instead is that, as AI strength increases, the reward function predicted by the human is also able to keep pace.</p> <p><img src="//" alt="1000 Amanda Askell (3)"></p> <p>So how do we achieve this? One of the things that we want to do here is to try to break down complex questions and complex tasks into simpler components: for example, breaking down the complex set of functions that all of these industrial robots perform, which come together to make something useful, into some smaller set of tasks and components that humans are able to judge.</p> <p><img src="//" alt="1000 Amanda Askell (4)"></p> <p>So here is a big question. And the idea is that the overall tree might be too hard for humans to fully check, but it can be decomposed into these elements, such that at the very bottom level, humans can check these things.</p> <p>So maybe "how should a large set of industrial robots be organized to do task X?" would be an example of a big question involving a really complex task, but with some parts that are checkable by humans. If we could decompose this task, we could ask a human: if one of the robots performs this small action, will the result be this small outcome? And that's something that humans can check.</p> <p>So that's an example in the case of industrial robots accomplishing some task. 
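<p><em>As a toy illustration of this decomposition idea (a sketch of my own, not OpenAI's implementation; the questions, answers, and helper functions below are all invented), the recursion might look like this:</em></p>

```python
# Toy sketch of question decomposition: recursively split a big
# question until each leaf question is small enough for a human to
# check directly. All questions and answers here are made up.

def human_check(question):
    """Stand-in for a human judging a small, checkable question."""
    answers = {
        "does moving part A cause outcome B?": "yes",
        "is outcome B part of task X?": "yes",
    }
    return answers[question]

def answer(question, decompose, combine):
    """Answer a question by decomposition; leaves go to the human."""
    subquestions = decompose(question)
    if not subquestions:              # small enough: ask the human
        return human_check(question)
    sub_answers = [answer(q, decompose, combine) for q in subquestions]
    return combine(question, sub_answers)

# A two-level tree for "are the robots doing task X correctly?"
decompose = lambda q: ([
    "does moving part A cause outcome B?",
    "is outcome B part of task X?",
] if q == "are the robots doing task X correctly?" else [])
combine = lambda q, subs: "yes" if all(a == "yes" for a in subs) else "no"

print(answer("are the robots doing task X correctly?", decompose, combine))
# prints "yes"
```

<p><em>The key property is that human judgment is only ever needed at the leaves, where each question is small enough to check directly.</em></p>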
In the case of doing what humans want more generally, a big question is: what <em>do</em> humans want?</p> <p><img src="//" alt="1000 Amanda Askell (5)"></p> <p>A much smaller question, if you can manage to decompose this, is something like: Is it better to save 20 minutes of someone's time, or to save 10 minutes of their time? If you imagine some AI agent that's meant to assist humans, this is a fact that we can definitely check. Even though I can't tell my assistant AI exactly everything that I want, I can tell it that I'd rather it save 20 minutes of my time than 10 minutes of my time.</p> <p><img src="//" alt="1000 Amanda Askell (6)"></p> <p>One of the key issues is that, with current ML systems, we need to train on a lot of data from humans. So if you imagine that we want humans to actually give this kind of feedback on these kinds of ground-level claims or questions, then we're going to have to train on a lot of data from people.</p> <p>To give some examples: simple image classifiers train on thousands of images. These are ones you can make yourself, and you'll see the datasets are pretty large. AlphaGo Zero played nearly 5 million games of Go during its training. OpenAI Five trains on 180 years of Dota 2 games per day. So this gives you a sense of how much data you need to train these systems. So if we are using current ML techniques to teach AI human values, we can't rule out needing millions to tens of millions of short interactions from humans as training data.</p> <p>So earlier I talked about human feedback, where I was assuming that we were asking humans questions. We could just ask humans really simple things like: do you prefer to eat an omelette or 1000 hot dogs? Or: is it better to provide medicine or books to this particular family? One way that we might be able to get more information from the data we gather is by finding the reasons that humans have for the answers that they give. 
So if you manage to learn that humans generally prefer to eat a certain amount per meal, you can rule out a large class of questions you might ever want to ask people. You're never going to ask them: do you prefer to eat an omelette or 1000 hot dogs? Because you know that humans just generally don't like to eat 1000 hot dogs in one meal, except in very strange circumstances.</p> <p><img src="//" alt="1000 Amanda Askell (7)"></p> <p>And we also know facts like: humans prioritize necessary health care over mild entertainment. So this might mean that, if you see a family that is desperately in need of some medicine, you just know that you're not going to ask, "Hey, should I provide them with an entertaining book, or this essential medicine?" So there's a sense in which identifying the reasons that humans are giving for their answers lets you go further, and learn faster what they're going to say in a given circumstance about what they want. It's not to say that you couldn't learn the same things by just asking people questions, but rather that if you can find a quicker way to identify reasons, then this could be much more scalable.</p> <p>Debate is a proposed method, which is currently being explored, for trying to learn human reasons. To give a definition of debate here: two AI agents are given a question, they take turns making short statements, and a human judge at the end chooses which of the statements gave them the most true, valuable information. It's worth noting that this is quite dissimilar from a lot of human debates. In human debates, people might give one answer, but then adjust their answer over the course of a debate. Or they might debate with each other in a way that's more exploratory. 
They're gaining information from each other, which then they're updating on, and then they're feeding that back into the debate.</p> <p><img src="//" alt="1000 Amanda Askell (8)"></p> <p>With AI debates, you're not doing it for information value. So it's not going to have the same exploratory component. Instead, you would hopefully see the agents explore a path kind of like this.</p> <p>So imagine I want my AI agents to decide which bike I should buy. I don't want to have to go and look up all the Amazon reviews, etc. In a debate, I might get something like, "You should buy the red road bike" from the first agent. Suppose that blue disagrees with it. So blue says "you should buy the blue fixie". Then red says, "the red road bike is easier to ride on local hills". And one of the key things to suppose here is that for me, being able to ride on the local hills is very important. It may even overwhelm all other considerations. So, even if the blue fixie is cheaper by $100, I just wouldn't be willing to pay that. I'd be happy to pay the extra $100 in order to be able to ride on local hills.</p> <p>And if this is the case, then there's basically nothing true that the other agent can point to, to convince me to buy the blue fixie, and blue should just say, "I concede". Now, blue could have lied for example, but if we assume that red is able to point out blue's lies, we should just expect blue to basically lose this debate. And if it's explored enough and attempted enough debates, it might just see that, and then say, "Yes, you've identified the key reason, I concede."</p> <p>And so it's important to note that we can imagine this being used to identify multiple reasons, but here it has identified a really important reason for me, something that is in fact going to be really compelling in the debate, namely, that it's easier to ride on local hills.</p> <p><img src="//" alt="1000 Amanda Askell (9)"></p> <p>Okay. So, training an AI to debate looks something like this. 
Imagine Alice and Bob are our two debaters, where each of these nodes is a statement made by one of the agents. And so you're going to see exploration of the tree. So the first path might be this, and here, say that the human decides that Bob won in that case. This is another node, another node. And so this is the exploration of the debate tree. And so you end up with a debate tree that looks a little bit like a game of Go.</p> <p><img src="//" alt="1000 Amanda Askell (10)"></p> <p>When you have AI training to play Go, it's exploring lots of different paths down the tree, and then there's a win or loss condition at the end, which is its feedback. This is basically how it learns to play. With debate, you can imagine the same thing, but where you're exploring, you know, a large tree of debates and humans assessing whether you win or not. And this is just a way of training up AI to get better at debate and to eventually identify reasons that humans find compelling.</p> <p><img src="//" alt="1000 Amanda Askell (11)"></p> <p>One thesis here that I think is relatively important is something I'll call the positive amplification thesis, or positive amplification threshold. One thing that we might think, or that seems fairly possible, is that if humans are above some threshold of rationality and goodness, then debate is going to amplify their positive aspects. This is speculative, but it's a hypothesis that we're working with. And the idea here is that, if I am pretty irrational but pretty well motivated, I might get some feedback of the form, "Actually, that decision that you made was fairly biased, and I know that you don't like to be biased, so I want to inform you of that."</p> <p>I get informed of that, and I'm like, "Yes, that's right. Actually, I don't want to be biased in that respect." Suppose that the feedback comes from Kahneman and Tversky, and they point out some key cognitive bias that I have. If I'm rational enough, I might say, "Yes, I want to adjust that."
And I give a new signal back that has been improved by virtue of this process. So if we're somewhat rational, then we can imagine a situation in which all of these positive aspects of us are being amplified through this process.</p> <p>But you can also imagine a negative amplification. So if people are below this threshold of rationality and goodness, we might worry that debate would amplify these negative aspects. If it turns out we can just be really convinced by appealing to our worst natures, and your system learns to do that, then it could just feed that back in, making us even less rational and more biased, and so on. So this is an important hypothesis related to work on amplification; if you're interested in that work, I suggest you take a look at it, but I'm not going to focus on it here.</p> <p><img src="//" alt="1000 Amanda Askell (12)"></p> <p>Okay. So how can social scientists help with this whole project? Hopefully I've conveyed some of what I think of as the real importance of the project. It reminds me a little bit of Tetlock's work on Superforecasters. A lot of social scientists have done work identifying people who are Superforecasters, where they seem to be robustly more accurate in their forecasts than many other people, and they're robustly accurate across time. We've found other features of Superforecasters too: for example, working in groups really helps them.</p> <p>So one question is whether we can identify good human judges, or train people to become, essentially, Superjudges. Why is this helpful? Firstly, if we do this, we will be able to test how good human judges are, and we'll see whether we can improve human judges. This means we'll be able to try and find out whether humans are above the positive amplification threshold.</p> <p>So, are ordinary human judges good enough to cause an amplification of their good features?
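<p>The threshold dynamic described above can be caricatured as a simple iterated map. This is a purely illustrative sketch: the linear update rule and the specific gain and threshold values are assumptions for the example, not part of the debate proposal.</p>

```python
def amplify(quality, threshold=0.5, gain=0.2, steps=10):
    """Toy model of the amplification threshold: each round of
    debate feedback nudges judging quality away from the threshold,
    upward if the judge starts above it, downward if below.
    The update rule, gain, and threshold are illustrative assumptions."""
    for _ in range(steps):
        quality += gain * (quality - threshold)
        quality = min(max(quality, 0.0), 1.0)  # keep quality in [0, 1]
    return quality

# A judge slightly above the threshold drifts upward over rounds;
# one slightly below drifts downward.
print(amplify(0.55), amplify(0.45))
```

<p>In this toy model, whichever side of the threshold a judge starts on gets compounded over successive rounds of feedback, which is why it matters empirically where real judges sit.</p>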
One reason to learn this is that it improves the quality of the judging data that we can get. If people are generally pretty good and rational at assessing debates, and fairly quick, then this is excellent given the amount of data that we anticipate needing. Basically, improvements to our data could be extremely valuable.</p> <p>If we have good judges, positive amplification will be more likely during safety via debate, and it will also improve training outcomes on limited data, which is very important. This is one way of framing why I think social scientists are pretty valuable here, because there are lots of questions that we really do want answered when it comes to this project. I think this is going to be true of other projects, too, like asking humans questions. The human component of the human feedback is central, and getting it right is actually quite important. And that's something that we anticipate social scientists will be able to help with, more so than AI researchers, who are not working with people, their biases, how rational they are, etc.</p> <p><img src="//" alt="1000 Amanda Askell (13)"></p> <p>These are questions that are the focus of the social sciences. So one question is, how skilled are people as judges by default? Can we distinguish good judges of debate from bad judges of debate? And if so, how? Does judging ability generalize across domains? Can we train people to be better judges? Can we engage in debiasing work, for example, or work that reduces cognitive biases? What topics are people better or worse at judging? Are there ways of phrasing questions so that people are better at assessing them? Are there ways of structuring debates that make them easier to judge, or restricting debates to make them easier to judge? We're often just showing people a small segment of a debate, for example. Can people work together to improve judging quality?
These are all outstanding questions that we think are important, but we also think that they are empirical questions and that they have to be answered by experiment. So this is, I think, important potential future work.</p> <p><img src="//" alt="1000 Amanda Askell (14)"></p> <p>We've been thinking a little bit about what you would want in experiments that try to assess judging ability in humans. One thing you'd want is that there's a verifiable answer. We need to be able to tell whether people are correct or not in their judgment of the debate. The other is that there is a plausible false answer, because if we can only train and assess human judging ability on debates where there's no plausible false answer, we'd get a false signal that people are really good at judging debates. They could always get the true answer, but it would be because it was always a really obvious question. Like, "Is it raining outside?" And the person can look outside. We don't really want that kind of debate.</p> <p>Ideally we want something where evidence is available, so that humans have something that grounds out the debate. We also don't want debates to rely on human deception. We really don't want things like tells in poker, for example, because AI agents are not going to have normal tells; it would be rather strange, I suppose, if they did. Like if they had stuttering or something.</p> <p>Debaters have to know more about the question than the judge as well, because the idea is that the AI agents will be much more capable, so you want a big gap between debater capabilities and judge abilities. These things feel pretty essential.</p> <p>There are also some other less essential things we'd like to have. One is that biases are present. How good are humans when there's bias with respect to the question? We'd like there to be representative segments of the debate that we can actually show people.
The questions shouldn't be too hard: it shouldn't be impossible for humans to answer them, or judge debates about them. But they should also mirror some of the difficulties of statistical debate, i.e., debate about probabilities rather than about outright claims. And finally, we need to be able to get enough data.</p> <p>One thing you might notice is that there are tensions between a lot of these desiderata. For example, requiring a plausible false answer is in a bit of tension with the idea that the question isn't too hard. There's also tension between the question not being too hard and the question meriting statistical debate. Statistical debate is generally pretty hard for people to evaluate, I think, but it's also quite important that we be able to model it. Debaters knowing more and our getting enough data are also in tension. It's just harder to train if we need debaters that know a lot more than judges, and it's harder for judges to evaluate debates of this form.</p> <p>Okay. So I'm going to show you a debate. This was a program set up where we would show a judge a blank screen. So imagine you're not seeing the dog that's here. Two human debaters sit in the same room, and they have this picture of a dog in front of them. And one of them is selected to lie, and one of them is selected to tell the truth.</p> <p><img src="//" alt="1000 Amanda Askell (15)"></p> <p>And what they can do here is select areas, and describe to the human judge what they see in that area. And all that the judge is going to see is their blank screen with the relevant section selected. And then they can each make claims about what is in that section. So here red is saying, it's a dog, here's its long, floppy ear. Blue is saying, no, here's one of its pointy ears. So blue is trying to point to a smaller area where it looks kind of pointed. That does look like an ear sloping to the right, but if it really were an ear, then part of the head would be here; instead there's a brick.
The ear's pointing out from behind the bricks. The dog is in front of the bricks. If it were behind them, there would be an edge here, but the rectangle is all the same color. And then you get a resignation, and red wins.</p> <p>And at the end of the debate they can show just a single pixel. And the question was something like: if all you can do is have a debate and show a single pixel, can you get people to form accurate beliefs about the question? And basically we saw that, yes, debates were fairly good. In this kind of case, you might think that this is pretty synthetic. So one of the things that we're thinking about now is expert debaters with a lay judge. And I'm going to show you something that we did that's kind of fun, but I never know how it looks to outsiders.</p> <p><img src="//" alt="1000 Amanda Askell (16)"></p> <p>So, we had a debate that was of this form. This was a debate actually about quantum computing. So we had two people who understand the domain; one of them was going to lie and one was going to tell the truth. So we had blue say, red's algorithm is wrong because it increases alpha by an additive exponentially small amount each step, so it takes exponentially many steps to get alpha high enough. So this was one of the claims made. And then you get this set of responses. I don't think I need to go through all of them. You can see the basic form that they take.</p> <p>We allowed certain restricted claims from Wikipedia. So, blue ends this with the first line of a Wikipedia article, which says that the sum of probabilities is conserved. Red says, an equal amount is subtracted from one amplitude and added to another, implying the sum of amplitudes is conserved. But probabilities are the squared magnitudes of amplitudes, so this is a contradiction. This is, I think, roughly how this debate ended.
But you can imagine this as a really complex debate in a domain that the judges ideally just won't understand, and might not even have some of the concepts for. And that's the difficulty of debate that we've been looking at. This is one thing that we're in the early stages of prototyping. So far it seems to be the case that people actually do update in the right direction, but we don't really have enough data to say for sure.</p> <p>Okay. So I hope that I've given you an overview of some of the places, albeit a restricted set, in which I think social scientists are going to be important in AI safety. Here we're interested in experimental psychologists, cognitive scientists, and behavioral economists: people who might be interested in actually scaling up and running some of these experiments.</p> <p><img src="//" alt="1000 Amanda Askell (17)"></p> <p>If you're interested in this, please email me, because we would love to hear from you.</p> <h2>Questions</h2> <p><em>Question</em>: How much of this is real currently? Do you have humans playing the role of the agents in these examples?</p> <p><em>Amanda</em>: The idea is that ultimately we want the debate to be conducted by AI, but we don't have the language models that we would need for that yet. So we're using humans as a proxy to test the judges in the meantime. So yeah, all of this is done with humans at the moment.</p> <p><em>Question</em>: So you're faking the AI?</p> <p><em>Amanda</em>: Yeah.</p> <p><em>Question</em>: To set up the scenario to train and evaluate the judges?</p> <p><em>Amanda</em>: Yeah. And part of the idea, I guess, is that you don't necessarily want all of this work to happen later. A lot of this work can be done before you even have the relevant capabilities, like having AI perform the debate.
So that's why we're using humans for now.</p> <p><em>Question</em>: Jan Leike and his team have done some work on video games that very much matches the plots you showed earlier, where up to a certain point the behavior matches the intended reward function, but at some point they diverge sharply as the AI agent finds a loophole in the system. So that can happen even in Atari games, which is what they're working on. So obviously it gets a lot more complicated from there.</p> <p><em>Amanda</em>: Yeah.</p> <p><em>Question</em>: In this approach, you would train both the debating agents and the judges. So in that case, who evaluates the judges, and based on what?</p> <p><em>Amanda</em>: Yeah, so this is why we want to identify how good the judges are in advance, because it might be hard to assess later. While judges are judging questions with verifiable answers, you can evaluate them more easily.</p> <p>So ideally, you want it to be the case that at training time, you've <em>already</em> identified judges that are fairly good. And so part of this project is to assess how good judges are prior to training. And then during training you're giving the feedback to the debaters. So yeah, ideally some of the evaluation can be front-loaded, which is what a lot of this project would be.</p> <p><em>Question</em>: Yeah, that does seem necessary. As a casual Facebook user, I think the negative amplification is more prominently on display oftentimes.</p> <p><em>Amanda</em>: Or at least more concerning to people, yeah, as a possibility.</p> <p><em>Question</em>: How will you crowdsource the millions of human interactions that are needed to train AI across so many different domains, without falling victim to trolls, lowest common denominator, etc.? The questioner cites the Microsoft Tay chatbot, which went dark very quickly.</p> <p><em>Amanda</em>: Yeah. So the idea is that you're not just going to be sourcing this from anyone.
So if you identify people that are either good judges already, or whom you can train to be good judges, these are going to be the pool of people that you're getting this feedback from. So, even if you've got a huge number of interactions, ideally you're sourcing and training people to be really good at this. And so you're not just saying, "Hey internet, what do you think of this debate?" But rather: okay, we've got this set of really great trained judges, and we've identified this wonderful mechanism to train them to be good at this task. And then you're getting lots of feedback from that large pool of judges. So it's not sourced from anonymous people everywhere. Rather, you're interacting fairly closely with a vetted set of people.</p> <p><em>Question</em>: But at some point, you do have to scale this out, right? I mean in the bike example, it's like, there's so many bikes in the world, and so many local hills-</p> <p><em>Amanda</em>: Yeah.</p> <p><em>Question</em>: So, do you feel like you can get a solid enough base, such that it's not a problem?</p> <p><em>Amanda</em>: Yeah, I think there's going to be a trade-off where you need a lot of data, but if that data isn't great, if it is really biased, for example, it's not clear that the additional data is going to be helpful. So if you get someone who is just massively cognitively biased, or biased against groups of people, or just dishonest in their judgment, it's not going to be good to get that additional data.</p> <p>So you want to scale it to the point where you know you're still getting good information back from the judges. And that's why I think in part this project is really important, because one thing that social scientists can help us with is identifying how good people are. So if you know that people are just generally fairly good, this gives you a bigger pool of people that you can appeal to.
And if you know that you can train people to be really good, then this is, again, a bigger pool of people that you can appeal to.</p> <p>So yeah, you do want to scale, but you want to scale within the limits of still getting good information from people. And so ideally these experiments would do a mix of letting us know how much we can scale, and also maybe helping us to scale even more by making people better at this quite unusual task of judging these kinds of debates.</p> <p><em>Question</em>: How does your background as a philosopher inform the work that you're doing here?</p> <p><em>Amanda</em>: I have a background primarily in formal ethics, which I think makes me sensitive to some of the issues that we might be worried about here going forward. People think about things like aggregating judgment, for example. Strangely, I've found that having a background in things like philosophy of science can be weirdly helpful when it comes to thinking about experiments to run.</p> <p>But for the most part, I think that my work has just been to help prototype some of this stuff. I see the importance of it. I'm able to foresee some of the worries that people might have. But for the most part I think we should just try some of this stuff. And I think that for that, it's really important to have people with experimental backgrounds in particular, with the ability to run experiments and analyze the data. And so that's why I would like to find people who are interested in doing that.</p> <p>So I'd say philosophy's pretty useful for some things, but less useful for running social science experiments than you may think.</p> the-centre-for-effective-altruism ZLbS2WrHJdPGf24xh 2019-03-04T15:50:13.762Z Fireside Chat with Toby Ord (2018) <p><em>Toby Ord is working on a book about existential risks for a general audience. This fireside chat with Will MacAskill, from EA Global 2018: London, illuminates much of Toby’s recent thinking. 
Topics include: What are the odds of an existential catastrophe this century? Which risks do we have the most reason to worry about? And why should we consider printing out Wikipedia?</em></p> <p><em>Below is a transcript of Toby's fireside chat, which CEA has lightly edited for clarity. You can also read the transcript on <a href=""></a>, or watch the talk on <a href="">YouTube</a>.</em></p> <h2>The Talk</h2> <p><em>Will</em>: Toby, you're working on a book at the moment. Just to start off, tell us about that.</p> <p><em>Toby</em>: I've been working on a book for a couple of years now, and ultimately, I think that big books (this one is on existential risk) are often a little bit like an iceberg, and certainly <em>Doing Good Better</em> was: there's this huge amount of work that goes on before you even decide to write the book, coming up with ideas and distilling them.</p> <p>I'm trying to write really the definitive book on existential risk. I think the best book so far, if you're looking for something before my book comes out, is John Leslie's <em>The End of the World</em>. That's from 1996. That book actually inspired Nick Bostrom, to some degree, to get into this.</p> <p>I thought about writing an academic book. Certainly a lot of the ideas that are going to be included are cutting edge ideas that haven't really been talked about anywhere before. But I ultimately thought that it was better to write something at the really serious end of general non-fiction, to try to reach a wider audience. That's been an interesting aspect of writing it.</p> <p><em>Will</em>: And how do you define an existential risk? What counts as an existential risk?</p> <p><em>Toby</em>: Yeah. This is actually something where, even within effective altruism, people often make a mistake, because the name existential risk, which Nick Bostrom coined, is designed to be evocative of extinction.
But the purpose of the idea, really, is that there's the risk of human extinction, but there's also a whole lot of other risks which are very similar in how we have to treat them. They all involve a certain common methodology for dealing with them, in that they're risks that are so serious that we can't afford to have even one of them happen. We can't learn from trial and error, so we have to have a proactive approach.</p> <p>The way that I currently think about it is that existential risks are risks that threaten the destruction of humanity's long-term potential. Extinction would obviously destroy all of our potential over the long term, as would a permanent unrecoverable collapse of civilization, if we were reduced to a pre-agricultural state again or something like that, and as would various other things that are neither extinction nor collapse. There could be some form of permanent totalitarianism. If the Nazis had succeeded in a thousand-year Reich, and then maybe it went on for a million years, we might still say that that was an utter, perhaps irrevocable, disaster.</p> <p>I'm not sure that at the time it would have been possible for the Nazis to achieve that outcome with existing technology, but as we get more advanced surveillance technology and genetic engineering and other things, it might be possible to have lasting terrible political states. So existential risk includes both extinction and these other related areas.</p> <p><em>Will</em>: In terms of what your aims are with the book, what's the change you're trying to effect?</p> <p><em>Toby</em>: One key aim is to introduce the idea of existential risk to a wider audience. I think that this is actually one of the most important ideas of our time. It really deserves a proper airing, trying to really get all of the framing right. 
And then also, as I said, to introduce a whole lot of new cutting edge ideas that are to do with new concepts, the mathematics of existential risk, and other related ideas, lots of the best science, all put into one place. There's that aspect as well, so it's definitely a book for everyone on existential risk. I've learned a lot while writing it, actually.</p> <p>But also, when it comes to effective altruism, I think that often we have some misconceptions around existential risk, and we also have some bad framings of it. It's often framed as if it's this really counterintuitive idea. There are different ways of doing this. A classic one involves saying "There could be 10 to the power of 53 people who live in the future, so even if there's only a very small chance..." and going from there, which makes it seem unnecessarily nerdy, where you've kind of got to be a math person to really get any pull from that argument. And even if you are a mathsy person, it feels a little bit like a trick of some sort, like some convincing argument that one equals two or something, where you can't quite see what the problem is, but you're not compelled by it.</p> <p>Actually, though, I think that there's room for a really broad group of people to get behind the idea of existential risk. There's no reason that my parents or grandparents couldn't be deeply worried about the permanent destruction of humanity's long-term potential. These things are really bad, and I actually think that it's not a counterintuitive idea at all. In fact, I think that concern about existential risk ultimately has its roots in the risk of nuclear war in the 20th century.</p> <p>My parents were out on marches against nuclear weapons. At the time, the biggest protest in US history was 2 million people in Central Park protesting nuclear weapons. It was a huge thing. It was actually the biggest thing at that time, in terms of civic engagement.
And so when people can see that there's a real and present threat that could threaten the whole future, they really get behind it. That's also one of the aspects of climate change: people perceive it as a threat to continued human existence, among other things, and that's one of the things that motivates them.</p> <p>So I think that you can have a much more intuitive framing of this. The future is so much longer than the present, so some of the highest-impact ways we could help could be by helping this long-term future, if there are ways that we could affect that whole time period.</p> <p><em>Will</em>: Looking to the next century, let's say, where do you see the main existential risks being? What are all the ones that we are facing, and which are the ones we should be most concerned about?</p> <p><em>Toby</em>: I think that there is some existential risk remaining from nuclear war and from climate change. I think that both of those are current anthropogenic existential risks. The nuclear war risk is via nuclear winter, where the soot from burning cities would rise up into the upper atmosphere, above the cloud level, so that it can't be rained out, and would then block sunlight for about eight years or so. The risk there isn't that it gets really dark and you can't see or something like that, and it's not that it's so cold that we can't survive; it's that there are more frosts, and the temperatures are depressed by quite a lot, such that the growing season for crops is only a couple of months. There's not enough time for the wheat to germinate and so forth, and so there would be widespread famine. That's the threat there.</p> <p>And then there's climate change. Climate change is a warming. Nuclear winter is actually also a change in the climate, but a cooling. I think that the amount of warming that could happen from climate change is really underappreciated. The tail risk, the chance that the warming is a lot worse than we expect, is really big.
Even if you set aside the serious risks of runaway climate change, of big feedbacks from the methane clathrates or the permafrost, even if you set all of those things aside, scientists' central estimate is that doubling the CO2 in the atmosphere would produce three degrees of warming.</p> <p>But if you look at the fine print, they say it's actually from 1.5 degrees to 4.5 degrees. That's a huge range. There's a factor of three between those estimates, and that's just a 66% confidence interval. They actually think there's a one in six chance it's more than 4.5 degrees. So I think there's a very serious chance that if it doubled, it's more than 4.5 degrees, but also there's uncertainty about how many doublings will happen. It could easily be the case that humanity doubles the CO2 levels twice, in which case, if we also got unlucky on the sensitivity, there could be nine degrees of warming.</p> <p>And so when you hear these things about how many degrees of warming they're talking about, they're often talking about the median of an estimate. If they say they want to keep it below two degrees, what they mean is that they want to keep the median below two degrees, such that there's still a serious chance that it's much higher than that. If you look into all of that, there could be very serious warming, much more serious than you get in a lot of scientific reports. But if you read the fine print in the analyses, this is in there. And so I think there's a lack of really looking into that, so I'm actually a lot more worried about it than I was before I started looking into this.</p> <p>By the same token, though, it's difficult for it to be an existential risk. Even if there were 10 degrees of warming or something beyond what you're reading about in the newspapers, it would be extremely bad, just to clarify.
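<p>The back-of-the-envelope arithmetic here can be sketched as follows, under the standard simplification that equilibrium warming scales linearly with the number of CO2 doublings; the function and numbers are just an illustration of the reasoning above, not a climate model.</p>

```python
def warming(sensitivity_per_doubling, n_doublings):
    """Rough equilibrium warming, assuming the temperature response
    is linear in the number of CO2 doublings (i.e. logarithmic in
    concentration). A back-of-the-envelope sketch only."""
    return sensitivity_per_doubling * n_doublings

# Median sensitivity (~3 degrees per doubling), one doubling:
print(warming(3.0, 1))  # 3.0
# Tail sensitivity (4.5 degrees per doubling) combined with two doublings:
print(warming(4.5, 2))  # 9.0
```

<p>On this crude model, the median sensitivity with one doubling gives the familiar three degrees, while the one-in-six tail sensitivity combined with two doublings gives the nine degrees mentioned above.</p>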
But I've been thinking about all these things in terms of whether they could be existential risks, rather than whether they could lead to terrible situations, which could then lead to other bad outcomes. But one thing is that in both cases, both nuclear winter and climate change, coastal areas are a lot less affected. There's obviously flooding when it comes to climate change, but a country like New Zealand, which is mostly coastal, would be mostly spared the effects of either of these types of calamities. Civilization, as far as I can tell, should continue in New Zealand roughly as it does today, but perhaps without low-priced chips coming in from China.</p> <p><em>Will</em>: I really think we should buy some land in New Zealand.</p> <p><em>Toby</em>: Like as a hedge?</p> <p><em>Will</em>: I'm completely serious about this idea.</p> <p><em>Toby</em>: I mean, we definitely should not screw up with climate change. It's a really serious problem. It's just that the question I'm looking at is: is it an existential risk? Ultimately, it's probably better thought of as a change in the usable areas on the earth. They currently don't include Antarctica. They don't include various parts of Siberia and some parts of Canada, which are covered in permafrost. Effectively, with extreme climate change, the usable parts of the earth would move a bit, and they would also shrink a lot. It would be a catastrophe, but I don't see why that would be the end.</p> <p><em>Will</em>: Between climate change and nuclear winter, do you think climate change is too neglected by EA?</p> <p><em>Toby</em>: Yeah, actually, I think it probably is. Although you don't see many people in EA looking at either of those, I think they're actually very reasonable things to look at. In both cases, it's unclear why they would be the end of humanity, and people in nuclear winter research generally do not say that it would be.
They say it would be catastrophic, and maybe 90% of people could die, but they don't say that it would kill everyone. I think in both cases, they're such large changes to the earth's environment, huge unprecedented changes, that you can't rule out that something that we haven't yet modeled happens.</p> <p>I mean, we didn't even know about nuclear winter until more than 30 years after the use of nuclear weapons. There was a whole period of time when new effects could have happened, and we would have been completely ignorant of them at the time when we launched a war. So there could be other things like that. And in both cases, that's where I think most of the danger of existential risk lies, just that it's such a large perturbation of the earth's system that one wouldn't be shocked if it turned out to be an existential catastrophe. So there are those ones, but I think the things that are of greatest risk are things that are forthcoming.</p> <p><em>Will</em>: So, tell us about the risks from unprecedented technology.</p> <p><em>Toby</em>: Yeah. The two areas that I'm most worried about in particular are biotechnology and artificial intelligence. When it comes to biotech, there's a lot to be worried about. If you look at some of the greatest disasters in human history, in terms of the proportion of the population who died in them, great plagues and pandemics are in this category. The Black Death killed between a quarter and 60% of people in Europe, and it was somewhere between 5 and 15% of the entire world's population. And there are a couple of other cases that are perhaps at a similar level, such as the spread of Afro-Eurasian germs into the Americas when Columbus went across and they exchanged germs. And also, say, the 1918 flu killed about 4% of the people in the world.</p> <p>So we've had some cases that were big, really big. Could they be so big that everyone dies? I don't think so, at least from natural causes. But maybe. 
It wouldn't be silly to be worried about that, but it's not my main area of concern. I'm more concerned with the biotechnological advances that we've had. We've had radical breakthroughs recently. It's only recently that we've discovered even that there are bacteria and viruses, that we've worked out about DNA, and that we've worked out how to take parts of DNA from one organism and put them into another. How to synthesize entire viruses just based on their DNA code. Things like this. And these radical advances in technology have let us do some very scary things.</p> <p>And there's also been this extreme, it's often called democratization of this technology, but since the technology could be used for harm, it's also a form of proliferation, and so I'm worried about that. It's very quick. You probably all remember when the Human Genome Project was first announced. That cost billions of dollars, and now a complete human genome can be sequenced for $1,000. It's kind of a routine part of PhD work, that you get a genome sequenced.</p> <p>These things have come so quickly, and the same goes for CRISPR and gene drives. These were really radical technologies: CRISPR for putting arbitrary genetic code from one animal into another, and gene drives for releasing it into the wild and having it proliferate. There were less than two years between their being invented by the cutting-edge labs in the world, the very smartest scientists, Nobel Prize-worthy stuff, and their being replicated by undergraduates in science competitions. Just two years, and so if you think about that, the pool of people who could have bad motives, who have access to the ability to do these things, is increasing massively, from just a select group of people where you might think there's only five people in the world who could do it, who have the skills, who have the money, and who have the time to do it, through to a thing that's much faster and where the pool of people is in the millions.
There's just much more chance you get someone with bad motivation.</p> <p>And there are also states with bioweapons programs. We often think that we're protected by things like the Bioweapons Convention, the BWC. That is the main protection, but there are states that violate it. We know, for example, that Russia has been violating it for a long time. They had massive programs with more than 10,000 scientists working on versions of smallpox, and they had an outbreak when they did a smallpox weapons test, which has been confirmed, and they also killed a whole lot of people with anthrax accidentally when they forgot to replace a filter on their lab and blew a whole lot of anthrax spores out over the city that the lab was based in.</p> <p>There are really bad examples of bio-safety there, and also the scary thing is that people are actually working on these things. The US believes that there are about six countries in violation of this treaty. Some countries, like Israel, haven't even signed up to it. And also the BWC has the budget of a typical McDonald's, and it has four employees. So that's the thing that stands between us and misuse of these technologies, and I really think that that is grossly inadequate.</p> <p><em>Will</em>: The Bioweapons Convention has four people working in it?</p> <p><em>Toby</em>: Yeah. It had three. I had to change it in my book, because a new person got employed.</p> <p><em>Will</em>: How does that compare to other sorts of conventions?</p> <p><em>Toby</em>: I don't know. It's a good question. So those are the types of reasons that I'm really worried about developments in bio.</p> <p><em>Will</em>: Yeah. And what would you say to the response that it's just very hard for a virus to kill literally everybody, because they have this huge bunker system in Switzerland, nuclear submarines have six-month tours, and so on?
Obviously, this is an unimaginable tragedy for civilization, but still there would be enough people alive that over some period of time, populations would increase again.</p> <p><em>Toby</em>: Yeah. I mean, you could add to that uncontacted tribes and also researchers in Antarctica as other hard-to-reach populations. I think it's really good that we've diversified somewhat like that. I think that it would be really hard for a pathogen to kill literally everyone, and so I think that even if there is a catastrophe, it's likely to not be an existential disaster.</p> <p>But there are reasons for some actors to try to push something to be extremely dangerous. For example, as I said, the Soviets, then Russians after the collapse of the Soviet Union, were working on weaponizing smallpox, and weaponizing Ebola. It was crazy stuff, and tens of thousands of people were working on it. And they were involved in a mutually assured destruction nuclear weapons system with a dead hand policy, where even if their command centers were destroyed, they would force retaliation with all of their weapons. There was this logic of mutually assured destruction and deterrence, where they needed to have ways of plausibly inflicting extreme amounts of harm in order to try to deter the US. So they were already involved in that type of logic, and so it would have made some sense for them to do terrible things with bioweapons too, assuming the underlying logic makes any sense at all. So I think that there could be realistic attempts to make extremely dangerous bioweapons.</p> <p>I should also say that I think this is an area that's under-invested in, in EA. I would say that the existential risk from bio is maybe about half that of AI, or a quarter or something like that. But that's only a factor of two or four in how big the risk is. If you recall, in effective altruism we're not just interested in working on the problem that's biggest in size; we're interested in what marginal impact you'll have.
And it's entirely possible that someone would be more than a couple of times better at working on trying to avoid bio problems than they would be on trying to avoid AI problems.</p> <p>And also, the community among EAs who are working on biosecurity is much smaller as well, so one would expect there to be good opportunities there. But work on bio-risk does require quite a different skillset, because in bio, a lot of the risk is misuse risk, either by lone individuals, small groups, or nation states. It's much more of a traditional security-type area, where working in biosecurity might involve talking a lot with national security programs and so forth. It's not the kind of area where one wants free and open discussion of all the different possibilities. And one also doesn't want to just say, "Hey, let's have this open research forum where we're just on the internet throwing out ideas, like, 'How <em>would</em> you kill every last person? Oh, I know! What about this?'" We don't actually want that kind of discussion about it, which puts it in a bit of a different zone.</p> <p>But for people who actually are able to not talk about things that they find interesting and fascinating and important (a lot of us have trouble with that), and who also perhaps already have a bio background, it could be a very useful area.</p> <p><em>Will</em>: Okay. And so you think that EA in general, even though they're taking these risks more seriously than maybe most people, you think we're still neglecting it relative to the EA portfolio.</p> <p><em>Toby</em>: I think so. And then AI, I think, is probably the biggest risk.</p> <p><em>Will</em>: Okay, so tell us a little bit about that.</p> <p><em>Toby</em>: Yeah. You may have heard more than you ever want to about AI risk.
But basically, my thinking about this is that the reason that humanity is in control of its destiny, and the reason that we have such a large long-term potential, is because we are the species that's in control. For example, gorillas are not in control of their destiny. Whether they flourish or not, I hope that they will, but it depends upon human choices. We're not in such a position compared to any other species, and that's because of our intellectual abilities, both what we think of as intelligence, like problem-solving, and also our ability to communicate and cooperate.</p> <p>But these intellectual abilities have given us the position where we have the majority of the power on the planet, and where we have the control of our destiny. If we create some artificial intelligence, generally intelligent systems, and we make them be smarter than humans and also just generally capable and have initiative and motivation and agency, then by default, we should expect that they would be in control of our future, not us. Unless we made good efforts to stop that. But the relevant professional community, who are trying to work out how to stop it, how to guarantee that they obey commands or that they're just motivated to help humans in the first place, they think it's really hard, and they have higher estimates of the risk from AI than anyone else.</p> <p>There's disagreement about the level of risk, but there's also some of the most prominent AI researchers, including ones who are attempting to build such generally intelligent systems, who are very scared about it. They aren't the whole AI community, but they are a significant part of it. 
There are a couple of other AI experts who say that worrying about existential risk is a really fringe position in AI, but they're actually either just lying or they're incompetently ignorant, because they should notice that Stuart Russell and Demis Hassabis are very prominently on the record saying this is a really big issue.</p> <p>So I think that that should just give us a whole lot of reason to just expect, yeah, I guess creating a successor species probably could well be the last thing we do. And maybe we'd create something that also is even more important than us, and it would be a great future to create a successor. It would be effectively our children, or our "mind children," maybe. But also, we don't have a very good idea how to do that. We have even less of an idea about how to create artificial intelligence systems that have themselves moral status and have feelings and emotions, and strive to achieve greater perfections than us and so on. More likely it would be for some more trivial ultimate purpose. Those are the kind of reasons that I'm worried about.</p> <p><em>Will</em>: Yeah, you hinted briefly, but what's your overall... over the next hundred years, let's say, overall chance you'd assign some existential risk event, and then how does that break down between these different risks you've suggested?</p> <p><em>Toby</em>: Yeah. I would say something like a one in six chance that we don't make it through this century. I think that there was something like a one in a hundred chance that we didn't make it through the 20th century. Overall, we've seen this dramatic trend towards humanity having more and more power, often increasing at exponential rates, depending on how you measure it. But there hasn't been this kind of similar increase in human wisdom, and so our power has been outstripping our wisdom. The 20th century is the first one where we really had the potential to destroy ourselves. 
I don't see any particular reason why we wouldn't expect, then, the 21st century to have our power outstrip our wisdom even more, and indeed that seems to be the case. We also know of particular technologies through which this could happen.</p> <p>And then the 22nd century, I think, would be even more dangerous. I don't really see a natural end to this until we discover almost all the technologies that can be built or something, or we go extinct, or we get our act together and decide that we've had enough of that and we're going to make sure that we never suffer any of these catastrophes. I think that that's what we should be attempting to do. If we had a business-as-usual century, I don't know what I'd put the risk at for this century. A lot higher than one in six. My one in six is because I think that there's a good chance, particularly later in the century, that we get our act together. If I knew we wouldn't get our act together, it'd be more like one in two, or one in three.</p> <p><em>Will</em>: Okay, cool. Okay. So if we just, no one really cared, no one was really taking action, it would be more like 50/50?</p> <p><em>Toby</em>: Yeah, if it was pretty much like it is at the moment, with us just running forward, then yeah. I'm not sure. I haven't really tried to estimate that, but it would be something, maybe a third or a half.</p> <p><em>Will</em>: Okay. And then within that one in six, how does that break down between these different risks?</p> <p><em>Toby</em>: Yeah. Again, these numbers are all very rough, I should clarify to everyone, but I think it's useful to try to give quantitative estimates when you're giving rough numbers, because if you just say, "I think it's tiny," and the other person says, "No, I think it's really important," you may actually both think it's the same number, like 1% or something like that.
I think that I would say AI risk is something like 10%, and bio is something like 5%.</p> <p><em>Will</em>: And then the others are less than a percent?</p> <p><em>Toby</em>: Yeah, that's right. I think that climate change and... I mean, climate change wouldn't kill us this century if it kills us, anyway. And nuclear war, definitely less than a percent. And probably the remainder would be more in the unknown risks category. Maybe I should actually have even more of the percentage in that unknown category.</p> <p><em>Will</em>: Let's talk a little bit about that. How seriously do you take unknown existential risks? I guess they are known unknowns, because we know there are some.</p> <p><em>Toby</em>: Yeah.</p> <p><em>Will</em>: How seriously do you take them, and then what do you think we should do, if anything, to guard against them?</p> <p><em>Toby</em>: Yeah, it's a good question. I think we should take them quite seriously. If we think backwards, and think what risks would we have known about in the past, we had very little idea. Only two people had any idea about nuclear bombs in, let's say, 1935 or something like that, a few years before work on designing the bomb first started. It would have been unknown technology for almost everyone. And if you go back five more years, then it was unknown to everyone. I think that with these issues about AI and, actually, man-made pandemics, there were a few people who were talking about these things very early on, but only a couple of people, and it might have been hard to distinguish them from the noise.</p> <p>But I think ultimately, we should expect that there are unknown risks. There are things that we can do about them. One of the things that we could do about them is to work on things like stopping war. So I think that, say, avoiding great power war, as opposed to avoiding all particular wars. Some potential wars have no real chance of causing existential catastrophe.
But things like World War II or the Cold War were cases where they plausibly could have.</p> <p>I think the way to think about this is not that war itself, or great power war, is an existential risk, but rather it's something else, which I call an existential risk factor. I take inspiration in this from the Global Burden of Disease, which looks at different diseases and shows how much does, say, heart disease cause mortality, morbidity in the world, and adds up a number of disability adjusted life years for that. They do that for all the different diseases, and then they also want to ask questions like how much ill health does smoking cause, or alcohol?</p> <p>You can think of these things as these pillars for each of the different particular diseases, but then there's this question of cross-cutting things, where something like smoking increases heart disease and also lung cancer and various other aspects, so it kind of contributes a bit to a whole lot of different outcomes. And they ask the question, well, what if you took smoking from its current level down to zero, how much ill health would go away? They call that the burden of the risk factor, and you can do that with a whole lot of things. Not many people think about this, though, within existential risk. I think our community tends to fixate on particular risks a bit too much, and they think if someone's really interested in existential risk, that's good. They'll say, "Oh, you work on asteroid prediction and deflection? That's really cool." That person is part of the ingroup, or the team, or something.</p> <p>And if they hear that someone else works on global peace and cooperation, then they'll think, "Oh, I guess that might be good in some way." But actually, if you ask yourself, conditional upon how much existential risk is there this century, "What if we knew there was going to be no great power war?" How much would it go down from, say, my current estimate of about 17%? I don't know. 
Maybe down to 10% or something like that, or it could halve. It could actually have a very big effect on the amount of risk.</p> <p>And if you think about, say, World War II, that was a big great power war, they invented nuclear weapons during that war, <em>because of the war</em>. And then we also started to massively escalate and invent new types of nuclear weapons, thermonuclear weapons, because of the Cold War. So war has a history of really provoking existential risk, and I think that this really connects in with the risks that we don't yet know about, because one way to try to avoid those risks is to try to avoid war, because war has a tendency to make us delve into dark corners of technology space.</p> <p>So I think that's a really useful idea that people should think about. The risk of being wiped out by asteroids is in the order of one in a million per century. I think it's actually probably lower. Whereas, as I just said, great power war, taking that down to zero instead of taking asteroid risk down to zero, is probably worth multiple percentage points of existential risk, which is way more. It's like thousands of times bigger. While a certain kind of nebulous peace-type work might have a lot of people on it, and so not be that neglected, trying to avoid great power wars in particular is another matter. Thinking about the US and China and Russia and maybe the EU, and trying to avoid any of these poles coming into war with each other, is actually quite a lot more neglected. So I think that there would be really good opportunities to try to help with these future risks that way. And that's not the only one of these existential risk factors.
You could think of a whole lot of things like this.</p> <p><em>Will</em>: Do you have any views on how likely a great power war is over the next century then?</p> <p><em>Toby</em>: I would not have a better estimate of that than anyone else in the audience.</p> <p><em>Will</em>: Reducing great power war is one way of reducing unknown risks. Another way might be things like refuges, or greater detection measures, or backing up knowledge in certain ways. Stuff like David Denkenberger's work with ALLFED. What's your view on these sorts of activities that are about ensuring that small populations of people, after a global catastrophe that stops short of extinction, are then able to flourish again rather than just dwindle?</p> <p><em>Toby</em>: It sounds good. Definitely, the sign is positive. How good it is compared to other kinds of direct work one could do on existential risk, I'm not sure. I tend to think that, at least assuming we've got a breathable atmosphere and so on, it's probably not that hard to come back from the collapse of civilization. I've been looking a lot when writing this book at the really long-term history of humanity and civilization. And one thing that I was surprised to learn is that the agricultural revolution, this ability to move from hunter-gatherer, forager-type life, into something that could enable civilization, cities, writing, and so forth, that that happened about five times in different parts of the world.</p> <p>So sometimes people, I think mistakenly, refer to Mesopotamia as the cradle of civilization. That's a very Western approach. Actually, there are many cradles, and there were civilizations that started in North America, South America, New Guinea, China, and Africa. So actually, I think every continent except for Australia and Europe. And ultimately, these civilizations kind of have merged together into some kind of global amalgam at the moment.
And they all happened at a very similar time, like within a couple of thousand years of each other.</p> <p>Basically, as soon as the most recent ice age ended and the rivers started flowing and so on, then around these very rivers, civilizations developed. So it does seem to me to be something that is not just a complete fluke or something like that. I think that there's a good chance that things would bounce back, but work to try to help on that, particularly to do the very first bits of work. As an example, printing out copies of Wikipedia, putting them in some kind of dried out, airtight containers, and just putting them in some places scattered around the world or something, is probably this kind of cheap thing that an individual could fund, and maybe a group of five people could actually just do. We're still in the case where there are a whole lot of things you could do, just-in-case type things.</p> <p><em>Will</em>: I wonder how big Wikipedia is when you print it all out?</p> <p><em>Toby</em>: Yeah, it could be pretty big.</p> <p><em>Will</em>: You'd probably want to edit it somehow.</p> <p><em>Toby</em>: You might.</p> <p><em>Will</em>: Justin Bieber and stuff.</p> <p><em>Toby</em>: Yeah, don't do the Pokemon section.</p> <p><em>Will</em>: What are the non-consequentialist arguments for caring about existential risk reduction? Something that's distinctive about your book is you're trying to unite various moral foundations.</p> <p><em>Toby</em>: Yeah, great. That's something that's very close to my heart. And this is part of the idea that I think that there's a really common sense explanation as to why we should care about these things. 
It's not salient to many people that there are these risks, and that's a major reason that they don't take them seriously, rather than because they've thought seriously about it, and they've decided that they don't care whether everything that they've ever tried to create and stand for in civilization and culture is all destroyed. I don't think that many people explicitly think that.</p> <p>But my main approach, the guiding light for me, is really thinking about the opportunity cost, so it's thinking about everything that we could achieve, and this great and glorious future that is open to us and that we could do. And actually, the last chapter of my book really explores that and looks at the epic durations that we might be able to survive for, the types of things that happen over these cosmological time scales that we might be able to achieve. That's one aspect, duration. I think it's quite inspiring to me. And then also the scale of civilization could go beyond the Earth and into the stars. I think there's quite a lot that would be very good there.</p> <p>But also the quality of life could be improved a lot. People could live longer and healthier in various obvious ways, but also they could... If you think about your peak experiences, like the moments that really shine through, the very best moments of your life, they're so much better, I think, than typical experiences. Even within human biology, we are capable of having these experiences, which are much better, much more than twice as good as the typical experiences. Maybe we could get much of our life up to that level. So I think there's a lot of room for improvement in quality as well.</p> <p>These ideas about the future really are the main guide to me, but there are also these other foundations, which I think also point to similar things. One of them is a deontological one, where Edmund Burke, one of the founders of political conservatism, had this idea of the partnership of the generations. 
What he was talking about there was that we've had ultimately a hundred billion people who've lived before us, and they've built this world for us. And each generation has made improvements, innovations of various forms, technological and institutional, and they've handed down this world to their children. It's through that that we have achieved greatness. Otherwise, we know what it would be like. It would be very much like it was on the savanna in South Africa for the first generations, because it's not like we would have somehow been able to create iPhones from scratch or something like that.</p> <p>Basically, if you look around, pretty much every single thing you can see, other, I guess, than the people in this room, was built up out of thousands of generations of people working together, passing down all of their achievements to their children. And it has to be. That's the only way you can have civilization at all. And so, is our generation going to be the one that breaks this chain and that drops the baton and destroys everything that all of these others have built? It's an interesting kind of backwards-looking idea there, of debts that we owe and a kind of relationship we're in. One of the reasons that so much was passed down to us was an expectation of continuation of this. I think that's, to me, quite another moving way of thinking about this, which doesn't appeal to thoughts about the opportunity cost that would be lost in the future.</p> <p>And another one that I think is quite interesting is a virtue approach. This is often, when people talk about virtue ethics, they're often thinking about character traits which are particularly admirable or valuable within individuals. I've been increasingly thinking while writing this book about this at a civilizational level. 
If you think of humanity as a group agent, the kind of collective things that we do, in the same way as we might think of, say, the United Kingdom as a collective agent and talk about what the UK wants when it comes to Brexit or some question like that. If we think about humanity that way, then I think we're incredibly imprudent. We take these risks, which would be insane risks if an individual were taking them, where, given the potential lifespan of humanity, it's effectively equivalent to us taking risks to our whole future life, just to make the next five seconds a lot better. With no real thought about this at all, no explicit questioning of it or even calculating it out or anything, we're just blithely taking these risks. I think that we're very impatient and imprudent. I think that we could do with a lot more wisdom, and I think that you can actually also come at this from that perspective. When you look at humanity's current situation, it does not look like how a wise entity would be making decisions about its future. It looks incredibly juvenile and immature and like it needs to grow up. And so I think that's another kind of moral foundation that one could come to these same conclusions through.</p> <p><em>Will</em>: What are your views on timelines for the development of advanced AI? How has that changed over the course of writing the book, if at all, as well?</p> <p><em>Toby</em>: Yeah. I guess my feelings on timelines have changed over the last five or 10 years. Ultimately, the deep learning revolution has gone very quickly, and there really are not that many things left that need to happen before you get artificial general intelligence. Progress seems very quick, and there don't seem to be any fundamental reasons why the current wave of technology couldn't take us all the way through to the end.</p> <p>Now, it may not. I hope it doesn't, actually. I think that would just be a bit too fast, and we'd have a lot of trouble handling it.
But I can't rule out it happening in, say, 10 years or even less. Seems unlikely. I guess my best guess for kind of median estimate, so as much chance of happening before this date as happening after this date, would be something like 20 years from now. But also, if it took more than 100 years, I wouldn't be that surprised. I allocate, say, a 10% chance or more to it taking longer than that. But I do think that there's a pretty good chance that it happens within, say, 10 to 20 years from now. Maybe there's like a 30, 40% chance it happens in that interval.</p> <p>That is quite worrying, because this is a case where I can't rely on the idea that humanity will get its act together. I think ultimately the case with existential risk is fairly clear and compelling. This is something that is worth a significant amount of our attention and is one of the most important priorities for humanity. But we might not have been able to make that case over short time periods, so it does worry me quite a bit.</p> <p>Another aspect here, which gets a bit confusing, and it's sometimes confused within effective altruism, is try to think about the timelines that you think are the most plausible, so you can imagine a probability distribution over different years, and when it would arrive. But then there's also the aspect that your work would have more impact if it happened sooner, and I think this is a real thing, such that if AI is developed in 50 years' time, then the ideas we have now about what it's going to look like are more false. Trying to do work now that involves these current ideas will be more shortsighted about what's actually going to help with the problem. And also, there'll be many more people who've come to work on the problem by that point, so it'll be much less neglected by the time it actually happens, whereas if it happens sooner, it'll be much more neglected. 
Your marginal impact on the problem is bigger if it happens sooner.</p> <p>You could start with your overall distribution about when it's going to happen, and then modify that into a kind of impact-adjusted distribution about when it's going to happen. That impact-adjusted distribution is ultimately the kind of thing that's most relevant when you think about what to work on. Effectively, this is perhaps just an unnecessarily fancy way of saying, one wants to hedge against it coming early, even if you thought that was less likely. But then you also don't want to get yourself all confused and then think it is coming early, because you somehow messed up this rather complex process of thinking about your leverage changing over time, as well as the probability changing over time. I think people often do get confused. They then decide they're going to focus on it coming early, and then they forget that they were focusing on it because of leverage considerations, not probability considerations.</p> <p><em>Will</em>: In response to the hedging, what would you say to the idea that, well, in very long timelines, we can have unusual influence? So supposing it's coming in 100 years' time, I'm like, "Wow, I have this 100 years to kind of grow. Perhaps I can invest my money, build hopefully exponentially growing movements like effective altruism and so on." And this kind of patience, this ability to think on such a long time horizon, that's itself a kind of unusual superpower or way of getting leverage.</p> <p><em>Toby</em>: That is a great question. I've thought about that a lot, and I've got a short piece on this online: <a href="">The Timing of Labour Aimed at Existential Risk Reduction</a>. And what I was thinking about was this question about, suppose you're going to do a year of work. Is it more important that a year of work happens now or that a year of work happens closer to the crunch time, when the risks are imminent? And you could apply this to other things besides existential risk as well.
Ultimately, I think that there are some interesting reasons that push in both directions, as you've suggested.</p> <p>The big one that pushes towards later work, such that you'd rather have the year of work be done in the immediate vicinity of the difficult time period, is something I call nearsightedness. We just don't know what the shape of the threats is. I mean, as an example, it could be that now we think AI is bigger than bio, but then it turns out within five or 10 years' time that there've been some radical breakthroughs in bio, and we think bio's the biggest threat. And then we think, "Oh, I'd rather have been able to switch my labor into bio."</p> <p>So that's an aspect where it's better to be doing it later in time, other things being equal. But then there are also quite a few reasons why it's good to do things earlier in time, and these include what you were suggesting: growth. Your money in a bank or in investments could grow, such that you do the work now, you invest the money, the money grows much bigger, and then you pay for much more work later. Obviously, there's growth in terms of people and ideas, so you do some work growing a movement, then you have thousands or millions of people try to help later, instead of just a few. Also, growing an academic field works like that. A lot of things do.</p> <p>And then there are also other related ideas, like steering. If you're going to do some work on steering the direction of how we deal with one of these issues, you want to do that steering work earlier, not later. It's like the idea of diverting a river. You want to do that closer to the source of the river. And so there are various of these things that push in different directions, and they help you to work out the different things you were thinking of doing. I like to think of this as a portfolio, in the same way as we think perhaps of an EA portfolio, what we're all doing with our lives.
It's not the case that each one of us has to mirror the overall portfolio of important problems in the world, but what we should do together is contribute as best we can to humanity's portfolio of work on these different issues.</p> <p>Similarly, you could think of a portfolio over time, of all the different bits of work and which ones are best to be done at which different times. So now it's better to be thinking deeply about some of these questions, trying to do some steering, trying to do some growth. And direct work is often more useful to be done later, although there are some exceptions. For example, it could be that with AI safety, you actually need to do some direct work just to prove that there's a "there" there. And I think that that is actually sort of effectively direct work on AI safety at the moment. The main benefit of it is actually that it helps with the growth of the field.</p> <p>So anyway, there are a few different aspects on that question, but I think that our portfolio should involve both these things. I think there's also a pretty reasonable chance, indeed, that AI comes late or that the risks come late and so on, such that the best thing to be doing was growing the interest in these areas. In some ways, my book is a bet on that, to say it'd be really useful if this idea had a really robust and good presentation, and to try to do that and present it in this right way, so that it has the potential to really take off and be something that people all over the world take seriously.</p> <p>Obviously, that's in some tension with the possibility AI could come in five years, or some other risk, bio risk, could happen really soon. Or nuclear war or something like that. But I think ultimately, our portfolio should go both places.</p> <p><em>Will</em>: Terrific. Well, we've got time for one last short question. First question that we got. 
Will there be an audiobook?</p> <p><em>Toby</em>: Yes.</p> <p><em>Will</em>: Will you narrate it?</p> <p><em>Toby</em>: Maybe.</p> the-centre-for-effective-altruism tCcrWDsMKAgr9dZMe 2019-03-01T15:48:53.881Z Oscar Horta: Promoting Welfare Biology as the Study of Wild Animal Suffering <p><strong>Content note: this transcript includes pictures of animals suffering.</strong></p> <p><em>Animals in the wild often suffer tremendously, from starvation, exposure to the elements, and preventable disease. Rabies vaccination programs have substantially helped certain wild animals, but those programs were designed mostly to protect humans and our pets. What if we went a step further and tried to help wild animals for their own sake? In this talk from EA Global 2018: London, Oscar Horta argues that we might be able to make a truly huge impact through the study of welfare biology.</em></p> <p><em>A transcript of Oscar's talk is below, which we have lightly edited for clarity. You can also read the talk on <a href=""></a>, or watch it on YouTube <a href="">here</a>.</em></p> <h2>The Talk</h2> <p>We're going to start first with a little experiment. I want you to think, just for a second, of a wild animal, the first one that comes to mind. Okay? You got it? That's good. We'll come back later to that.</p> <p><img src="//" alt="1100 Oscar Horta (1)"></p> <p>The idea of this presentation was to present what wild animal suffering is in general, and then see what we are doing right now to tackle it. We have this aim now, which is the creation of a new field of research, a new scientific field called Welfare Biology to address wild animal suffering. But before I get into that, we need to show why we should be worried about wild animals in the first place. So what I'll do is, first explain why wild animal suffering is important. 
Then I'll present some ways in which we are already helping wild animals, and then I'll come back to the reasons to create a new field of research.</p> <p><img src="//" alt="1100 Oscar Horta (2)"></p> <p>So, yeah, wild animal suffering is important. Many people have this idyllic view of nature; they think that nature is a paradise for animals.</p> <p><img src="//" alt="1100 Oscar Horta (3)"></p> <p>It's not that they think that during the evening the animals join together and sing songs and all that, but still, they think that animals are leading good lives in the wild. Unfortunately, this is not really what happens.</p> <p><img src="//" alt="1100 Oscar Horta (4)"></p> <p>There are many reasons why many animals in fact have very bad lives. There are natural causes such as extreme weather conditions, hunger and malnutrition, parasites, or injuries.</p> <p><img src="//" alt="1100 Oscar Horta (5)"></p> <p>For instance, for this animal, an injury such as this one can mean death. He or she can't go to a health center and get some antibiotics or something.</p> <p><img src="//" alt="1100 Oscar Horta (6)"></p> <p>And then we see this, also extremely common. Many animals die due to horrible diseases that cause them suffering over long periods of time.</p> <p><img src="//" alt="1100 Oscar Horta (7)"></p> <p>We think of them as used to that, but that's not the case. They suffer just as humans would. And on top of this, there are reasons to believe that this is not something that happens just to a tiny minority of animals. It's just the other way around.</p> <p>And now I want to come back to our experiment from before. So I want to ask you, how many of you thought of a mammal when I asked you to think of a wild animal? Wow, a lot of people. How many of you thought of a bird? Just a couple of people. Reptile? One. Amphibian? One. A fish? One. An invertebrate?
Okay, the tide is changing, so some people are thinking of an invertebrate. This is good. It shows that we are making progress on this.</p> <p>Now, the most relevant question: how many of you thought of baby animals, or very young animals? Only one. So the rest of you basically thought of adult animals. But what happens is that in nature, most animals reproduce by having huge numbers of offspring. This happens even in the case of mammals: rodents can have hundreds of offspring.</p> <p><img src="//" alt="1100 Oscar Horta (8)"></p> <p>Other animals can have thousands of offspring during their life.</p> <p><img src="//" alt="1100 Oscar Horta (9)"></p> <p>Some may have millions of them. On average, how many of these animals would you guess survive and make it to maturity? It's very simple. On average, for a stable population, only one animal per parent makes it.</p> <p><img src="//" alt="1100 Oscar Horta (10)"></p> <p>What happens to the other animals? They die, most of them shortly after coming into existence. The thing is that their deaths aren't really nice deaths. They often die due to hunger. Many animals never eat: they come into existence, look for food, never find any, and they just die. Others are killed by the cold. Others are eaten alive. And this happens to the overwhelming majority of animals. So this shows that this issue really is serious and deserves more attention than it has received so far.</p> <p><img src="//" alt="1100 Oscar Horta (11)"></p> <p>What are we doing right now to tackle this? Most of what is done deals with very small numbers of animals, or with just one animal.
So every now and then, you can see in the media cases of people helping animals in distress, like, for instance, this fawn, who was trapped in a frozen lake and was rescued.</p> <p><img src="//" alt="1100 Oscar Horta (12)"></p> <p>Or, in some cases, there are efforts that try to help more animals, like centers for injured animals, sick animals, or orphaned animals, such as this baby rhino.</p> <p><img src="//" alt="1100 Oscar Horta (13)"></p> <p>There you have more examples of animals treated in centers such as this one, getting adequate medical care, and so on.</p> <p><img src="//" alt="1100 Oscar Horta (15)"></p> <p>So when we see these pictures, we think, "Well, it's great that we are helping these animals."</p> <p><img src="//" alt="1100 Oscar Horta (14)"></p> <p>But when we consider how many animals really are facing these terrible situations, it seems that we need to go further than that. There are some efforts that try to help more animals. These are animal feeders.</p> <p><img src="//" alt="1100 Oscar Horta (16)"></p> <p>They dispense measured amounts of food for animals. They're used in some cases where certain animal populations are threatened, perhaps because they are facing a particularly harsh winter or something. You can see this night picture of some animals going to the station to feed.</p> <p><img src="//" alt="1100 Oscar Horta (17)"></p> <p>This is mainly done for conservation reasons: people want to keep a certain population active for scientific reasons, or because they want tourists to see these animals. But that's different from caring for the animals themselves. Still, this helps, and the knowledge we have about how to deal with situations of hunger could be applied in other cases as well.</p> <p><img src="//" alt="1100 Oscar Horta (18)"></p> <p>More ambitious efforts can be considered too.
As you can see, this is a picture of a scientific paper about vaccinating wild animals against tuberculosis. And several other diseases have been researched in order to learn how best to eradicate them from certain populations.</p> <p><img src="//" alt="1100 Oscar Horta (19)"></p> <p>Here's another paper, this one tackling swine fever virus.</p> <p><img src="//" alt="1100 Oscar Horta (20)"></p> <p>And another one, this time against rabies. I want you to notice the date of this paper, which is 1988. So this research has been going on for a while already. It's been decades since scientists started to work on this, and much progress has been made.</p> <p>Rabies has been eradicated in many countries in northern Europe and wide areas of North America. And again, the reason this measure is carried out is not that people are concerned about the animals, or that we don't want them to suffer this horrible death. Rather, we don't want those animals to pass these diseases to human beings, or to the animals human beings live with. But still, even if it's not the purpose we are trying to achieve, we are helping a lot of animals. And even though there's been some research on this already, much more work could be carried out.</p> <p><img src="//" alt="1100 Oscar Horta (21)"></p> <p>Just to explain how this is done: these are biscuits that smell and taste nice to the animals, with the vaccine inside. Then they are distributed in the wild. There are different ways to do this.</p> <p><img src="//" alt="1100 Oscar Horta (22)"></p> <p>One is with these dispensers, which release just one biscuit at a time, so no animal gets a lot of them. They also use helicopters carrying boxes with doses of the vaccine, and just distribute them like candy for the animals.</p> <p><img src="//" alt="1100 Oscar Horta (23)"></p> <p>Yeah.
It's amazing how we can do things that actually help, not just one animal, not just ten animals, but thousands of animals. And, as I said, our current efforts are only really concerned with humans. So imagine how much we could help these animals if we were concerned for the animals themselves.</p> <p>So this is where the need for a new field of research comes from. There are several cost-efficient courses of action today to address wild animal suffering. And of course one is to spread the idea that animals in the wild matter.</p> <p><img src="//" alt="1100 Oscar Horta (25)"></p> <p>This implies, first, speaking out against the discrimination of animals, against speciesism, and spreading concern for animals in general. But then, spreading concern for wild animals in particular, because there are many people who, while concerned about animal welfare and animal rights, have never thought that wild animals may need our help, because they are suffering due to natural causes.</p> <p>Some organizations are working on this problem. I work at Animal Ethics, and we are distributing materials to educate the general public, with a special focus on people who are involved in academia. We also want to reach animal advocates and give them information about this, so they themselves can go on working and spreading the word about it.</p> <p><img src="//" alt="1100 Oscar Horta (26)"></p> <p>Here is a picture of our website. It's in Chinese because it's so cool that we have our website in eight different languages. I could have put it in English, but, you know, I'm putting it in Chinese! Why not?</p> <p>But still, this is only a part of the solution. There are more things that are necessary here. One of them is supporting the interventions that are already being carried out, such as the ones that I presented before. And then helping to create and develop new ways of helping animals in nature. And it's here that raising interest among life scientists is key.
The reason for this is that when you consider the work that life scientists carry out that is related, directly or indirectly, to wild animal suffering, what you find is that there is no idea of wild animal suffering as such, or even of wild animal welfare.</p> <p>For instance, consider the work of animal welfare scientists, who may work with animals that are exploited by humans. In fact, there is a field that is <em>called</em> wild animal welfare. What they do is focus on wild animals that are in captivity or, in some cases, wild animals that are being affected in the wild by humans, by, say, hunting or fishing or similar activities. Right? There is also another field, compassionate conservation, which focuses on trying to achieve conservation in ways that don't harm individual animals.</p> <p>So all these are fields that are related to what we need here, but aren't quite the same thing. And then we have the field of ecology, which now has many subfields. Ecology works on the study of ecosystemic relations, so there is community ecology, population ecology, behavioral ecology; all of them are subfields of ecology. But what we don't have yet is welfare biology, or welfare ecology. What is welfare biology? Well, it's been defined as the study of living beings with respect to their positive and negative wellbeing.</p> <p>Another way of understanding this is that it's just the study of how animals feel in all kinds of situations, including in the wild. So welfare biology would include animal welfare science as we understand it today, but it would go further than that, because it would also address the situations that animals are undergoing in the wild. This is a new field that we have to create. And it's amazing that in ecology and in animal welfare science, there is no work on this.
Right?</p> <p>Clearly, even from a purely scientific, epistemic viewpoint, if we want to know about the reality of animals or the reality of ecosystems out there, then the wellbeing of animals clearly seems to be something very relevant. If on top of that we are not only curious about how things are, but also concerned about how those things are for particular individuals, then it seems clear that we have a major reason to try to develop these new fields.</p> <p><img src="//" alt="1100 Oscar Horta (30)"></p> <p>There is some work going on in these fields already. This is a list of publications that have appeared so far, and you can see that it's a long list. I put it there not so you can read them, but only for you to see how long the list is. But even if it's a long list, it's not long enough. And in addition, a significant part of this literature is by people who are working in philosophy or ethics or other related fields, not actual biology. And that's what we need.</p> <p>We need biologists who are involved in this. This is what's necessary now. This is really something we need.</p> <p>Fortunately, there are already some people who are getting involved. We are now creating a small network of ecologists and other biologists. Some animal welfare scientists are starting to be interested in this. Creating this new field is actually feasible now. Some years ago, this could seem like a crazy idea. It's not going to be immediate, it's going to take a while, but we are on our way there.</p> <p>What are we doing now? By us, I mean the people who are working in this field in general. There are several organizations working on this; Animal Ethics is one of them. Then there is Wild Animal Suffering Research, Utility Farm, and other groups are working on this too. In particular, in the case of Animal Ethics, we are now carrying out research on how new scientific fields have been created in the recent past.
We have already interviewed around 15 scientists in different countries, mainly biologists but also animal welfare scientists, to see what ideas they have regarding this and what kinds of interventions they think it would be promising to research. We've asked people from different countries: the UK, the US, elsewhere in Europe such as Germany and Switzerland, but also in Latin America, in Brazil and Mexico. So we tried to cover a wide range. We are also working on designing drafts of research projects that welfare biologists could work on, to make it as easy as possible for interested people to do this work. And we are working on designing courses that a biology scholar could teach at university, courses that are either focused on wild animal welfare or that include welfare biology among other concerns.</p> <p>So, yes, there is much work to be done, but as I said, what's most important is getting life scientists involved. I would have liked to have more time to speak about welfare biology as such and the new developments, but I thought that it would be useful to first tell you a bit about wild animal suffering.</p> <p>On behalf of all these animals, again, I want to thank you for your interest in this topic. Thanks.</p> <p><img src="//" alt="1100 Oscar Horta (31)"></p> <h2>Questions</h2> <p><em>Question</em>: Should we care about animal extinction along the same lines that we care about human extinction? Is there a non-speciesist difference between the two cases?</p> <p><em>Oscar</em>: Actually, if you are concerned about animals themselves, you aren't really concerned about what happens to the species as such. Consider the case of humans: suppose that humans were somehow replaced by other beings who would be more caring, more intelligent, and with better aims than we have. Would that be bad?
Many people, at least among effective altruists, would say, "Well, that would probably be a good thing."</p> <p>So this would be something that would have to do somehow with instrumental reasons, but it also shows that we aren't concerned with species as such. We are concerned with individuals. And the same would happen in the case of animals, I would say.</p> <p><em>Question</em>: Especially regarding the vaccination efforts that you mentioned in your talk: in general, won't wild animals just die of something else, even if they are given a vaccine? Does this pose a problem for the field of wild animal suffering?</p> <p><em>Oscar</em>: Yeah, that's a good question. The thing is, there are different ways of dying, and it's not just the harm of death we're worried about. In fact, there are some people who don't believe in the harm of death specifically. Most people think that when you die, you lose everything, so you can't have any more good things in your life, so dying is a harm. But in addition to this, there is also the harm of suffering, and some diseases really are terrible and cause terrible amounts of suffering. So if we could avoid that, it would be worth it.</p> <p>But this type of question also allows us to present what would really be the best way to address this issue, which would be on a larger scale. What welfare biologists could do is research the amount of suffering that different ecosystemic relations create in comparison to others. For instance, when you consider the conservation of elephants: well, killing elephants may be bad for the elephants, but there is something else to take into account there, which is that elephants are eaters of huge amounts of biomass.</p> <p>So if they aren't there, that biomass is going to be eaten by tiny invertebrates who will have lots of offspring, and they will be eaten by other invertebrates, a bit larger, and so on.
We will have very long trophic chains in which there is much suffering. So it's not just about creating particular interventions that reduce suffering; it's about studying the big picture and looking at the direction in which we want ecosystems to go for there to be less suffering.</p> <p><em>Question</em>: Is there a risk that welfare biology will focus almost entirely on mammals and birds and, if so, does that change the cost-benefit analysis of welfare biology generally?</p> <p><em>Oscar</em>: Yeah, that's another good question. I think that it will definitely focus on vertebrates at the beginning, for several reasons. Not necessarily in the case of all the research that is going to be carried out, but surely, the focus is going to be on those animals at first. But that may be the foot in the door: the way to get more work done in general and to establish the name of welfare biology as something that is respected in academia, which will then allow us to go on and do research on other animals as well.</p> <p>It's also like this in the case of animal advocates in general, who mainly work on vertebrates, but who are now starting to consider invertebrates as well.</p> the-centre-for-effective-altruism eAgeFAeGY26w527c5 2019-02-26T15:51:11.598Z Joey Savoie: Charity Entrepreneurship <p><em>Want to start a high-impact charity? It’s not an easy job, but it can be an incredible way to have an impact. In this talk from EA Global 2018: London, Joey Savoie discusses the pros and cons of founding a charity. He also describes what his organization, Charity Entrepreneurship, can do to help would-be founders get started.</em></p> <p><em>This transcript of an EA Global talk, which CEA has lightly edited for clarity, is crossposted from <a href=""></a>. You can also watch the talk on YouTube <a href="">here</a>.</em></p> <h2>The Talk</h2> <p>So, we are going to talk about charity entrepreneurship.
But first, I'm going to take you to a slum of Lucknow.</p> <p><img src="//" alt="1700 Joey Savoie"></p> <p>Lucknow is a city in India, and this picture is fairly representative of the state of affairs. You can tell from the picture that there are giant health and poverty concerns. It really is a place where charitable intervention can make a huge difference. One thing you can't tell from this picture, though, is the sheer size of this slum. A team of SMS vaccine reminder staff went from building to building trying to find pregnant mothers, and it took hours to cover even a small section of this slum.</p> <p><img src="//" alt="1700 Joey Savoie (1)"></p> <p>But of course that is just one slum in a much larger city. Lucknow has several hundred slums, 200 to 300 by current records. Each of them has a unique set of problems, although there are some commonalities in terms of global health and economic difficulties. There's a huge volume of good that can be done in a city like Lucknow.</p> <p><img src="//" alt="1700 Joey Savoie (2)"></p> <p>But of course we can zoom out further and look at Uttar Pradesh. This is a state in India, but it is such a big state that if it were a country, it'd be the sixth largest country in the world. It truly is massive. You could have a giant organization spend their entire budget and all their staff time working in Uttar Pradesh, and not even make a dent in the massive-scale problems that they have, from poverty and other sorts of issues.</p> <p><img src="//" alt="1700 Joey Savoie (3)"></p> <p>But of course, we can zoom out even further and look at India. A country with over a billion people, and problems to match.
Although there are incredible charities working there, there's still a huge need for more organizations working intelligently, systematically, and with evidence.</p> <p><img src="//" alt="1700 Joey Savoie (4)"></p> <p>Of course, India is not the only country with problems, and global poverty isn't the only problem. There are tons of different issues that one can work on, like animal welfare, mental health challenges, economic development, or migration. There are a lot of gaps in the world where new and effective organizations could be founded.</p> <p>I'm talking about these gaps because I think a lot of people are under the impression that there's already a ton of charities out there, and that maybe all the best opportunities have been filled by existing organizations. That's really not the case. There really is room for fantastic new charities to be founded and to be fantastically high-impact.</p> <p><img src="//" alt="1700 Joey Savoie (5)"></p> <p>I'm going to talk about why charity entrepreneurship is important. I'll talk about its importance to the world, and to the EA movement specifically. I'll also talk about who charity entrepreneurship might be a good fit for; it really is not a good fit for everybody. Some people are fantastically well aligned and do a really good job, while other people aren't a good career fit and shouldn't enter the space.
Finally, I'll address how Charity Science is aiming to help new charities get founded and started off on the right foot.</p> <p><img src="//" alt="1700 Joey Savoie (6)"></p> <p>I'm going to make this argument from more of a cluster-thinking perspective than a sequence-thinking perspective, which means coming at it from a bunch of different angles and showing that charity entrepreneurship looks very good and very high impact from several different perspectives.</p> <p>The first thing that comes to almost everyone's mind when they think about the potential impact of charity entrepreneurship is the sheer amount of good that you can do when you found a successful charity.</p> <p><img src="//" alt="1700 Joey Savoie (7)"></p> <p>Here's some of the money moved to some top GiveWell-recommended charities. You can see it's often in the tens of millions of dollars. And these are just the numbers from GiveWell itself, as opposed to the total money that's going towards each charity.</p> <p>Suffice to say that starting a high-impact charity can redirect millions of dollars in a positive direction. So, even if your charity is only 1% more effective than the charity that a donor would have given to otherwise, it can have a really massive impact just because of the sheer volume of money.</p> <p>There's also a force multiplication argument here. You're not just directing money when you're founding a new charity. You're directing talent, interest, and passion towards your chosen issue. There are a lot of people who will work for a high-impact charity, but wouldn't ever found one themselves. By creating a new high-impact charity, you're creating an opportunity for talented individuals to get invested in the field and to make a difference.</p> <p>Finally, there are hits. Everyone wants their charity to be successful, and charity entrepreneurship is inherently, like normal entrepreneurship, a risky business.
A lot of charities will be started whose impact analysis comes back bad, or that don't have a valid way of scaling. There are a lot of ways to fail, but there are also a lot of ways to have massive success, success that is incomparable to many other jobs. For example, a minor hit, although it feels funny to call it a <em>minor</em> hit, would be becoming a GiveWell-recommended charity: being just a small percentage better than the other GiveWell-recommended charities, or even simply giving GiveWell more options to recommend, attracting donors from different spaces and different interests. So a minor hit can make a huge difference.</p> <p>But that's not even taking into account the <em>major</em> hits. Most giant charitable organizations that are around today started off as a small group. The benefits of an EA group being the group to start the next Oxfam, and shaping an entire cause area, are truly massive.</p> <p><img src="//" alt="1700 Joey Savoie (8)"></p> <p>The next thing I want to talk about is neglectedness. Charity entrepreneurship isn't a salient career path for a lot of people. Many people will have considered entrepreneurship as a career path, and many, many people will consider working for a charity. But founding a charity is off the radar, even in EA. This chart shows the percentage of people working in different jobs, from the most recently conducted EA survey. Several thousand people responded, and an incredibly small number of them have seriously considered founding high-impact charities. Of those who have, almost all of them have been in global poverty, with organizations like New Incentives, Charity Science Health, or Fortify Health.
So there's a truly large opportunity for more people to get involved in the space, more people to work in the space, and eventually for people to start high-impact organizations.</p> <p><img src="//" alt="1700 Joey Savoie (9)"></p> <p>The next thing we'll talk about is tractability and track record. Founding a charity is a difficult job, especially founding one good enough to be a GiveWell or Animal Charity Evaluators recommended charity, but it's not impossible. Some of our collective track record shows this. A lot of the charities we view as strongest and most impactful in the EA movement weren't started by someone with 55 years of experience in the relevant area. They were instead started by someone who came into it with more of an analytical mindset, more of an EA mindset, someone looking for cost-effectiveness or an evidence base.</p> <p>A lot of the recent charities that have been started and have become GiveWell-incubated came in with the same mindset. That's a huge competitive advantage over other charities that happened to be cost-effective by luck, as opposed to explicitly seeking cost-effectiveness out or trying to maximize it.</p> <p><img src="//" alt="1700 Joey Savoie (10)"></p> <p>No good EA presentation would be complete without an expected value calculation. So what's the numerical worth of charity entrepreneurship? Well, there are a couple of different calculations. Peter Hurford's calculation assumes an 85% chance that the charity has zero impact, so fails completely, and a 15% chance of becoming GiveWell-recommended. These numbers were based on Charity Science Health, the charity that we founded, after doing a similar round of analytical research to the ones that we now do for all sorts of charities.</p> <p>His model determined that the average staff member would be worth $400,000 of equivalent donations to high-impact charity. That's just the average staff member; that wasn't for co-founders or the founding team in particular.
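The expected-value arithmetic behind this kind of two-outcome model can be sketched as follows. Note that the talk only states the two probabilities and the $400,000 result; the dollar value of a "hit" below is back-solved from those numbers and is an assumption for illustration, not a figure from Hurford's actual model, which surely has more structure.

```python
# Rough sketch of the two-outcome expected-value model described in the talk.
p_fail = 0.85       # charity fails completely: zero impact
p_success = 0.15    # charity becomes GiveWell-recommended

ev_per_staff_member = 400_000  # stated result, in donation-equivalent dollars

# Back out the implied donation-equivalent value of a success per staff
# member (an assumption: a real model would estimate this directly from
# money moved, cost-effectiveness, counterfactuals, and so on):
value_if_success = ev_per_staff_member / p_success  # roughly $2.67 million

# Recombining the two outcomes recovers the stated expected value:
ev = p_fail * 0 + p_success * value_if_success
print(f"EV per staff member: ${ev:,.0f}")  # EV per staff member: $400,000
```

The point of writing it out is just to show how heavily the result leans on the value of the rare success: most of the probability mass is on zero impact, yet the expected value is still very large.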
This is an incredibly high impact thing. This is $400,000 donated. So, even if you earned $400,000, you'd have to donate 100% of it to match this level of impact. We did an internal model that was a bit more pessimistic, assuming that there's only a certain chance that someone would get to the point that Charity Science Health has reached and get GiveWell-incubated, and it ended up at a similar figure of $200,000 of expected value of donations.</p> <p>This sort of impact is really, really high, and these calculations are quite conservative relative to a lot of the other impact estimates going around the EA movement.</p> <p>There's a whole bunch of other benefits that I'd love to spend more time on, but I'm going to go through them really quickly because we only have so much time. The first one is skill building. Entrepreneurship gives you an opportunity to try on a lot of different hats. That's part of why it's hard and intimidating, but it also gives you a chance to build a lot of different skills. Even if you fail at founding a charity, going into the next job with basic budgeting, fundraising, management, and hiring skills gives you a huge advantage, and those skills will stick with you for a long time.</p> <p>Similarly, career capital. If someone sees that you made a good attempt at a project, even a failed one, and especially a successful one, that does wonders for your CV and career capital in general. You can use a successful charity as a stepping stone towards getting into a high-impact position with the World Health Organization, or any other organization that would value that sort of thing.</p> <p>The next thing is attributable impact. Calculating the impact that you're going to have is really, really difficult, and with charity entrepreneurship there's one less step you have to calculate.
If you do, in fact, start a charity that no one else would have founded, one that wouldn't have gotten started without your time and energy put into it, what you're mostly looking at is that charity's impact as a whole, instead of having to calculate both the organization's impact and then your specific impact within the organization. When calculating your impact in other cases, maybe the organization is great, but your personal role is very minor. Or maybe your staff impact is big, but the organization sucks. With charity entrepreneurship, you only have to calculate the organization's impact.</p> <p>Job satisfaction is the next one. As I said, it's really not the perfect fit for everybody, and we'll go a little bit more into that soon. But for the right personality type, it's incredibly enjoyable. Being able to look at your charity and know that you built it from scratch, being able to work with flexible hours, with a bunch of different staff. There are a lot of benefits to it. There's an unparalleled amount of job diversity. But there are also cons. Ambiguity is tough, and it's a thing you're going to have to deal with as a charity entrepreneur.</p> <p>Not only are there personal benefits for founders, there are also benefits to the EA movement in general. Charity entrepreneurship enhances movement growth by expanding the EA movement outside of its traditional sphere. By getting involved at the charity level and hiring people in that field, working with people who, say, work in vaccinations, you expand EA in a very concrete way to a sympathetic audience.</p> <p>There's also a much clearer case for impact for some of these charities. It's fine to go and tell someone that you're doing a philosophy think tank that will eventually save humanity, but it sure is nice to also be able to say we started a vaccine charity that people think is highly cost effective.
That sort of concrete case for impact can benefit the whole EA movement, in terms of showing that we are in fact doing what we say we're doing and having success doing that.</p> <p>The next thing is stability. Organizations tend to outlive movements. The EA movement is a social movement, and it is fragile in many ways. It gets stronger the more organizations there are to anchor it and tie it to reality in a long-lasting way.</p> <p>Finally, opportunities. Lots of EAs want to work for EA organizations. Lots of EAs want to work in high impact jobs. As I mentioned with the force multiplier before, by creating an opportunity, you create space for people to grow, develop their capacities, and expand the EA movement. It gives a space for people to go once they get involved.</p> <p>Next up, community learning value. Even a failed project can be massively impactful if you get a lot of learning value from it. One of our early projects didn't work at all, but we were able to publish a giant report explaining why it didn't work, and tens of other organizations in the EA movement were able to learn from that mistake and not repeat the same thing. If your charity does fail, and you are able to be transparent about why it failed, and you learn from it, you can benefit not only the charity entrepreneurship community within EA, but the broader EA movement as a whole, because a lot of these lessons are generalizable.</p> <p>Finally, there's inspiration. If you can inspire someone else to found a charity through your example, that can be massively high impact. If people can see other people doing successful, ambitious projects, it can set a precedent. So for instance, we saw New Incentives do a really great job founding their charity, and that gave us confidence to start Charity Science Health.
Charity Science Health gave Fortify Health confidence to do that, and now Fortify Health, New Incentives, and Charity Science Health can give other EAs a chance to look at charities that have been successful and feel inspired by the possibility.</p> <p>The next thing is passive impact. So, for people who have heard of passive income, passive impact is a very similar concept. Basically, if you set up a charity to run independently without you, and it continues to do good in the world, you continue to hold some sort of counterfactual responsibility for that impact.</p> <p>The last thing that I'll talk about briefly is just the room for more funding. Room for more funding isn't a huge impact if it's filled by someone who's otherwise going to donate to a fantastic charity, but by creating a new charity, you can leave a lot of room for new donors to get involved and donate to maybe something particular to their interest, while still making a really high impact.</p> <p><img src="//" alt="1700 Joey Savoie (11)"></p> <p>So why now? There are a lot of reasons why founding a charity now in particular is maybe a lot better than it was historically. Hopefully, it will continue to be this good in the future. There's a lot of funder support and funder interest in this sort of thing. The GiveWell incubation program has been trying to fund programs that might eventually become GiveWell top charities. Animal Charity Evaluators has money for this, Open Philanthropy is very interested in new charities, and of course Charity Science, my organization, provides seed funding to new projects starting up. It's an unparalleled time, when funding probably won't be the major bottleneck for a lot of charities being founded, if they are founded in an evidence-based way in an evidence-based cause.</p> <p>There's also mentorship support. You're not the first charity working on this anymore, so you are able to connect with an alumni community.
Charities that we've talked to have shared hiring pools and strategies for management and all sorts of different things. The EA community really is starting to build up a network of people you can talk to about issues, whether it's communications or research, and really get an informed perspective from someone who's done something quite similar, quite recently.</p> <p>Finally, there are still gaps. That's why I talked about one specific case at the beginning. It's really, really easy to forget just how big the world is, and just how large-scale our problems are. There are a ton of malaria charities, and yet there's still malaria, killing hundreds of thousands of people every year. There's still a lot of work to be done, and the EA movement can contribute a lot more to that. Specifically, there are even lists of ideas that people would like to see more of. GiveWell has a list of priority programs that has 25 ideas. ACE has a list of charities that it would like to see, with 17 ideas. At Charity Science Entrepreneurship, we want to run a research program that recommends two to five particularly promising ideas to found every single year, in the GiveWell priority program ballpark, or even higher impact, like something that could compete with AMF.</p> <p><img src="//" alt="1700 Joey Savoie (12)"></p> <p>So I want to talk a little bit about who Charity Entrepreneurship is a good fit for. As I mentioned, it really is not a great fit for everybody, but people are often surprised at what they need going in, and what would make them a good fit or not. So I'll talk about personality, what it helps to have, what you don't really need or what people tend to overvalue, and how you might further test this (in case a 30-minute presentation can't convince you one way or the other to radically change your career).</p> <p><img src="//" alt="1700 Joey Savoie (13)"></p> <p>Okay, so first up, personality. This is an example of a fantastic charity entrepreneur.
I'm not talking about Prince William; I don't really have an opinion on whether he would make a good charity entrepreneur or not. I haven't spoken to him personally. But I have spoken to Rob Mather. He really embodies what a fantastic charity entrepreneur might look like. One of the things I want to highlight about him is his personality. Personality is so key when it comes to charity entrepreneurship. It's one of the first things we look for in our vetting process, and one of the things that I think eventually determines whether your charity ends up being massively high impact or not.</p> <p>You need a lot of different things. You need to be resilient. There are going to be bad days, bad weeks, and potentially even bad months where you think your charity is low impact, that it wasn't worth founding, and that it's too hard to get yourself motivated. And as the founder, you have to motivate not only yourself, but also your co-founder and your employees. You have to be ready to take those shocks and keep moving on, keep moving through them even when it's tough.</p> <p>The next thing is being ambitiously altruistic. So a lot of entrepreneurs love ambition. It's a very common thing, but it's really easy to be ambitious about the wrong thing. If you're ambitious about how big your charity is, your charity might get really big, but it won't necessarily do any good. What you need to be ambitious about is how many lives you save, or whatever your end-line metric is for doing good. That's the thing you have to be laser focused and truly ambitious about.</p> <p>The next thing is being results oriented. Quantified measurement is one of the things that makes EA different from many other movements. We really want to see concrete, specific results, and have data to back it up.
Staying focused on this will stop your charity from diverging into 100 other projects that might not be as high impact, and it really can make a difference in the long term.</p> <p>The next thing is being open-minded. You won't have all the skills you need. Nobody does when they first found a charity. You have to be able to update based on the world changing, based on testing out one thing and it not working, based on advice from people in the field, people who have worked in specific areas that you don't have knowledge in. You have to be ready to amalgamate all these different views, come up with a coherent answer, and update as new data comes in.</p> <p>Similarly, it's important not to be afraid to make mistakes. You will make mistakes. Every charity entrepreneur does, and will. Being able to admit these mistakes transparently, learn from them, and update accordingly can be the difference between your charity eventually succeeding and you continuing to make the same mistakes again and again.</p> <p>Next, being self-motivated. This might be the most important criterion. You really have no boss, no person whipping you at the end of the day to get the work done. You have to really care about your charity and be able to put in the hours, to be able to work yourself through the project. One of the easiest litmus tests I have for charity entrepreneurship fit is: can you get yourself through a self-directed project? Can you complete an online course without anyone requiring you to? Can you start something where only you're responsible if it succeeds or fails? That sort of thing is really challenging for a lot of people, and it's really, really challenging to run a charity like this. You will have your co-founder, you will have your mentors, you will even have funders you have to report to, but not with the regularity that any other job would demand. You have to be self-motivated.</p> <p>Creativity also helps. There will be a blank slate.
You won't necessarily know what the next steps are, and you have to come up with ideas: how to test one thing, how to test another thing. If you come up with five ideas, five ways to test a given concept, that's the limit of how good you can get: the best of five. If you come up with 30 ideas, you can test them all, you can evaluate them all, and come to the best of 30. That makes a huge difference for your charity's impact.</p> <p>Next, doing it for the right reasons. This is a bit different from being ambitiously altruistic. You really, really have to be focused on doing good for the world. If your goals are different, if your goals are divergent and you want to create a charity to look good, or to impress a partner, or something like that, your charity won't end up being high impact. You really have to be laser focused on that, doing it for the right reasons, with an altruistic mentality.</p> <p><img src="//" alt="1700 Joey Savoie (14)"></p> <p>There are some other things it helps to have. It's nice to be highly competent, whatever that means: general ability, conscientiousness, IQ, that sort of thing. The EA community is a huge asset, a bunch of skillsets that you can tap into, a bunch of people who really want to help you start a charity. Social skills or research skills: it's really great to have one of those, and your co-founder can balance you out and have the other one. And experience working in a small charity can give you a sense of what it looks like from the inside. You tend to think that every organization is perfect, and when you get on the inside, you tend to see how held together with glue and tape it really is.</p> <p>What's less important than people generally think? Well, one is a degree. A lot of people think that if they want to start a fantastic global health charity, they need to get a global health PhD. Unfortunately, those programs end up being too unspecific a lot of the time.
For my charity, SMS Vaccine Reminders, you might have only read a page or a paragraph about this sort of intervention in a global health program. It's just very, very specific, not to mention the country context. What you need to do is become an expert, and that's not necessarily through a degree; it's through reading the studies, reading the research, and developing deep expertise in the very narrow domain that you want to start a charity in. You want to be able to talk to experts and engage with them at the highest possible level, but you won't get there just from doing a PhD program. You'll have to do the independent learning on top of that, in either case.</p> <p>Also, targeted experience. At a health nonprofit, you might be able to pick up some good habits, but often your role will be very specific. If you're working for a large, even well-run health organization, often you'll be running one very small component of it, whether that's a comms job or a research role. That will give you some skills in that area, but as a charity entrepreneur, you really will need to learn a little bit of how to do everything. Some people might come into charity entrepreneurship with five out of 100 skills that they need, and other people might come in with nine out of 100. Either way, you still need to develop more than 90 skills. A lot of it comes down to being able to learn things on the fly, try things out, pivot and update based on evidence, and talk to mentors and utilize <em>their</em> skills. That sort of thing is going to be far more important than coming in with a few extra skills.</p> <p><img src="//" alt="1700 Joey Savoie (15)"></p> <p>The next thing is connections in the field. Connections in the field are super important, and you do need them to have a successful charity. But you'd be amazed how willing these people are to talk to you.
If you come in informed and keen, and with some expertise or some funding, a lot of these organizations are extremely excited to talk to a young or inexperienced person who's getting involved. They want to see other charities. They care about this stuff a lot. And they're not getting a thousand emails a day if they're running some small program out of India.</p> <p>In general, it's very, very easy to build the network, and that is how you build it: by working in the field. It's helpful to reach out to these people and have a quick Skype with them, tell them what you're doing, tell them what you're considering, and ask them for advice. Everyone's been very happy to help when we've done this on multiple different projects, across multiple different cause areas. The one exception to this is government connections. It's hard to build government connections; they're not willing to talk to you. If you're doing a job where you need government connections, hire someone who has them. That's the advice there.</p> <p><img src="//" alt="1700 Joey Savoie (16)"></p> <p>So, a little bit about further testing. The best way to test if you're a great fit for charity entrepreneurship in the way I'm talking about it might be applying for our incubation program. I'll talk a little bit more about what that offers and why you might consider it, but we do have a process that we've used before, on entrepreneurs, that has been fairly successful at selecting the kind of people who might start a GiveWell incubated charity.</p> <p>We have a quiz on our website. It's a lot less intensive than doing the full incubation program process. It's pretty quick, about three minutes, and it will give you a sense from a personal perspective of whether you might be a decent fit for charity entrepreneurship.</p> <p>Finally, on our website generally, we're trying to put out as much information as possible.
So people can self-select; people can consider whether they're going to be a good fit for charity entrepreneurship or not. Through our mailing list, we send out all of our relevant research, as well as helpful things like Facebook group links, ways that you can ask people questions, and all that sort of thing. So that can also help to slowly give you a sense of whether this might be a good fit as a career path or a bad fit.</p> <p>In general, though, don't be too discouraged. A lot of the strongest charity entrepreneurs I've talked to are scared, they're nervous, they're not the archetype of a gung ho, confident entrepreneur. Some of them are cautious. Some of them are detail-oriented, some of them are not gregarious. Don't let traits superficially associated with entrepreneurship fool you. Instead, try to get as good a sense as you can from external people who have looked at it before, or by talking about it, or by reading the content from people who have started successful charities.</p> <p><img src="//" alt="1700 Joey Savoie (17)"></p> <p>I want to talk a little bit about how Charity Entrepreneurship as an organization is aiming to help charity founders. The first thing is, coming up with a really fantastic idea to run a charity on is hard, especially if you're trying to become a top GiveWell charity or a top Animal Charity Evaluators charity. That's not an easy bar to clear. Thankfully, we've been able to do a lot of research to narrow down the space a little bit into some ideas that are extremely promising. This is a spreadsheet we did on different global health ideas, narrowing down to what ideas might feasibly be competitive with the top GiveWell charities: ideas that might be evidence-based and cost-effective enough to do a lot of good.</p> <p><img src="//" alt="1700 Joey Savoie (18)"></p> <p>Here are a few of them in particular. Tobacco taxation looks fantastically cost effective if you can get the right country.
Conditional cash transfers have a case for very strong impact, and are being done almost nowhere by NGOs. This year, we're going to be focusing on animal interventions, researching that area and looking for the highest possible impact interventions that one could start in the field. We're doing research a bit differently than, say, GiveWell or Animal Charity Evaluators. We're looking for gaps: areas that could be really promising, could be really effective, but don't necessarily have anyone working in them.</p> <p>Malaria is a fantastic place to work, but AMF is doing a really good job; I wouldn't want someone to start another bed net charity. But there are areas that are both fantastically high impact and neglected, as in no one's working in them in the kind of way that we as effective altruists, or we as people who want to help the world, would like to see it done.</p> <p><img src="//" alt="1700 Joey Savoie (19)"></p> <p>So the incubation program that we're running is taking place from June 15th to August 15th, and we're hoping to run it every year. We really want to make the process of founding a charity as easy as possible. So we're giving structured support that slowly withdraws, until people are fully independent and standing on their own two feet. The first month will be something akin to a university class. There'll be activities, there'll be pairing with different co-founders to test out your abilities in different ways, and there'll be explicit teaching about cost effectiveness, fundraising plans, and all the hard skills that you might need to run a really good charity.</p> <p>The second month, you'll be paired with co-founders on an idea and start working on a project, but with a lot of support from teams of people who have already successfully founded charities. Finally, over the next six months, you'll be given a seed grant to financially support yourself, so that you can really become a true domain expert before seeking external funding.
Seed grants are about $50,000, depending on how many charities apply.</p> <p>This structure allows someone who maybe doesn't have a ton of experience working for NGOs or nonprofits to build that experience as they go, get really competent and capable, and start a high impact charity.</p> <p>We really don't want it to just end when the seed grant ends; we want to continue to support charities as long as they need it. We're trying to build a community such that people can continue to stay connected, whether that's over Skype or Facebook, or in a co-working office that we're going to have. We want to have joint office space so that people can feel like they're working with a team instead of working alone or with just their co-founder. The seed grants I already mentioned. We'll also connect people with long-term funders. We don't want to see these charities just run for six months and then flounder. Most of the connections will be to people who are very keen on funding new charities, and there's ongoing mentorship.</p> <p>So I still continue to Skype with the projects that we've helped, and help them with the most difficult issues, so they can have an external set of eyes for as long as they need it.</p> <p><img src="//" alt="1700 Joey Savoie (20)"></p> <p>This is a quote from Fortify Health, and I think it's a really important one, because it shows that not everyone knows that they're going to be a perfect fit for this. Some people think it might be too hard or impossible, but a lot of people can do it. You can rely on the process to figure out if you're a fantastic fit.</p> <p><img src="//" alt="1700 Joey Savoie (21)"></p> <p>Just to reiterate the goal: there are fantastic charities in the world, but there aren't enough of them. We need more really, really good charities, more Humane Leagues, more Against Malaria Foundations: charities that make a massive difference, at cost-effectiveness far greater than a standard charity's.
There are still gaps, and room to do it. The main thing missing is entrepreneurs: people who will be able to step forward, take on this risk, and potentially start incredibly successful charities.</p> <h2>Questions</h2> <p><em>Question</em>: Earlier, one of our speakers, Dr. Glennerster, said that she really advises young people to make sure that they spend some time in an effective organization so they know what an effective organization looks like. What do you think about that?</p> <p><em>Joey</em>: Yeah, it helps a lot. When people ask me what they should do if they're still working on their degree or are looking for internships, I always say, prioritize how good the organization is. It doesn't necessarily have to be in a tightly related area, but if you work for a charity like AMF and pick up some of their management practices, that's going to be one of the best things you can do to set yourself up to run a charity really well.</p> <p><em>Question</em>: It seems like the charity space has a proliferation problem, where there are lots of small charities, often nipping at different corners of the same bigger problems. What's your take on that, and how do you think about people joining versus starting charities given that reality?</p> <p><em>Joey</em>: So, we're pretty pro starting versus joining. If you join an organization, whether it's small or big, it's incredibly hard to change it in an effective direction. You can ask a lot of people who have worked with these organizations to get a sense of that. There are lots of charities, but most of them are incredibly small. You'll see a statistic like there being a million charities, but almost all of them have an operating budget of under $50,000 or something like that. So it's not like they're taking out huge chunks of global problems.
What you really have to look at is the scale of the remaining problem, and whether you can start a charity that starts to cover some of that problem.</p> the-centre-for-effective-altruism pte9N9BKX5C6Lv2Jy 2019-02-22T15:44:39.068Z David Denkenberger: Loss of Industrial Civilization and Recovery (Workshop) <p><em>This transcript of an EA Global workshop, which CEA has lightly edited for clarity, is <a href="">crossposted</a>. You can also watch the talk on YouTube <a href="">here</a>.</em></p> <p><em>A powerful electromagnetic pulse, solar storm, or narrow AI virus could leave large portions of the globe without electricity. As a civilization, are we prepared to handle challenges of this magnitude? And if not, can we become prepared? This workshop from EA Global 2018: London, led by David Denkenberger of ALLFED, deals with these questions.</em></p> <h2>Intro</h2> <p>First I'll give some intro, and then we'll break into small groups. You'll discuss the scenario that I give you, about how people would deal with a major catastrophe. Then I'll get into how it might be different if we actually prepare for these catastrophes, and then your groups will discuss what you think would happen. Then we'll come back together and discuss results.</p> <p><img src="//" alt="1530 David Denkenberger"></p> <p>A little background on Alliance to Feed the Earth in Disasters: many people on our team are actually in the UK, and other people are in the US. We started with the book <em>Feeding Everyone No Matter What</em>.</p> <p><img src="//" alt="1530 David Denkenberger (1)"></p> <p>We're looking both at the research side and also at real-world practical preparedness and planning. Some more background on what ALLFED does: if you look at a spectrum of global food production shocks, most effort is on catastrophes or disasters that cause only a roughly 1% loss in food production, like what happened in 2007–2008.
So we don't focus on that.</p> <p><img src="//" alt="1530 David Denkenberger (2)"></p> <p>We do focus on scenarios that could cause a roughly 10% reduction in food production, so these are things like a volcanic eruption, like the one that caused the year without a summer in 1816, when there was famine in parts of Europe. There are also a number of other disasters that could cause a major food reduction, like a superweed. Then we also look at disasters that could completely block the sun, like nuclear winter. Today, we'll focus on scenarios that could disrupt electricity. Since pretty much everything else is dependent on electricity, like pulling fossil fuels out of the ground, this scenario could possibly entail a collapse of industrial civilization.</p> <p><img src="//" alt="1530 David Denkenberger (3)"></p> <p>The emphasis within EA has been on existential risks, which people often associate with outright extinction. The agricultural catastrophes we'll be talking about are unlikely to cause outright extinction. However, the original definition of existential risk from Nick Bostrom covered not just events that could cause extinction, but also ones that would cause a significant reduction in the potential of humanity in the long term. So if one of these global catastrophes were to destroy civilization, and we didn't recover from it, that actually would constitute an existential risk, because we would not have attained our potential as humanity. There are a number of reasons why, if we lose civilization, we might not recover it. For instance, we've already burned the easily accessible fossil fuels, and fossil fuels were important in creating industrial civilization. We've also had a stable climate for the last 10,000 years, while we might not be so fortunate in the future.
And then, another possible way of having far future impact is that if one of these catastrophes happened, and things went poorly, the trauma from the catastrophe could make us nastier, and maybe we'd be more likely to have future catastrophes. Or, maybe worse, post-catastrophe values could end up in an AI and be locked in. So preventing catastrophes is another way of having far future impact. All right. So that's background on global catastrophic risks and ALLFED. Now I'll quickly go over the scenario.</p> <p><img src="//" alt="1530 David Denkenberger (4)"></p> <p>So we've mentioned solar storms. A major solar storm happened in 1859, the <a href="">Carrington Event</a>, when we basically only had telegraphs. We didn't have much electricity, but the storm did disrupt telegraphs. In order to disrupt electricity globally, it would have had to be a more severe event. But there actually have been more severe events than what happened in 1859, two of them in the last 2000 years. A solar storm would burn out transformers connected to long electric lines. The next scenario is the high altitude electromagnetic pulse. A nuclear weapon detonated at high altitude would create an electromagnetic pulse. There, you destroy not just transformers, but pretty much anything plugged into the grid. Computers would be fried, and even large vehicles. Most of the emphasis is on just a single electromagnetic pulse, but if there were multiple around the world, it could potentially disrupt electricity globally. Then the third category that could disrupt electricity globally is a narrow AI cyber attack, or computer virus. One computer virus already did disrupt electricity locally.</p> <p><img src="//" alt="1530 David Denkenberger (5)"></p> <p>But actually, for today, we're going to focus on what we call the approximately 10% loss of electricity and industry scenario. So we're talking about something around the size of the Carrington Event.
Solar storms tend to affect the high latitudes more strongly. So it might be that high northern countries, or states, in the case of Alaska where I live, or Norway, Sweden, Finland, Iceland, Estonia, Latvia, Denmark, maybe parts of other countries, could have their electricity disrupted. That might be around a one in 100 chance per year. Or, we could have a single EMP, like over North America or Europe, and you would not only lose roughly 10% of your industrial capability, but those areas produce a lot of food. As you'll see, if you don't have industrial agriculture, you can't produce as much food. So it's likely to be a 10% reduction in global food production at the same time.</p> <p>Similarly, a cyber attack could affect a continent instead of globally. There has also been some talk about attacks that might be aimed at disrupting the internet, and if we lost the internet, it wouldn't be as bad as losing all of electricity, but still many processes are dependent on the internet. So this could be something like a 10% disruption in our industrial capability. Then finally, if we had a conventional World War that did not go nuclear for some reason, that could be a 10% destruction of industry. Here are some pretty pictures of the five scenarios.</p> <p><img src="//" alt="1530 David Denkenberger (6)"></p> <p>Okay. So now, for your small group work, we're going to focus on just one particular scenario. Let's say we have an EMP over Eastern United States, the electrical grid is destroyed, plus all the electronics that are plugged in. We can't pull fossil fuels out of the ground, we can no longer pump it through pipelines. They've actually done some testing to simulate EMP, and they found that larger vehicles tend to be destroyed by it, so larger vehicles won't work. But smaller vehicles would still function, <em>if</em> they can get fuel. But we have a problem with fossil fuel production and refining in this scenario. 
Also water distribution and waste water treatment would stop. Then from an agricultural perspective, it is possible to farm by hand, but in the United States, we might only get one third as much food out of the same land that we do right now.</p> <p><img src="//" alt="1530 David Denkenberger (7)"></p> <p>So here is an example of what an electromagnetic pulse over the US might look like. This is the intensity in volts per meter. We won't get into the details. It would probably be centered over the Eastern US, because there is more industry there, and there are more people. Generally, the EMP doesn't harm people directly, but as we've seen, it greatly damages infrastructure.</p> <p><img src="//" alt="1530 David Denkenberger (8)"></p> <p>So, now we're going to break up into groups. What would be great is if each group has someone with a laptop to record ideas, or you can do it on paper if you want. But it's great if you do record on a laptop, because then you can send it to us, if you feel comfortable. We're always interested in what people come up with.</p> <p>For this first scenario, we're trying to think what might happen if we don't do any preparations. Take about five minutes, then each group can present their results.</p> <p>Some questions to think about are: What would the reaction of other countries that still have industry be? Would they help out? Would they not interfere at all? Or would they actually conquer, like take over? Then also, think about how much of a far future impact this scenario would have. Given that it happens, how much of a reduction in the far future potential of humanity do you think there might be?
This is what we call a cascading failure: if the initial scenario goes poorly, that could have long term impacts.</p> <p><em>If you want to follow along with the workshop, take five minutes to think through these questions before reading on.</em></p> <h2>Group Responses to First Scenario</h2> <p><em>Group One</em>: In this group, we have more questions than answers, so we tried to outline our ignorance. The topics that we discussed immediately were: would other industrialized nations jump in to help? A lot of questions revolved around, can they help? For example, how fast could you get food into the US if you really were short by a third or half of production? Another thing that we discussed is how fast, and under what conditions, the order of society breaks down. Is the local state able to keep control over the people before they riot, before criminal organizations take critical parts of the political infrastructure under their control? We also discussed the differential effects on people from different social classes. So we have a lot of ignorance about the situation, but that's what came to our minds.</p> <p><em>Group Two</em>: We talked about whether anybody would actually send aid to the US, and Canada, and potentially Mexico as well, given that we've got approximately two UK populations on the Eastern seaboard alone that would be without power. Would there be rioting? Would other countries actually send any aid to help? There is going to be food spoiling everywhere. One suggestion was that the US could sell somebody else an aircraft carrier in order to get aid back, which I thought was quite interesting.</p> <p><em>Group Three</em>: We first talked about whether anyone would invade us, and we concluded probably not, because the Western US is still going to be up and running, we assume. We have aircraft carriers and stuff, so we're not at risk of being invaded, basically.
The big hazards after an electrical shutdown would be things like chemical processes and nuclear power plants. Those might not be such an issue, because the control rods would drop into them. But there would be big risks of fire, because fire engines might be knocked out, and the EMP might even cause fires itself. So that might be a very big immediate risk. I think that's what we got to, and chemical storage leaking, that sort of thing.</p> <p><em>Group Four</em>: Okay, so we were talking mainly about two areas. The first question is, as mentioned, is the government going to stay in control? Are people going to freak out? Is the military going to be able to keep control of the East Coast? We assume the military infrastructure will probably stay functional, because you might think that they are at least in part prepared for these kinds of scenarios, and the command infrastructure would stay stable. But would they be able to keep control of the rest of civil society? Or will that be a problem? The second question is whether communication infrastructure would still work. We'd assume that the normal civilian communication infrastructure would probably break down, which would be a big problem. But maybe, again, the military will be able to jump in and build up new ways of communicating.</p> <p><em>Group Five</em>: In our group we talked a lot about the short term impacts, meaning immediate communications, and how people react, because we probably wouldn't actually know what's going on, given the issue. Also, a lot of fleeing. I think once people do understand what has happened, and where it's happened, we'll see people trying to get out of the East Coast areas, towards the West Coast. So the West Coast might be logistically impacted, due to the fact that the things in the East Coast will no longer be working. Canada is probably affected as well, massively. Also, cities versus countryside.
The countryside in the United States is very spread out, physically speaking. Possibly that could be a problem from a logistics standpoint, meaning cities would probably get more attention once aid does come in. But at the same time, depending on the time of year, people in the countryside might be able to access food locally. But then also, there are spoilage issues and whatnot.</p> <p><em>Group Six</em>: We started off with communication as well, and thought about that. We thought that most of the military is probably protected against EMPs, so they would have the important hardware needed to reestablish communications. We thought of alternatives that work with mobile phones, because most mobile phones won't be plugged in at the moment of the EMP. So they would probably last for another two days. There is a Fire Chat app that connects phones and enables people to send messages: you can have a peer to peer network between phones through wifi. So you just need to install the Fire Chat app. But you need to do it before the global catastrophe. So maybe one simple thing people could do is to have the Fire Chat app on their phones. Also, we would need to charge phones, maybe with mechanical chargers. In that case, people could have communication. Then we had one more technical idea: nuclear powered submarines wouldn't be affected by the EMP. They could come up, and they have a reactor that could power cities, or at least some crucial infrastructure. We also thought that maybe aircraft carriers could provide some electricity when they return from other places. But then again, would the necessary infrastructure on land, like transformers, be there to actually convert the electricity from the power plants on the carriers to the local grid? It would depend on the voltage of the ship generators.</p> <h2>Preparation for Second Scenario</h2> <p>I think a lot of good questions were raised, and some good ideas.
So we're going to have to move forward to the next section here, where I talk about some of the things ALLFED has been thinking about in preparing for a scenario like this.</p> <p><img src="//" alt="1530 David Denkenberger (9)"></p> <p>First I'll talk about a global scenario, where we didn't have any industry at all. We have some advantages in getting food out of the land. We understand how fertilizer works, so we can burn wood in landfills to create phosphorus-potassium fertilizer. We also would plant a lot of peas, beans, and peanuts, because they fix nitrogen from the atmosphere.</p> <p>Hopefully we could keep using improved seed varieties that don't rely on continued genetic engineering. We could potentially use the farm animals that we currently raise for food as draft animals and for transportation. I'll talk about that more later. Also, we would ideally shift to types of crops that produce more calories per hectare. We also might rely on alternate foods, which ALLFED researches in our other area of work. We generally define alternate foods as foods that don't require the sun. These are things like mushrooms, which you can grow on agricultural waste. So we might want to do that.</p> <p><img src="//" alt="1530 David Denkenberger (10)"></p> <p>Also we might want to clear more land for agriculture. The problem is that without industry, we don't have chainsaws. So the way of doing it is to girdle the trees, which means cutting a strip of bark around the bottom, which kills the tree. After a year or so the tree dries out, and then you would actually burn the forest. But this is a backup plan, and we'd want to do it in a way that limits biodiversity impact.
But if we did all of these things, then even though we couldn't get as much food from the amount of land that we currently have in production, we actually could feed everyone several times over.</p> <p><img src="//" alt="1530 David Denkenberger (11)"></p> <p>For transportation, of course, ships used to be wind powered. There was even a train here that was wind powered, back in the day. So we would definitely want transportation, because we either want to move food to people or move people to food. Ships could also be kite powered, which might be better than sails. Then on land, the other option for rail cars is that they can be pulled by cows, one at a time.</p> <p><img src="//" alt="1530 David Denkenberger (12)"></p> <p>But then of course, there are many other needs than food, and we brought up some of these, like healthcare. Of course, you're not going to be able to maintain the hospitals that are dependent on electricity.</p> <p><img src="//" alt="1530 David Denkenberger (13)"></p> <p>But we do have some advantages over pre-industrial society. We understand the germ theory of disease, and that washing your hands is important. We can create soap by combining animal fat and ash, and burn biomass to boil water, to kill the germs. We would need to move to where we can get water by hand. We would need to do sanitation. We could even do some birth control. We would need to keep warm, but you can make wood burning stoves fairly easily. Then for communication, there is shortwave radio, sometimes called ham radio, which can be used without large infrastructure systems and can transmit over large distances. Now, many of the solutions that we just talked about could be relevant even in a 10% loss of industry scenario.</p> <p><img src="//" alt="1530 David Denkenberger (14)"></p> <p>But I think even if there were massive aid from outside, it's still going to take time to restore many services. So we would need to use a variety of strategies.
These might involve importing vehicles, along with the fuel to power them. One big issue is that many of the cranes for unloading ships are electric powered. But there are ones that are diesel powered. So if we could move in diesel powered cranes to be able to unload ships, that would be very helpful. Then there's importing diesel generators; as one group mentioned, maybe you have the generator on the ship itself, like a nuclear powered sub or ship. Then you'd want to import food as well.</p> <p><img src="//" alt="1530 David Denkenberger (15)"></p> <p>So now the question is, let's think about the same scenario. But let's say we spent 30 million dollars to actually have some plans ahead of time: we have a shortwave radio system that could transmit in a catastrophe, and we've tested that even people who live in cities and don't know how to farm could be given the right instructions to construct tools and actually produce food.</p> <p>So we'd need to run those experiments, and then modify our instructions. If we spent that money and got that preparation, now I'd like you to consider that same scenario and ask, well, would things go any better? How much better? Then again, think about how much preparation might reduce the far future impact of a catastrophe.</p> <p><em>If you want to follow along with the workshop, take five minutes to think through these questions before reading on.</em></p> <h2>Group Responses to Second Scenario</h2> <p><em>Group One</em>: So we were thinking about the preparations for the problem of food, and how everyone could have stored some rice; we've got an estimate that 300 kilograms of rice is enough to feed a person for a year. In addition to that, we would have prepared seeds, because you told us we have tutorials to show people how to farm for themselves. So with the seeds, depending on what seeds you have and what climate, you can have several harvests a year.
We'd have a lot of laborers, because normal jobs in cities fall away in such catastrophic situations, so at least the problem of finding farm labor is totally manageable. Yeah, that was the basics of our discussion. At the end we tried to make an estimate of how much this would impact the future. We've got a notion that this scenario would be less impactful than the first one, because it's maybe only stalling development, not ruining our potential.</p> <p><em>David</em>: Okay, yeah. I'll mention that we're developing a Guesstimate model. So as you have more time to think about it, you'd be able to put in your own numbers to see how various possibilities work out. Now the other thing, I'll just comment quickly about storing food. Yes, it would be great to store a year's worth of food. But then you're talking trillions of dollars if you want to do it globally. So we wouldn't actually be able to afford that. But definitely some of the other things we could do.</p> <p><em>Group Two</em>: So we started out, I guess, discussing how effective it would actually be to have distributed this information. We were unclear whether people would actually be able or willing to implement the information they would be given. Maybe more so in rural areas. Even if you've done an education program, it doesn't mean that people will effectively implement it. It would be much more effective if local government is still functioning and has some communication capacity, and can still manage that process. The other thing we thought about, in terms of food: even if it's possible to have enough food production to feed everyone, there might be distribution issues.</p> <p>So it might be that in cities there are just too many people and we can't get the food in. Rural areas or smaller towns might be fine. But in cities, it might just not be possible to sort out the distribution problem enough that lots of people don't die first.
It's unclear what effects it would have on people's short term reaction, because in some ways, having no idea what's going on is scary. But maybe knowing that the entire Eastern seaboard has gone down is even scarier. But managing the initial stages might be quite important, because actually a lot of damage to infrastructure and the order of society might happen then. If that can be delayed, then maybe you can avoid slipping into chaos, rather than just delaying it.</p> <p><em>Group Three</em>: We talked about mainly two things. The first was that, with 30 million pounds, the first obvious observation is that if we talk about the US population, we've got about ten pence per person, which is not very much. But the place where we might best invest it is in training farmers. You can't realistically talk to every single citizen, so trying to train every person is probably not going to lead anywhere. Maybe go to hotspot places, or, for instance, train farmers to train people, and also train people to establish communication systems. There's probably some external aid, because part of the US is still functioning. You would probably expect food supply to be stable, and you would expect there to be fuel and everything. What may be the biggest problem in this scenario, therefore, is actually making sure that people don't freak out, that people are all right, and that no panic breaks out. That's kind of the most crucial thing in this consideration.</p> <p>The second thing we discussed was to what extent it's sensible, desirable, or likely that people will move from the East Coast to the West Coast, or somewhere else. There would be arguments in favor of doing that, because it's maybe easier to supply them directly in a place where there is working infrastructure.
There are disadvantages too: you lose the housing that you already have on the East Coast, and there may be particular issues with huge numbers of people moving west; it's unclear to what extent that is going to either worsen issues or help them. That's the discussion we had.</p> <p><em>Group Four</em>: We were talking about rationing supermarket stocks in the short term, and then securing grain silos, and maybe finding a way of processing the grain properly to provide short term relief. Supplies from the Western United States and Canada, and maybe Mexico, would help. Then getting all the transformers and such back online might take a lot longer. But it's going to be a high priority, so maybe a global effort on that would be expected.</p> <p><em>Group Five</em>: I think the only thing that we've really got to add to that is how much less panic there is going to be. But there is going to be a lot of time involved in getting people to a point where they actually get new food sources in, getting people to sort out the water supply, and getting people moved: which of the original landowners is going to be prepared to let other people come onto their land? Would they welcome people with open arms? It seems quite unlikely.</p> <p><em>Group Six</em>: Our first question was whether the interventions help to increase people's chances, and we believe they do. The mechanisms that make that happen are that centralized organization is better at solving local problems, and communication in general makes it easier to solve certain coordination problems, like avoiding risk aversion by getting people to actually start doing something as soon as they can. Another point we touched on was that we believe water treatment at the level we currently do it seems complex enough that it's unlikely to survive. We expect organizations and cities to become smaller.
Finally, we think that how much scientific progress is delayed might be an interesting proxy for the loss to civilization's potential.</p> <h2>Conclusion</h2> <p><img src="//" alt="1530 David Denkenberger (16)"></p> <p>Another way of thinking about how cost effective this might be is that proposals to harden the grid against solar storms and EMP are in the billions of dollars, like 100 billion dollars globally. So of course, that's the ideal scenario: that we prevent the loss of industry.</p> <p><img src="//" alt="1530 David Denkenberger (17)"></p> <p>But my thinking is that if we can protect against a large part of the loss of life and potential far future impact for much less money, then that could be more cost effective, and maybe the first thing we should do.</p> <p><img src="//" alt="1530 David Denkenberger (18)"></p> <p>We're going to include a PDF of this on the website, so you'll have the Guesstimate model if you're interested in doing cost-effectiveness analysis. This is just a summary. We haven't talked so much about it, but if your primary concern is the present generation, I think there is potential to save lives in the present generation as well. You could also protect biodiversity. Then if you're interested in helping out, it would be great to raise more awareness about these issues.</p> <p><img src="//" alt="1530 David Denkenberger (19)"></p> the-centre-for-effective-altruism 7xveJ9MAJysLWnZZP 2019-02-19T15:58:01.214Z Dixon Chibanda: The Friendship Bench <p><em>This transcript of an EA Global talk, which CEA has lightly edited for clarity, is crossposted from <a href=""></a>. You can also watch the talk on YouTube <a href="">here</a>.</em></p> <p><em>When people experience profoundly damaging events, like war, invasion, or massacre, the psychological toll is vast. Which interventions work best to help repair the damage? Dixon Chibanda pioneered the Friendship Bench, where local grandmothers sit with people and help them talk through their problems.
In this talk from EA Global 2018: London, Chibanda explains his program, which has shown impressive results in reducing depression among participants.</em></p> <h2>The Talk</h2> <p>I come from Zimbabwe, a country often characterized by several decades of psychological trauma: the Rhodesian Bush War, the farm invasions, the massacre of more than 20,000 people in Matabeleland. The Friendship Bench is in essence a program that was conceived as a result of one such traumatic piece of history from our country, which actually started on the 19th of May in 2005, when the Zimbabwean government at the time, under the leadership of Robert Mugabe, embarked on a cleanup operation called Operation Murambatsvina, which literally means "Removing the filth."</p> <p><img src="//" alt="1530 Dixon Chibanda"></p> <p>And what Operation Murambatsvina was all about was a systematic destruction of buildings and structures that were labeled by the government as illegal, whatever that meant at the time, and it resulted in over 700,000 people being left homeless. According to the United Nations, over two million people were psychologically affected as a result of this operation. During that time, I was studying for my Master's in Public Health, and I was instructed to carry out a survey to establish what the prevalence, or the magnitude, of the psychological morbidity was, and when these results were presented to the authorities, I was then told, "You need to really come up with some kind of intervention."</p> <p><img src="//" alt="1530 Dixon Chibanda (1)"></p> <p>A whole lot of things were happening at the time, and of course this was a country where there were absolutely no resources, and most of the professionals had left. I was in essence given a group of 14 grandmothers to work with, to start a program, or pilot something, that would eventually help thousands of people, and it was pretty depressing initially.
And anyway, to cut a long story short, through an iterative process with these 14 grandmothers, we gathered as much information as we could about programs that had been developed in the country and outside the country, and we tested different approaches, which were rooted in cognitive behavioral therapy.</p> <p><img src="//" alt="1530 Dixon Chibanda (TEST) (2)"></p> <p>And over a couple of months, we managed to come up with a meaningful sort of intervention, and over the years we developed a series of components to this intervention, which became known as the Friendship Bench, which in essence is psychological therapy delivered on a bench in the community by grandmothers. And one of our most recent publications is a clinical trial of this intervention, where we compared the Friendship Bench with usual care, through a cluster randomized controlled trial. The trial, which is published in the Journal of the American Medical Association, actually showed that grandmothers were more effective at alleviating symptoms of depression and averting suicides than usual care.</p> <p><img src="//" alt="1530 Dixon Chibanda (4)"></p> <p>And usual care included nurses trained in mental health, psychologists, and also the use of Prozac, so grandmothers were pretty effective at doing their job. And to actually illustrate what we've achieved over the past couple of years and where we're going, we divided the Friendship Bench into three main components.
The first part was the research and development, which was really the formative phase of the Friendship Bench, where in the absence of resources we had to come up with an intervention which was simple and cheap, but addressed a huge problem; apart from that, we needed an intervention that would address a diverse population.</p> <p>And we managed to more or less achieve that, looking at Zimbabwe as an isolated country. But really, when you're thinking of an intervention that is likely to be scaled up, it has to be replicated, and so we were thinking of the next stage, which is replication. Can the Friendship Bench in its current state be replicated in places outside of Zimbabwe? And if it can be replicated, then it can really go to scale. So we started the process of replication in Malawi, in New York City, in Zanzibar, and in Botswana, and the model seemed to be quite intact in terms of its replicability in different settings, different cultural settings, and so we're fairly confident that this really can go to scale.</p> <p><img src="//" alt="1530 Dixon Chibanda (5)"></p> <p>Now, one of the things that we observed while we were running the Friendship Bench is that not all young people were comfortable sitting on a bench with grandmothers, and also we realized that some people preferred to communicate with the grandmothers a lot more using digital technology and mobile phones. So we teamed up a couple of years back with folks from Philips, with Robin and others, and we came up with an intervention which supports the Friendship Bench, called Inuka. Inuka is a Swahili word which means "Arise," and the rationale behind it is that if the Friendship Bench can be scaled up, Inuka as a digital platform could enhance that scaling-up process. This is in addition to the existing Friendship Bench, which has been running since 2006.
It's an app which can be downloaded, and it's really based on the same principles as the Friendship Bench.</p> <p><img src="//" alt="1530 Dixon Chibanda (6)"></p> <p>And the idea is this: we have these grandmothers, for instance in Zimbabwe, who deliver an intervention on a park bench in the community, and at the moment we are operating in more than 70 communities in the country, and obviously we also have communities outside of Zimbabwe. With that kind of intervention, supported using digital platforms, we are able to reach out to more people. We are able to provide a lot more support to the grandmothers via these digital platforms, and this has been reflected in the work that we've been doing in New York City, for instance, and in Zanzibar, where there's a need for that digital platform. At the moment, some of that digital communication is facilitated through WhatsApp and through Skype, and so Inuka comes in as something that could effectively take over that role and enable strengthened communication to really enhance our reach and impact.</p> <p><img src="//" alt="1530 Dixon Chibanda (TEST)"></p> <p>I'd like to share some of the Sustainable Development Goals which specifically focus on mental health, just to highlight why an intervention like the Friendship Bench is important. If you look at targets 3.4, 3.5, and 3.8, they all touch on mental health, and when you look at the global burden of mental, neurological, and substance use disorders, there's a real need to scale up interventions that truly reduce the treatment gap for those disorders. We cannot train enough psychiatrists or clinical psychologists; that is common knowledge.
However, through task-shifting we can make a huge difference in a very short space of time, but the challenge is finding task-shifting interventions which are really solidly rooted in evidence, which are based in empirical observation.</p> <p>I believe that the Friendship Bench is one such intervention which can contribute significantly to reducing that treatment gap. I'll get on to why I think grandmothers, and the world's elderly population generally, also play a critical role a little bit later. Just to give you a few examples from two very different locations: Zimbabwe and New York City.</p> <p><img src="//" alt="1530 Dixon Chibanda (7)"></p> <p>Here we have the oldest graduate from Zimbabwe, she's 84 years old, and the lady on the right is from New York City, her name is Skip, and the common thing that really unites or brings these people together is the lived experience which they bring to the bench. This is something that we've found to be really powerful: the stories that people bring to the bench are rooted in CBT principles and empirical observation, because the whole thing is about measurement.</p> <p>If we can't measure what we're doing, then we really don't know if it's working, and so by injecting something like CBT and the use of validated screening tools while you give stories, you're able to tell how a person is improving, what kind of progress that person is making. And so what has essentially happened over the years is that something that started off as a strongly CBT-based intervention has really become more CBT through storytelling, CBT through the use of evidence-based tools which are validated within a local context.</p> <p><img src="//" alt="1530 Dixon Chibanda (8)"></p> <p>This is a picture that I often like to use to illustrate the power of using ordinary people in communities. This is a picture of the very first grandmother who worked on the Friendship Bench in Zimbabwe.
Her name is Grandmother Jack, and Grandmother Jack was the first person who started doing this work, and she gave hope to all of us because she was really dedicated to what she did, and she would persistently be there every morning seeing her clients. It was something that we expected her to do, you know, and one morning when she didn't come to work, we kind of all knew what had happened to Grandmother Jack.</p> <p>But what really illustrates the Grandmother Jack story is this: think of the population of elderly people in the world today. If you look at people aged 65 and above, it's estimated by the World Health Organization and the UN that there are over 600 million of them, and within another 15 years it's going to be over a billion people aged over 65. And the older one gets, the richer the lived experience is.</p> <p>And what we are learning from the Friendship Bench is that if we can take those lived experiences and inject a fair dose of evidence-based talk therapy, the elderly really can reach out to thousands of people, addressing the global burden of depression, for instance, and contributing significantly to averting suicides. This is what we have seen in Zimbabwe, we've seen this in Malawi, we are seeing this in New York City, we are seeing it in Zanzibar, and more recently in Botswana.
So this is what I really wanted to share with you about the work that we're doing in Zimbabwe, and why I think that this kind of work can make a difference, and really contribute significantly towards narrowing or reducing the treatment gap for mental, neurological, and substance use disorders on a global scale.</p> <p>If anyone is interested in learning a little bit more about the Friendship Bench, you can look up my <a href="">TED Talk</a>, which goes into more detail about how the Friendship Bench actually works.</p> <h2>Questions</h2> <p><em>Question</em>: On the bench, what does a person experience? What is the nature of the experience as people actually have it?</p> <p><em>Dixon</em>: Sure. So, the Friendship Bench model consists of two critical components. The first component is the preparation of the grandmothers: in essence, we first train trainers. The trainers then train the grandmothers, and end up supervising them once the grandmothers are in the field. And the actual CBT component consists of three very simple steps, which are rooted in behavioral activation, activity scheduling, and problem-solving therapy.</p> <p>The first component is called "Opening the mind"; in Shona it's called "Kuvhura pfungwa." The second component is uplifting, and the third component is kusimbisa, and how it essentially works is that people are referred to the bench from everywhere: from schools, from the police station, from the clinics, from homes, and some people just self-refer. Some are referred through radio talk stations, and when they come to the bench, the first thing that happens is they are screened. They are screened using a locally-validated screening tool. I think here in the UK with IAPT you use the PHQ-9.
We also use the PHQ-9, but we also have a very specific locally-validated tool called the SSQ, which is broader.</p> <p>And screening tools are critical, because if you don't use screening tools in this kind of work, it's difficult to have structure. The screening tools inform the grandmothers whether they are dealing with a severe case, a moderate case, or a case that needs to be referred to the next level, and once they establish that this is a case, they then provide the talk therapy, which really consists of those three steps, normally over a series of about four sessions.</p> <p><em>Question</em>: Can you describe each of those three steps in a little bit more detail?</p> <p><em>Dixon</em>: Opening the mind essentially is the storytelling part, where the grandmother listens to the stories. So you know, classically people who come to the bench have a whole lot of problems. A whole lot of issues, everything is just kind of going wrong in their lives, and people will often present with a number of problems, not just one problem. A person may present with being HIV positive, unemployed, having no place to stay, having children who are not at school, you know, just a whole lot of problems, and what we've also realized is that when people present with these kinds of problems, they get into a kind of learned helplessness, where they can't really figure out which problem to start working on.</p> <p>An interesting thing is, we thought this phenomenon was specific to Zimbabwe, but we see exactly the same thing in New York City when we look at the cases that they are dealing with in the Bronx and in Harlem. It's pretty much the same: people with all these problems and just not knowing where to start. And sitting on the bench and having somebody say to you, "Tell me your story," is just such an amazing way of opening up, and realizing that there's actually someone who can listen to you, someone who can help you. 
So that's really the first stage, opening up the mind through opening up and telling those stories, as painful as they are.</p> <p>And one of the other things which we also encourage on the Friendship Bench, which is really not in keeping with your usual psychotherapy or CBT, is that the therapist becomes personally involved. And this is something that I was never trained to do as a psychiatrist, you know? You always keep your distance, but we've learned from the grandmothers that it's critically important to show your weaknesses too as a human being, by sharing your own lived experience, but in a very structured way, and by doing that you really establish strong rapport with the client.</p> <p>So anyway, the client talks about their story, and while they're doing that the grandmother simply lists the problems that are highlighted. That's all she does, and she listens with a lot of empathy, with appropriate physical gestures where called for, and after all of that has happened the grandmother simply summarizes what she hears. And that particular component is also very powerful, because when somebody tells you their story and you are able to accurately summarize what they've told you, that's a sign that you've been listening, and that's a sign for that person who's telling their story that, "I've got someone who's on my side."</p> <p>And really, that's why we call it "opening up the mind," because most of these people have never really talked about their problems to anyone in such a setting. So when that happens the grandmother summarizes, and after summarizing asks the client to select a problem to work on, just like your traditional PST, you know. And then it goes on from there, where they brainstorm to come up with a very specific, measurable, achievable, realistic and timely solution that they focus on. 
Very, very practical, but with a very strong dose of emotions and human contact.</p> <p><em>Question</em>: It strikes me that this is also sort of an instruction manual for how to be a good friend.</p> <p><em>Dixon</em>: Well, that's why it's called "Friendship Bench!"</p> <p><em>Question</em>: So, you've kind of recounted now to your client, "Here's what I heard from you," and then uplifting is this kind of brainstorming?</p> <p><em>Dixon</em>: Yeah.</p> <p><em>Question</em>: Choosing a particular thing to sort of focus on first, brainstorming solutions to that. What is the strengthening phase?</p> <p><em>Dixon</em>: So, the strengthening phase is essentially the part where you then start to clearly identify a single problem and break it down in terms of what happens, but a critical component of the strengthening phase is something that we call "the holy cow moment." And we call it "the holy cow moment" because it's one of those stages in therapy where the therapist is interacting with the client and you've listed all these problems, and I used to struggle with that, you know? So you list all these problems, you know, "I'm suffering from HIV, my neighbor is not talking to me, my husband is abusive, I have a pregnant teenage daughter," and with my mind as a medical doctor and psychiatrist, when I hear HIV, the first thing I think is, "We've got to put this person on medication. We have to make sure that the CD4 count and the viral load are all in place."</p> <p>And then when you actually discuss with the client, and the client identifies a problem to focus on which just doesn't fit with you and your thinking as a psychiatrist, that's what we call the "holy cow moment," and we struggled a lot with the holy cow moments with the grandmothers as well, because initially they would prioritize certain things. But we also realized that handling that "holy cow moment," where a client selects something that seems really ridiculous to focus on, is critical. 
And what we essentially do is we encourage the grandmothers to deal with the "holy cow moment" as it comes, so if a client chooses to focus, for instance, on something which seems trivial, focus on it, because what we've learned over the years is that by focusing on that which seems trivial to us, we actually open up avenues to treat all the other problems that seem massive.</p> <p>So that becomes the strengthening component, where the client realizes, "Regardless of the problem that I've selected to focus on, this person is still prepared to work with me." And once that is done, the third stage, the strengthening or kusimbisa, is in essence the homework. Because the beauty of what we do on the Friendship Bench, which is different from your traditional therapy, is that in your first session the focus is to make sure that you walk away from there with a solution. We don't believe in telling people to come back for three, four sessions before a problem is solved. You know, people in Africa are very mobile, they're constantly moving, and if a person spends an hour and a half with you and you can't solve their problem, they're pretty much not going to come back.</p> <p>So we really do emphasize making sure that they go home with something tangible that they can work on; that's the strengthening. And part of the strengthening involves the grandmother checking up on the client, by sending an SMS or a WhatsApp message, just to touch base, to see how the client is doing. And this strengthening carries on, because they will sometimes meet outside, when they are in the marketplace or in the community, and she can do a five-minute strengthening: "So how is it going? How are things?" 
You know, that kind of stuff, and so that carries on.</p> <p><em>Question</em>: Is it validated that grandmothers in particular are the right people to be doing this, versus any other group of people?</p> <p><em>Dixon</em>: So, for Zimbabwe certainly grandmothers are the best. We have consistently found that grandmothers are reliable. You know, they are rooted in their communities and they have this wealth of wisdom which is very culturally appropriate. They are very good at using appropriate proverbs to actually get through a problem, and they're just... they give very good hugs as well, you know? So for Zimbabwe, we certainly love the idea of grandmothers, but we are also working with young people. If you go to New York City, they don't have that many grandmothers. New York City I guess has a very diverse group of people working on the bench, from about 24 years old all the way up to like 56, so that's what works for New York City.</p> <p>I'm strongly in favor of working with grandmothers for the simple fact that we find them to be more reliable, but I think this model can be used by anyone. Anyone can deliver this model, and the grandmothers that we're working with, as I indicated in my presentation earlier on, in essence I was <em>given</em> grandmothers. I guess the city health department thought that grandmothers were not so important, so I could try and come up with a solution with grandmothers while the nurses and other mental health professionals were busy doing other things. So it's not that I chose them. They are what I had to work with, which was a blessing in disguise, actually.</p> <p><em>Question</em>: Who funds your organization? Are you looking to expand, are you looking to start something in the UK? What could you do with more funding? All of that has been asked.</p> <p><em>Dixon</em>: So, I mean obviously we would love a lot of funding. We run a trust called the Friendship Bench Trust. 
So initially, as I indicated in my presentation, the Friendship Bench was really my thesis, my fieldwork for my Master's in Public Health. That's really how it started, and it just carried on since 2006. It has grown, and we are now a Trust, a registered Trust in Zimbabwe. We are hoping at some stage to be registered here in the UK, because I'm kind of affiliated to the London School of Hygiene and Tropical Medicine, so that's the idea.</p> <p>Most of our work has been funded by research agencies like the Wellcome Trust, NIH, MRC. This is why our work is heavily research-based, but we also realize that as we think of scaling up the Friendship Bench, we really need to move to a different kind of funder, because most of these research organizations or funders do not fund the scaling up of programs, and I think the Friendship Bench has acquired quite a lot of evidence to justify taking that next step, which is scaling up. Yeah.</p> <p><em>Question</em>: For those that are interested in kind of going a little bit deeper into the subject matter, are there any self-training materials that people could access somewhere online?</p> <p><em>Dixon</em>: If you go to the Friendship Bench website, we do have a manual available that we use. We have a facilitators' manual, and we have a training manual for the delivering agents, but we also have the Inuka platform, which offers the training of guides as I indicated earlier on. You've got Friendship Bench, which was the original program which started in Zimbabwe, and now we have a digital component which is really based on what we've done in Zimbabwe, and for Inuka we obviously want the guides to work smoothly. At the moment we are testing Inuka. 
Inuka has been tested, piloted in Zimbabwe, in Kenya, and we've done some work in India, so at the moment we are actually running a pilot in Kenya, but we would love to have guides on Inuka once it starts to run.</p> the-centre-for-effective-altruism 2rNQEsd4KwYgMnpS2 2019-02-15T15:53:30.191Z Rob Mather: Against Malaria Foundation — What we do, How we do it, and the Challenges <p><em>This transcript of an EA Global talk, which CEA has lightly edited for clarity, is crossposted from <a href=""></a>. You can also watch the talk on YouTube <a href="">here</a>.</em></p> <p><em>The Against Malaria Foundation is one of the most effective global health charities in the world, and the single most common donation target for EA Survey respondents (as of 2018). What makes this organization so special? How do they approach their work, and what challenges do they face? CEO Rob Mather answers these questions in this talk from EA Global 2018: London.</em></p> <h2>The Talk</h2> <p>I'm going to try and cover what we do and how we do it, but I use the words impact and accountability a lot in what we do. They're themes that you will see run through what we talk about because, like every charity really, our focus is to have impact. And delivering that, for us, particularly means focusing on accountability, because the devil is in the detail. It's not easy to raise money, but that's not where you mess up. You mess up on delivery, on operations. So that's really where our focus lies.</p> <p><img src="//" alt="1100 Rob Mather"></p> <p>You guys can read the numbers. I hope you can see the numbers at the back more quickly than I can say them. But I think we all understand that malaria is a humanitarian issue. The numbers are pretty frightening. When I first came across malaria it was because I heard that seven jumbo jets full of children under five died from malaria every day. And that really struck me. 
Not only does it particularly affect children under five, but pregnant women, whose immune systems are compromised during pregnancy, are also particularly vulnerable.</p> <p>We focus on Sub-Saharan Africa, because that's where 90% of malaria cases and most of the deaths occur. But it's not just a humanitarian issue. If you're sick with malaria then you can't teach, you can't farm, you can't function. And that means you're not a constructive or productive member of society. And so malaria is a drain on the economies of all of these countries that are affected. So if the humanitarian argument doesn't get you like it got me, then I hope the economic argument gets you instead. If we put $1 million into fighting malaria effectively and efficiently, then we will improve the GDP of the country, or of the continent I should say, by $12 million. A 12 to 1 return is a pretty good offer, even if you're not persuaded by the humanitarian element.</p> <p><img src="//" alt="1100 Rob Mather (1)"></p> <p>Unfortunately, there's no silver bullet. There is no vaccine. Lots of research is going on, and we all keep our fingers crossed that they'll find something. Vaccine research and gene drives are a big hope, and we all hope fervently that something comes of them, but for now a silver bullet doesn't exist. So to pick up on the under fives, if I invited you all down to Heathrow Airport and you saw this, we'd all say, "Hang on a minute. This is slaughter." And this is daily, remember, so it is a big issue that we absolutely need to do something about.</p> <p><img src="//" alt="1100 Rob Mather (2)"></p> <p>And a big part of the solution, not the only solution, but a big part of the solution is nets. They cost $2. They protect two people each. And the pregnant female mosquito, which needs a blood meal to reproduce, bites between 10:00 at night and 2:00 in the morning, which is something terrific we have on our side. 
It means that we can cover people when they sleep, and we can protect them mechanically. We also cover these nets with insecticide. We're putting nets in very challenging environments: they're households but they're not houses. So inevitably, these nets become ripped, they become torn, they have holes in them. But fortunately, the mosquitoes don't do a Red Arrows maneuver through a hole. They land on the net and migrate to the hole, and when they pick up insecticide it kills them. The fact that mosquitoes typically feed between 10:00 at night and 2:00 in the morning is a really good characteristic that we can exploit here.</p> <p>And the impact is dramatic. Whether it's 400, 600, or 1,000 nets depends on the malaria burden, but we're talking about low thousands of dollars equals one death averted, and broadly it's 1,000 cases of malaria prevented for every person that dies, the mortality-to-morbidity ratio. This is an extraordinary impact. And graphically that's what we see.</p> <p><img src="//" alt="1100 Rob Mather (4)"></p> <p>In the top graph, prior to putting nets in place, we see the seasonality of rainy seasons and dry seasons; once nets go in, the decline happens more or less immediately, within weeks. It's not as easy as just handing out nets and saying, "Right, we're done." Education is involved and there's sustained effort, but broadly speaking, any major health initiative with a 10% decline nationwide is dramatic. So if we can introduce a 40% or 50% decline, you can see this is in the sensational category of what we can achieve.</p> <h3>How AMF Got Started</h3> <p>So just a minute or so on how I started. I shamelessly called 250,000 of my best friends and said, "Would you like to swim?" And they all said yes. In fact, the truth is that I failed because I was trying to get a million people to swim, but I'm not going to count that as true failure really. 
And very much the spirit behind this was, as I said to Michael Phelps, "What I'd like you to do in front of the camera is just say, 'I would like you to swim. It doesn't matter how fast I swim. When I swim I count as one person. And if you swim you count as one person as well.'"</p> <p><img src="//" alt="1100 Rob Mather (5)"></p> <p>Very much the spirit behind what we do at AMF is that we're ordinary folks, very grassrootsy, in that I don't think this is about celebrity. It's about all of us, because it's almost the power of all the ordinary folk that can get things done. And that was very much the spirit, and it is today, behind what we do at AMF. And so there were some wonderfully nutty people swimming as part of World Swim Against Malaria in 2005 in the Serpentine, not far from where we are now. A whole bunch of people at PwC decided to go into the Channel. And then some very sensible people in Australia and America where it was warmer.</p> <p>And it was particularly important to me that there were lots of children involved, given the death toll particularly affects children. This was my first experience, if you like, of... actually it was the second experience, after a swim for a burns victim. But I learned a lot from this experience about how people reacted to certain sorts of communication. It was a seminal period, if you like.</p> <p>I was intending to go back and get a proper job because I had taken two years off to launch World Swim. And when we went to see the Global Fund, an organization based in Geneva, a big funder, they said, "Do you realize with 130,000 people swimming (which is what we had at the time), you are the largest malaria advocacy group in the world?" And I said, "Are you telling me that 20 phone calls out of the back room of my home in London have created the world's single largest advocacy group for the world's single largest killer of children?" And they said, "Yes." And I said, "Well, that's shame on all of us if that's the case." 
I guess that meant that I didn't want to go back into a proper job. I wanted to do an improper job.</p> <h2>What AMF Does</h2> <p>What we do at AMF is we provide nets. We distribute them. We make sure they don't get stolen. That's potentially a very big issue, back to operations. We certainly want to ensure they're used. And when we get involved in funding nets we go to governments and we talk to them about data. In fact we put it a lot more politely than I'm going to put it now, but we basically say to governments, "Please don't ask us to trust you, because we won't. But we won't ask you to trust us either. Let's just focus on the data." And that is really important to making sure that we do the best job we possibly can. We don't always get it right. We're not perfect, and things do happen that we have to dive in and try to solve. But in essence this is all about data for us operationally.</p> <p><img src="//" alt="1100 Rob Mather (6)"></p> <p>When we started, again more politely than I'm about to put it, I went to a whole bunch of people and organizations and said, "Please will you help me, but I'm not going to pay you, because you don't need $5 more than a couple of kiddies in Africa need a bed net." And I'm delighted to say that of everybody I spoke to, I can't think of anybody who heard the question, "Who do I talk to in your industry that would be able to help me do X, Y, and Z?" and didn't reply, "Me." And it's incredibly humbling getting that from a lot of people in big companies. I run this with six other people out of the back room of my house in London, and everybody works from their own homes, so we don't have any offices. But there are a lot of blue-chip companies that said, "We get it. We'll support you." Because fundamentally the chief executive of big company X and big company Y, he or she has got kids, and they know kids, and they're human beings. 
So I guess I appeal to that sense: how do we do this together, as a lot of people coming together?</p> <p><img src="//" alt="1100 Rob Mather (7)"></p> <p>We have five full-time staff. We pay four of them. So one of the things that was a little bit different about what we do is I don't really have to go out and raise money to fund admin costs. I could, and I could certainly use the money we have to do that, but as you can imagine, we want to spend the money on nets. So I have four costs globally, centrally, and no other costs than those four people we pay a commercial salary to. We have no banking, accounting, legal, website, or translation costs. You name it, we don't have it.</p> <p>When we wanted to translate the website into German, the thinking was we could go to a professional company and they'd charge us five grand to do it, or we could go to a lot of other human beings and say, "You're a professional translator. Who do we talk to in your industry such that we could get four people who would translate two and a half thousand words each?" That's 10,000 words. We can then put our website in a language and show people in Germany the courtesy of being able to read the website in their own language. So I sent out 48 e-mails, not "Dear all" because that doesn't work, but "Dear Claudia" and "Dear Claus" and "Dear Matthew" and so on. And in 24 hours I had 44 positive responses out of 48. So I could've translated the website 11 times over for free. And the same thing happened in every other language. So you sort of want to jump up and kiss people when that happens, because it's terrific that everybody said, "We'll help." And that in a sense is really behind what we've all built at AMF.</p> <p>We have very low overheads as a result, as you'll be unsurprised to hear since we're only paying four salaries. Our overhead last year, or FY 2017 rather, was 0.6% of the money we receive, so 99.4% of what comes in goes to the front line. 
And that's because I am incredibly cynical about charity, which people say is why people like me set them up. And I want to keep those costs really low, and I want to show people exactly what happens with their money, because I think that's the right thing to do.</p> <p>We work with co-funding partners because we can't fund everything ourselves. And in fact we often fill gaps, so somebody will say to us, "We need $11 million here. Have you got any money?" And then we can cooperate with another organization. And we work with distribution partners because I don't want to set up some massive logistical operation in a whole series of countries. That would be daft. So this is very much us contributing as one of a number of organizations, because this is a big team effort. It has to be. So impact and accountability are important. Transparency and efficiency are very important to us. I guess transparency is different from, but goes hand-in-hand with, accountability. Efficiency covers not just the money we receive but how we actually get nets out to people in the right quantities to protect them.</p> <p><img src="//" alt="1100 Rob Mather (8)"></p> <p>I think a charity should be able to define in a sentence, or in a few words, what it is they're trying to achieve. It surprises me when some can't. In our situation, it's very simple. We want to stop people dying and stop people falling sick. So that in essence is the metric we must be judged by, although I'm going to throw something out there: we actually don't publish malaria case rate data, and there might be some questions later on as to why, if AMF is focusing on these metrics, it isn't publishing the metrics as to what it's achieving. 
It's a source of frustration, but it's something that I think is important.</p> <p><img src="//" alt="1100 Rob Mather (9)"></p> <p>So accountability for us means holding people to account in country, so saying to our partners, "We want to see data," and structuring our relationships so that they focus on data. And we think fundamentally what that does is it means that fewer people die and fewer people fall sick. We want to hold ourselves accountable to our donors and show, as I mentioned, where every donation goes, so people can be engaged rather than thinking, "I've given them some money, it's gone into a black box, I don't know what's happened to it." That, for me, would be frustrating. And we believe that leads to a virtuous circle of driving donations, because we cannot do anything without donations. Awareness is terrific, but awareness funds nothing. Awareness has to have an endpoint of moving on to driving donations.</p> <p><img src="//" alt="1100 Rob Mather (10)"></p> <p>Each donor has their own individual page, as long as we have their e-mail address, and we list all their donations. I say as long as we have their e-mail address not as a cute way of saying, "Goodie, then we can market to them," because we as an organization do no marketing. We may be making mistakes in not doing marketing. In a sense other people market for us. The effective altruism community has been sensational in marketing us, and is a fundamental board member of AMF in terms of what it has helped us to achieve. We never send soliciting e-mails. We only send informational e-mails. It's really, really important to us, because I think what we do should drive our support, not our ability to persuade because we write good copy.</p> <p>When we go out into the field we take enough nets to cover everybody in a particular area, and the ratio is broadly two people sleep under a net. In fact, at scale, it's 1.8. 
And what we focus on is making sure that our partners have visited every household, so we understand whether this household needs three, two, four, or one net. Whatever the number of nets they need, that's the number of nets we get to them. We make sure at the moment of distribution... and I should say there are a number of things we do to verify and ensure that that data is accurate. It's not perfect, but we can send five data collectors out after the first hundred and get them to visit 5% of the households these guys visited, and tell them beforehand, so we're putting psychology into play, to make sure they're really focusing on getting accurate data. And there are other things we do to make sure that data is good, because obviously, garbage in, garbage out. And a very small number of people, I'm afraid, do want to subvert what we're trying to do and misappropriate nets.</p> <p><img src="//" alt="1100 Rob Mather (11)"></p> <p>It's important to have independent supervision at the moment of distribution so that, again, you make sure the right things happen. We follow nets, their presence, use, and condition, after roughly six months. And we track malaria case rate data, albeit there are issues with the purity or reliability of that data. So when we go back in and gather data we're doing it not just because it makes us feel good, but because we do want to understand what the decline curve is, because if we're up at 95% here on day one, and we come down like this over three years, that's okay, but also that's not okay. 
And if we don't know, we can't do anything about it, and we do not want to bury our heads in the sand.</p> <p><img src="//" alt="1100 Rob Mather (12)"></p> <p>So I have no problem in saying that after 18 months the coverage with our nets is down to 40%, because I'd rather know, and be able to say to everybody that we need to do something about it, because for the next 18 months we've got a significant number of people that we're telling everybody we're protecting from malaria who are not actually protected. So let's find the data.</p> <p>That bad trend is not one we see often, but we need to know whether it is there or not. And we can say to the district health officer, "You've got 37 health center catchment areas, and you've got limited resources, and we've got this data on whether sleeping spaces are covered or not, so you can focus on these 10 areas rather than the 37 and actually be more impactful, more effective with your work." So we're not just collecting data for data's sake.</p> <p>We're very happy to be held to account by others, so we release all of our material. There's almost nothing we won't release, apart from people's personal salaries and things like that. And that's obviously been a terrific benefit to us, as we've been reviewed well. That has been a major driver of the donations we've received. I don't know what the current percentage is, but it's something like 60% or 70% of the donations that we can tie to GiveWell and other organizations' reviews of us. So that's massive. Take 60% of $178 million and we're looking at about $100 million that has been driven by the EA community. So AMF is an EA community thing, really.</p> <p><img src="//" alt="1100 Rob Mather (13)"></p> <p>Last year we received about 90,000 donations from 190 countries, so we're getting to lots of people. And every donation matters because every $2 buys a net. 
And it means that when we talk to countries that say, "Hi Rob, have you got $11 million?", we say, "We've got 8." But then the next week we've got 8.2 and 8.4. So we can literally, through the discussions, come back and up the number of nets we can fund. So we put money to work, in essence, as soon as it comes in, because I can't commit to nets unless I've got the money. So if people are benchmarking what it costs to do something in the world of malaria, the three key numbers are: $2 buys a net, $500 protects a village, and roughly $3,000 prevents a death, or $4,000 or $5,000. I don't know what the latest number is from GiveWell, which is where that comes from.</p> <p>On top of that we have a small number of large donors that build on top of what, in essence, is the likes of most of us or all of us in the room, the many individual donors that are our lifeblood. These large donations are very lumpy. We've had some very significant ones. When you get a $23 million donation, that's amazing, because it just means we can say to a country, we can fund 12 million nets for you. And when we're funding larger quantities of nets we can hope to achieve great success in some of the things we're aspiring to.</p> <p>So some fantastic big donations, but I guess if there's one thing I want to leave somebody with hearing me and looking at this slide, it's that if ever somebody were to ask, "Somebody's given $2 million. What does my $2 matter?", well, the answer is your $2 buys a net, and that matters. It just so happens that $1 million buys 500,000, but we need both. And if we didn't have all of us contributing small amounts, we wouldn't be here, because the big donations come in on the back of lots of people giving a few dollars, so these are inextricably linked.</p> <p><img src="//" alt="1100 Rob Mather (14)"></p> <p>Over the last few years we've started to hit the tens of millions of dollars, and that means we can fund millions of nets. 
With everybody who's involved, all the donors, all the supporters, everybody together, we can say that we're putting ourselves in the position to stop something like 60,000 people dying, and to prevent something like 60 million cases of malaria, through the number of nets funded. And while we have operated in 36 countries we focus typically on about seven in any one year. So this means that we can fund millions of nets at a time. Rather than turning up at the table and saying, "Yes, we can fund 100,000 nets," where the country would say, "Terrific-ish" because they need 10 million, we turn up and say we can fund 5 of your 10 million. That makes us get listened to, and I think rightly so.</p> <p><img src="//" alt="1100 Rob Mather (15)"></p> <p>We don't turn up and say, "This is how you're gonna do it," because that would be insensitive, crass, and just not a good way of going about things. So we come forward and say, "Here's our draft agreement. Here's the focus on data. You let us know what's difficult, and let's work with you as to how we adjust it, but on some things we're gonna be pretty difficult to move, because it's all about accountability, and we think there's sense behind them." So we're involved in a partnership, in persuading. And the more money we have, the more we can do that. So things have radically changed in the last four years, really, with what we can achieve.</p> <p><img src="//" alt="1100 Rob Mather (16)"></p> <p>But we have challenges, and most of those challenges... I'm not expecting you to read the bottom bullet point. The point is that they are many, and I could go on forever, because the devil is in the detail. But the big ones are ensuring effective planning where you've got limited resources and your span of control is limited, classic stuff. So we're improving all the time. We don't get everything right, but we think we get things more right as each day, each week, each month goes by. 
And we're in the business of persuading people to do things, because that's what partnerships are all about. We're also in the business of sometimes managing, or doing a two-step tango around, the politics that can suddenly appear in certain situations in the countries we operate in.</p> <p>Managing millions of household records, which is what we do, is the relatively easy bit. We put 150 lovely people in the room. We say, "Here's a laptop. Here's some data." And we provide them with our data entry system, and we get the data so we can see it, so there's no filter. Really, really important. But I won't bang on about that.</p> <p><img src="//" alt="1100 Rob Mather (17)"></p> <p>Insecticide resistance is a second challenge. Charles Darwin told us that that would happen, and certainly it has, as it happens with all of these things. So what we have done is we've played our part in saying, "We need to put these new PBO nets"... you'll remember from your chemistry class, of course, that PBO stands for piperonyl butoxide. Yes, everybody knows that. It's a synergist that goes on the top of a net or on a net, and it switches off the resistance mechanism in the mosquito, which means that the pyrethroid kills it. We've stepped forward and funded six million of those, distributed in 2017, with lots and lots of clusters (for the statisticians in the room), so that we can actually have a very powerful study, a randomized controlled trial, the gold standard if you like. Then, for the rest of the malaria community, the funding community, we can say, "Here's the data that tells us whether PBO nets are good, and if they are, in what way and in what circumstances." And we'll have those results in about six months' time. We don't know them yet.</p> <p>We are the funder, the sole funder, of the study, and in a sense we were a bit surprised that others weren't going to step forward, but nobody did, so we said, "We'll do it, because this is important."
Insecticide resistance is a challenge to be met.</p> <p><img src="//" alt="1100 Rob Mather (18)"></p> <p>The next challenge is funding. We're allocating about $50 million at the moment, and we have $200 million worth of requests, so we have to make some nasty, nasty decisions in the next three months where we will have to say to countries, "We don't have the money." But we will do our best to try and make sure that the money we do deploy goes to optimize the impact we can have. But that challenge is also an opportunity, because there are lots of countries that need help. We don't turn up in the morning and think all these challenges are weighing us down. These are opportunities for us to help. And we tend now to look at requests of anything between 2 and 20 million nets at a time, so some of the numbers are quite chunky. We have to really make sure that the partnerships and the agreements we put in place are gonna deliver what we expect them to deliver. We tend to look three years out now because that matches with other funders. It means we can really plan operationally much better.</p> <p><img src="//" alt="1100 Rob Mather (20)"></p> <p>We also have an opportunity with technology. And this is one of my favorite pictures. It shows, in one of the poorest countries in the world, the Democratic Republic of Congo, the use of technology, smartphones, to demonstrate to within six meters where 250-odd thousand households that received nets are located. And when you do stuff like this you've got real-time data, you've got better accountability, you've got lower costs, et cetera. The list goes on.
So, fantastic opportunity to do better, but there are challenges with deploying it, because you can't just put thousands of phones into the DRC and expect it all to go well, so we have to be very careful how we do this.</p> <p><img src="//" alt="1100 Rob Mather (21)"></p> <p>I'm going to leave you, last slide, on a note of optimism, which is that if we reflect on what we've been doing in the last 15 years within the malaria community, all of us together, it's pretty dramatic: we've brought down the number of deaths and cases of malaria by about 60%. And there are countries that have moved into elimination, where malaria is gone now. Sri Lanka, a very challenged country, with turmoil, war, all sorts of challenging things going on, is now malaria free, with three years of no native cases of malaria. That's terrific. And there are now eight other countries, I think, on the cusp of that. So elimination is possible; eradication is possible. But a child still dies from malaria every minute, so while I've been talking, depending on how long I've been talking, 22 kiddiwinks have not made it, and that's pretty shocking.</p> <p>We know what we need to do, which is nets. We don't have a silver bullet with a vaccine. We don't have a silver bullet with gene drive technology yet. Nets are a big part of what we do. So certainly for our part, and with others' help, we're going to continue to do as much as we can.</p> <h2>Questions</h2> <p><em>Question</em>: Can you explain a bit more of the detail about how your operations work? Do you distribute nets that are manufactured in the countries where they'll be used?</p> <p><em>Rob</em>: The nets are manufactured broadly in Asia; the three dominant countries of manufacture are China, Thailand, and Vietnam. There's also a factory in Tanzania, and there may be other factories sort of coming online in several other countries. I think there might be one that's come online in Ethiopia, and Nigeria was looking at it.
Effectively nets are a textile, so economies of scale are key, and therefore there are relatively few plants that produce 80,000 nets a day or more. It would be great for transport and logistics and local economies and employment to put small manufacturing facilities, one in each country, but it's just not economic; it just doesn't work.</p> <p>The fiduciary duty I have, if you like, is this: if I'm looking to spend a million dollars, do I spend a million dollars on funding nets from a facility in Africa (there is one) that, for reasons of economies of scale, is gonna charge me 20% more, or do I buy 20% more nets and protect 20% more people? And the latter has to be my responsibility. I'm not here to employ people. I'm here to protect people from malaria. However, when there's a very narrow gap, then we can make judgment calls. But broadly those are the locations of the nets. They're brought in, and it costs roughly $2 a net and about 20 cents a net to ship a net, so shipping is about 10% of the cost. It used to be $5 a net, so that shipping cost has become a larger proportion, but that's still the way the production works.</p> <p><em>Question</em>: Do you try and measure or think about the impact beyond just immediately saving people's lives? So, for example, the knock-on effects that has on their economy and on the other lives that they then affect in the next 10, 20 years. Is that something that you think about at all?</p> <p><em>Rob</em>: No and yes. No, in the sense that the decision we make each day is where we can protect the most people over the next three years with these nets.
We're cognizant of the fact that if you've got people that are not sick, as I mentioned, they can function and they can lead healthier lives. And you can transform fundamentally the health of a community, because if you protect people with nets, you're reducing the number of infected people in the blood pool. So when a non-malaria-carrying mosquito bites a person who is now not infected, it's annoying and they'll bite somebody else, but they're not transmitting. They're not acting as a vector. So we're aware of the impact it has at the micro level within a community and the macro level within a country. But really our day-to-day focus is more pragmatic and prosaic.</p> <p><em>Question</em>: If organizations like GiveWell started to look at those longer term impacts, do you think they might be able to start measuring some of those longer term things even if they're not your immediate focus?</p> <p><em>Rob</em>: Whether they can start measuring them I don't know. That's probably more for experts within their organization, but it would probably be of benefit in terms of our own statistics, because there is a dramatic economic impact, not just the health impact.</p> <p><em>Question</em>: If people are interested in donating, do you have a preference around people donating little and often out of their pay packets, or groups of people getting together and pooling donations, or saving now and donating more later? Do you have a particular preference around any of that kind of thing?</p> <p><em>Rob</em>: So in reverse order: donate now rather than later, because we've got massive gaps. I suppose we would prefer people to... we have no method preference per se, because we don't want to frighten anybody off by saying, "They want me to give online, and I don't really want to do that. I'd rather give by bank transfer." So we're agnostic when it comes to that perspective. We do like recurring donations.
It's something that I look at really closely because I think it acts as a bellwether. It acts as... there's an element of, are we seeing recurring donations falling away? Is that saying something about people thinking, "I think I've done my bit. I'm going to do something elsewhere"? So if somebody was thinking of giving 12 pounds, would I prefer 12 pounds now or a pound a month? I'd probably prefer a pound a month, because this is also the long game.</p> <p>This is not something where we're after money now, despite my prior comment. If somebody's thinking about whether to make their donation recurring or not, I think recurring shows a more considered view: I'm not just going to give 50 bucks; I'm actually going to give 20 bucks a month, and I'm probably not going to cancel it in three months' time. And there was one other part of that question I missed, I think.</p> <p><em>Question</em>: One of the things that people do in the community is pool money together into EA funds and things like that. Is that preferred to people donating individually?</p> <p><em>Rob</em>: Individually is much better, simply because, going back to the point I made about trying to connect individual donations to a distribution, if somebody's given us $50 we can say, "Your $50 has bought 25 nets that have gone to this area of Uganda." I think this is, we hope, more energizing and engaging than if collectively we raise $1,000 from 50 people and we fund something there. We can only attach one e-mail to a donation, so I'm only engaging one person, whereas I'd like to engage all 50. But again, we'd prefer the donation of funds rather than not.</p> <p>So yeah, whatever comes to us. It's opportunities like this, where people ask us questions and we put them on our blog and so on, in terms of how we prefer to receive donations. I think in some ways it's probably a refined level of thought, because at the top level we need to persuade people why they should give to this charity.
And if you've got energetic people that are going to group people together and say, "Hey, why don't we do a fundraiser or do something?", then that's terrific. That's also another pebble in the pond, because people getting involved will... it'll spread to their friend groups and networks and so on.</p> <p><em>Question</em>: What relations do you have with the Gates Foundation, who are also involved in fighting malaria?</p> <p><em>Rob</em>: Effectively none, in the sense that we don't have active connections. I chose not to go either to friends and family or big organizations when I started AMF, because I didn't want people saying, "Oh, what's he doing now? We'll give him 50 quid." And I didn't want to tap into money that already existed. I really wanted to get a whole new set of people involved, people like me who really didn't know anything about malaria. So we've not gone to the Gates Foundation and said, "Hi, would you give us tens of millions?" I think they know of us. I'm aware of that. I was in a room with Mr. Gates recently, but as one of 500 people, so there's nothing special there.</p> <p>However, there have been connections along the way. The chair of my malaria advisory group was a guy called Professor Sir Brian Greenwood, and Brian was also the director of the Gates Malaria Partnership in London, and three of our malaria advisory group members were also chairpersons of Gates Malaria Centers in Africa. And one of our trustees advised Bill Gates Sr. when they were setting up the Gates Foundation. So we have connections, but we've not exploited them, because what we're about is new money.</p> <p><em>Question</em>: So I suppose that's the money side of it. Is there also the expertise and knowledge side of things, where it could be beneficial to work together?</p> <p><em>Rob</em>: They tend to work in research rather than product, which is what we are.
And there have been connections and I have spoken with senior people at the Gates Foundation over the years, and it's been swapping ideas and notes on things, so that does happen.</p> <p><em>Question</em>: You say that you've worked with the Department for International Development in the UK Government as a collaborator. Again around sort of the funding point, is there ever a possibility or are you interested in the government actually funding your work?</p> <p><em>Rob</em>: Yes, in the sense that pragmatically I could use $150 million now that I don't have. So if somebody came forward and said, "We'd like to talk to you seriously about that," we'd be straight there. And even though that's not new money, that's just a sort of pragmatic response to... it is also an objective. I think we feel that we are a good funder of nets. I think there are some less good funders of nets. I could get myself into dodgy territory here, so I'll be careful what I say, but I think that we bring an attention to data, an accountability that sometimes other organizations don't have as their specific focus.</p> <p>So I think that we back ourselves. If somebody said, "We'd give you this much money. Can you spend it on nets?" We'd say, "Yes, but I'll tell you what. Hold onto the money. We'll put that program in place, knowing that you're committed, right?" And they'll go, "Yes." And we'll say, "Right, okay. Don't give us the money. We'll go and put it in place and get it all ready to go, and then we'll come back and say now there's no risk to you. Here you go. Evaluate that. Now give us the money." And at the moment I think my focus is increasingly on how do we increase the volume and the constancy of donations, and also some of the really big donations because I think if I'm going to try and do my best, we're all going to try and do our best to fill that $150 million gap. 
If I can phrase it that way then I have to have some really, really big donations come in, so that's something I'm thinking a lot about at the moment.</p> <p><em>Question</em>: Do you think there is an actual responsibility for governments to be actually doing some of this work, or are you happy for it to be kind of a third sector kind of thing?</p> <p><em>Rob</em>: Agnostic. At the end of the day, as fast as we can, we need to make sure that $5 billion a year is made available to malaria, and it's only $5 billion a year. Financial crisis and billions talked about here, there, and everywhere. It's a tiny amount of money for the number of people that die and the lack of productivity. So I don't care where it comes from. Our plan B, we in AMF have a plan B, and it's to close. And I want to do that as fast as I can, not just to see more of my four kids but because then I'd be an unbelievable hypocrite if I wanted AMF to keep going. Because I want to see malaria gone. There are plenty of other things to work on.</p> <p>So, yes, we want to see money coming from wherever it comes. The reality is, it's not coming from government. Or rather, all of the money at the moment is coming from governments, and in 2017 the four biggest funders of nets were the Global Fund, about $500 million, the British government and the American government in one order or another, and then AMF, which is ridiculous. We need to try and tap into that wall of money that I passionately believe exists within our communities. And I think the greatest barrier to it, frankly, is accountability. I think there's massive cynicism of, "If I give money to a charity operating in Africa where's it going to go?" And I think that's a valid concern, hence my cynicism. 
And boy if I was cynical when I started AMF 14 years ago, boy am I cynical now given what I've seen, which is why we do what we do the way we do it.</p> <p><em>Question</em>: So, tell me what's the cynicism that you've developed over the last 14 years?</p> <p><em>Rob</em>: I've seen many, many cases of benign incompetence all the way through to malign corruption at staggering levels.</p> <p><em>Question</em>: Apart from donations, how else can people contribute to AMF's mission?</p> <p><em>Rob</em>: That's a good question. Julian has been with us for a year as operations manager, and he runs the volunteer program. We need to work out how we can do even better at taking the fantastic offers we get from people saying, "How can we help with our time?" So that's one answer to your question, and expertise. I suppose there are specific ways in which we approach volunteering and say... because that's in a sense where I'm headed, is that if you've got expertise or connections with people I'm shameless.</p> <p>My favorite dinner party would be three of the 170, I think it's 170, people in the world that have assets of $10 billion or more, and I'd like to sit down with three of them at dinner and say, "Just give me the interest on the money. That's all I want." That's leverage. Three of you could save God knows how many people and et cetera, et cetera.</p> <p>But apart from that we need to redesign our website. Although there's a huge amount of talent that's gone into it, it's way out of date. It's not responsive. We sort of cringe at it. So what I would like is a really big website designing company to come forward and say, "Great, here's your team, Rob, all for free," because we do things for free, right. And then we redesign it. And we get another group of people who say, "You've got expertise, we don't. How do we keep this, this, this, this, and this, but generate a fantastic website that's responsive so that... how do we..." 
Another thing I'd throw out there is where we need help, but it's very pragmatic to AMF's needs, if you like, because we have to be focused on what we're trying to deliver and then fold in volunteers so they can do things that excite them and that they're good at, so we have to marry those two things up.</p> <p>I would like to try and get a million people to give me one net each, and only one net each. They're not allowed to give us more. Well, if you want to you can go over here, but this bit of the project is a million people giving me a net, in such a way that when Julian gives a net he can come back 10 days later and see, "Wow, five other people gave a net." It's pyramid selling, but it's sophisticated. And then there's more down here. And he can see that, "Wow, there are 42,000 nets that are being given as the result of the net I gave and the five e-mails I sent, or the three or the two or the one." Now, I don't know who to talk to about that, so if there's anybody who knows the senior people at Google and Facebook and wherever, we get those two guys and say, "How do you guys make that happen? You must be able to do that in about three weeks." And that would be a million nets and two million people protected.</p> <p>So on the help we get: we're always really interested in getting people writing to us and saying, "How can I help?" And we've now got a series of questions where we say, "What are you good at? What do you want to do? How much time have you got?", et cetera. We have a database running, so we can then work out how we don't get lots of time sucked into volunteer management, because that can be a real danger, but can focus people on helping.</p> <p><em>Question</em>: So it sounds like partly individuals who have those skills can get in touch and ask to help, but is it also that people who work at big corporations can try and get in touch with you and leverage the expertise in their organization?</p> <p><em>Rob</em>: Yeah.
And we might say to somebody, is there a consulting team that over the next three months... not that it's the time thing, because people have got jobs, they're earning money, they're paying the rent, so most people can't just say, "I can do something for three weeks." But in the next three months could you guys take on, and we would liaise with them, a study to work out what are the top new types of net on the market that we might have missed and so on, and do a research project and come back to us. And what we try and do is identify people who are totally self-driven, so our management time doesn't get sucked into it, because there are only seven of us. They then go away, say, "Got it. We'll be back to you in ..." and every four weeks we have a conversation, and they deliver something that we go, "That's terrific. We can now use that." And there are lots of examples of that, where we try and involve people as best we can in helping with the mission.</p> <p><em>Question</em>: I saw an article a couple of months ago about new nets with a combined approach of a chemical plus something to do with growth regulation that affects the growth of mosquitoes, and I was wondering if there had been developments with that since, and whether the nets have changed, or whether that's something that you focus on?</p> <p><em>Rob</em>: I don't know is the answer. The PBO net is a combined net in terms of two chemicals trying to have a particular impact. There are a number of different types of nets that are now being tested. They're not on the market because they have to go through something called WHOPES, the WHO Pesticide Evaluation Scheme, a bit like the FDA in America, where everything has got to be tested. You can't put a baby underneath a net with insecticide on it unless it's been fully tested. And by the way, a baby could lick two square meters of net and get a mild tummy ache, so obviously these things are tested.
There are different things being brought to bear to try and solve the problem of resistance, rather than to achieve greater efficacy. Nets are fantastically effective as long as resistance isn't an issue. So I don't have the detail... back to my volunteering point: we need somebody to go and help the team by briefing us.</p> <p><em>Question</em>: You said you spend zero pounds on marketing, is that right? A lot of advertising agencies can demonstrate the effectiveness of their campaigns. Similarly, you said that 1 million US dollars spent can save about 12 million for a country's local economy. That's about the same ratio as the John Lewis Christmas advertising campaign. So is there not a bit of an ethical dilemma in not spending money on marketing and advertising, given that you could get more money if you did it?</p> <p><em>Rob</em>: I think it's not an ethical dilemma. It's a sort of commercial one, if I put my commercial hat on as to how I spend the million dollars that somebody gives me: do I buy a million dollars' worth of nets? I think the way we've come out at AMF is that we're getting into some tricky territory. If we were to announce to you that we were spending $5 million over Christmas on a marketing campaign, I think there would be some people that would go, "Really?" And the problem with advertising is that 50% of it is effective. The problem is which 50%.</p> <p>The way we prefer to look at marketing, at the most general level, is: let's put in front of people what we do and how we do it and the results we have, and hopefully that will encourage people to support what we do. If somebody from an advertising firm said, "We're prepared to put together a marketing campaign for you, all for free. Here's a case team. And we've got a million dollar budget. Are you interested?", I'd say maybe rather than yes, because it depends on the nature of the marketing.
Because I think one of the things that we value at AMF is that, if somebody gave me a billion dollars tomorrow, I couldn't spend it all. It's a capacity issue. So our growth has been managed, in a way. Boy, it took eight years to get here, but now we're at the stage where we could scale, and we could take $150 million easily.</p> <p>I would be interested in an advertising campaign, but we're not about celebrities. This is back to this grassroots thing. I think celebrities, they're interested for a while and then they go. And I think that can often be disempowering. It can obviously be empowering because it gets the message out. If Oprah Winfrey said, "Look, I'm interested in supporting you all. What can I do?", I think we'd be really interested in having a conversation. But there's an element of, maybe I'm mistaken and I'm not a marketer, but I feel there's a sort of... I don't like the word brand, associated with AMF.</p> <p>At AMF we're sort of quite family, we're quite grassrootsy. We want to try and engage as many people as we can, but we just want to get on with the job and do it as well as we can. And I think that's hopefully the best way that will allow us to expand, and maybe DFID will come to us and say, "Look, you've got a track record now operationally. We'd like to talk to you about big dollars. Make the case to us. Maybe we should give you some money." I think that's more where we put our limited hours than thinking about getting... I'm being pejorative... getting sucked into the marketing side of it. But we're not completely closed to it, and we're actually dealing with something at the moment with, do we A/B test something, and we've got free stuff given to us. And we're even sort of sitting here going, "Ooh, do we want to do this?" So we probably need somebody to advise us, because we're not very good in this area. Sorry, that's a rather inadequate response.</p> <p><em>Question</em>: I'm quite interested in AMF's monitoring and evaluation. It's so exceptional.
Why do you think that most NGOs actually don't do monitoring and evaluation, and do you think that you can advise other NGOs with different types of programs in low-income countries to do monitoring and evaluation better?</p> <p><em>Rob</em>: I think there are three things. Firstly, it can be expensive. Secondly, it can be difficult and time-consuming. And I think there's attitude as well. I think that many start with the attitude that they don't want to do <em>no</em> monitoring and evaluation, but they don't want to do much, because it requires effort and time. So surely the best thing is just to get 10 million nets out to the country. And even if some of them get stolen and so on, "Hey, look, like seeds being cast they'll cover most people, right?" And my answer to that is that I've heard, on very good authority, of tens of thousands, and in one case 1.4 million, nets being stolen and sold to another country, which means that the two million people that were going to be protected with those 1.4 million nets didn't get any nets. And nobody was coming in afterwards. None of these were AMF nets, to be clear. So I think monitoring is really important to stop these sorts of things happening.</p> <p>But a lot of organizations think, there are so many challenges we have already. If we're going to do monitoring in this way, then we're gonna have to have a team of people doing it. There's money involved. So I think generally things focus on the statistical: let's do a survey and see how many households are covered or not covered, rather than what I would almost call proactive monitoring, where you're trying to influence behavior upstream by letting people know you're actually gonna monitor after the fact what goes on.</p> the-centre-for-effective-altruism C8pNyigL9pCh6ufkt 2019-02-12T15:59:04.429Z Ben Garfinkel: How sure are we about this AI stuff? <p><em>This transcript of an EA Global talk, which CEA has lightly edited for clarity, is crossposted from <a href=""></a>.
You can also watch the talk on YouTube <a href="">here</a>.</em></p> <p><em>It is increasingly clear that artificial intelligence is poised to have a huge impact on the world, potentially of comparable magnitude to the agricultural or industrial revolutions. But what does that actually mean for us today? Should it influence our behavior? In this talk from EA Global 2018: London, Ben Garfinkel makes the case for measured skepticism.</em></p> <h2>The Talk</h2> <p>Today, work on risks from artificial intelligence constitutes a noteworthy but still fairly small portion of the EA portfolio.</p> <p><img src="//" alt="How sure are we about this AI stuff"></p> <p>Only a small portion of donations made by individuals in the community are targeted at risks from AI. Only about 5% of the grants given out by the Open Philanthropy Project, the leading grant-making organization in the space, target risks from AI. And in surveys of community members, most do not list AI as the area that they think should be most prioritized.</p> <p><img src="//" alt="How sure are we about this AI stuff (1)"></p> <p>At the same time though, work on AI is prominent in other ways. Leading career advising and community building organizations like 80,000 Hours and CEA often highlight careers in AI governance and safety as especially promising ways to make an impact with your career. Interest in AI is also a clear element of community culture. And lastly, I think there's also a sense of momentum around people's interest in AI. I think especially over the last couple of years, quite a few people have begun to consider career changes into the area, or made quite large changes in their careers. I think this is true more for work around AI than for most other cause areas.</p> <p><img src="//" alt="How sure are we about this AI stuff (2)"></p> <p>So I think all of this together suggests that now is a pretty good time to take stock. 
It's a good time to look backwards and ask how the community first came to be interested in risks from AI. It's a good time to look forward and ask how large we expect the community's bet on AI to be: how large a portion of the portfolio we expect AI to be five or ten years down the road. It's a good time to ask, are the reasons that we first got interested in AI still valid? And if they're not still valid, are there perhaps other reasons which are either more or less compelling?</p> <p><img src="//" alt="How sure are we about this AI stuff (3)"></p> <p>To give a brief talk roadmap, first I'm going to run through what I see as an intuitively appealing argument for focusing on AI. Then I'm going to say why this argument is a bit less forceful than you might anticipate. Then I'll discuss a few more concrete arguments for focusing on AI and highlight some missing pieces of those arguments. And then I'll close by giving concrete implications for cause prioritization.</p> <h3>The intuitive argument</h3> <p>So first, here's what I see as an intuitive argument for working on AI, and that'd be the sort of, "AI is a big deal" argument.</p> <p><img src="//" alt="How sure are we about this AI stuff (4)"></p> <p>There are three concepts underpinning this argument:</p> <ol> <li>The future is what matters most in the sense that, if you could have an impact that carries forward and affects future generations, then this is likely to be more ethically pressing than having an impact that only affects the world today.</li> <li>Technological progress is likely to make the world very different in the future: that just as the world is very different than it was a thousand years ago because of technology, it's likely to be very different again a thousand years from now.</li> <li>If we're looking at technologies that are likely to make especially large changes, then AI stands out as especially promising among them.</li> </ol> <p>So given these three premises, we have the conclusion that working
on AI is a really good way to have leverage over the future, and that shaping the development of AI positively is an important thing to pursue.</p> <p><img src="//" alt="How sure are we about this AI stuff (5)"></p> <p>I think that a lot of this argument works. I think there are compelling reasons to try and focus on your impact in the future. I think that it's very likely that the world will be very different in the far future. I also think it's very likely that AI will be one of the most transformative technologies. It seems at least physically possible to have machines that eventually can do all the things that humans can do, and perhaps do all these things much more capably. If this eventually happens, then whatever that world looks like, we can be pretty confident the world will look pretty different than it does today.</p> <p><img src="//" alt="How sure are we about this AI stuff (6)"></p> <p>What I find less compelling though is the idea that these premises entail the conclusion that we ought to work on AI. Just because a technology will produce very large changes, that doesn't necessarily mean that working on that technology is a good way to actually have leverage over the future. Look back at the past and consider the most transformative technologies that have ever been developed. So things like electricity, or the steam engine, or the wheel, or steel. It's very difficult to imagine what individuals early in the development of these technologies could have done to have a lasting and foreseeably positive impact. An analogy is sometimes made to the industrial revolution and the agricultural revolution.
The idea is that in the future, the impacts of AI may be substantial enough that there will be changes comparable to these two revolutionary periods in history.</p> <p><img src="//" alt="How sure are we about this AI stuff (7)"></p> <p>The issue here, though, is that it's not really clear that either of these periods actually was a period of especially high <em>leverage</em>. If you were, say, an Englishman in 1780, trying to figure out how to make this industry thing go well in a way that would have a lasting and foreseeable impact on the world today, it's really not clear you could have done all that much. The basic point here is that from a long-termist perspective, what matters is leverage. This means finding something that could go one way or the other, and that's likely to stick in a foreseeably good or bad way far into the future. Long-term importance is perhaps a necessary condition for leverage, but certainly not a sufficient one, and it's a flawed indicator in its own right.</p> <h3>Three concrete cases</h3> <p>So now I'm going to move to three somewhat more concrete cases for potentially focusing on AI. You might have a few concerns that lead you to work in this area:</p> <p><img src="//" alt="How sure are we about this AI stuff (8)"></p> <ol> <li><strong>Instability.</strong> You might think that there are certain dynamics around the development or use of AI systems that will increase the risk of permanently damaging conflict or collapse, for instance war between great powers.</li> <li><strong>Lock-in.</strong> Certain decisions regarding the governance or design of AI systems may permanently lock in, in a way that propagates forward into the future in a lastingly positive or negative way.</li> <li><strong>Accidents.</strong> It might be quite difficult to use future systems safely.
And that there may be accidents that occur in the future with more advanced systems that cause lasting harm that again carries forward into the future.</li> </ol> <h4>Instability</h4> <p><img src="//" alt="How sure are we about this AI stuff (9)"></p> <p>First, the case from instability. A lot of the thought here is that it's very likely that countries will compete to reap the benefits economically and militarily from the applications of AI. This is already happening to some extent. And you might think that as the applications become more significant, the competition will become greater. And in this context, you might think that this all increases the risk of war between great powers. One concern here is the potential for power transitions: changes in which countries are powerful relative to others.</p> <p>A lot of people in the field of international security think that these are conditions under which conflict becomes especially likely. You might also be concerned about changes in military technology that, for example, increase the odds of accidental escalation, or make offense more favorable compared to defense. You may also just be concerned that in periods of rapid technological change, there are greater odds of misperception or miscalculation as countries struggle to figure out how to use the technology appropriately or interpret the actions of their adversaries. Or you could be concerned that certain applications of AI will in some sense damage domestic institutions in a way that also increases instability. That rising unemployment or inequality might be quite damaging, for example. And lastly, you might be concerned about the risks from terrorism, that certain applications might make it quite easy for small actors to cause large amounts of harm.</p> <p><img src="//" alt="How sure are we about this AI stuff (10)"></p> <p>In general, I think that many of these concerns are plausible and very clearly important.
Most of them have not received very much research attention at all. I believe that they warrant much, much more attention. At the same time though, if you're looking at things from a long-termist perspective, there are at least two reservations you could continue to have. The first is just that we don't really know how worried to be. These risks really haven't been researched much, and we shouldn't really take it for granted that AI will be destabilizing. It might be, or it might not. We just basically have not done enough research to feel very confident one way or the other.</p> <p>You may also be concerned, if you're really focused on the long term, that even a lot of instability may not be sufficient to actually have a lasting impact that carries forward through generations. This is a somewhat callous perspective. If you really are focused on the long term, it's not clear, for example, that a mid-sized war by historical standards would be sufficient to have a big long-term impact. So it may actually be a quite high bar to achieve a level of instability that a long-termist would really be focused on.</p> <h4>Lock-in</h4> <p><img src="//" alt="How sure are we about this AI stuff (11)"></p> <p>The case from lock-in I'll talk about just a bit more briefly. Some of the intuition here is that certain decisions have been made in the past about, for instance, the design of political institutions, software standards, or certain outcomes of military or economic competitions, which seem to produce outcomes that carry forward into the future for centuries. Some examples would be the design of the US Constitution, or the outcome of the Second World War. You might have the intuition that certain decisions about the governance or design of AI systems, or certain outcomes of strategic competitions, might carry forward into the future, perhaps for even longer periods of time.
For this reason, you might try and focus on making sure that whatever locks in is something that we actually want.</p> <p><img src="//" alt="How sure are we about this AI stuff (12)"></p> <p>I think that this is a somewhat difficult argument to make, or at least it's a fairly non-obvious one. I think the standard skeptical reply is that with very few exceptions, we don't really see many instances of long-term lock-in, especially long-term lock-in where people really could have predicted what would be good and what would be bad. Probably the most prominent examples of lock-in are choices around major religions that have carried forward for thousands of years. But beyond those, it's quite hard to find examples that last even for hundreds of years; those seem quite few. It's also generally hard to judge what you would want to lock in. If you imagine fixing some aspect of the world, as the rest of the world changes dramatically, it's really hard to guess what would actually be good under quite different circumstances in the future. My general feeling on this line of argument is that it's probably not that likely that we should expect any truly irreversible decisions around AI to be made anytime soon, even if progress is quite rapid, although other people certainly might disagree.</p> <h4>Accidents</h4> <p><img src="//" alt="How sure are we about this AI stuff (13)"></p> <p>Last, we have the case from accidents. The idea here is that we know that there are certain safety engineering challenges around AI systems. It's actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. This has been laid out most clearly in the paper 'Concrete Problems in AI Safety,' from a couple of years ago by Dario Amodei and others. I'd recommend for anyone interested in safety issues to take a look at that paper.
Then we might think, given the existence of these safety challenges, and given the belief or expectation that AI systems will become much more powerful in the future or be given much more responsibility, we might expect that these safety concerns will become more serious as time goes on.</p> <p><img src="//" alt="How sure are we about this AI stuff (14)"></p> <p>At the limit you might worry that these safety failures could become so extreme that they could perhaps derail civilization on the whole. In fact, there is a bit of writing arguing that we should be worried about these sorts of existential safety failures. The main work arguing for this is still the book 'Superintelligence' by Nick Bostrom, published in 2014. Before this, essays by Eliezer Yudkowsky were the main source of arguments along these lines. And then a number of other writers such as Stuart Russell or, a long time ago, I. J. Good and David Chalmers have also expressed similar concerns, albeit more briefly. The writing on existential safety accidents definitely isn't homogeneous, but often there's a similar narrative that appears in these essays expressing these concerns. There's a basic standard disaster scenario that has a few common elements.</p> <p><img src="//" alt="How sure are we about this AI stuff (15)"></p> <p>First, the author imagines that a single AI system experiences a massive jump in capabilities. Over some short period of time, a single system becomes much more general or much more capable than any other system in existence, and in fact any human in existence. Then, given the system, researchers specify a goal for it. They give it some input which is meant to communicate what behavior it should engage in.
The goal ends up being something quite simple, and the system goes off and single-handedly pursues this very simple goal in a way that violates the full nuances of what its designers intended.</p> <p>There's a classic sort of toy example, which is often used to illustrate this concern. We imagine that some poor paperclip factory owner receives a general superintelligent AI on his doorstep. There's a slot where he can stick in a goal. He writes down the goal "maximize paperclip production," puts it in the AI system, and then lets it go off and do that. The system figures out that the best way to maximize paperclip production is to take over all the world's resources, just to plow them all into paperclips. And the system is so capable that the designers can do nothing to stop it, even though it's doing something that they really do not intend.</p> <p><img src="//" alt="How sure are we about this AI stuff (16)"></p> <p>I have some general concerns about the existing writing on existential accidents. So first, there's just still very little of it. It really is just mostly <em>Superintelligence</em> and essays by Eliezer Yudkowsky, and then a handful of shorter essays and talks that express very similar concerns. There's also been very little substantive written criticism of it. Many people have expressed doubts or been dismissive of it, but there's very little in the way of skeptical experts sitting down and fully engaging with it, and writing down point by point where they disagree or where they think the mistakes are.
Most of the work on existential accidents was also written before large changes in the field of AI, especially before the recent rise of deep learning, and also before work like 'Concrete Problems in AI Safety,' which laid out safety concerns in a way which is more recognizable to AI researchers today.</p> <p>The arguments for existential accidents also often rely on fuzzy, abstract concepts like optimization power or general intelligence or goals, and on toy thought experiments like the paper clipper example. And certainly thought experiments and abstract concepts do have some force, but it's not clear exactly how strong a source of evidence we should take these to be. Then lastly, although many AI researchers actually have expressed concern about existential accidents, for example Stuart Russell, it does seem to be the case that many, and perhaps most, AI researchers who encounter at least abridged or summarized versions of these concerns tend to bounce off them or just find them not very plausible. I think we should take that seriously.</p> <p>I also have some more concrete concerns about writing on existential accidents. You should certainly take these concerns with a grain of salt because I am not a technical researcher, although I have talked to technical researchers who have essentially similar or even the same concerns. The general concern I have is that these toy scenarios are quite difficult to map onto something that looks more recognizably plausible. These scenarios often involve, again, massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible. This is a woolly issue; I would recommend checking out writing by Katja Grace or Paul Christiano online, which lays out some concerns about the plausibility of massive jumps.</p> <p>Another element of these narratives is that they often imagine some system which becomes quite generally capable and then is given a goal.
In some sense, this is the reverse of the way machine learning research tends to look today. At least very loosely speaking, you tend to specify a goal or some means of providing feedback, direct the behavior of a system, and then allow it to become more capable over time, as opposed to the reverse. It's also the case that these toy examples stress the nuances of human preferences, with the idea being that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them. But it's also the case in machine learning that we can train lots of systems to engage in behaviors that are actually quite nuanced and that we can't specify precisely. Recognizing faces from images is an example of this. So is flying a helicopter.</p> <p>It's really not clear exactly why human preferences in particular would be so fatally difficult to learn. So it's quite difficult to figure out how to map the toy examples onto something which looks more realistic.</p> <h3>Caveats</h3> <p>Some general caveats on the concerns I've expressed. None of my concerns are meant to be decisive. I've found, for example, that many people working in the field of AI safety in fact list somewhat different concerns as explanations for why they believe the area is very important. There are many more arguments that are shared informally, or that exist only in people's heads, and are currently unpublished. I really can't speak to exactly how compelling these are. The main point I want to stress here is essentially that when it comes to the writing which has actually been published, and which is out there for analysis, I don't think it's necessarily that forceful, and at the very least it's not decisive.</p> <p><img src="//" alt="How sure are we about this AI stuff (17)"></p> <p>So now I have some brief, practical implications, or thoughts on prioritization.
You may think, from all the stuff I've just said, that I'm quite skeptical about AI safety or governance as areas to work in. In fact, I'm actually fairly optimistic. My reasoning here is that I really don't think that there are any slam-dunks for improving the future. I'm not aware of any single cause area that seems very, very promising from the perspective of offering high assurance of long-term impact. I think that the fact that there are at least plausible pathways for impact by working on AI safety and AI governance puts it head and shoulders above most areas you might choose to work in. And AI safety and AI governance also stand out for being pretty extraordinarily neglected.</p> <p>Depending on how you count, there are probably fewer than a hundred people in the world working on technical safety issues or governance challenges with an eye towards very long-term impacts. And that's just truly, very surprisingly small. The overall point, though, is that the exact size of the bet that EA should make on artificial intelligence, the size of the portfolio that AI should take up, will depend on the strength of the arguments for focusing on AI. And most of those arguments still just aren't very fleshed out yet.</p> <p><img src="//" alt="How sure are we about this AI stuff (18)"></p> <p>I also have some broader epistemological concerns which connect to the concerns I've expressed. I think it's also possible that there are social factors relating to EA communities that might bias us to take an especially large interest in AI.</p> <p>One thing is just that AI is especially interesting or fun to talk about, especially compared to other cause areas. It's an interesting, kind of contrarian answer to the question of what is most important to work on. It's surprising in certain ways. And it's also now the case that interest in AI is to some extent an element of community culture.
People have an interest in it that goes beyond just the belief that it's an important area to work in. It definitely has a certain role in the conversations that people have casually, and in what people like to talk about. I think these wouldn't necessarily be that concerning, except that we can't really count on external feedback to push us back if we drift a bit.</p> <p>So first, it just seems to be empirically the case that skeptical AI researchers generally will not take the time to sit down and engage with all of the writing, and then explain carefully why they disagree with our concerns. So we can't really expect that much external feedback of that form. People who are skeptical or confused, but who are not AI researchers, or just generally not experts, may be concerned about sounding ignorant or dumb if they push back, and they also won't be inclined to become experts. We should also expect generally very weak feedback loops. If you're trying to influence the very long-run future, it's hard to tell how well you're doing, just because the long-run future hasn't happened yet and won't happen for a while.</p> <p>Generally, I think one thing to watch out for is justification drift. If we start to notice that the community's interest in AI stays constant, but the reasons given for focusing on it change over time, then this would be a potential check engine light, or at least a trigger to be especially self-conscious or self-critical, because that may be some indication of motivated reasoning going on.</p> <h3>Conclusion</h3> <p><img src="//" alt="How sure are we about this AI stuff (19)"></p> <p>I have just a handful of short takeaways. First, I think that not enough work has gone into analyzing the case for prioritizing AI. Existing published arguments are not decisive.
There may be many other possible arguments out there, which could be much more convincing or much more decisive, but those just aren't out there yet, and there hasn't been much written criticizing the stuff that's out there.</p> <p>For this reason, thinking about the case for prioritizing AI may be an especially high-impact thing to do, because it may shape the EA portfolio for years into the future. And just generally, we need to be quite conscious of possible community biases. It's possible that certain social factors will lead us to drift in what we prioritize in ways that we really should not be allowing to influence us. And just in general, if we're going to be putting substantial resources into anything as a community, we need to be especially certain that we understand why we're doing this, and we need to stay conscious that our reasons for getting interested in the first place continue to be good reasons. Thank you.</p> <h2>Questions</h2> <p><em>Question</em>: What advice would you give to someone who wants to do the kind of research that you are doing here, on the case for AI, as opposed to on AI itself?</p> <p><em>Ben</em>: Something that I believe would be extremely valuable is just basically talking to lots of people who are concerned about AI and asking them precisely what reasons they find compelling. I've started to do this a little bit recently, and it's actually been quite interesting that people seem to have pretty diverse reasons, and many of them are things that people want to write blog posts on, but just haven't. So I think this is low-hanging fruit that would be quite valuable: just talking to people who are concerned about AI, trying to understand exactly why they're concerned, and either writing up their ideas or helping them to do that. I think that would be very valuable and probably not that time-intensive either.</p> <p><em>Question</em>: Have you seen any of the justification drift that you alluded to?
Can you pinpoint that happening in the community?</p> <p><em>Ben</em>: Yeah. I think that's certainly happening to some extent. Even for myself, I believe that's happened to some extent. When I initially became interested in AI, I was especially concerned about these existential accidents. I think I now place relatively greater prominence on the case from instability, as I described it. And that's certainly, you know, one possible example of justification drift. It may be the case that this was actually a sensible way to shift emphasis, but it would be something of a warning sign. And I've also spoken to technical researchers who used to be especially concerned about this idea of an intelligence explosion or recursive self-improvement. These very large jumps. I have now spoken to a number of people who are still quite concerned about existential accidents, but who make arguments that don't hinge on there being one single massive jump in a single system.</p> <p><em>Question</em>: You made the analogy to the industrial revolution, and the 1780 Englishman who doesn't really have much ability to shape how the steam engine is going to be used. It seems intuitively quite right. The obvious counterpoint would be, well, AI is a problem-solving machine. There's something kind of different about it. I mean, does that not feel compelling to you, the sort of inherent differentness of AI?</p> <p><em>Ben</em>: So I think probably the strongest intuition is, you might think that there will eventually be a point where we start turning more and more responsibility over to automated systems or machines, and that there might eventually come a point where humans have almost no control over what's happening whatsoever, that we keep turning over more and more responsibility and there's a point where machines are in some sense in control and you can't back out. And you might have some sort of irreversible juncture here.
I definitely share, to some extent, the intuition that if you're looking over a very long time span, that is probably fairly plausible. The intuition I don't necessarily have is that this juncture is near: unless things go quite wrong, or happen in somewhat surprising ways, I don't anticipate that there will be a really irreversible juncture coming anytime soon. If, let's say, it takes a thousand years for control to be handed off, then I am not that optimistic about people having much control over what that handoff looks like by working on things today. But I certainly am not very confident.</p> <p><em>Question</em>: Are there any policies that you think a government should implement at this stage of the game, in light of the concerns around AI safety? And how would you allocate resources between existing issues and possible future risks?</p> <p><em>Ben</em>: Yeah, I am still quite hesitant, I think, to recommend very substantive policies that I think governments should be implementing today. I currently have a lot of agnosticism about what would be useful, and I think that most current existing issues that governments are making decisions on aren't necessarily that critical. I think there's lots of stuff that can be done that would be very valuable, like having stronger expertise or stronger lines of dialogue between the public and private sectors, and things like this. But I would be hesitant at this point to recommend a very concrete policy that at least I'm confident would be good to implement right now.</p> <p><em>Question</em>: You mentioned the concept of kind of a concrete decisive argument. Do you see concrete, decisive arguments for other cause areas that are somehow more concrete and decisive than for AI, and what is the difference?</p> <p><em>Ben</em>: Yeah.
So I guess I tried to allude to this a little bit, but I don't think that any cause area really has an especially decisive argument for being a great way to influence the future. For some, though, you can put a reasonably clear lower bound on how likely it is that working on them will be useful. For example, risk from nuclear war. It's fairly clear that it's at least plausible this could happen over the next century. Nuclear war has almost happened in the past, and the climate effects are speculative but at least somewhat well understood. And then there's this question of, if there were a nuclear war, how damaging is this? Do people eventually come back from this? And that's quite uncertain, but I think it'd be difficult to put above a 99% chance that people would come back from a nuclear war.</p> <p>So in that case you might have some sort of a clean lower bound on, let's say, working on nuclear risk. Or, quite similarly, working on pandemics. And I think for AI it's difficult to have that sort of confident lower bound. I actually tend to think, as I alluded to, that AI is probably or possibly still the most promising area, based on my current credences and its extreme neglectedness. But yeah, I don't think any cause area stands out as especially decisive as a great place to work.</p> <p><em>Question</em>: I'm an AI machine learning researcher, currently a PhD student, and I'm skeptical about the risk of AGI. How would you suggest that I contribute to the process of providing this feedback that you're identifying as a need?</p> <p><em>Ben</em>: Yeah, I mean, I think just a combination of in-person conversations and then, I think, even simple blog posts can be quite helpful. I think there's still been surprisingly little in the way of, let's say, something written online that I would point someone to who wants the skeptical case.
This actually is a big part of the reason I gave this talk, even though I consider myself not extremely well placed to give it, given that I am not a technical person. There's so little out there along these lines that there's low-hanging fruit, essentially.</p> <p><em>Question</em>: Prominent deep learning experts such as Yann LeCun and Andrew Ng do not seem to be worried about risks from superintelligence. Do you think that they have essentially the same view that you have, or are they coming at it from a different angle?</p> <p><em>Ben</em>: I'm not sure of their specific concerns. I know the classic thing that Andrew Ng always says: he compares it to worrying about overpopulation on Mars, where the suggestion is that these risks, if they materialize, are just so far away that it's really premature to worry about them. So it seems to be sort of an argument from timeline considerations. I'm actually not quite sure what his view is in terms of, if we were, let's say, 50 years in the future, would he think that this is a really great area to work on? I'm really not quite sure.</p> <p>I actually tend to think that the line of thinking that says, "Oh, this is so far away so we shouldn't work on it" just really isn't that compelling. It seems like we have a load of uncertainty about AI timelines. It seems like no one can be very confident about that. So it'd be hard to justify putting less than, let's say, a one percent chance on interesting things happening in the next 30 years or so. So I'm not quite sure about the extent of his concerns, but if they're based on timelines, I actually don't find them that compelling.</p> the-centre-for-effective-altruism 9sBAW3qKppnoG3QPq 2019-02-09T19:17:31.671Z Will MacAskill: Why should effective altruists embrace uncertainty? <p><em>This transcript of an EA Global talk, which CEA has lightly edited for clarity, is crossposted from <a href=""></a>.
You can also watch the talk on YouTube <a href="">here</a>.</em></p><p><em>Probabilistic thinking is only a few centuries old, we have very little understanding of how most of our actions affect the long-term future, and prominent members of the effective altruism community have changed their minds on crucial considerations before. These are just three of the reasons that Will MacAskill urges effective altruists to embrace uncertainty, and not become too attached to present views. This talk was the closing talk for Effective Altruism Global 2018: San Francisco.</em></p><h2>The Talk</h2><p>Thanks so much for an awesome conference. I think this may be my favorite EAG ever actually, and we have certain people to thank for that. So let&#x27;s put a big round of applause for Katie, Amy, Julia, Barry, and Kerri, who did an awesome job. Thank you.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>Now that I&#x27;ve been to the TED conference, I know that they have an army of 500 people running it, and we have five, which shows just how dedicated they are. But we also had an amazing team of volunteers, led by Tessa. So, a round of applause for their help as well. You all crushed it, so thank you.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>Let&#x27;s look at a few conference highlights. There was tons of good stuff at the conference, can&#x27;t talk about it all, but there were many amazing talks. Sadly, every EAG, I end up going to about zero, but I heard they were really good. So, I hope you had a good time there. We had awesome VR.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>With the VR from Animal Equality, I talked about the idea of really trying to get in touch with particular intuitions.
So I hope many of you had a chance to experience that.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>We also had loads of fun along the way. This photo makes us look like we had a kind of rave room going on. I want to draw particular attention to Igor&#x27;s blank stare, but with a little smile. So you know, I want to know what he was having. And then, most importantly, we had great conversations.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>So look at this photo. Look how nice Max and Becky look. Just like, you know, you want them to be your kids or something like that. It&#x27;s kind of heartwarming.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>My own personal highlight was getting to talk with Holden, and in particular him telling us about his love of stuffed animals. You might not know that from his Open Philanthropy posts, but he&#x27;s going to write about it in the future.</p><p>I talked earlier about having a different kind of gestalt, a different worldview. That feeling of gestalt shift was actually most present for me in some stuff Holden said. In particular, he emphasized the importance of self-care: this idea that he worked out the average number of hours he works in a week, and that&#x27;s his fixed point; he can&#x27;t really work harder than that. And that there&#x27;s no reason to feel bad about it. And yeah, in my own case, I was like, &quot;Well, obviously I kind of know that on an abstract level, or something.&quot; But hearing it from someone who I admire as much, and who I know is as productive as Holden is, really helped turn that into something that I feel... now I think I am able to feel that on more of a gut level.</p><p>So, the theme of the conference was Stay Curious. And I talked earlier on about the contrast between Athens and Sparta.
I think we definitely got a good demonstration that you are excellent Athenians, excellent philosophers. In particular, I told the story about philosophers at this old conference not being able to make it to the bar after the conference. Well, last night, attempting to go to the speakers&#x27; reception, there were two groups of us: one goes into an elevator before us; me and my group go in, go down, and the others just aren&#x27;t there. Scott Garrabrant tells me they went from the fourth floor down to the first, the doors opened, the doors closed again, and they went right back up to the fourth. So, I don&#x27;t want to say I told you so, but yeah, we&#x27;re definitely doing well on the philosopher side of things.</p><p>So, we talked about being curious over the course of this conference. Now I&#x27;m going to talk a bit about taking that attitude and continuing it over the following year. And I&#x27;m going to quickly give three arguments, or ways of thinking, just to emphasize how little we know, and how important it therefore is to keep an open mind.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>The first argument is just how recent many intellectual innovations were. The idea of probability theory is only a few centuries old. So for most of human civilization, we just didn&#x27;t really have the concept of thinking probabilistically. If we&#x27;d made an argument like, &quot;Oh, we&#x27;re really concerned about the risk of human extinction; not that we think it&#x27;s definitely going to happen, but there&#x27;s some chance, and it&#x27;d be really bad,&quot; people would just have said, &quot;I don&#x27;t get it.&quot;</p><p>I can&#x27;t even really imagine what it&#x27;d be like to just not have the concept of probability, and yet for thousands of years people were operating without it. Simple utilitarianism is another example.
I mean, this kind of goes back a little bit to the Mohists in early China, but at least in its modern form, it was only developed in the 18th century. And while effective altruism is definitely not utilitarianism, it&#x27;s clearly part of a similar intellectual current. And given that this moral view, which I think has one of the best shots of being the correct moral view, was only developed a few centuries ago, well, who knows what the coming centuries hold?</p><p>More recent still is the idea of evidence-based medicine. There was almost no attempt to apply the experimental method in medicine more than 80 years ago; the practice only really got started in the late 1960s, and the term &quot;evidence-based medicine&quot; itself only arose in the 90s. And again, this is just such an obvious part of our worldview; it&#x27;s amazing that it didn&#x27;t exist before that point. The whole field of population ethics, again, what we think of as among the most fundamental crucial considerations, only really came to be discussed with Parfit&#x27;s Reasons and Persons, published in 1984. The use of randomized controlled trials in development economics, at least outside the area of health care, again, dates only to the 1990s, still very recent in societal terms.</p><p>And then the whole idea of AI safety, or the importance of ensuring that artificial intelligence doesn&#x27;t have very bad consequences, again, really dates from the early 2000s. So this trend should really make us appreciate that there are so many developments that should cause radical worldview changes. I think it should definitely raise the question: &quot;What are the further developments over the coming decades that might really switch our views again?&quot;</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>The second argument is, more narrowly, the really big updates that people in the EA community have made in the past.
So again, in my conversation with Holden, he talked about how for very many years he did not take seriously the loopy ideas of effective altruism. But, as he&#x27;s written about publicly, he&#x27;s really massively changed his view on things like considerations of the long-term future, and the moral status of nonhuman animals as well. And again, these are huge, worldview-changing things.</p><p>In my own case as well, certainly when I started out with effective altruism, I really thought that there&#x27;s a body of people who form the scientific establishment, and they work on stuff, and then they produce answers, and that&#x27;s knowledge. I thought you could just act on that, and that was the way the scientific establishment worked. Turns out things are a little bit more complicated than that, a little bit more human, and that, unfortunately, the state of empirical science is a lot less robust than I thought. That came out in the early days of relying on, say, the Disease Control Priorities Project, which had much shakier methodology, and in fact mistakes, that I really, really wouldn&#x27;t have predicted at the time. And that&#x27;s definitely been a big shift in my own way of understanding the world.</p><p>And then, in two different ways, there are my colleagues at FHI and their views on nanotechnology, where it really used to be the case that atomically precise manufacturing was regarded as one of the existential risks. I think people have just converged on thinking that that argument was very much overblown. On the other side, Eric Drexler spent most of his life saying, &quot;Actually, atomically precise manufacturing is the panacea. We can be in a post-scarcity world. We can have radical abundance. This is going to be amazing.&quot; And then he was able to change his mind and actually think, &quot;Well actually, I&#x27;m not sure. It might be good, it might be bad.
I&#x27;m not sure,&quot; despite having worked on and promoted these ideas for decades. This is actually kind of amazing, that people in the community are able to have shifts like that.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>Then the third argument I&#x27;ll give you is this: if we&#x27;ve made these updates before, perhaps we will make equally significant updates again in the future. This third class of arguments is just all the categories of things that we still really don&#x27;t understand. I mean, the thing I&#x27;m focused on most at the moment is trying to build this field of global priorities research to try and address some of these questions, and get more smart people working on them. But one question is just how we should weigh probabilities against very large amounts of value. We clearly think that most of the time something like expected utility theory gets the right answers. But then people start to get a bit antsy about it when it comes to very, very low probabilities of sufficiently large amounts of value.</p><p>Then we start thinking about, well, what about infinite amounts of value? If we&#x27;re happy to think about very, very large amounts of value, as long-termists often are, if we think it&#x27;s not wacky to talk about that, why not about infinite amounts? But then you&#x27;re really starting to throw a spanner in the works of any sort of reasonable decision theory.</p><p>And it just is the case that we have no real idea at the moment how to handle this problem. Similarly with something Open Phil has worked a lot on: which entities are morally relevant? We&#x27;re very positive about expanding the moral circle, but how far should that go? Nonhuman animals, of course. But what about insects? What about plants? It seems like we have a strong intuition that plants don&#x27;t have consciousness, and perhaps they don&#x27;t count.
We don&#x27;t really have any good underlying understanding of why that is the case. There are plenty of people trying to work on this at the cutting edge, like the Qualia Research Institute, among others, but it&#x27;s exceptionally difficult. And if we don&#x27;t know that, then there&#x27;s a ton we don&#x27;t know about doing good.</p><p>Another category that we&#x27;re ignorant about is indirect effects and moral cluelessness. We know that most of the impact of our actions is in unpredictable effects over the very, very long term, because of butterfly effects and so on, and because of the ways that our actions will change who is born in the future. We know that that&#x27;s actually where most of the action is, and it&#x27;s just that we can&#x27;t predict it at all. So we know we&#x27;re just peering very dimly into the fog of the future. And there&#x27;s been basically almost no work on really trying to model that, really trying to think, well, if you take this sort of action in this country, how does that differ from this other sort of action in this other country, in terms of its very long-run effects?</p><p>So it&#x27;s not just that we&#x27;ve got this general abstract argument, looking inductively from experience at how we&#x27;ve, as a society and as a community, changed our minds in the past. It&#x27;s also that we just know there are tons of things that we don&#x27;t understand. So I think what&#x27;s appropriate is an attitude of deep, radical uncertainty when we&#x27;re trying our best to do good. But what kind of concrete implications does this have? Well, I think there are three main things.</p><p></p><span><figure><img src="" class="draft-image " style="" /></figure></span><p></p><p>One is just actually trying to get more information, so continuing to do research, continuing to engage in intellectual inquiry.
The second is to keep our options open as much as possible, ensuring that we&#x27;re not closing doors: even options that don&#x27;t look too promising now might turn out to be much more promising once we gain more information going into the future, and once we change our minds. The third is plausibly pursuing things that are convergently good: things that look like, &quot;Yeah, this is a really robustly good thing to do from a wide variety of perspectives or worldviews.&quot; Reducing the chance of a great power war, for example. Even if my empirical beliefs about the future changed a lot, even if my moral beliefs changed a lot, I&#x27;d still feel very confident that reducing the chance of a major war in our lifetime would be a very good thing to do.</p><p>So, the thing I want to emphasize to you most is keeping this attitude of uncertainty and exploration in what you&#x27;re doing over the coming year. I&#x27;ve emphasized Athens in response to this Athens versus Sparta dilemma, trying to bear in mind that we want to stay uncertain. We want to keep conformity at the meta-level and cooperate and sympathize with people who have very different object-level beliefs to us. And so, above all, we want to keep exploring and stay curious.</p> the-centre-for-effective-altruism CWT3FfG7eEDrv7aHQ 2018-12-04T16:23:44.260Z