milan_griffes feed - EA Forum Reader milan_griffes’s posts and comments on the Effective Altruism Forum en-us Comment by Milan_Griffes on Confused about AI research as a means of addressing AI risk <p>It&#x27;s in <a href="">their charter</a>: </p><blockquote>Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”</blockquote> milan_griffes mesDjHTYa3qnpPaMM 2019-02-21T01:07:26.369Z Comment by Milan_Griffes on Confused about AI research as a means of addressing AI risk <blockquote>I&#x27;d like to understand in more detail how this analogy breaks down.</blockquote><p></p><p>I think the important disanalogy is that once you&#x27;ve created a safe AGI of sufficient power, you win. (Because it&#x27;s an AGI, so it can go around doing powerful AGI stuff – other projects could be controlled or purchased, etc.)</p><p>It&#x27;s not for sure the case that first-past-the-post will be the end-of-the-day winner, but being first-past-the-post is probably a big advantage. Bostrom has some discussion of this in the multipolar / singleton section of <em>Superintelligence</em>, if I recall correctly.</p><p>Drexler&#x27;s <a href="">Comprehensive AI Services</a> is an alternative framing for what we mean by AGI. Probably relevant here, though I haven&#x27;t engaged closely with it yet.</p> milan_griffes heXFWKHEdpy27A5Hy 2019-02-21T00:50:34.503Z Comment by Milan_Griffes on Confused about AI research as a means of addressing AI risk <blockquote>From how Paul Christiano frames it, it seems like it&#x27;s &quot;create AGI, and make sure it&#x27;s aligned.&quot;</blockquote><p></p><p>I think that&#x27;s basically right. 
I believe something like that was Eliezer&#x27;s plan too, way back in the day, but then he updated to believing that we don&#x27;t have the basic ethical, decision theoretic, and philosophical stuff figured out that&#x27;s prerequisite to actually making a safe AGI. More on that in his <a href="">Rocket Alignment Dialogue</a>.</p> milan_griffes aZJQNWAusMQsWh7o4 2019-02-21T00:44:55.407Z Comment by Milan_Griffes on Confused about AI research as a means of addressing AI risk <p>+1 to Paul&#x27;s 80,000 Hours interview being awesome</p> milan_griffes RyqFtQZ8eRySe2uhZ 2019-02-21T00:42:06.435Z Comment by Milan_Griffes on Time-series data for income & happiness? <p>That definitely matches my intuition too.</p> milan_griffes 3PdDLTmwpr5eybNHc 2019-02-20T18:05:42.364Z Comment by Milan_Griffes on Impact Prizes as an alternative to Certificates of Impact <p>Is there a postmortem somewhere on <a href="">Certificates of Impact</a> &amp; the challenges faced in implementing them?</p> milan_griffes enazzjAfvJ34Bpi4P 2019-02-20T17:42:14.311Z Comment by Milan_Griffes on Open Thread: What’s The Second-Best Cause? <p>I think causes that are more <a href="">robust to cluelessness</a> should be higher priority than causes that are less so.</p><p>I feel pretty uncertain about which cause in the &quot;robust-to-cluelessness&quot; class should be second priority. </p><p>If I had to give an ordered list, I&#x27;d say:</p><p>1. AI alignment work</p><p>2. Work to increase the number of people that are both well-intentioned &amp; highly capable</p><p>3. ...</p><p></p> milan_griffes dzp9xmw7exwsvzMkg 2019-02-20T17:14:49.504Z Comment by Milan_Griffes on Impact Prizes as an alternative to Certificates of Impact <p>Got it.
So this would go something like:</p><ul><li>There&#x27;s a prize!</li><li>I&#x27;m going to do <em>X</em>, which I think will win the prize!</li><li>Do you want to buy my rights to the prize, once I win it after doing <em>X </em>?</li></ul><p>Seems like this will select for sales &amp; persuasion ability (which could be an important quality for successfully executing projects).</p> milan_griffes jxFjRhRc7rE2mxm9p 2019-02-20T15:21:49.627Z Comment by Milan_Griffes on Impact Prizes as an alternative to Certificates of Impact <p>So the prize money gets paid out in 2022, in the tl;dr example? (I&#x27;m a little unclear about that from my quick read.)</p><p>This means that the Impact Prize wouldn&#x27;t help teams fund their work during the 2019-22 period. Am I understanding that correctly?</p> milan_griffes hsR8FFSG6JsvNCb8Q 2019-02-20T05:43:31.524Z Time-series data for income & happiness? <p><em>Previous: <a href="">Giving more won&#x27;t make you happier</a></em></p><p>This evening, I was listening to <a href="">this old episode of EconTalk</a> (<a href="">a</a>), wherein Richard Epstein discusses income, wealth, and happiness:</p><blockquote>Theory of revealed preferences. We see people working hard to get raises. The happiness literature suggests that everyone is under a deep delusion about what makes them happy, and the guys running the survey know better. </blockquote><blockquote>Methodological fallacy: data seem to suggest that when you have higher incomes you don&#x27;t necessarily have a whole lot higher level of happiness... People make a pact – I&#x27;ll be miserable for a few years if you&#x27;ll make me rich in the longer run. </blockquote><blockquote>So, in the short run they report being less happy. 
They don&#x27;t want to be unhappy forever, so eventually they&#x27;ll take a lower paying job – and report being happier.</blockquote><p>This consideration – people with intense, high-paying jobs being less happy when surveyed but (knowingly) doing this for increased future happiness – seems important when thinking about the happiness&lt;&gt;income relationship. We totally overlooked it in <a href="">the recent Forum post</a>.</p><p>Does anyone know of longitudinal studies that look at happiness of people over time, as they move into and out of high-intensity jobs? </p><p>I&#x27;d like to learn more about this.</p> milan_griffes DtSXXdZnEb2mH3jGs 2019-02-20T05:38:23.800Z Comment by Milan_Griffes on You have more than one goal, and that's fine <p>Could you say a little more about how you decide what size each pot of money should be?</p> milan_griffes CLQmchz8Ar24DvyuW 2019-02-20T05:18:33.570Z Comment by Milan_Griffes on Major Donation: Long Term Future Fund Application Extended 1 Week <p>If someone&#x27;s already applied to the Fund for this round, do they need to take any further action? (in light of the new donation &amp; deadline extension)</p> milan_griffes WgjpYGi6Pdgx7qbcm 2019-02-17T15:31:35.038Z Comment by Milan_Griffes on The Need for and Viability of an Effective Altruism Academy <p>The <a href="">whole thread</a> around the comment you linked to seems relevant to this.</p> milan_griffes z43mXPyruSb5n4b97 2019-02-16T00:42:24.561Z Comment by Milan_Griffes on The Need for and Viability of an Effective Altruism Academy <p>Oh yeah, good call. Forgot about the Pareto Fellowship.</p> milan_griffes wPykAoSp2MpfGQZyE 2019-02-16T00:39:07.802Z Comment by Milan_Griffes on The Need for and Viability of an Effective Altruism Academy <p><a href="">Paradigm Academy</a> comes to mind. 
Curious about how you see your proposal as being different from that.</p> milan_griffes N3d7XBfiCsSXhuEdr 2019-02-15T23:10:01.746Z Comment by Milan_Griffes on EA Community Building Grants Update <p>Thanks for all that you&#x27;re doing to make REACH happen!</p> milan_griffes rpTNcSCymtevP74Gk 2019-02-15T22:57:10.607Z Comment by Milan_Griffes on Introducing GPI's new research agenda <p>Very nice. </p><p>Is there a quick way to use the agenda to see GPI&#x27;s research prioritization? (e.g. perhaps the table of contents is ordered from high-to-low priority?)</p> milan_griffes MjQBSTSqmu4wC4EDR 2019-02-15T22:55:34.555Z Comment by Milan_Griffes on Three Biases That Made Me Believe in AI Risk <p>Me too!</p> milan_griffes ioAGskdnd5EkzjkJj 2019-02-14T16:28:10.073Z Comment by Milan_Griffes on A system for scoring political candidates. RFC (request for comments) on methodology and positions <blockquote>Comments on any issue are generally welcome but naturally you should try to focus on major issues rather than minor ones. If you post a long line of arguments about education policy for instance, I might not get around to reading and fact-checking the whole thing, because the model only gives a very small weight to education policy right now (0.01) so it won&#x27;t make a big difference either way. But if you say something about immigration, no matter how nuanced, I will pay close attention because it has a very high weight right now (2).</blockquote><p></p><p>I think this begs the question. 
</p><p>If modeler attention is distributed in proportion to the model&#x27;s current weighting (such that discussion of high-weighted issues receives more attention than discussion of low-weighted issues), it&#x27;ll be hard to identify mistakes in the current weighting.</p> milan_griffes vrzjapFaQPFb3vkFy 2019-02-13T17:45:09.289Z Comment by Milan_Griffes on EA grants available to individuals (crosspost from LessWrong) <p><a href="">YC 120</a> isn&#x27;t quite a funding source, but getting in would connect you with a bunch of possible funders. Applications close on Feb 18th.</p> milan_griffes XwXeGxyBdRaeTs2wL 2019-02-13T06:28:33.914Z Comment by Milan_Griffes on EA grants available to individuals (crosspost from LessWrong) <p>For sure. Also <a href="">check with Tyler before applying</a> because there&#x27;s some stuff he definitely won&#x27;t fund (and he replies to his email).</p> milan_griffes SpNSQdRszdpWnaZFZ 2019-02-13T06:26:53.215Z Comment by Milan_Griffes on The Narrowing Circle (Gwern) <p>Eh, but nowadays we&#x27;re &quot;responsible&quot; in a way that carries dark undertones. </p><p>Many US elderly aren&#x27;t embedded in multigenerational communities, but instead warehoused in nursing homes (where they aren&#x27;t in regular contact with their families &amp; don&#x27;t have a clear role to play in society).</p><p>Hard to say whether this is an improvement over how things were 100 years ago. I do know that I&#x27;m personally afraid of ending up in a nursing home &amp; plan to make arrangements to reduce the probability of such.</p> milan_griffes niKnbeDBASMhewMut 2019-02-13T06:21:04.000Z Comment by Milan_Griffes on The Narrowing Circle (Gwern) <p>Seems like a real shift. (Perhaps driven by the creation of a nursing home industry?)</p> milan_griffes 4xmErdXGt48kDw4ik 2019-02-12T15:29:11.766Z Comment by Milan_Griffes on What we talk about when we talk about life satisfaction <p>Thanks!
This is from the Oxford Handbook of Happiness?</p> milan_griffes ivSrryic9Pw47mCEo 2019-02-11T19:00:10.650Z Comment by Milan_Griffes on Arguments for moral indefinability <p>This is great – thank you for taking the time to write it up with such care.</p><p>I see overlap with <a href="">consequentialist cluelessness</a> (perhaps unsurprising as that&#x27;s been a hobbyhorse of mine lately). </p> milan_griffes tnoAGHCPjbA3boqwQ 2019-02-11T18:54:55.100Z Comment by Milan_Griffes on My positive experience taking the antidepressant Wellbutrin / Bupropion, & why maybe you should try it too <p>Was chatting with <a href="">Gwern</a> about this. An excerpt of their thoughts (published with permission):</p><blockquote>Wellbutrin/Buproprion is a weird one. Comes up <a href="">often on SSC</a> and elsewhere as surprisingly effective and side-effect free, but also with a very wide variance and messy mechanism (even for antidepressants) so anecdotes differ dramatically. </blockquote><blockquote>With antidepressants, you&#x27;re usually talking multi-week onset and washout periods, so blinded self-experiments would take something like half a year for decent sample sizes. It&#x27;s not that easy to get if you aren&#x27;t willing to go through a doctor (ie no easy ordering online from a DNM or clearnet site, like modafinil)... </blockquote><blockquote>Finally, as far as I can tell, my personal problems have more to do with anxiety than depression, and anti-anxiety is not what buproprion is generally described as best for, so my own benefit is probably less than usual. I thought about it a little but decided it was too weird and hard to get, and self-experiments would take too long.</blockquote> milan_griffes Qeya6qfndGnKKmHJc 2019-02-06T18:37:17.660Z Comment by Milan_Griffes on My positive experience taking the antidepressant Wellbutrin / Bupropion, & why maybe you should try it too <p>Nice. 
I love ideas with the shape of &quot;Consider trying this thing because the costs are low, even if you&#x27;re not sure if it will help or pretty sure it won&#x27;t.&quot;</p><blockquote>The main confounder I worry about is that I changed what I spent most of my time doing at work around that time, and I think that also improved my life.</blockquote><p>Given confounders like this, it&#x27;d be great to see someone run a <a href="">Gwern-style</a> controlled trial on their Wellbutrin use. The value of information would probably be quite high.</p><p>It&#x27;d be sorta tricky to do given the on- and off-ramping effects of the drug, so perhaps should only be undertaken by someone with sufficient <a href="">slack</a> to accommodate.</p> milan_griffes q6RmBtPMPo7EFGSXu 2019-02-06T00:29:04.235Z Comment by Milan_Griffes on EA Boston 2018 Year in Review <p>Co-authorship! </p><p>To mods: how is karma distributed for co-authored posts?</p> milan_griffes idBDnn72uYzfQ5gX4 2019-02-06T00:20:55.974Z Comment by Milan_Griffes on What we talk about when we talk about life satisfaction <p>Got it. I&#x27;m somewhat more bearish than you re: academic philosophers sharing my goals here. (Though some definitely do! Generalizations are hard.)</p> milan_griffes p6TwgF6jqix72qqwi 2019-02-06T00:18:09.271Z Comment by Milan_Griffes on What we talk about when we talk about life satisfaction <p>Huh, I feel like the same issue would arise for (e.g.) eudaimonia, if we tried to spec out what it is we mean exactly by &quot;eudaimonia.&quot;</p><p>(My model here is that the psychological constructs are an attempt at specifying + making quantifiable concepts that philosophy had identified but left vague.) 
</p> milan_griffes ZzkKxsddoe8zj9xSF 2019-02-05T20:29:56.631Z Comment by Milan_Griffes on Near-term focus, robustness, and flow-through effects <p>Most of my impulse towards short-termism arises from concerns about cluelessness, which I wrote about <a href="">here</a>.</p><p>Holding a person-affecting ethic is another reason to prioritize the short-term; Michael Plant argues for the person-affecting view <a href="">here</a>.</p> milan_griffes 3MbTbeBp9nnKDExwQ 2019-02-05T15:12:42.959Z What we talk about when we talk about life satisfaction <p><em>Epistemic status: exploring. Previous <a href="">related discussion</a>.</em></p><p>I feel confused about what people are talking about when they talk about life satisfaction scales.</p><p>You know, this kind of question: &quot;how satisfied are you with your life, on a scale of 0 to 10?&quot;</p><p>(Actual life satisfaction scales are <u><a href="">somewhat more nuanced</a></u> (<u><a href="">a</a></u>), but the confusion I&#x27;m pointing to persists.)</p><p><strong>The most satisfying life imaginable</strong></p><p>On a 0-to-10 scale, does 10 mean &quot;the most satisfying life I can imagine?&quot;</p><p>But given how poor our introspective access is, why should we trust our judgments about what possible life-shape would be most satisfying?</p><p>The difficulty here sharpens when reflecting on how satisfaction preferences morph over time: my 5-year-old self had a very different preference-set than my 20-something self, and I&#x27;d expect my middle-aged self to have quite a different preference-set than my 20-something self.</p><p>Perhaps we mean something like &quot;the most satisfying life I can imagine for myself at this point in my life, given what I know about myself &amp; my preferences.&quot; But this is problematic – if someone was extremely satisfied (such that they&#x27;d rate themselves a 10), but would become even more satisfied if Improvement <em>X</em> were introduced, shouldn&#x27;t the scale be able to 
accommodate their perceived increase in satisfaction? (i.e. They weren&#x27;t really at a 10 before receiving Improvement <em>X</em> after all, if their satisfaction improved upon receiving it. But under this definition, the extremely satisfied person was appropriately rating themselves a 10 beforehand.)</p><p><strong>The most satisfying life, objectively</strong></p><p>On a 0-to-10 scale, does 10 mean &quot;the most satisfying life, objectively?&quot;</p><p>But given the enormous <u><a href="">state-space</a></u> of reality (which remains truly enormous even after being reduced by qualifiers like &quot;reality ordered such that humans exist&quot;), why should we be confident that the states we&#x27;re familiar with overlap with the states that are objectively most satisfying?</p><p>The difficulty here sharpens when we factor in reports of extremely satisfying states unlocked by esoteric practices. (Sex! Drugs! Enlightenment!) Reports like this crop up frequently enough that it seems hasty to dismiss them out of hand without first investigating (e.g. 
reports of enlightenment states from this neighborhood of the social graph: <u><a href="">1</a></u>, <u><a href="">2</a></u>, <u><a href="">3</a></u>, <u><a href="">4</a></u>, <u><a href="">5</a></u>).</p><p>The difficulty sharpens even further given the lack of consensus around what life satisfaction is – the Evangelical model of a satisfying life is very different from the Buddhist one.</p><p><strong>The most satisfying life, in practice</strong></p><p>I think that in practice, a 10 on a 0-to-10 scale means something like &quot;the most satisfying my life can be, benchmarked on all the ways my life has been so far plus the nearest neighbors of those.&quot;</p><p>This seems okay, but plausibly forecloses on a large space of awesomely satisfying lives that look very different from one&#x27;s current benchmark.</p><p>So I don&#x27;t really know what we&#x27;re talking about when we talk about life satisfaction scales.</p><hr class="dividerBlock"/><p><em>Cross-posted to <a href="">LessWrong</a> &amp; <a href="">my blog</a>.</em></p> milan_griffes zAPWr9eGtWkc8nuyH 2019-02-04T23:51:06.245Z Comment by Milan_Griffes on EA Hotel Fundraiser 2: current guests and their projects <p>Great to see so many folks working on cool stuff at the EA Hotel! </p><p>Thank you for taking the time to write this up, and for everything else you&#x27;ve done to make this happen.</p> milan_griffes 9THTsH2y5znJ35HpM 2019-02-04T22:35:38.247Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <blockquote>The same neglect that potentially makes AI investments a good deal can also make AI philanthropy a better deal.</blockquote><p>Makes sense.</p><p>I realize I was writing from the perspective of a small-scale donor (whose donations trade off meaningfully against their saving &amp; consumption goals). 
</p><p>From the perspective of a fully altruistic donor (who&#x27;s not thinking about such trade-offs), doing current AI philanthropy seems really good (if the donor thinks current opportunities are sensible bets).</p> milan_griffes McDwgWrmBk4Lj8zR6 2019-02-04T18:26:30.344Z Comment by Milan_Griffes on [deleted post] <p>SNP = &quot;single-nucleotide polymorphism&quot;?</p><p>Perhaps add a definition in-line at first use as I had to google for that &amp; don&#x27;t know enough genetics to be confident that it&#x27;s right.</p> milan_griffes TbfjZEqRxYCdAjsir 2019-02-04T17:34:22.533Z Comment by Milan_Griffes on Latest Research and Updates for January 2019 <p>+1</p><p>Does anyone know how the BCC vertical came about?</p><p>Does it have any direct connection with EA?</p> milan_griffes 8NZNkEQvL6Sp7NjHb 2019-01-31T17:18:33.637Z Comment by Milan_Griffes on Latest Research and Updates for January 2019 <blockquote>Emergent Ventures is <a href="">looking to fund projects</a> on &quot;advancing humane solutions to those facing adversity – based on tolerance, universality, and cooperative processes&quot;</blockquote><p>Recommend shooting Tyler Cowen an email checking about the chances for your idea before submitting an app. He&#x27;s pretty responsive to email &amp; there are areas Emergent Ventures almost certainly won&#x27;t fund, so checking first can save a bunch of time.</p> milan_griffes vy2s8N7R3gCoQ3HkE 2019-01-31T17:17:26.510Z Comment by Milan_Griffes on Cost-Effectiveness of Aging Research <blockquote>GiveWell <a href="">estimates</a> a cost of $1965 for a gain of <a href="">~8</a> DALY-equivalents, or $437.50 per DALY, from giving malaria-preventing mosquito nets to children in developing countries.</blockquote><p>Just flagging that GiveWell&#x27;s view about mosquito net DALYs has changed a lot:</p><ul><li>In 2015, I believe they were modeling each life saved by mosquito nets as being <a href="">equivalent to 36.53 DALYs</a>, following <a href="">Lopez et al. 
2006</a>, Table 5.1 p. 402</li><li>In 2016, they modeled each under-age-5 life saved by mosquito nets as being <a href="">equivalent to 7 DALYs</a> (presumably following an intuition that young infants don&#x27;t yet have a fully formed personhood &amp; thus have less moral patienthood than people above the age of 5)</li><li>In 2017, they <a href="">stopped using DALYs altogether</a>, noting &quot;We felt that using the DALY framework in an unconventional way could lead to confusion, while strictly adhering to the conventional version of the framework would prevent some individuals from adequately accounting for their views within our CEA.&quot; </li></ul> milan_griffes BcZjMdMB7siusHCdK 2019-01-31T17:13:15.087Z Comment by Milan_Griffes on What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? <p>Ballot initiatives, at least in the US.</p> milan_griffes 4avpZn7sqs7eskEMH 2019-01-30T20:00:04.002Z Comment by Milan_Griffes on Is intellectual work better construed as exploration or performance? <p>Yeah, I&#x27;ve realized I&#x27;m most interested in the question of which metaphor is better to be holding while doing intellectual work.</p><p>See <a href="">this comment</a>.</p> milan_griffes YjoLFYYrLq94X3JA2 2019-01-29T18:13:22.085Z Comment by Milan_Griffes on Is intellectual work better construed as exploration or performance? <p>Thanks, I found this helpful. TED talks are a great example of intellectual performance without a negative connotation.</p><p>I&#x27;ve realized I&#x27;m most interested in the question of which metaphor to be holding while doing intellectual work. </p><p>On that, I think it makes sense to be (almost) exclusively using the &quot;exploration&quot; metaphor when doing intellectual work.</p><p>Then, it seems good to switch to the &quot;performance&quot; metaphor when it&#x27;s time to propagate ideas (or hand off to a partner specialized in intellectual performance). 
</p><p><strong>Open question for me:</strong> Is it costly to grow skillful in intellectual performance? Does it make one&#x27;s intellectual work worse / less truth-seeking? (My intuition is &quot;yes, it&#x27;s costly&quot; but seems plausible that the performance skill could be safely compartmentalized.)</p> milan_griffes yvBq8Gbjc7ftf2WjK 2019-01-28T18:08:11.487Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <blockquote>I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes.</blockquote><p>Agree this is important. As I&#x27;ve thought about it some more, it appears quite complicated. Also seems important to have a view based on more than rough intuition, as it bears on the donation behavior of a lot of EA.</p><p>I&#x27;d probably benefit from having a formal model here, so I might make one.</p> milan_griffes mGKKZpTeiqHhosuQJ 2019-01-27T18:04:07.785Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <p>Thanks for tying this to mission hedging – definitely seems related.</p> milan_griffes trbzkmkoYshoywX7p 2019-01-27T18:01:52.199Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <blockquote>Milan is asserting a company that develops advanced AI capabilities in the future will likely generate higher returns than the stock market <em>after</em> it has developed these capabilities.</blockquote><p>Perhaps that, but even if they don&#x27;t, the returns from a market-tracking index fund could be very high in the case of transformative AI.</p><p>I&#x27;m imagining two scenarios:</p><p>1. AI research progresses &amp; AI companies start to have higher-than-average returns </p><p>2. 
AI research progresses &amp; the returns from this trickle through the whole market (but AI companies don&#x27;t have higher-than-average returns)</p><p>A version of the argument applies to either scenario.</p> milan_griffes eeS6XJD38vwS45Zqm 2019-01-27T18:00:44.982Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <blockquote>The second is that the growth rate of the companies you invest in must exceed the rate at which the marginal cost of doing good increases, due to low-hanging fruit getting picked and due to lost opportunities for compounding.</blockquote><p></p><p>Michael Dickens engages with something similar in <a href="">this post</a>.</p><p>In the case of transformative, slow-takeoff AI driven by for-profit companies, it seems reasonable to assume that the economy is going to grow faster than the marginal cost of doing good, because gains from AI seem unlikely to be evenly distributed.</p><p></p><blockquote>The third is that the growth potential of AI companies isn&#x27;t already priced in, in a way that reduces your expected returns to be no better than index funds.</blockquote><p>I&#x27;m unsure whether AI company growth is adequately priced in or not. </p><p>If it is, I think the argument still holds. The returns from an index fund could be very high in the case of transformative AI, so holding index funds would probably be better than donating now in that case. </p><p>See also the discussion <a href="">here</a> &amp; <a href="">here</a>.</p> milan_griffes XpMbvoGEg2xdRwCaz 2019-01-26T16:42:52.008Z Is intellectual work better construed as exploration or performance? <p><em>Cross-posted to <a href="">LessWrong</a>.</em></p><p>I notice I rely on two metaphors of intellectual work:</p><p>1. <strong>intellectual work as exploration – </strong>intellectual work is an expedition through unknown territory (a la <a href="">Meru</a>, a la <a href="">Amundsen &amp; the South Pole</a>). 
It&#x27;s unclear whether the expedition will be successful; the explorers band together to function as one unit &amp; support each other; the value of the work is largely &quot;in the moment&quot; / &quot;because it&#x27;s there&quot;, the success of the exploration is mostly determined by objective criteria.</p><p>Examples: Andrew Wiles spending six years in secrecy <a href="'s_Last_Theorem">to prove Fermat&#x27;s Last Theorem</a>, Distill&#x27;s <a href="">essays on machine learning</a>, Robert Caro&#x27;s <a href="">books</a>, Donne Martin&#x27;s <a href="">data science portfolio</a> (clearly a labor of love)</p><p>2. <strong>intellectual work as performance</strong> – intellectual work is a performative act with an audience (a la <a href="">Black Swan</a>, a la Super Bowl halftime shows). It&#x27;s not clear that any given performance will succeed, but there will always be a &quot;best performance&quot;; performers tend to compete &amp; form factions; the value of the work accrues afterward / the work itself is instrumental; the success of the performance is mostly determined by subjective criteria.</p><p>Examples: journal <a href="">impact factors</a>, any social science result that&#x27;s <a href="">published but fails to replicate</a>, academic dogfights on Twitter, <a href="">TED talks</a></p><hr class="dividerBlock"/><p>Clearly both metaphors do work – I&#x27;m wondering which is better to cultivate on the margin. </p><p>My intuition is that it&#x27;s better to lean on the image of intellectual work as exploration; curious what folks here think.</p> milan_griffes QwLKTcte8LgTNmCM2 2019-01-25T22:00:52.792Z Comment by Milan_Griffes on [Offer, Paid] Help me estimate the social impact of the startup I work for. 
<p>Your comment made me think of <a href="">this essay by Zvi</a> (<a href="">a</a>), especially his Part II.</p> milan_griffes H5Zfbo3iXKFjjKTCK 2019-01-25T04:30:54.283Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <blockquote>You agree that we shouldn&#x27;t expect AI companies to generate higher-than-average returns in the long run.</blockquote><p>I feel somewhat confused about whether to expect that AI companies will beat the broader market. </p><p>On one hand, I have an intuition that the current market price hasn&#x27;t fully baked in the implications of future AI development. (Especially when I see things like <a href="">most US executives thinking that AI will have less of an impact than the internet did</a>.)</p><p>On the other, I accord with your point about it being very hard to &quot;beat the market&quot; and generally have a high prior about markets being efficient.</p><p><a href="">Inadequate Equilibria</a> seems relevant here.</p><p></p><blockquote>Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don&#x27;t expect AI companies to have higher-than-average future returns.</blockquote><p>I do think that your AI predictions should bear on your decision to invest or donate now, as even if AI companies won&#x27;t have higher-than-average returns, the average return of future firms could be <strong>extremely</strong> high (given productivity gains unlocked by AI), and it would be a shame to miss out on that return because you donated the money you otherwise would have invested (in a basket of AI companies or a broader index fund like VTSAX, wherever).</p> milan_griffes 4uzrmPp8hx8RoFjkX 2019-01-24T21:12:56.435Z Comment by Milan_Griffes on Announcing PriorityWiki: A Cause Prioritization Wiki <p><strong>Update:</strong> seems like <a href=""></a> has been deprioritized :-/</p> milan_griffes MigYFYdQoCn4GwkMX 2019-01-24T17:58:29.596Z
Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <blockquote>...when people talk about a company being undervalued I think that typically includes both unrecognised growth potential and unrecognised current value.</blockquote><p>I think it&#x27;s a spectrum:</p><ul><li>Value stocks are where most of the case for investment comes from the market mis-pricing the firm&#x27;s current operations</li><li>Growth stocks are where most of the case for investment comes from the future (expected) growth of the firm</li></ul> milan_griffes gyvvsZddHznswZJzT 2019-01-24T17:38:05.211Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <blockquote>For example, premise 4 doesn&#x27;t actually follow directly from premise 3 because the returns could be large but not outsized compared with other investments.</blockquote><p>Agreed; I clarified my position after Aidan pointed this out: (<a href="">1</a>, <a href="">2</a>)</p> milan_griffes ukS7g6jbYGkMz3Wf7 2019-01-24T16:42:50.710Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <p>Clarified my view somewhat in <a href="">this reply to Aidan</a>.</p> milan_griffes Q9nxP9QcdMqy6jtuk 2019-01-24T16:38:26.084Z Comment by Milan_Griffes on If slow-takeoff AGI is somewhat likely, don't give now <p>Also, I was being somewhat sloppy in the post on this point – thanks for pushing on it!
</p><p>I&#x27;ve edited the post to better reflect my view.</p> milan_griffes kEQohAG2h5XFNwnAg 2019-01-24T07:00:07.631Z If slow-takeoff AGI is somewhat likely, don't give now <p>There&#x27;s a longstanding debate in EA about whether to emphasize giving now or giving later – see <a href="">Holden in 2007</a> (<a href="">a</a>), <a href="">Robin Hanson in 2011</a> (<a href="">a</a>), <a href="">Holden in 2011 (updated 2016)</a> (<a href="">a</a>), <a href="">Paul Christiano in 2013</a> (<a href="">a</a>), <a href="">Robin Hanson in 2013</a> (<a href="">a</a>), <a href="">Julia Wise in 2013</a> (<a href="">a</a>), <a href="">Michael Dickens in 2019</a> (<a href="">a</a>). </p><p>I think answers to the &quot;give now vs. give later&quot; question rest on deep worldview assumptions, which makes it fairly insoluble (though <a href="">Michael Dickens&#x27; recent post</a> (<a href="">a</a>) is a nice example of someone changing their mind about the issue). So here, I&#x27;m not trying to answer the question once and for all. Instead, I just want to make an argument that seems fairly obvious but I haven&#x27;t seen laid out anywhere.</p><p>Here&#x27;s a sketch of the argument –</p><p><strong>Premise 1:</strong> If AGI happens, it will happen via a slow takeoff.</p><ul><li>Here&#x27;s <a href="">Paul Christiano on slow vs. fast takeoff</a> (<a href="">a</a>) – the following doesn&#x27;t hold if you think AGI is more likely to happen via a fast takeoff.</li></ul><p><strong>Premise 2:</strong> The frontier of AI capability research will be pushed forward by research labs at publicly-traded companies that can be invested in. </p><ul><li>e.g. 
<a href="">Google Brain</a>, <a href="">Google DeepMind</a>, <a href="">Facebook AI</a>, <a href="">Amazon AI</a>, <a href="">Microsoft AI</a>, <a href="">Baidu AI</a>, <a href="">IBM Watson</a></li><li><a href="">OpenAI</a> is a confounder here – it&#x27;s unclear who will control the benefits realized by the OpenAI capabilities research team. </li><ul><li>From the <a href="">OpenAI charter</a> (<a href="">a</a>): &quot;Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.&quot;</li></ul><li>Chinese companies that can&#x27;t be accessed by foreign investment are another confounder – I don&#x27;t know much about that space yet.</li></ul><p><strong>Premise 3:</strong> A large share of the returns unlocked by advances in AI will accrue to shareholders of the companies that invent &amp; deploy the new capabilities. </p><p><strong>Premise 4:</strong> Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI.</p><ul><li>It&#x27;d be difficult to identify the particular company that will achieve a particular advance in AI capabilities, but relatively simple to hold a basket of the companies most likely to achieve an advance (similar to an index fund).</li><li>If you&#x27;re skeptical of being able to select a basket of AI companies that will track AI progress, investing in a broader index fund (e.g. <a href="">VTSAX</a>) could be about as good. During a slow takeoff the returns to AI may well ripple through the whole economy. 
</li></ul><p><strong>Conclusion:</strong> If you&#x27;re interested in maximizing your altruistic impact, and think slow-takeoff AGI is somewhat likely (and more likely than fast-takeoff AGI), then investing your current capital is better than donating it now, because you may achieve (very) outsized returns that can later be deployed to greater altruistic effect as AI research progresses.</p><ul><li>Note that this conclusion holds for both <a href="">person-affecting and longtermist views</a>. All you need to believe for it to hold is that a slow takeoff is somewhat likely, and more likely than a fast takeoff. </li><li>If you think a fast takeoff is more likely, it probably makes more sense to either invest your current capital in tooling up as an AI alignment researcher, or to donate now to your favorite AI alignment organization (<a href="">Larks&#x27; 2018 review</a> (<a href="">a</a>) is a good starting point here).</li></ul><hr class="dividerBlock"/><p><em>Cross-posted to <a href="">my blog</a>. I&#x27;m not an investment advisor, and the above isn&#x27;t investment advice.</em></p><p></p> milan_griffes JimLnG3sbYqPF8rKJ 2019-01-23T20:54:58.944Z Giving more won't make you happier <p>At first approximation, there are two motivations for donating money – egoistic &amp; altruistic. </p><p>The egoistic motivation relates to the personal benefit you accrue from giving your money away. The altruistic motivation relates to the benefits that other people receive from your donations. (This roughly maps to the <a href="">fuzzies vs. utilons</a> (<a href="">a</a>) distinction.)</p><h2>The egoistic motivation for donating is scope insensitive</h2><p>The egoistic motivation for donating is highly <a href="">scope insensitive</a> – giving away $500 feels roughly as good as giving away $50,000. 
I haven’t found any academic evidence on this, but it’s been robustly true in my experience.</p><p>This scope insensitivity seems pretty baked in – knowing about it doesn’t make it go away. I can remind myself that I’m having 100x the impact when I donate $50,000 than when I donate $500, but I find that when I reflect casually about my donations, I feel about as satisfied at my small donations as I do about my large ones, even after repeatedly reminding myself about the 100x differential.</p><p>We’re probably also scope insensitive qualitatively – giving $5,000 to a low-impact charity feels about as good as giving $5,000 to an effective charity (especially if you don’t reflect very much about the impact of the donation, and especially especially if the low-impact charity tells you a compelling story about the particular people your donation is helping).</p><h2>Effective giving increases happiness, but so does low-impact giving</h2><p>EA sometimes advocates that giving will increase your happiness. Here’s an <a href="">80,000 Hours article</a> (<a href="">a</a>) to that effect. Here’s a <a href="">piece by Giving What We Can</a> (<a href="">a</a>).</p><p>I think sometimes implicit here is the claim that giving <em>effectively </em>will increase your happiness (I think this because almost all other discussion of giving in EA spaces is about effective giving, and why effective giving is something to get excited about).</p><p>It seems pretty clear that donating some money to charity will increase your happiness. It’s less clear that donating to an effective charity will make you happier than donating to a low-impact charity. </p><p>Given the scope insensitivity of the egoistic motivation, it’s also unclear that giving away a lot of money will make you happier than giving away a small amount of money. </p><p>It seems especially unclear that the donation-to-happiness link scales anywhere linearly. 
Perhaps donating $1,000 makes you happier than donating $100, but does it make you 10x as happy? Does donating $2,000 make you 2x as happy as donating $1,000? My intuition is that it doesn’t.</p><h2>Income increases happiness, up to a point </h2><p>Okay, so that’s a bunch of discussion from intuition &amp; lived experience. Now let’s look at a paper.</p><p><a href="">Jebb et al. 2018</a> analyzed Gallup Worldwide Poll survey data on income &amp; happiness. This dataset had responses from about 1.7 million people in 164 countries, so we don’t have to worry about small sample size.</p><p>Jebb et al. were curious about the income satiation effect – is there a point at which additional income no longer contributes to subjective well-being? And if there is, where is it?</p><p>From the Gallup data, Jebb et al. found that there is indeed an income satiation effect: </p><p></p><span><figure><img src="" class="draft-image center" style="" /></figure></span><p></p><p>Globally, happiness stopped increasing alongside income after $95,000 USD / year.</p><p>For Western European respondents, happiness stopped increasing alongside income after $100,000 USD / year. For North American respondents, the satiation point was $105,000 USD / year.</p><h2>An aside on terminology</h2><p>&quot;<a href="">Subjective well-being</a>&quot; is the term social scientists use to think about happiness. Researchers usually break subjective well-being down into two components – life evaluation &amp; emotional well-being. Here are heavyweights Daniel Kahneman &amp; Angus Deaton on <a href="">how those two things are different</a> (<a href="">a</a>):</p><blockquote>Emotional well-being (sometimes called hedonic well-being or experienced happiness) refers to the emotional quality of an individual&#x27;s everyday experience – the frequency and intensity of experiences of joy, fascination, anxiety, sadness, anger, and affection that make one&#x27;s life pleasant or unpleasant. 
Life evaluation refers to a person&#x27;s thoughts about his or her life. Surveys of subjective well-being have traditionally emphasized life evaluation. The most commonly asked question in these surveys is the life satisfaction question: “How satisfied are you with your life as a whole these days?” ... Emotional well-being is assessed by questions about the presence of various emotions in the experience of yesterday (e.g., enjoyment, happiness, anger, sadness, stress, worry).</blockquote><p><a href="">Jebb et al.</a> break down emotional well-being further into positive affect &amp; negative affect, which roughly correspond to experiencing positive &amp; negative emotive states.</p><p>Life evaluation seems like the more intuitive metric for our purposes here. (It’s also the more conservative choice due to its higher satiation points.) So when I talk about &quot;happiness,&quot; I&#x27;m actually talking about &quot;subjective well-being as assessed by life evaluation scores.&quot; My main points would still hold if we focused on emotional well-being instead.</p><h2>Income increases happiness up to $115,000 / year</h2><p>Returning to <a href="">Table 1</a>, we can pull out a couple of takeaways: </p><ul><li>The income satiation point for most EAs is at least $100,000 USD / year.</li><ul><li>Most EAs are in North America and Western Europe. </li><ul><li>The satiation point for life evaluation in Western Europe is about $100,000 USD / year.</li><li>The life evaluation satiation point in North America is about $105,000 USD / year.</li></ul></ul><li>Almost all EAs fall into Jebb et al.’s &quot;high education&quot; bracket: 16+ years of education, i.e. on track to complete a Bachelor’s. 
</li><ul><li>High-education populations have higher satiation points than low-education populations, an effect that the authors attribute to &quot;income aspirations or social comparisons with different groups.&quot;</li><li>The &quot;high education&quot; satiation point is $115,000 USD / year. </li><ul><li>That’s a global figure. The paper doesn’t give a region-by-region breakout of the &quot;high education&quot; cohort; it’s likely that the figure is even higher in the Western Europe &amp; North American regions, which have higher satiation points than the global average.</li></ul></ul></ul><p>Essentially, all income earned up to $115,000 USD / year (for college-educated folks living in North America &amp; Western Europe) contributes to one’s happiness.</p><h2>Putting it all together </h2><p>We can use the <a href="">Jebb et al. paper</a> to infer that donations which put your annual income below $115k will probably make you less happy. (And if you’re giving substantial amounts while earning a total income of less than $115k, those donations will probably contribute to a decrease in your happiness.) </p><p>Correspondingly, donating amounts such that your annual income remains above $115k probably won’t affect your happiness.</p><p>There’s a wrinkle here: it’s possible that much of the happiness benefit of earning a high income comes from the knowledge that you earn a high income, not what you use the money for materially. If this is the case, donating large amounts out of an income above $115k shouldn’t ding your happiness. </p><p>So perhaps only a weaker version of the claim holds: once you achieve an annual income above $115,000, you can give away large portions of it without incurring a happiness penalty (having already realized the happy-making benefit of your earnings). But even in this case, donating large amounts out of an income less than $115k still lowers your happiness (because you never benefit from the knowledge that you earn at least $115k). 
</p><p>It’s true that the act of donating will generate some personal happiness. But given the <a href="">scope insensitivity</a> at play here, you can realize a lot of this benefit by donating small amounts (and thus keeping a lot more of your money, which can then be deployed in other happy-making ways).</p><p>From a purely egoistic viewpoint, scope insensitivity lets us have our cake &amp; eat it too – we can feel good about our donating behavior while keeping most of our money.</p><h2>Conclusion: EA shouldn’t say that effective giving will make you happy</h2><p>My provisional conclusion here is that EA shouldn&#x27;t recommend effective giving on egoistic grounds.</p><p>There remains a strong altruistic case to be made for effective giving, but I think it’s worth acknowledging the real tradeoff between giving away large amounts of money and one’s personal happiness, at least for people earning less than $115,000 USD / year (on average, for college-educated people in Western Europe &amp; North America). If you want to give large amounts while avoiding this tradeoff, you should achieve a stable annual income of at least $115k before making substantial donations.</p><p>Further, EA should actively discourage people from effective giving if they&#x27;re mainly considering it as a way to become happier. Effective giving probably won&#x27;t make you happier than low-impact giving, and donating large amounts won&#x27;t make you happier than donating small amounts. Saying otherwise would be a false promise.</p><hr class="dividerBlock"/><p><em>Thanks to Gregory Lewis, Howie Lempel, Helen Toner, Benjamin Pence, and an anonymous collaborator for feedback on drafts of this essay. Cross-posted to <a href="">my blog</a>.</em></p> milan_griffes dvHpzesXtZAMFSTk3 2018-12-10T18:15:16.663Z Open Thread #42 <p>Use this thread to post things that are awesome, but not awesome enough to be full posts. 
This is also a great place to post if you don&apos;t have enough karma to post on the main forum.</p> <p>Consider giving your post a brief title to improve readability.</p> milan_griffes hZxs7cDJ7JcHMH8GZ 2018-10-17T20:10:00.472Z Doing good while clueless <p>This is the fourth (and final) post in a series exploring <a href="">consequentialist cluelessness</a> and its implications for effective altruism:</p><ul><li>The <a href="">first post</a> describes cluelessness &amp; its relevance to EA; arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.</li><li>The <a href="">second post</a> considers a potential reply to concerns about cluelessness.</li><li>The <a href="">third post</a> examines how tractable cluelessness is – to what extent can we grow more clueful about an intervention through intentional effort?</li><li><strong>This post</strong> discusses how we might do good while being clueless to an important extent.</li></ul><p>Consider reading the previous posts (<a href="">1</a>, <a href="">2</a>, <a href="">3</a>) first.</p><hr class="dividerBlock"/><p>The <a href="">last post</a> looked at whether we could grow more clueful by intentional effort. It concluded that, for the foreseeable future, we will probably remain clueless about the long-run impacts of our actions to a meaningful extent, even after taking measures to improve our understanding and foresight.</p><p>Given this state of affairs, we should act cautiously when trying to do good. This post outlines a framework for doing good while being clueless, then looks at what this framework implies about current EA cause prioritization.</p><p>The following only makes sense if you already believe that the far future matters a lot; this argument has been made <a href="">elegantly elsewhere</a> so we won’t rehash it here.[1]</p><h1>An analogy: interstellar travel</h1><p>Consider a spacecraft, journeying out into space. 
The occupants of the craft are searching for a star system to settle. Promising destination systems are all very far away, and the voyagers don’t have a complete map of how to get to any of them. Indeed, they know very little about the space they will travel through.</p><p>To have a good journey, the voyagers will have to successfully steer their ship (both literally &amp; metaphorically). Let&#x27;s use &quot;steering capacity&quot; as an umbrella term that refers to the capacity needed to have a successful journey.[2] &quot;Steering capacity&quot; can be broken down into the following five attributes:[3]</p><ul><li>The voyagers must have a clear idea of what they are looking for. (<strong>Intent</strong>)</li><li>The voyagers must be able to reach agreement about where to go. (<strong>Coordination</strong>)</li><li>The voyagers must be discerning enough to identify promising systems as promising, when they encounter them. Similarly, they must be discerning enough to accurately identify threats &amp; obstacles. (<strong>Wisdom</strong>)</li><li>Their craft must be powerful enough to reach the destinations they choose. (<strong>Capability</strong>)</li><li>Because the voyagers travel through unmapped territory, they must be able to see far enough ahead to avoid obstacles they encounter. (<strong>Predictive power</strong>)</li></ul><p>This spacecraft is a useful analogy for thinking about our civilization’s trajectory. Like us, the space voyagers are somewhat clueless – they don’t know quite where they should go (though they can make guesses), and they don’t know how to get there (though they can plot a course and make adjustments along the way).</p><p>The five attributes given above – intent, coordination, wisdom, capability, and predictive power – determine how successful the space voyagers will be in arriving at a suitable destination system. 
These same attributes can also serve as a useful framework for considering which altruistic interventions we should prioritize, given our present situation.  </p><h1>The basic point</h1><p>The basic point here is that interventions whose main known effects do not improve our steering capacity (i.e. our intent, wisdom, coordination, capability, and predictive power) are not as important as interventions whose main known effects do improve these attributes.</p><p>An implication of this is that interventions whose effectiveness is driven mainly by their <a href="">proximate impacts</a> are less important than interventions whose effectiveness is driven mainly by increasing our steering capacity.</p><p>This is because any action we take is going to have indirect &amp; long-run consequences that bear on our civilization’s trajectory. Many of the long-run consequences of our actions are unknown, so the future is unpredictable. Therefore, we ought to prioritize interventions that improve the wisdom, capability, and coordination of future actors, so that they are better positioned to address future problems that we did not foresee.</p><h1>What being clueless means for altruistic prioritization</h1><p>I think the steering capacity framework implies a portfolio approach to doing good – simultaneously pursuing a large number of diverse hypotheses about how to do good, provided that each approach maintains <a href="">reversibility</a>.[4]</p><p>This approach is similar to the Open Philanthropy Project’s <a href="">hits-based giving framework</a> – invest in many promising initiatives with the expectation that most will fail.</p><p>Below, I look at how this framework interacts with focus areas that effective altruists are already working on. Other causes that EA has not looked into closely (e.g. 
improving education) may also perform well under this framework; assessing causes of this sort is beyond the scope of this essay.</p><p>My thinking here is preliminary, and very probably contains errors &amp; oversights.</p><h1>EA focus areas to prioritize</h1><p>Broadly speaking, the steering capacity framework suggests prioritizing interventions that:[5]</p><ul><li>Further our understanding of what matters</li><li>Improve governance</li><li>Improve prediction-making &amp; foresight</li><li>Reduce existential risk</li><li>Increase the number of well-intentioned, highly capable people</li></ul><p></p><h3>To prioritize – better understanding what matters</h3><p>Increasing our understanding of what’s worth caring about is important for clarifying our intentions about what trajectories to aim for. For many moral questions, there is already broad agreement in the EA community (e.g. the view that all currently existing human lives matter is uncontroversial within EA). On other questions, further thinking would be valuable (e.g. how best to compare human lives to the lives of animals).</p><p>Myriad thinkers have done valuable work on this question. 
Particularly worth mentioning is the work of the <a href="">Foundational Research Institute</a>, the <a href="">Global Priorities Project</a>, the <a href="">Qualia Research Institute</a>, as well as the <a href="">Open Philanthropy Project’s work on consciousness &amp; moral patienthood</a>.</p><p></p><h3>To prioritize – improving governance</h3><p>Improving governance is largely aimed at improving coordination – our ability to mediate diverse preferences, decide on collectively held goals, and work together towards those goals.</p><p>Efficient governance institutions are robustly useful in that they keep focus oriented on solving important problems &amp; minimize resource expenditure on zero-sum competitive signaling.</p><p>Two routes towards improved governance seem promising: (1) improving the functioning of existing institutions, and (2) experimenting with alternative institutional structures (Robin Hanson’s <a href="">futarchy proposal</a> and <a href="">seasteading</a> initiatives are examples here).</p><p></p><h3>To prioritize – improving foresight</h3><p>Improving foresight &amp; prediction-making ability is important for informing our decisions. 
The further we can see down the path, the more information we can incorporate into our decision-making, which in turn leads to higher quality outcomes with fewer surprises.</p><p>Forecasting ability can definitely be improved from baseline, but there are probably hard limits on how far into the future we can extend our predictions while remaining believable.</p><p>Philip Tetlock’s <a href="">Good Judgment Project</a> is a promising forecasting intervention, as are prediction markets like <a href="">PredictIt</a> and polling aggregators like <a href="">538</a>.</p><p></p><h3>To prioritize – reducing existential risk</h3><p>Reducing existential risk can be framed as “avoiding large obstacles that lie ahead.” Avoiding extinction and “lock-in” of suboptimal states is necessary for realizing the full potential benefit of the future.</p><p>Many initiatives are underway in the x-risk reduction cause area. <a href="">Larks’ annual review of AI safety work</a> is excellent; Open Phil has good material about <a href="">projects focused on other x-risks</a>.</p><p></p><h3>To prioritize – increase the number of well-intentioned, highly capable people</h3><p>Well-intentioned, highly capable people are a scarce resource, and will almost certainly continue to be highly useful going forward. 
Increasing the number of well-intentioned, highly capable people seems robustly good, as such people are able to diagnose &amp; coordinate together on future problems as they arise.</p><p>Projects like <a href="">CFAR</a> and <a href="">SPARC</a> are in this category.</p><p>In a different vein, <a href="">psychedelic experiences hold promise as a treatment</a> for treatment-resistant depression, and may also improve the intentions of highly capable people who have not reflected much about what matters (“the betterment of well people”).</p><p></p><h1>EA focus areas to deprioritize, maybe</h1><p>The steering capacity framework suggests deprioritizing animal welfare &amp; global health interventions, to the extent that these interventions’ effectiveness is driven by their proximate impacts.</p><p>Under this framework, prioritizing animal welfare &amp; global health interventions may be justified, but only on the basis of improving our intent, wisdom, coordination, capability, or predictive power.</p><h3>To deprioritize, maybe – animal welfare</h3><p>To the extent that animal welfare interventions expand our civilization’s <a href="">moral circle</a>, they may hold promise as interventions that improve our intentions &amp; understanding of what matters (the <a href="">Sentience Institute</a> is doing work along this line).</p><p>However, following this framework, the case for animal welfare interventions has to be made on these grounds, not on the basis of cost-effectively reducing animal suffering in the present.</p><p>This is because the animals that are helped in such interventions cannot help “steer the ship” – they cannot contribute to making sure that our civilization’s trajectory is headed in a good direction.</p><p></p><h3>To deprioritize, maybe – global health</h3><p>To the extent that global health interventions improve coordination, or reduce x-risk by increasing socio-political stability, they may hold promise under the steering capacity 
framework.</p><p>However, the case for global health interventions would have to be made on the grounds of increasing coordination, reducing x-risk, or improving another steering capacity attribute. Arguments for global health interventions on the grounds that they cost-effectively help people in the present day (without consideration of how this bears on our future trajectory) are not competitive under this framework.</p><p></p><h1>Conclusion</h1><p>In sum, I think the fact that we are intractably clueless implies a portfolio approach to doing good – pursuing, in parallel, a large number of diverse hypotheses about how to do good.</p><p>Interventions that improve our understanding of what matters, improve governance, improve prediction-making ability, reduce existential risk, and increase the number of well-intentioned, highly capable people are all promising. Global health &amp; animal welfare interventions may hold promise as well, but the case for these cause areas needs to be made on the basis of improving our steering capacity, not on the basis of their proximate impacts.</p><p></p><p><em>Thanks to members of the Mather essay discussion group and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to <a href="">LessWrong</a> &amp; my <a href="">personal blog</a>.</em></p><hr class="dividerBlock"/><h2>Footnotes</h2><p>[1]: Nick Beckstead has done the best work I know of on the topic of why the far future matters. <a href="">This post</a> is a good introduction; for a more in-depth treatment see his PhD thesis, <a href="">On the Overwhelming Importance of Shaping the Far Future</a>.</p><p>[2]: I&#x27;m grateful to Ben Hoffman for discussion that fleshed out the &quot;steering capacity&quot; concept; see <a href="">this comment thread</a>. </p><p>[3]: Note that this list of attributes is not exhaustive &amp; this metaphor isn&#x27;t perfect. 
I&#x27;ve found the space travel metaphor useful for thinking about cause prioritization given our uncertainty about the far future, so am deploying it here.</p><p>[4]: Maintaining reversibility is important because given our cluelessness, we are unsure of the net impact of any action. When uncertain about overall impact, it’s important to be able to walk back actions that we come to view as net negative.</p><p>[5]: I&#x27;m not sure of how to prioritize these things amongst themselves. Probably improving our understanding of what matters &amp; our predictive power are highest priority, but that&#x27;s a very weakly held view.</p><p></p> milan_griffes X2n6pt3uzZtxGT9Lm 2018-02-15T05:04:25.291Z How tractable is cluelessness? <p>This is the third in a series of posts exploring <a href="">consequentialist cluelessness</a> and its implications for effective altruism:</p><ul><li>The <a href="">first post</a> describes cluelessness &amp; its relevance to EA; arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.</li><li>The <a href="">second post</a> considers a potential reply to concerns about cluelessness.</li><li><strong>This post</strong> examines how tractable cluelessness is – to what extent can we grow more clueful about an intervention through intentional effort?</li><li>The <a href="">fourth post</a> discusses what being clueless implies about doing good.</li></ul><p></p><p>Consider reading the <a href="">first</a> and <a href="">second</a> posts first.</p><hr class="dividerBlock"/><p>Let&#x27;s consider the <a href="">tractability</a> of cluelessness in two parts:</p><ol><li>How clueful do we need to be before deciding on a course of action? (i.e. 
how much effort should we spend contemplating &amp; exploring before committing resources to an intervention?)</li><li>How clueful can we become by contemplation &amp; exploration?</li></ol><p></p><h2>How clueful do we need to be before deciding on a course of action?</h2><p>In his talk <a href="">Crucial Considerations and Wise Philanthropy</a>, Nick Bostrom defines a crucial consideration as “a consideration such that if it were taken into account it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors but a major change of direction or priority.”</p><p>A plausible reply to “how clueful do we need to be before deciding on a course of action?” might be: “as clueful as is needed to uncover all the crucial considerations relevant to the decision.”</p><p>Deciding to act before uncovering all the crucial considerations relevant to the decision is potentially disastrous, as even one unknown crucial consideration could bear on the consequences of the decision in a way that would entirely revise the moral calculus.</p><p>In contrast, deciding to act before uncovering all non-crucial (“normal”) considerations is by definition not disastrous, as unknown normal considerations might imply a minor course adjustment but not a radically different direction.</p><p></p><h2>How clueful can we become by contemplation &amp; exploration?</h2><p>Under this framing, our second tractability question can be rephrased as “by contemplation and exploration, can we uncover all the crucial considerations relevant to a decision?”</p><p>For cases where the answer is “yes”, we can become clueful enough to make a good decision – we can uncover and consider everything that would necessitate a radical change of direction.</p><p>Conversely, in cases where the answer is “no”, we can’t become clueful enough to make a good decision – 
despite our efforts there will remain unknown considerations that, if known, would radically change our decision-making.</p><p>There is a difference here between long-run consequences and indirect consequences (see definitions in <a href="">the first post</a>). By careful investigation, we can uncover more &amp; more of the indirect, temporally near consequences of an intervention. It’s plausible that for many interventions, we could uncover all the indirect consequences that relate to the intervention’s crucial considerations.</p><p>But we probably can’t uncover most of the long-run consequences of an intervention by investigation. We can improve our forecasting ability, but because of the complexity of reality, the fidelity of real-world forecasts declines as they extend into the future. It seems unlikely that our forecasting will be able to generate believable predictions of impacts more than 30 years out anytime soon.</p><p>Because many of the consequences of an intervention unfold on a long time horizon (one that’s much longer than our forecasting horizon), it’s implausible to uncover all the long-run consequences that relate to the intervention’s crucial considerations.</p><p></p><h2>Ethical precautionary principle</h2><p>Then, for any decision whose consequences are distributed over a long time horizon (i.e. most decisions), it’s difficult to be sure that we are operating in the “yes we can become clueful enough” category. More precisely, we can only become sufficiently clueful for decisions where there are no unknown crucial considerations that lie past our forecasting horizon.</p><p>Due to <a href="">the vast size of the future</a>, even a small probability of an unknown, temporally distant crucial consideration should give us pause.</p><p>I think this implies operating under an <em>ethical precautionary principle:</em> acting as if there were always an unknown crucial consideration that would strongly affect our decision-making, if only we knew it (i.e. 
always acting as if we are in the “no we can’t become clueful enough” category).</p><p>Does always following this precautionary principle imply <a href="">analysis paralysis</a>, such that we never take any action at all? I don’t think so. We find ourselves in the middle of a process that’s underway, and devoting all of our resources to analysis &amp; contemplation is itself a decision (“<a href="">If you choose not to decide, you still have made a choice</a>”).</p><p>Instead of paralyzing us, I think the ethical precautionary principle implies that we should focus our efforts in some areas and avoid others. I’ll explore this further in the <a href="">next post</a>.</p> milan_griffes Q8isNAMsFxny5N37Y 2017-12-29T18:52:56.369Z “Just take the expected value” – a possible reply to concerns about cluelessness <p>This is the second in a series of posts exploring <a href="">consequentialist cluelessness</a> and its implications for effective altruism:</p><ul><li>The <a href="">first post</a> describes cluelessness &amp; its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.</li><li><strong>This post</strong> considers a potential reply to concerns about cluelessness – maybe when we are uncertain about a decision, we should just choose the option with the highest expected value.</li><li>Following posts discuss <a href="">how tractable cluelessness is</a>, and what <a href="">being clueless implies about doing good</a>.</li></ul><p>Consider reading the <a href="">first post</a> first.</p><hr class="dividerBlock"/><p>A rationalist’s reply to concerns about cluelessness could be as follows:</p><ul><li>Cluelessness is just a special case of empirical uncertainty.[1]</li><li>We have a framework for dealing with empirical uncertainty – <a href="">expected value</a>.</li><li>So for decisions where we are uncertain, we can determine the best course of action by multiplying our best-guess probability 
against our best-guess utility for each option, then choosing the option with the highest expected value.</li></ul><p>While this approach makes sense in the abstract, it doesn’t work well in real-world cases. The difficulty is that it’s unclear what “best-guess” probabilities &amp; utilities we should assign, and unclear to what extent we should believe our best guesses.</p><p>Consider this passage from <a href="">Greaves 2016</a> (“credence function” can be read roughly as “probability”):</p><blockquote>The alternative line I will explore here begins from the suggestion that in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’). Intuitively, the idea here is that when the evidence fails conclusively to recommend any particular credence function above certain others, agents are rationally required to remain neutral between the credence functions in question: to include all such equally-recommended credence functions in their representor.</blockquote><p></p><p>To translate a little, Greaves is saying that real-world agents don’t assign precise probabilities to outcomes; instead, they consider multiple possible probabilities for each outcome (taken together, these candidate probability functions constitute the agent’s “representor”). Because an agent holds multiple probabilities for each outcome, and has no way to arbitrate between them, it cannot use a straightforward expected value calculation to determine the best outcome.</p><p>Intuitively, this makes sense. 
Probabilities can only be formally assigned when the <a href="">sample space</a> is fully mapped out, and for most real-world decisions we can’t map the full sample space (in part because the world is very complicated, and in part because we can’t predict the long-run consequences of an action).[2] We can make subjective probability estimates, but if a probability estimate does not flow out of a clearly articulated model of the world, its believability is suspect.[3]</p><p>Furthermore, because multiple probability estimates can seem sensible, agents can hold multiple estimates simultaneously (i.e. their representor). For decisions where the full sample space isn’t mapped out (i.e. most real-world decisions), the method by which human decision-makers convert their multi-value representor into a single-value, “best-guess” estimate is opaque.</p><p>The next time you encounter someone making a subjective probability estimate, ask “how did you arrive at that number?” The answer will frequently be along the lines of “it seems about right” or “I would be surprised if it were higher.” Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate.</p><p>So we have believability problems on two levels:</p><ol><li>Whenever we make a probability estimate that doesn’t flow from a clear world-model, the believability of that estimate is questionable.</li><li>And if we attempt to reconcile multiple probability estimates into a single best-guess, the believability of that best-guess is questionable because our method of reconciling multiple estimates into a single value is opaque.[4]</li></ol><p></p><p>By now it should be clear that simply following the expected value is not a sufficient response to concerns of cluelessness. However, it’s possible that cluelessness can be addressed by other routes – perhaps by diligent investigation, we can grow clueful enough to make believable decisions about how to do good. 
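</p><p>To make the representor idea concrete, here is a minimal sketch (in Python, with invented numbers) of an agent whose evidence supports several credence functions equally well. Each credence function yields its own expected value, so the agent ends up with a range of expected values rather than a single best guess; when that range straddles zero, "choose the option with the highest expected value" gives no verdict:</p>

```python
# Sketch of a "representor": a set of equally-defensible credence
# functions over the same outcomes (all numbers here are invented).
outcomes = {"good": 100.0, "neutral": 0.0, "bad": -150.0}  # utilities

# Three credence functions the evidence fails to discriminate between:
representor = [
    {"good": 0.50, "neutral": 0.30, "bad": 0.20},
    {"good": 0.40, "neutral": 0.35, "bad": 0.25},
    {"good": 0.30, "neutral": 0.45, "bad": 0.25},
]

def expected_value(credences, utilities):
    """Standard EV under one precise credence function."""
    return sum(credences[o] * utilities[o] for o in utilities)

evs = [round(expected_value(c, outcomes), 1) for c in representor]
print(evs)  # → [20.0, 2.5, -7.5]
# The expected values disagree in sign, so a straightforward EV
# calculation cannot even determine whether the action is net-positive.
```

<p>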
</p><p>The <a href="">next post</a> will consider this further.</p><p></p><p><em>Thanks to Jesse Clifton and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to <a href="">my personal blog</a>.</em></p><hr class="dividerBlock"/><h2>Footnotes</h2><p>[1]: This is separate from normative uncertainty – uncertainty about what criterion of moral betterness to use when comparing options. Empirical uncertainty is uncertainty about the overall impact of an action, given a criterion of betterness. In general, cluelessness is a subset of empirical uncertainty. </p><p>[2]: Leonard Savage, who worked out much of the foundations of Bayesian statistics, considered Bayesian decision theory to only apply in &quot;small world&quot; settings. See p. 16 &amp; p. 82 of the second edition of his <a href="">Foundations of Statistics</a> for further discussion of this point.</p><p>[3]: Thanks to Jesse Clifton for making this point.</p><p>[4]: This problem persists even if each input estimate flows from a clear world-model.</p> milan_griffes MWquqEMMZ4WXCrsug 2017-12-21T19:37:07.709Z What consequences? 
<p>This is the first in a series of posts exploring <a href="">consequentialist cluelessness</a> and its implications for effective altruism:</p><ul><li><strong>This post </strong>describes cluelessness &amp; its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.</li><li>The <a href="">second post</a> considers a potential reply to concerns about cluelessness.</li><li>The <a href="">third post</a> examines how tractable cluelessness is – to what extent can we grow more clueful about an intervention through intentional effort?</li><li>The <a href="">fourth post</a> discusses how we might do good while being clueless to an important extent.</li></ul><p></p><p><strong>My prior</strong> is that cluelessness presents a profound challenge to effective altruism in its current instantiation, and that we need to radically revise our beliefs about doing good such that we prioritize activities that are robust to moral &amp; empirical uncertainty.</p><p><strong>My goal</strong> in writing this piece is to elucidate this position, or to discover why it’s mistaken. I’m posting in serial form to allow more opportunity for forum readers to change my mind about cluelessness and its implications.</p><hr class="dividerBlock"/><p>By “cluelessness”, I mean the possibility that we don’t have a clue about the overall net impact of our actions.[1] Another way of framing this concern: when we think about the consequences of our actions, how do we determine what consequences we should consider?</p><p>First, some definitions. The consequences of an action can be divided into three categories:</p><ul><li><strong>Proximate consequences</strong> – the immediate effects that occur soon afterward to intended object(s) of an action. Relatively easy to observe and measure.</li></ul><p></p><ul><li><strong>Indirect consequences</strong> – the effects that occur soon afterward to unintended object(s) of an action. 
These could also be termed “cross-stream” effects. Relatively difficult to observe and measure.</li></ul><p></p><ul><li><strong>Long-run consequences</strong> – the effects of an action that occur much later, including effects on both intended and unintended objects. These could also be termed “downstream” effects. Impossible to observe and measure; most long-run consequences can only be estimated.[2]</li></ul><hr class="dividerBlock"/><h2>Effective altruist approaches towards consequences</h2><p>EA-style reasoning addresses consequentialist cluelessness in one of two ways:</p><p><strong>1. The brute-good approach</strong> – collapsing the consequences of an action into a proximate “brute-good” unit, then comparing the aggregate “brute-good” consequences of multiple interventions to determine the intervention with the best (brute good) consequences.</p><ul><ul><li>For example, GiveWell uses “deaths averted” as a brute-good unit, then converts other impacts of the intervention being considered into “deaths-averted equivalents”, then compares interventions to each other using this common unit.</li><li>This approach is common among the cause areas of animal welfare, global development, and EA coalition-building.</li></ul></ul><p><strong>2. 
The x-risk reduction approach</strong> – simplifying “do the actions with the best consequences” into “do the actions that yield the most existential-risk reduction.” Proximate &amp; indirect consequences are only considered insofar as they bear on x-risk; the main focus is on the long run: whether or not humanity will survive into the far future.</p><ul><ul><li>Nick Bostrom makes this explicit in his essay, <em><a href="">Astronomical Waste</a>:</em> “The utilitarian imperative ‘Maximize expected aggregate utility!’ can be simplified to the maxim ‘Minimize existential risk!’”</li><li>This approach is common within the x-risk reduction cause area.</li></ul></ul><p>EA focus can be imagined as a bimodal distribution – EA either considers only the proximate effects of an intervention, ignoring its indirect &amp; long-run consequences; or considers only the very long-run effects of an intervention (i.e. to what extent the intervention reduces x-risk), considering all proximate &amp; indirect effects only insofar as they bear on x-risk reduction.[3]</p><p>Consequences that fall between these two peaks of attention are not included in EA’s moral calculus, nor are they explicitly determined to be of negligible importance. Instead, they are mentioned in passing, or ignored entirely.</p><p>This is problematic. It’s likely that for most interventions, these consequences compose a substantial portion of the intervention’s overall impact.</p><hr class="dividerBlock"/><h2>Cluelessness and the brute-good approach</h2><p>The cluelessness problem for the brute-good approach can be stated as follows:</p><blockquote>Due to the difficulty of observing and measuring indirect &amp; long-run consequences of interventions, we do not know the bulk of the consequences of any intervention, and so cannot confidently compare the consequences of one intervention to another. 
Comparing only the proximate effects of interventions assumes that proximate effects compose the majority of interventions’ impact, whereas in reality the bulk of an intervention’s impact is composed of indirect &amp; long-run effects which are difficult to observe and difficult to estimate.[4]</blockquote><p></p><p>The brute-good approach often implicitly assumes symmetry of non-proximate consequences (i.e. for every indirect &amp; long-run consequence, there is an equal and opposite consequence such that indirect &amp; long-run consequences cancel out and only proximate consequences matter). This assumption seems poorly supported.[5]</p><p>It might be thought that indirect &amp; long-run consequences can be surfaced as part of the decision-making process, then included in the decision-maker’s calculus. This seems very difficult to do in a believable way (i.e. a way in which we feel confident that we’ve uncovered all crucial considerations). I will consider this issue further in the <a href="">next post</a> of this series.</p><p>Some examples follow, to make the cluelessness problem for the brute-good approach salient.</p><p></p><h3>Example: baby Hitler</h3><p>Consider the position of an Austrian physician in the 1890s who was called to tend to a sick infant, Adolf Hitler. </p><p>Considering only proximate effects, the physician should clearly have treated baby Hitler and made efforts to ensure his survival. But the picture is clouded when indirect &amp; long-run consequences are added to the calculus. Perhaps letting baby Hitler die (or even committing infanticide) would have been better in the long-run. Or perhaps the German zeitgeist of the 1920s and 30s was such that the terrors of Nazism would have been unleashed even absent Hitler’s leadership. 
Regardless, the decision to minister to Hitler as a sick infant is not straightforward when indirect &amp; long-run consequences are considered.</p><p>A potential objection here is that the Austrian physician could in no way have foreseen that the infant they were called to tend to would later become a terrible dictator, so the physician should have done what seemed best given the information they could uncover. But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best. Assessing only proximate consequences would provide some guidance about what action to take, but this guidance would not necessarily point to the action with the best consequences in the long run.</p><p></p><h3>Example: bednet distributions in unstable regions</h3><p>The <a href="">Against Malaria Foundation (AMF)</a> funds bed net distributions in developing countries, with the goal of reducing malaria incidence. In 2017, AMF funded its largest distribution to date, <a href="">over 12 million nets in Uganda</a>.</p><p>Uganda has had <a href="">a chronic problem with terror groups</a>, notably the <a href="">Lord’s Resistance Army</a> operating in the north and <a href="">Al-Shabab</a> carrying out attacks in the capital. Though the country is believed to be relatively stable at present, there remain non-negligible risks of civil war or government overthrow.</p><p>Considering only the proximate consequences, distributing bednets in Uganda is probably a highly cost-effective method of reducing malaria incidence and saving lives. 
But this assessment is muddied when indirect and long-run effects are also considered.</p><p>Perhaps saving the lives of young children results in increasing the supply of child-soldier recruits for rebel groups, leading to increased regional instability.</p><p>Perhaps importing &amp; distributing millions of foreign-made bed nets disrupts local supply chains and breeds Ugandan resentment toward foreign aid.</p><p>Perhaps stabilizing the child mortality rate during <a href="">a period of fundamentalist-Christian revival</a> increases the probability of a fundamentalist-Christian value system becoming locked in, which could prove problematic further down the road.</p><p>I’m not claiming that any of the above are likely outcomes of large-scale bed net distributions. The claim is that the above are all possible effects of a large-scale bed net distribution (each with a non-negligible, unknown probability), and that due to many possible effects like this, we are prospectively clueless about the overall impact of a large-scale bed net distribution.</p><p></p><h3>Example: direct-action animal-welfare interventions</h3><p>Some animal welfare activists advocate <a href="">direct action</a>, the practice of directly confronting problematic food industry practices.</p><p>In 2013, animal-welfare activists organized <a href="">a “die-in” at a San Francisco Chipotle</a>. At the die-in, activists confronted Chipotle consumers with claims about the harm inflicted on farm animals by Chipotle’s supply chain.</p><p>The die-in likely had the proximate effect of raising awareness of animal welfare among the Chipotle consumers and employees who were present during the demonstration. Increasing social awareness of animal welfare is probably positive according to consequentialist perspectives that give moral consideration to animals.</p><p>However, if considering indirect and long-run consequences as well, the overall impact of direct action demonstrations like the die-in is unclear. 
Highly confrontational demonstrations may result in the animal welfare movement being labeled “radical” or “dangerous” by the mainstream, thus limiting the movement’s influence.</p><p>Confrontational tactics may also be controversial within the animal welfare movement, causing divisiveness and potentially leading to a schism, which could harm the movement’s efficacy.</p><p>Again, I’m not claiming that the above are likely effects of direct-action animal-welfare interventions. The claim is that indirect &amp; long-run effects like this each have a non-negligible, unknown probability, such that we are prospectively clueless regarding the overall impact of the intervention.</p><hr class="dividerBlock"/><h2>Cluelessness and the existential risk reduction approach</h2><p>Unlike the brute-good approach, which tends to overweight the impact of proximate effects and underweight that of indirect &amp; long-run effects, the x-risk reduction approach focuses almost exclusively on the long-run consequences of actions (i.e. how they affect the probability that humanity survives into the far future). Interventions can be compared according to a common criterion: the amount by which they are expected to reduce existential risk.</p><p>While I think cluelessness poses less difficulty for the x-risk reduction approach, it remains problematic. The cluelessness problem for the x-risk reduction approach can be stated as follows:</p><blockquote>Interventions aimed at reducing existential risk have a clear criterion by which to make comparisons: “which intervention yields a larger reduction in existential risk?” However, because the indirect &amp; long-run consequences of any specific x-risk intervention are difficult to observe, measure, and estimate, arriving at a believable estimate of the amount of x-risk reduction yielded by an intervention is difficult. 
Because it is difficult to arrive at believable estimates of the amount of x-risk reduction yielded by interventions, we are somewhat clueless when trying to compare the impact of one x-risk intervention to another.</blockquote><p>An example follows to make this salient.</p><p></p><h3>Example: stratospheric aerosol injection to blunt impacts of climate change</h3><p><a href="">Injecting sulfate aerosols into the stratosphere</a> has been put forward as an intervention that could reduce the impact of climate change (by reflecting sunlight away from the earth, thus cooling the planet).</p><p>However, it’s possible that stratospheric aerosol injection could have unintended consequences, such as cooling the planet so much that the surface is rendered uninhabitable (incidentally, this is the background story of the film <em><a href="">Snowpiercer</a></em>). Because aerosol injection is relatively cheap to do (on the order of <a href="">tens of billions USD</a>), there is concern that small nation-states, especially those disproportionately affected by climate change, might deploy aerosol injection programs without the consent or foreknowledge of other countries.  </p><p>Given this strategic landscape, the effects of calling attention to stratospheric aerosol injection as a cause are unclear. It’s possible that further public-facing work on the intervention results in international agreements governing the use of the technology. This would most likely be a reduction in existential risk along this vector.</p><p>However, it’s also possible that further public-facing work on aerosol injection makes the technology more discoverable, revealing the technology to decision-makers who were previously ignorant of its promise. 
Some of these decision-makers might be inclined to pursue research programs aimed at developing a stratospheric aerosol injection capability, which would most likely increase existential risk along this vector.</p><p>It is difficult to arrive at believable estimates of the probability that further work on aerosol injection yields an x-risk reduction, and of the probability that further work yields an x-risk increase (though more granular mapping of the game-theoretic and strategic landscape here would increase the believability of our estimates).</p><p>Taken together, then, it’s unclear whether public-facing work on aerosol injection yields an x-risk reduction on net. (Note too that keeping work on the intervention secret may not straightforwardly reduce x-risk either, as no secret research program can guarantee 100% leak prevention, and leaked knowledge may have a more negative effect than the same knowledge made freely available.)</p><p>We are, to some extent, clueless regarding the net impact of further work on the intervention.</p><hr class="dividerBlock"/><h2>Where to, from here?</h2><h2></h2><p>It might be claimed that, although we start out being clueless about the consequences of our actions, we can grow more clueful by way of intentional effort &amp; investigation. Unknown unknowns can be uncovered and incorporated into expected-value estimates. Plans can be adjusted in light of new information. Organizations can pivot as their approaches run into unexpected hurdles.</p><p>Cluelessness, in other words, might be very tractable.</p><p>This is the claim I will consider in the <a href="">next post</a>. My prior is that cluelessness is quite intractable, and that despite best efforts we will remain clueless to an important extent.</p><p>The topic definitely deserves careful examination.</p><p></p><p></p><p><em>Thanks to members of the Mather essay discussion group for thoughtful feedback on drafts of this post. Views expressed above are my own. 
Cross-posted to <a href="">my personal blog</a>.</em></p><hr class="dividerBlock"/><h2>Footnotes</h2><p>[1]: The term &quot;cluelessness&quot; is not my coinage; I am borrowing it from academic philosophy. See in particular <a href="">Greaves 2016</a>.</p><p>[2]: Indirect &amp; long-run consequences are sometimes referred to as “<a href="">flow-through effects</a>,” which, as far as I can tell, does not make a clean distinction between temporally near effects (“indirect consequences”) and temporally distant effects (“long-run consequences”). This distinction seems interesting, so I will use “indirect” &amp; “long-run” in favor of “flow-through effects.”</p><p>[3]: Thanks to Daniel Berman for making this point.</p><p>[4]: More precisely, the brute-good approach assumes that indirect &amp; long-run consequences will either:</p><ul><li>Be negligible</li><li>Cancel each other out via symmetry (see footnote 5)</li><li>On net point in the same direction as the proximate consequences (see <a href="">Cotton-Barratt 2014</a>: &quot;The upshot of this is that it is likely interventions in human welfare, as well as being immediately effective to relieve suffering and improve lives, also tend to have a significant long-term impact. This is often more difficult to measure, but the short-term impact can generally be used as a reasonable proxy.&quot;)</li></ul><p></p><p>[5]: See <a href="">Greaves 2016</a> for discussion of the symmetry argument, and in particular p. 9 for discussion of why it&#x27;s insufficient for cases of &quot;complex cluelessness.&quot; </p> milan_griffes LPMtTvfZvhZqy25Jw 2017-11-23T18:27:21.894Z Reading recommendations for the problem of consequentialist scope? <p>Determining which&#xA0;scope of outcomes to consider when making a decision seems like a difficult problem for consequentialism. By &quot;scope of outcomes&quot; I mean how far into the future and how many links in the causal chain to incorporate into decision-making. 
For example, if I&apos;m assessing the comparative goodness of two charities, I&apos;ll need to have some method of comparing&#xA0;future impacts (perhaps &quot;consider impacts that occur in the next 20 years&quot;) and flow-through contemporaneous impacts (perhaps &quot;consider the actions of the charitable recipient, but not the actions of those they interact with&quot;).<br><br>I&apos;m using&#xA0;&quot;consequentialist scope&quot; as a shorthand for this type of determination because I&apos;m not aware of a common-usage word for it.<br><br>Consequentialist scope seems both (a) important and (b) difficult to think about clearly, so I want to learn more about it. <br><br>Does anyone have&#xA0;reading recommendations for this? Philosophy papers, blog posts, books, whatever.&#xA0;I didn&apos;t encounter it in <em>Reasons and Persons</em>, but I&apos;ve only read the first third so far.</p> milan_griffes 4NnvA2sXbB87CqArF 2017-08-02T02:07:46.769Z Should Good Ventures focus on current giving opportunities, or save for future giving opportunities? <p>Around this time of year, <a href="">GiveWell</a>&#xA0;traditionally spends a lot of time thinking about game theoretic considerations &#x2013; specifically, what funding recommendation it ought to make to <a href="">Good Ventures</a>&#xA0;so that Good Ventures allocates&#xA0;its resources wisely. (Here are GiveWell&apos;s game theoretic posts from <a href="">2014</a>&#xA0;&amp; <a href="">2015</a>.)</p> <p>The main considerations here are:</p> <ol> <li>How should Good Ventures act in an environment where individual donors &amp; other foundations are also giving money?</li> <li>How should Good Ventures value its current giving opportunities compared to the giving opportunities it will have in the future?</li> </ol> <p>I&apos;m more interested in the second consideration, so that&apos;s what I&apos;ll engage with here. 
If present-day opportunities seem better than expected future opportunities, Good Ventures should fully take advantage of its current opportunities, because they are the best giving opportunities it will ever encounter. Conversely, if present-day opportunities seem worse than expected future opportunities, Good Ventures should give sparsely now, preserving its resources for the superior upcoming opportunities.</p> <p>Personally, I&apos;m bullish on present-day opportunities. Present-day opportunities seem more attractive than future ones for a couple reasons:</p> <ol> <li>The world is improving, so giving opportunities will get worse if current trends continue.</li> <li>There&apos;s a non-negligible chance that a global catastrophic risk (GCR) occurs within Good Ventures&apos; lifetime (it&apos;s a <a href="">&quot;burn-down&quot; foundation</a>), thus nullifying any future giving opportunities.</li> <li>Strong AI might emerge sometime in the next 30 years. This could be a global catastrophe, or it could ferry humanity into a post-scarcity environment, wherein philanthropic giving opportunities are either dramatically reduced or entirely absent.</li> </ol> <p>So far, my reasoning has been qualitative, and <a href="">if it&apos;s worth doing, it&apos;s worth doing with made-up numbers</a>, so let&apos;s assign some subjective probabilities to the different scenarios we could encounter (in the next 30 years):</p> <ul> <li>P(current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs) = 30%</li> <li>P(current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn&apos;t lead to post-scarcity) = 56%</li> <li>P(strong AI leads to a post-scarcity economy) = 5%</li> <li>P(strong AI leads to a global catastrophe) = 2%</li> <li>P(a different GCR occurs) = 7%</li> </ul> <p>To assess the expected value of these scenarios, we also have to assign a utility score to each scenario (obviously, the following is 
incredibly rough):</p> <ul> <li>Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs = Baseline</li> <li>Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn&apos;t lead to post-scarcity = 2x as good as baseline</li> <li>Strong AI leads to a post-scarcity economy = 100x as good as baseline</li> <li>Strong AI leads to a global catastrophe = 0x as good as baseline</li> <li>A different GCR occurs = 0x as good as baseline</li> </ul> <p>Before calculating the expected value of each scenario, let&apos;s unpack my assessments a bit. I&apos;m imagining &quot;baseline&quot; goodness as essentially things as they are right now, with no dramatic changes to human happiness in the next 30 years. If quality of life broadly construed continues to improve over the next 30 years, I assess that as twice as good as the baseline scenario.</p> <p>Achieving post-scarcity in the next 30 years is assessed as 100x as good as the baseline scenario of no improvement. 
(Arguably this could be nearly infinitely better than baseline, but to avoid Pascal&apos;s mugging we&apos;ll cap it at 100x.)</p> <p>A global catastrophe in the next 30 years is assessed as 0x as good as baseline.</p> <p>Again, this is all very rough.</p> <p>Now, calculating the expected value of each outcome is straightforward:</p> <ul> <li>Expected value of <em>current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs</em> = 0.3 x 1 = 0.3</li> <li>Expected value of <em>current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn&apos;t lead to post-scarcity</em> = 0.56 x 2 = 1.12</li> <li>Expected value of <em>strong AI leads to a post-scarcity economy</em> = 0.05 x 100 = 5</li> <li>Expected value of <em>strong AI leads to a global catastrophe</em> = 0.02 x 0 = 0</li> <li>Expected value of <em>a different GCR occurs</em> = 0.07 x 0 = 0</li> </ul> <p>And each scenario maps to a now-or-later giving decision:</p> <ul> <li>Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs &#x2013;&gt; Give later (because new opportunities may be discovered)</li> <li>Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn&apos;t lead to post-scarcity &#x2013;&gt; Give now (because the best giving opportunities are the ones we&apos;re currently aware of)</li> <li>Strong AI leads to a post-scarcity economy &#x2013;&gt; Give now (because philanthropy is obsolete in post-scarcity)</li> <li>Strong AI leads to a global catastrophe (GCR) &#x2013;&gt; Give now (because philanthropy is nullified by a global catastrophe)</li> <li>A different GCR occurs &#x2013;&gt; Give now (because philanthropy is nullified by a global catastrophe)</li> </ul> <p>So, we can add up the expected values of all the &quot;give now&quot; scenarios and all the &quot;give later&quot; scenarios, and see which sum is higher:</p> <ul> <li><em>Give now</em> total 
expected value = 1.12 + 5 + 0 + 0 = 6.12</li><li><em>Give later</em> total expected value = 0.3</li></ul> <p>This is a little strange because GCR outcomes are given no weight, but in reality if we were faced with a substantial risk of a global catastrophe, that would strongly influence our decision-making. Maybe the proper way to do this is to assign a negative value to GCR outcomes and include them in the &quot;give later&quot; bucket, but that pushes even further in the direction of &quot;give now&quot; so I&apos;m not going to fiddle with it here.</p> <p>Comparing the sums&#xA0;shows that, in expectation, giving now will lead to substantially more value. Most of this is driven by the post-scarcity variable, but even with post-scarcity outcomes excluded, I still assess &quot;give now&quot; scenarios to have about 4x the expected value of &quot;give later&quot; scenarios.</p> <p>Yes, this exercise is ad-hoc and a little silly. Others&#xA0;could&#xA0;assign&#xA0;different probabilities &amp; utilities, which would&#xA0;lead them to different conclusions. But the point the exercise illustrates is important: if you&apos;re like me in thinking that, over the next 30 years, things are most likely going to continue slowly improving with some chance of a trend reversal and a tail risk of major societal disruption, then in expectation, present-day giving opportunities are a better bet than future giving opportunities.</p> <p>&#xA0;---</p> <p><strong>Disclosure:</strong>&#xA0;I used to work at GiveWell.</p> <p>A version of this post appeared on <a href="">my personal blog</a>.</p> milan_griffes CdS9JSRLYchehTMhN 2016-11-07T16:10:29.709Z
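<p>The expected-value bookkeeping in the post above is easy to reproduce and tweak in a few lines of code. The sketch below (Python, using the post's own made-up probabilities and utility multipliers; the scenario labels are abbreviations of the post's wording) recomputes the "give now" and "give later" totals:</p>

```python
# Reproduce the post's expected-value exercise: each scenario gets a
# subjective probability (over the next 30 years), a utility relative
# to baseline, and a verdict on whether it favors giving now or later.
scenarios = {
    "improvement stalls or reverses; no strong AI, no GCRs": (0.30, 1,   "later"),
    "improvement continues; no GCRs, no post-scarcity AI":   (0.56, 2,   "now"),
    "strong AI leads to post-scarcity":                      (0.05, 100, "now"),
    "strong AI leads to a global catastrophe":               (0.02, 0,   "now"),
    "a different GCR occurs":                                (0.07, 0,   "now"),
}

# Sanity check: the scenarios are exhaustive and mutually exclusive.
assert abs(sum(p for p, _, _ in scenarios.values()) - 1.0) < 1e-9

give_now = sum(p * u for p, u, when in scenarios.values() if when == "now")
give_later = sum(p * u for p, u, when in scenarios.values() if when == "later")
print(round(give_now, 2), round(give_later, 2))  # → 6.12 0.3

# Even with the dominant post-scarcity term removed, "give now" still
# wins by roughly 4x (1.12 vs 0.3), matching the post's conclusion.
print(round((give_now - 0.05 * 100) / give_later, 1))  # → 3.7
```

<p>Changing the probabilities or utilities and re-running is a quick way to check how sensitive the "give now" conclusion is to the made-up inputs.</p>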