milan_griffes feed - EA Forum Reader milan_griffes’s posts and comments on the Effective Altruism Forum en-us Comment by Milan_Griffes on Burnout: What is it and how to Treat it. https://forum.effectivealtruism.org/posts/NDszJWMsdLCB4MNoy/burnout-what-is-it-and-how-to-treat-it#CuLMiwdiZFjTgyFGB <p>Could a Forum mod give more context about the deleted comment on this post? </p><p>(The one deleted by Max Dalton on 2018-11-8)</p><p>Seems like the comment was deleted by a mod, not by the comment author. If that&#x27;s correct...</p><ul><li>Was the comment author informed that their comment was being deleted? </li><li>Did the comment author consent to have their comment deleted?</li><li>What about the comment led to it being deleted by a mod?</li></ul><p></p><p>This is surprising to me; I wasn&#x27;t aware that EA Forum mods were deleting content from the Forum.</p> milan_griffes CuLMiwdiZFjTgyFGB 2019-04-20T00:14:49.841Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#XmS6iHkMg64ZY2CDm <p>Thanks, this is helpful.</p><p>It&#x27;s tricky to think about from my perspective – two true things seem in tension:</p><ul><li>(1) A new business is being started, and the founder definitely wants people to use their product.</li><li>(2) A new business is being started <em><strong>because</strong> the founder thinks starting it is an impactful thing to do. </em></li></ul><p>So it feels like there&#x27;s a balance to strike between (1) &amp; (2) in the communications around the retreat. </p><p>Following (1), we&#x27;d want more folks to attend the retreat. Following (2), we&#x27;re indifferent to whether folks attend the retreat, and really just want to get people&#x27;s thoughts on the retreat as a new EA project.</p><p>Does the tension I&#x27;m pointing to make sense?</p><p>[Disclosure: I&#x27;m helping out Atman Retreat as an advisor; views are my own]</p> milan_griffes XmS6iHkMg64ZY2CDm 2019-04-19T17:29:57.627Z Comment by Milan_Griffes on Should EA grantmaking be subject to independent audit? https://forum.effectivealtruism.org/posts/bFM5ZorxkPTtoPiec/should-ea-grantmaking-be-subject-to-independent-audit#sj57ApMpYWKdqvvgm <p>Tangentially relevant: <a href="https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing">https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing</a></p> milan_griffes sj57ApMpYWKdqvvgm 2019-04-19T16:22:02.300Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#byeEN2wxMN3F597Sy <p>Also note that the Openness result Scott talks about hasn&#x27;t replicated: <a href="https://www.enthea.net/griffiths-2017-2.html">https://www.enthea.net/griffiths-2017-2.html</a></p><p>(More research needed, as always.)</p> milan_griffes byeEN2wxMN3F597Sy 2019-04-19T16:07:00.881Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#FG56nMjkDSq4PQeCm <p>Thanks. The Slate Star Codex post is definitely interesting, though it&#x27;s easy to construct a set of countervailing examples – people who use psychedelics &amp; seem pretty sensible (e.g. 
Steve Jobs, Eric Weinstein, Tim Ferriss, off the top of my head).</p><p><em>edit:</em> Sam Harris, Elon Musk, Aldous Huxley are also in the &quot;use psychedelics &amp; seem pretty sensible&quot; category.</p><p>Also, Gregory was noting a correlation within EA specifically; none of these examples speak to that.</p> milan_griffes FG56nMjkDSq4PQeCm 2019-04-19T16:05:22.587Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#wPr4FxyHK7Ag3AAtP <blockquote>This trend is a weak one, with many exceptions; I also don&#x27;t know about direction of causation. Yet this is enough to make me recommend that taking psychedelics to &#x27;make one a better EA&#x27; is very ill-advised.</blockquote><p></p><p>Given the weakness of the trend &amp; uncertainty about how the causation runs, &quot;very ill-advised&quot; seems too strong.</p><p>Also your view doesn&#x27;t account for the potential upsides of psychedelic use.</p> milan_griffes wPr4FxyHK7Ag3AAtP 2019-04-19T06:02:47.769Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#b3sAKHwABhTknmJ3t <blockquote>... in EA, psychedelic use seems to go along with a cluster of bad epistemic practice (e.g. pseudoscience, neurobabble, &#x27;enlightenment&#x27;, obscurantism).</blockquote><p></p><p>Could you link to some public-facing examples of the bad epistemic practice you have in mind?</p><p>(I don&#x27;t share your intuition so would like to get a better idea of what&#x27;s generating it.)</p> milan_griffes b3sAKHwABhTknmJ3t 2019-04-19T06:01:06.344Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#MddknxcF93mr7KGk4 <blockquote>... 
I think a better set up would have been: &#x27;psychedelics look good whether you just value the near-term or the long-term&#x27;</blockquote><p></p><p>Good point – I agree that the near-term / long-term distinction is better for this.</p> milan_griffes MddknxcF93mr7KGk4 2019-04-19T01:04:46.616Z Comment by Milan_Griffes on Complex value & situational awareness https://forum.effectivealtruism.org/posts/JmPQKoWjF6p9pahQx/complex-value-and-situational-awareness#khZEZB9hTWXgsdbJx <p>Also I&#x27;m not confident you internalized this part of the post:</p><blockquote>Maintaining situational awareness dovetails nicely with adding complex value – the better your situational awareness, the more opportunities for adding complex value you&#x27;ll see.</blockquote><p></p><p>Could be restated as: &quot;situational awareness uncovers opportunities for adding complex value.&quot; </p><p>In some sense, &quot;person was situationally aware&quot; is upstream of all examples of adding complex value.</p> milan_griffes khZEZB9hTWXgsdbJx 2019-04-18T23:58:03.671Z Comment by Milan_Griffes on Thoughts on 80,000 Hours’ research that might help with job-search frustrations https://forum.effectivealtruism.org/posts/YHyvjYSEQtp3nfd6c/thoughts-on-80-000-hours-research-that-might-help-with-job#PL5TD2PrC8PKdxQcj <p>Perhaps the case for keeping the page up has something to do with the page being highly ranked on Google search...</p> milan_griffes PL5TD2PrC8PKdxQcj 2019-04-18T23:10:42.443Z Comment by Milan_Griffes on Complex value & situational awareness https://forum.effectivealtruism.org/posts/JmPQKoWjF6p9pahQx/complex-value-and-situational-awareness#DgQ3yijyy6Ee9vAGF <p>Some of Eli Tyre&#x27;s work is also a good example of this. Details on <a href="https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#KL9zrgxkY9kvXr7Pq">this thread</a>.</p> milan_griffes DgQ3yijyy6Ee9vAGF 2019-04-18T22:44:16.059Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#vDWxgYAdMThj4ZCE2 <blockquote>otherwise why would there be <em>no</em> studies listed that found no effect or a negative effect?</blockquote><p></p><p>Aaron did link to <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4813425/">Nichols 2016</a>, a review article on psychedelics that includes discussion of associated risks and potential side effects.</p><p>The academic research on psychedelics has been generally very positive. </p><p>&quot;Bad trips&quot; may not even be a pathology – see <a href="https://www.ncbi.nlm.nih.gov/pubmed/27578767">Carbonaro et al. 2016,</a> a survey of people who&#x27;d reported having bad trips. Carbonaro et al. found that 84% of people who had bad trips &quot;endorsed benefiting from the experience.&quot;</p> milan_griffes vDWxgYAdMThj4ZCE2 2019-04-18T22:09:05.064Z Comment by Milan_Griffes on A mental health resource for EA community https://forum.effectivealtruism.org/posts/CJZGFxzHfdPuu2X76/a-mental-health-resource-for-ea-community#EQMpjsbdoaE7oHTER <blockquote>Common triggers of psychosis... 
some medications or drugs, especially marijuana, psychedelics, MDMA</blockquote><p></p><p>Re: psychedelics &amp; psychosis risk, see <a href="https://www.ncbi.nlm.nih.gov/pubmed/23976938">Krebs &amp; Johansen 2013</a>, a study of National Survey on Drug Use and Health data (<em>n</em> = 130,152) which found:</p><p></p><blockquote>21,967 respondents (13.4% weighted) reported lifetime psychedelic use. There were no significant associations between lifetime use of any psychedelics, lifetime use of specific psychedelics (LSD, psilocybin, mescaline, peyote), or past year use of LSD and increased rate of any of the mental health outcomes.</blockquote><blockquote>Rather, in several cases psychedelic use was associated with lower rate of mental health problems.</blockquote><p></p><p>More detail on <a href="https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#YH5xqdNdB7KJA68uc">this comment thread</a>.</p> milan_griffes EQMpjsbdoaE7oHTER 2019-04-18T22:02:14.699Z Comment by Milan_Griffes on Is Modern Monetary Theory a good idea? https://forum.effectivealtruism.org/posts/bbTPAN83fk3TR7gwr/is-modern-monetary-theory-a-good-idea#FqQ4SSFS8rQpHuS8q <p>+1</p><p>I think monetary policy etc. has a lot of relevance to things EA cares about.</p><p>Happily it&#x27;s already on Open Phil&#x27;s radar: <a href="https://www.openphilanthropy.org/research/cause-reports/macroeconomic-policy">https://www.openphilanthropy.org/research/cause-reports/macroeconomic-policy</a></p> milan_griffes FqQ4SSFS8rQpHuS8q 2019-04-18T21:23:10.982Z Comment by Milan_Griffes on Complex value & situational awareness https://forum.effectivealtruism.org/posts/JmPQKoWjF6p9pahQx/complex-value-and-situational-awareness#xxEkf8Wa9siLnEPho <p>Actually, I&#x27;ve gotten a lot of personal value from being on a couple facebook messenger groups he curates. 
That&#x27;s not confidential :-)</p> milan_griffes xxEkf8Wa9siLnEPho 2019-04-18T19:24:02.982Z Comment by Milan_Griffes on Complex value & situational awareness https://forum.effectivealtruism.org/posts/JmPQKoWjF6p9pahQx/complex-value-and-situational-awareness#Jtu6thdu2WsLLyWCE <p>Unfortunately (and also related to some of the points of the OP), all the concrete examples that come to mind are confidential.</p> milan_griffes Jtu6thdu2WsLLyWCE 2019-04-18T19:22:00.061Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#goPw8dLaNjqfTMnes <blockquote>(Also FYI, the findings from the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3537171/">2011 paper</a> SSC references haven&#x27;t been replicated.)</blockquote><p></p><p>Here&#x27;s more on one failure to replicate the Openness result: <a href="https://www.enthea.net/griffiths-2017-2.html">https://www.enthea.net/griffiths-2017-2.html</a></p><p>(More research needed, as always.)</p> milan_griffes goPw8dLaNjqfTMnes 2019-04-18T19:03:25.645Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#mtcpQut95okEJQhp2 <blockquote>I don&#x27;t see this as a risk for EA/rationalist types though, and would argue that pretty strongly.</blockquote><p></p><p>+1</p><p>I&#x27;m very bullish on more big-five openness in the rationality &amp; EA communities, personally.</p> milan_griffes mtcpQut95okEJQhp2 2019-04-18T18:57:02.874Z Comment by Milan_Griffes on Complex value & situational awareness https://forum.effectivealtruism.org/posts/JmPQKoWjF6p9pahQx/complex-value-and-situational-awareness#FbLPo3KYTBhqMW7mt <p>&quot;Staying relevant is my long goal&quot;</p><p>-- <a href="https://www.quora.com/profile/Alex-K-Chen">Alex Chen</a></p> milan_griffes FbLPo3KYTBhqMW7mt 2019-04-18T18:54:02.071Z Comment by Milan_Griffes on Complex value & situational awareness https://forum.effectivealtruism.org/posts/JmPQKoWjF6p9pahQx/complex-value-and-situational-awareness#RzCLrhbwHRdbBwd6H <blockquote>Are there particular instances in which you think someone has generated a lot of value by &quot;maintaining situational awareness&quot;?</blockquote><p></p><p>Alex Chen is the archetype of &quot;generating value by maintaining situational awareness&quot;</p><p><a href="https://www.quora.com/profile/Alex-K-Chen">https://www.quora.com/profile/Alex-K-Chen</a></p><p>(Among other things, he was the top question-asker on Quora for a long time, and perhaps still holds the record for &quot;asked most questions on Quora&quot;)</p><p>He&#x27;s an amazing networker, which is enabled by his situational awareness. 
(Really anyone who&#x27;s a good networker is so because of their good situational awareness.)</p> milan_griffes RzCLrhbwHRdbBwd6H 2019-04-18T18:48:54.767Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#eCqhEwdgEWo7aJZTA <p>The retreat price is listed in the &quot;Upcoming Retreats&quot; section, near the bottom of the homepage: <a href="https://atmanretreat.com">https://atmanretreat.com</a></p> milan_griffes eCqhEwdgEWo7aJZTA 2019-04-18T18:38:29.871Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#Aoi2GsXxhMqruyXy3 <blockquote>I figured the OP was suggesting that people go to the retreat?</blockquote><p></p><p>Can you point me to the place(s) where the OP is suggesting people go on these retreats?</p><p>Perhaps this is the part you have in mind:</p><blockquote>We’re launching our first retreats in June, and we think this is a great opportunity to try psychedelics in a safe, legal setting, along with other people who are excited about EA.</blockquote><p>Or maybe this is more a subtextual thing you&#x27;re picking up on?</p> milan_griffes Aoi2GsXxhMqruyXy3 2019-04-18T18:24:15.970Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#J2ZthGzqFTqdWPbmY <p>I&#x27;m not sure where it&#x27;s going either :-)</p><p>You drew a distinction between the comparison posts I linked to &amp; the OP. I was confused by the distinction you were drawing. I asked for clarification.</p> milan_griffes J2ZthGzqFTqdWPbmY 2019-04-18T18:22:38.586Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#9emjfned4kd3GN9Ke <blockquote>Some* of the other posts you link to also ask things of their readers, but they also present a case for why that ask is a particularly exceptional use of resources.</blockquote><p></p><p>As far as I can tell, OP isn&#x27;t making an ask of its readers – to me it reads as &quot;FYI here&#x27;s a new thing you might be interested in!&quot;</p><p>Can you point me to the parts where it seems like it&#x27;s making an ask? </p><p>[Disclosure: I&#x27;m helping out Atman Retreat as an advisor; views are my own]</p> milan_griffes 9emjfned4kd3GN9Ke 2019-04-18T17:47:39.361Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#YH5xqdNdB7KJA68uc <p>Re: psychedelics &amp; psychosis risk, see <a href="https://www.ncbi.nlm.nih.gov/pubmed/23976938">Krebs &amp; Johansen 2013</a>, a study of National Survey on Drug Use and Health data (<em>n</em> = 130,152) which found:</p><blockquote>21,967 respondents (13.4% weighted) reported lifetime psychedelic use. There were no significant associations between lifetime use of any psychedelics, lifetime use of specific psychedelics (LSD, psilocybin, mescaline, peyote), or past year use of LSD and increased rate of any of the mental health outcomes. 
</blockquote><blockquote>Rather, in several cases psychedelic use was associated with lower rate of mental health problems.</blockquote><p></p><p>Unfortunately, it&#x27;s not a randomized, forward-looking trial. I personally give high-quality retrospective survey research like this some weight when thinking through the risks associated with psychedelics. (And more research is needed, as always.)</p> milan_griffes YH5xqdNdB7KJA68uc 2019-04-18T16:42:18.578Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#GnYj2AwLBAP4TSesX <p>Got it, thanks.</p><p>fwiw, I posted stuff like this about 20 months ago, while still in the &quot;idea&quot; phase, and also received a chilly reception then. (I&#x27;ve since removed those posts to avoid Google indexing them.)</p> milan_griffes GnYj2AwLBAP4TSesX 2019-04-18T16:36:14.305Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#cGiCZSHnmkPYuvHqD <blockquote>I think the EA Forum should be used for sharing ideas with the EA community and receiving useful feedback. I don&#x27;t think this post is a good example of that.</blockquote><p></p><p>Could you expand a little more about why it&#x27;s not a good example of &quot;sharing ideas with the EA community and receiving useful feedback&quot;?</p><p>i.e. expand more on how &quot;selling something that you think is a good idea&quot; differs from &quot;sharing something that you think is a good idea&quot;?</p> milan_griffes cGiCZSHnmkPYuvHqD 2019-04-18T15:57:39.802Z Comment by Milan_Griffes on Thoughts on 80,000 Hours’ research that might help with job-search frustrations https://forum.effectivealtruism.org/posts/YHyvjYSEQtp3nfd6c/thoughts-on-80-000-hours-research-that-might-help-with-job#BJZrWsgMvirKmoL7C <p>Ah, I hadn&#x27;t taken the quiz in a couple years. Looks like they&#x27;ve changed it since then.</p><p>I just tried 5 different answer-configurations of the quiz: <a href="https://80000hours.org/career-quiz/">https://80000hours.org/career-quiz/</a></p><p>And got &quot;congressional staffer&quot; or &quot;policy-oriented government job&quot; for all configurations. Guess I should move to DC.</p> milan_griffes BJZrWsgMvirKmoL7C 2019-04-18T06:11:39.254Z Comment by Milan_Griffes on EA Hotel fundraiser 4: concrete outputs after 10 months https://forum.effectivealtruism.org/posts/dLGM88JRE96iHd7z4/ea-hotel-fundraiser-4-concrete-outputs-after-10-months#Zd2HgKrLgcdhM2BAt <p>Maybe <a href="https://www.effectivealtruism.org/grants/">EA Grants</a> will be up-and-running in time to make a difference? 
(I&#x27;ll check with Nicole.)</p> milan_griffes Zd2HgKrLgcdhM2BAt 2019-04-18T05:54:56.685Z Comment by Milan_Griffes on EA Hotel fundraiser 4: concrete outputs after 10 months https://forum.effectivealtruism.org/posts/dLGM88JRE96iHd7z4/ea-hotel-fundraiser-4-concrete-outputs-after-10-months#wcvAS4H5snM4fKy4q <p>Ah, sad.</p> milan_griffes wcvAS4H5snM4fKy4q 2019-04-18T05:47:07.548Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#29vsR6c75cR6j24Zv <p>For additional background, here&#x27;s a short (&amp; lossy) argument for psychedelics as <a href="https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x">Cause X</a>:</p><ul><li>Almost everyone in EA holds either a <a href="https://80000hours.org/articles/future-generations/">longtermist view</a> or a <a href="https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health#S9PfzbYHinhD8JHRG">person-affecting view</a></li><li>If you hold a <a href="https://80000hours.org/articles/future-generations/">longtermist view</a> &amp; you&#x27;re a consequentialist (as far as I know, most longtermists are consequentialist), <a href="https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless">consequentialist cluelessness</a> seems like a major theoretical problem</li><ul><li>Basically, trying to assess outcomes occurring 10,000+ years in the future breaks the consequentialism algorithm (because we have no visibility into these outcomes)</li><li>Given this theoretical problem, longtermist cause prioritization should include &quot;how robust to cluelessness is this?&quot; as a major factor</li><li>Some x-risk interventions seem pretty robust to cluelessness</li><li>Also robust: interventions that increase the set of well-intentioned + capable people</li><ul><li><a href="http://rationality.org/">CFAR</a> &amp; <a href="http://paradigmacademy.co/">Paradigm Academy</a> are aimed at this</li><li>The psychedelic experience also seems like a plausible lever on increasing capability (via reducing negative self-talk &amp; other mental blocks) and improving intentions (via <a href="https://en.wikipedia.org/wiki/Ego_death">ego dissolution</a> changing one&#x27;s metaphysical assumptions)</li></ul><li><strong>Ergo: </strong>under a longtermist view, psychedelic interventions are plausibly in the same ballpark of effectiveness as x-risk interventions &amp; other interventions that increase the set of well-intentioned + capable people</li></ul><li>If you hold a <a href="https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health#S9PfzbYHinhD8JHRG">person-affecting view</a>, mental health seems like a cause area on par with global poverty (see Michael Plant&#x27;s <a href="https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health">cause profile on mental health</a>)</li><ul><li>Psychedelics are showing a ton of <a href="https://www.enthea.net/pages/recommended-reading.html">promise as treatment</a> for a battery of chronic mental health issues (including anxiety, depression, OCD, and addictive disorders including smoking &amp; alcoholism)</li><li><strong>Ergo:</strong> under a person-affecting view, psychedelic interventions are plausibly in the same ballpark of effectiveness as global poverty interventions</li></ul></ul><p></p><p>If there&#x27;s sufficient interest, I&#x27;ll take the time 
to make the above argument more rigorous.</p><p>[Disclosure: I&#x27;m helping out Atman Retreat as an advisor; views are my own]</p> milan_griffes 29vsR6c75cR6j24Zv 2019-04-17T23:49:27.682Z Comment by Milan_Griffes on Reducing EA job search waste https://forum.effectivealtruism.org/posts/6Dqb8Fkh2AnhzbAAM/reducing-ea-job-search-waste#QdgXzSgQcvBYQryfk <p>+1 to Ought giving great job-search feedback.</p> milan_griffes QdgXzSgQcvBYQryfk 2019-04-17T23:00:11.787Z Comment by Milan_Griffes on Thoughts on 80,000 Hours’ research that might help with job-search frustrations https://forum.effectivealtruism.org/posts/YHyvjYSEQtp3nfd6c/thoughts-on-80-000-hours-research-that-might-help-with-job#4ssd6XK8nH6wHb5L2 <p>fwiw I&#x27;ve never gotten those outcomes when I&#x27;ve taken the quiz.</p> milan_griffes 4ssd6XK8nH6wHb5L2 2019-04-17T22:43:29.593Z Comment by Milan_Griffes on EA Hotel fundraiser 4: concrete outputs after 10 months https://forum.effectivealtruism.org/posts/dLGM88JRE96iHd7z4/ea-hotel-fundraiser-4-concrete-outputs-after-10-months#Gdr4hD9ehADgqtcKo <p>What&#x27;s the probability of out-of-cycle grant consideration for the EA Hotel, given that they&#x27;re in a funding crunch &amp; you&#x27;ve broadly come around to thinking that it&#x27;s a good idea?</p> milan_griffes Gdr4hD9ehADgqtcKo 2019-04-17T22:42:02.830Z Comment by Milan_Griffes on Legal psychedelic retreats launching in Jamaica https://forum.effectivealtruism.org/posts/H8gC7gLfFvCTwLDsf/legal-psychedelic-retreats-launching-in-jamaica#nKnF64v4CYp8MQEr9 <blockquote>This feels inappropriate. I don&#x27;t want to be sold things on the EA Forum.</blockquote><p></p><p>I don&#x27;t see how this post is substantively different from previous content like:</p><ul><li><a href="https://forum.effectivealtruism.org/posts/jv2og63wwa7rkakf5/cea-is-fundraising-for-2019">CEA is Fundraising for 2019</a></li><li><a href="https://forum.effectivealtruism.org/posts/fzS9YHvZzkCjh5dXp/how-big-a-deal-could-gwwc-be-pretty-big">How big a deal could GWWC be? Pretty big.</a></li><li><a href="https://forum.effectivealtruism.org/posts/f97Was6q9Tw6qcczF/giving-what-we-can-is-still-growing-at-a-surprisingly-good">Giving What We Can is still growing at a surprisingly good pace</a></li><li><a href="https://forum.effectivealtruism.org/posts/WKkF36bJsH8FmYZkw/why-donate-to-80-000-hours">Why donate to 80,000 Hours</a></li></ul><p></p><p>All of which have a self-promotional slant. </p><p>For-profit businesses just have a different revenue structure than non-profit organizations. I think self-promotion of both for-profit &amp; non-profit projects that are being undertaken for EA reasons should be fair game for the Forum.</p><p>[Disclosure: I&#x27;m helping out Atman Retreat as an advisor; views are my own]</p> milan_griffes nKnF64v4CYp8MQEr9 2019-04-17T22:05:08.175Z Comment by Milan_Griffes on The Turing Test podcast is back with Bryan Caplan! 
https://forum.effectivealtruism.org/posts/6JxWPnt4ThujaTKZH/the-turing-test-podcast-is-back-with-bryan-caplan#2qd7ghKryawqwNnM7 <p>Just want to plug that this interview is amazing: literally any arbitrarily-selected 20-minute slice is full of great content &amp; Caplan-isms.</p> milan_griffes 2qd7ghKryawqwNnM7 2019-04-17T21:55:05.697Z Comment by Milan_Griffes on EA Hotel fundraiser 4: concrete outputs after 10 months https://forum.effectivealtruism.org/posts/dLGM88JRE96iHd7z4/ea-hotel-fundraiser-4-concrete-outputs-after-10-months#s9AWTGKYSdJoZprH6 <p>They <a href="https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#QJdmWniGcSCZ2ivcN">applied to the Long Term Future Fund</a> April 2019 grant round, and were rejected.</p> milan_griffes s9AWTGKYSdJoZprH6 2019-04-17T21:48:59.847Z Comment by Milan_Griffes on Should EA grantmaking be subject to independent audit? https://forum.effectivealtruism.org/posts/bFM5ZorxkPTtoPiec/should-ea-grantmaking-be-subject-to-independent-audit#mx9Y7awmNc3dsFDs2 <p>I don&#x27;t have a particular specification in mind. <a href="https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#d4YHzSJnNWmyxf6HM">Evan&#x27;s comment</a> seems like a good starting point for what audits could look like.</p> milan_griffes mx9Y7awmNc3dsFDs2 2019-04-17T20:35:06.862Z Comment by Milan_Griffes on Should EA grantmaking be subject to independent audit? https://forum.effectivealtruism.org/posts/bFM5ZorxkPTtoPiec/should-ea-grantmaking-be-subject-to-independent-audit#pQi3imA5jejQcvMnL <p>I think a lot of the value of an internal audit business unit flows from other stakeholders <em>being aware that internal audit exists, during their decision-making process. </em></p><p>i.e. the common knowledge that internal audit is a thing / one&#x27;s decisions will be later subject to independent scrutiny generates a lot of the value.</p> milan_griffes pQi3imA5jejQcvMnL 2019-04-17T19:36:05.195Z Comment by Milan_Griffes on Who is working on finding "Cause X"? https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#6pQrrcHZPnF3ZZTXk <blockquote>As far as I&#x27;m aware, no other cause pri efforts have been predicated on the theme of &#x27;finding Cause X.&#x27;</blockquote><p></p><p><a href="https://www.openphilanthropy.org/research/cause-reports">https://www.openphilanthropy.org/research/cause-reports</a></p> milan_griffes 6pQrrcHZPnF3ZZTXk 2019-04-17T17:50:08.498Z Comment by Milan_Griffes on Who is working on finding "Cause X"? https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#LzhWSa9vLuaYYFgJj <blockquote>I don&#x27;t see how it makes sense to anyone as a practical pursuit.</blockquote><p></p><p>GiveWell &amp; Open Phil have at times undertaken systematic reviews of plausible cause areas; their general framework for this seems quite practical.</p><p></p><blockquote>That&#x27;s because to its nature, based on the overall soundness in the fundamental assumptions behind x-risk, its basically binary whether it is or isn&#x27;t the top priority.</blockquote><p></p><p>Pretty strongly disagree with this. I think there&#x27;s a strong case for x-risk being a priority cause area, but I don&#x27;t think it dominates all other contenders. 
(More on this <a href="https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless">here</a>.)</p> milan_griffes LzhWSa9vLuaYYFgJj 2019-04-17T17:48:54.147Z Should EA grantmaking be subject to independent audit? https://forum.effectivealtruism.org/posts/bFM5ZorxkPTtoPiec/should-ea-grantmaking-be-subject-to-independent-audit <p>Inspired by this <a href="https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#d4YHzSJnNWmyxf6HM">awesome comment</a> by Evan Gaensbauer. </p><p>Tangentially related: comment thread about <a href="https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#u46N5SY9bvefegm2t">compensating Long Term Future Fund grantmakers</a></p><hr class="dividerBlock"/><p>Evan did a great, <a href="https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#d4YHzSJnNWmyxf6HM">independent audit of the recent round of Long Term Future Fund grants</a>.</p><p>This seems like a clear value-add. It&#x27;d be good to have similar audits happen for each round of EA grantmaking.</p><p>Currently, we&#x27;re relying on folks taking it upon themselves to do audits like this. There&#x27;s no formal structure, and if such audits stopped, there wouldn&#x27;t be any institutional consequences (though there&#x27;d likely be indirect consequences, such as lower grantmaking quality over time).</p><p>Given that audits like this seem like a clear value-add to the EA grantmaking process, building formal structure to support grantmaking audits seems leveraged.</p><p>Many firms do something analogous by maintaining &quot;Internal Audit&quot; business units, though it&#x27;s not clear that those units are well-designed. See also <a href="https://forum.effectivealtruism.org/posts/jDLzettqkp4Ec4ktn/link-open-phil-s-2019-progress-and-plans-update">Open Phil&#x27;s recent announcement</a> of their intention to build out an internal &quot;Impact Evaluation&quot; function. </p><p>Should the EA community find a way to financially support independent audits of its grantmaking? </p> milan_griffes bFM5ZorxkPTtoPiec 2019-04-17T17:18:32.303Z Comment by Milan_Griffes on Long Term Future Fund: April 2019 grant decisions https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#Bj9JCyxN7N3QmrGX3 <p>This is great! Thank you for the care &amp; attention you put into creating this audit.</p> milan_griffes Bj9JCyxN7N3QmrGX3 2019-04-17T16:58:47.615Z Is Modern Monetary Theory a good idea? https://forum.effectivealtruism.org/posts/bbTPAN83fk3TR7gwr/is-modern-monetary-theory-a-good-idea <p><a href="https://www.vox.com/future-perfect/2019/4/16/18251646/modern-monetary-theory-new-moment-explained">https://www.vox.com/future-perfect/2019/4/16/18251646/modern-monetary-theory-new-moment-explained</a></p><p>I don&#x27;t know very much about this, but it seems like MMT is getting enough cultural / political purchase that it could end up having a big impact.</p><p>Either a big positive impact or a big negative impact, depending on how sensible it is.</p><p>Does anyone with MMT expertise have a take? </p> milan_griffes bbTPAN83fk3TR7gwr 2019-04-16T21:25:30.508Z What Master's is the best preparation for an Econ PhD? 
https://forum.effectivealtruism.org/posts/kh9PDKQ8xj4sTcMbp/what-master-s-is-the-best-preparation-for-an-econ-phd <p>Previously: <a href="https://forum.effectivealtruism.org/posts/cjNkvXSFPxTuYBaaZ/what-type-of-master-s-is-best-for-ai-policy-work">What type of Master&#x27;s is best for AI policy work?</a></p><hr class="dividerBlock"/><p>Pursuing a PhD in Economics is one of <a href="https://80000hours.org/career-reviews/economics-phd/">80,000 Hours&#x27; recommended career paths</a> (<a href="https://web.archive.org/web/20180909075700/https://80000hours.org/career-reviews/economics-phd/">a</a>). </p><p>It also might be <a href="https://www.econlib.org/archives/2005/10/is_the_econ_phd.html">a free lunch</a> (<a href="https://web.archive.org/web/20190416205836/https://www.econlib.org/archives/2005/10/is_the_econ_phd.html">a</a>). (That article is a little dated, but recent correspondence with the author leads me to believe that Caplan still thinks that an Economics PhD is a good deal, for some people &amp; some career goals.)</p><p>Entry into good Econ PhD programs is very competitive. It&#x27;s especially daunting to consider if, like me, you don&#x27;t have a quantitative undergraduate degree.</p><p>I&#x27;ve heard that getting a quantitative Master&#x27;s is a good way to boost one&#x27;s PhD applications. But there are <em>a lot </em>of Master&#x27;s programs out there, and presumably some are better than others for going on to an Econ PhD.</p><p>So, does anyone have thoughts on which specific Master&#x27;s programs are best-tailored for applying to competitive Econ PhD programs upon completion?</p><p></p> milan_griffes kh9PDKQ8xj4sTcMbp 2019-04-16T21:04:18.295Z Comment by Milan_Griffes on Thoughts on 80,000 Hours’ research that might help with job-search frustrations https://forum.effectivealtruism.org/posts/YHyvjYSEQtp3nfd6c/thoughts-on-80-000-hours-research-that-might-help-with-job#yTy347dSMFfwci3QL <p>Great comment.</p><p></p><blockquote>Despite recognising this on a conceptual level, I still find it hard to believe and often feel guilty (or shame or sadness) when I think of people whose &#x27;altruistic successfulness&#x27; surpasses mine.</blockquote><p></p><p>In my experience, feelings like this have flowed from not being clear on the motivations driving my actions. More on this here: <a href="https://forum.effectivealtruism.org/posts/YFiLMu9osLm3c3G8k/altruistic-action-is-dispassionate">Altruistic action is dispassionate</a></p><p>Getting clearer on the &quot;what motivations are driving this?&quot; thing has been really helpful (both for improving my subjective experience, and for boosting my efficacy). </p> milan_griffes yTy347dSMFfwci3QL 2019-04-16T19:41:35.794Z Comment by Milan_Griffes on Literature Review: Distributed Teams https://forum.effectivealtruism.org/posts/5wcvBNq9WvMuGJC3D/literature-review-distributed-teams#PwmymTqXuAmQNvNit <blockquote>-Distribution decreases bandwidth and trust (although you can make up for a surprising amount of this with well timed visits).</blockquote><blockquote>-Semi-distributed teams are worse than fully remote or fully co-located teams on basically every metric. The politics are worse because geography becomes a fault line for factions, and information is lost because people incorrectly count on proximity to distribute information.</blockquote><p></p><p>+1 to these two points. 
(Elizabeth &amp; I both worked at <a href="https://www.wave.com/">Wave</a>, which is distributed-first.)</p><p>Relatedly, Matt of Wordpress just published a <a href="https://ma.tt/2019/04/happy-tools/">nice piece on distributed work</a>. (Wordpress is probably the biggest distributed-first company.)</p> milan_griffes PwmymTqXuAmQNvNit 2019-04-16T19:03:58.199Z Complex value & situational awareness https://forum.effectivealtruism.org/posts/JmPQKoWjF6p9pahQx/complex-value-and-situational-awareness <p><em>Epistemic status: theorizing. </em></p><p>Previously: <a href="https://forum.effectivealtruism.org/posts/vMpuXz2zqS8iHya7i/ea-jobs-provide-scarce-non-monetary-goods">EA jobs provide scarce non-monetary goods</a>; <a href="https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really">It is really, really hard to get hired by an EA organisation</a></p><hr class="dividerBlock"/><p>Here are two types of activity that (a) I genuinely enjoy and (b) seem quite useful:</p><ol><li>Adding complex value</li><li>Maintaining situational awareness</li></ol><hr class="dividerBlock"/><h2><strong>Complex value</strong></h2><p>What does &quot;adding complex value&quot; mean?</p><p>It means all the efforts (often small, often done at the margin) that are difficult to automate / formalize, and are (in aggregate) crucial for pulling a project together.</p><p>Complex value is the grease that helps all the machine&#x27;s cogs run together.</p><p>Examples:</p><ul><li>Establishing new linkages in the social graph by making introductions</li><li>Reviewing &amp; giving feedback on drafts of writing, pre-publication</li><li>Reading &amp; commenting on writing, post-publication</li><li>Having new ideas about things that would be good to do (especially things that would be good to do on the margin; big new ideas can be turned into standalone projects or companies)</li><li>Helping refine the pitch for a new idea; understanding and articulating the bear &amp; bull cases for the idea</li><li>Pitching good new ideas to relevant people that are plausibly interested</li></ul><hr class="dividerBlock"/><h2><strong>Situational awareness</strong></h2><p>What does &quot;maintaining situational awareness&quot; mean?</p><p>It&#x27;s all the reading &amp; conversations that are undertaken to learn what&#x27;s happening in the world, to keep your world-model up to date with both social reality &amp; objective, physical reality.</p><p>Maintaining situational awareness dovetails nicely with adding complex value – the better your situational awareness, the more opportunities for adding complex value you&#x27;ll see.</p><p>Examples:</p><ul><li>Lurking on twitter (especially with a well-curated feed)</li><li>Using various other social media (though the signal:noise ratio of other social media tends to be far worse than that of well-curated twitter)</li><li>Reading company &amp; project slacks</li><li>Semi-formal &quot;update&quot; conversations with other actors in project domains you care about</li><li>Informal conversations with friends who happen to work in project domains you care about</li><li>Attending conferences</li><li>Gossip</li></ul><p>Note that very different information sets flow through formal &amp; informal networks. 
These sets tend to be complementary, so it seems important to be tapped into both.</p><p>Note also that situational awareness seems distinct from &quot;learning about a subject.&quot; Probably the distinction cleaves on where most of the learning occurs – situational awareness focuses its learning on social reality (&quot;who thinks what about who/what?&quot;), whereas the locus of learning about subjects tends to be in physical reality (&quot;how does this part of physical reality work?&quot;).</p><p>Stereotypical city for situational awareness: DC<br/>Stereotypical city for learning about subjects: SF</p><hr class="dividerBlock"/><p>Unfortunately, though both adding complex value &amp; maintaining situational awareness are high-value, it&#x27;s hard to earn a living by making them your main focus.</p><p>It is <em>possible</em> to do this, e.g. one way of understanding the original pitch for <u><a href="https://www.givewell.org/">GiveWell</a></u> is &quot;create an institution in philanthropy that will aggregate explicit &amp; implicit information sets, remain at the frontier of situational awareness, and identify leveraged opportunities for adding complex value in the philanthropic sector.&quot;</p><p><u><a href="https://80000hours.org/">80,000 Hours</a></u> is another example of this, aimed at the domain of &quot;policy &amp; research careers&quot; rather than at philanthropy.</p><p>I&#x27;m still learning about how to successfully establish something like this. My current take is that (a) it&#x27;s generally hard to do, (b) the base rate of success is very low, and (c) successful attempts leaned heavily on leveraging pre-existing reputation &amp; social relationships.</p><hr class="dividerBlock"/><p><em>Cross-posted to <a href="https://www.lesswrong.com/posts/GeMrEhXZqEgcXq5mT/complex-value-and-situational-awareness">LessWrong</a> and my <a href="https://flightfromperfection.com/framework-complex-value-situational-awareness.html">blog</a>. Thanks to Dony Christie for conversations that introduced me to the &quot;complex value&quot; meme, and to Aaron Tucker for conversations that introduced me to the &quot;situational awareness&quot; meme.</em></p> milan_griffes JmPQKoWjF6p9pahQx 2019-04-16T18:42:58.980Z Comment by Milan_Griffes on What are people's objections to earning-to-give? https://forum.effectivealtruism.org/posts/hF3LdaqNbpxPrKBHJ/what-are-people-s-objections-to-earning-to-give#JH2aRSqcqGnCbPATt <p>+1</p><p>I think this dynamic is a great example of utilitarian morality diverging from intuitions about what constitutes <a href="https://en.wikipedia.org/wiki/The_Good_Life">the Good Life</a>.</p><p>i.e. &quot;toil away at a random job you don&#x27;t really care about to subsidize other well-off people doing more interesting work&quot; definitely doesn&#x27;t pass the smell test for &quot;this is what the Good Life looks like.&quot;</p> milan_griffes JH2aRSqcqGnCbPATt 2019-04-16T17:46:03.289Z [Link] Open Phil's 2019 progress & plans update https://forum.effectivealtruism.org/posts/jDLzettqkp4Ec4ktn/link-open-phil-s-2019-progress-and-plans-update <p><a href="https://www.openphilanthropy.org/blog/our-progress-2018-and-plans-2019">https://www.openphilanthropy.org/blog/our-progress-2018-and-plans-2019</a> </p><p>(<a href="https://web.archive.org/web/20190416173024/https://www.openphilanthropy.org/blog/our-progress-2018-and-plans-2019">archived</a>)</p><p><strong>tl;dr –</strong></p><blockquote>-We recommended well over $100 million worth of grants in 2018. 
The bulk of these came from our major current focus areas: <a href="http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence">potential risks of advanced AI</a>, <a href="http://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity">biosecurity and pandemic preparedness</a>, <a href="http://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform">criminal justice reform</a>, <a href="http://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare">farm animal welfare</a>, <a href="https://www.openphilanthropy.org/focus/scientific-research">scientific research</a>, and <a href="http://www.openphilanthropy.org/focus/other-areas">effective altruism</a>. Additionally, <a href="https://www.openphilanthropy.org/blog/2018-allocation-givewell-top-charities">we recommended</a> ~$70 million in grants to GiveWell’s <a href="https://www.givewell.org/charities/top-charities">top charities</a> and <a href="https://www.givewell.org/research/incubation-grants">incubation grants</a>.</blockquote><blockquote>-We continue to believe there are hints of impact in the causes where our giving is most mature and near-term: criminal justice reform and farm animal welfare. This coming year, a major priority is to develop our impact evaluation function and thereby apply more scrutiny to our progress to date.</blockquote><blockquote>-Another major priority will be developing a “worldview investigations” function, which will seek to examine and document — and seek more debate, both internal and external, on — debatable views we hold that play a key role in our cause prioritization.</blockquote><blockquote>-A major focus for 2018 was increasing our research capacity. Our <a href="https://www.openphilanthropy.org/blog/reflections-our-2018-generalist-research-analyst-recruiting">research analyst recruiting program</a> was a full-year effort, starting with our announcement of new openings in <a href="https://www.openphilanthropy.org/blog/new-job-opportunities">February</a> and ending with hiring five full-time research-focused staff by December. There are a number of functions that we think Open Phil still needs to develop in order to be a fully mature grantmaker, and we believe our expanded research team will help us develop those functions.</blockquote><blockquote>-We also increased and professionalized our operations capacity. Beth Jones, our director of operations, joined Open Phil in May. Beth’s arrival allowed Morgan Davis to transition into a new role beginning to build our impact evaluation function.</blockquote><blockquote>-Like <a href="https://www.openphilanthropy.org/blog/our-progress-2017-and-plans-2018#Summary">last year</a>, we maintained a high level of grantmaking and made significant progress on increasing capacity and improving operations. 
We still believe we have room for further development on these fronts, and that we have more work to do in sharpening our thinking on cause prioritization and worldview diversification before we seek to increase our annual giving much more.</blockquote><p></p><p></p> milan_griffes jDLzettqkp4Ec4ktn 2019-04-16T17:31:53.811Z Comment by Milan_Griffes on [Link] The Optimizer's Curse & Wrong-Way Reductions https://forum.effectivealtruism.org/posts/Wghi6hpu5gGBZHvtj/link-the-optimizer-s-curse-and-wrong-way-reductions#GhhY5ywCazCTqmtcK <blockquote>The proposed solution of using priors just pushes the problem to selecting good priors.</blockquote><p></p><p>+1</p><p>In conversations I&#x27;ve had about this stuff, it seems like the crux is often the question of how easy it is to choose good priors, and whether a &quot;good&quot; prior is even an intelligible concept.</p><p>Compare <a href="https://confusopoly.com/2019/04/03/the-optimizers-curse-wrong-way-reductions/">Chris&#x27; piece</a> (&quot;selecting good priors is really hard!&quot;) with <a href="https://www.lesswrong.com/posts/5gQLrJr2yhPzMCcni/the-optimizer-s-curse-and-how-to-beat-it">this piece by Luke Muehlhauser</a> (&quot;the optimizer&#x27;s curse is trivial, just choose an appropriate prior!&quot;)</p> milan_griffes GhhY5ywCazCTqmtcK 2019-04-16T14:21:57.678Z Comment by Milan_Griffes on Rethink Priorities Plans for 2019 https://forum.effectivealtruism.org/posts/6cgRR6fMyrC4cG3m2/rethink-priorities-plans-for-2019#EkDQmEjDvDwg9CKEB <p>One of the takeaways was that mental health appears as a much bigger problem under the experience-sampling framework than it does under the QALY framework.</p><p>e.g. QALY framework considers one year with &quot;some problems walking about&quot; to be about as bad as one year with &quot;moderate anxiety or depression.&quot;</p><p>Which seems obviously wrong.</p><p>Here&#x27;s the <a href="https://docs.google.com/presentation/d/1al06X9G0hJMMDVZU09JrOnJpFD80yNvmn5FUArpyvWY/edit">presentation deck</a>.</p> milan_griffes EkDQmEjDvDwg9CKEB 2019-04-15T21:18:00.292Z Comment by Milan_Griffes on Rethink Priorities Plans for 2019 https://forum.effectivealtruism.org/posts/6cgRR6fMyrC4cG3m2/rethink-priorities-plans-for-2019#KBTnTRNjNDBvuY9fk <blockquote><strong>Understanding mental health interventions</strong> - We’d like to understand if we can rely on $/DALY or $/QALY metrics to capture mental health benefits or, if not, if there is a better cost-effectiveness metric that better captures mental health benefits. Once we have a good framework for prioritizing mental health, we’d like to see if we can identify any mental health opportunities that are competitive with other EA opportunities.</blockquote><p></p><p>Recommend chatting with Natalia Mendoça about this. She&#x27;s done a lot of good, independent work on these questions.</p><p>Summary of a recent presentation she gave:</p><blockquote>Natalia Mendoça presented about &quot;Using smartphones to improve well-being measures in order to aid cause prioritization research&quot; (<a href="https://docs.google.com/presentation/d/1al06X9G0hJMMDVZU09JrOnJpFD80yNvmn5FUArpyvWY/edit?usp=sharing">link</a> to presentation). She argued that the experience sampling paradigms that made waves in the 2000s and early 2010s happened at a time when relatively few people had smartphones. 
Since today smartphone adoption in developing countries has exploded we could use an experience sampling app to determine the major causes of suffering throughout the world in a way that wasn&#x27;t possible before. She specifically mentioned &quot;comparing how bad different illnesses feel&quot; in order to help us guide policy decision for cause prioritization.</blockquote><p></p><p><strong>tl;dr</strong> experience sampling seems methodologically superior to QALY surveying, and could be done at large scale cheaply given the massive growth of smartphone use over the last decade.</p> milan_griffes KBTnTRNjNDBvuY9fk 2019-04-15T21:15:19.706Z Comment by Milan_Griffes on [Link] The Optimizer's Curse & Wrong-Way Reductions https://forum.effectivealtruism.org/posts/Wghi6hpu5gGBZHvtj/link-the-optimizer-s-curse-and-wrong-way-reductions#nZCSxr5LRmzq5B2j5 <p>FYI I asked about this on <a href="https://blog.givewell.org/2019/03/11/march-2019-open-thread/#comment-952810">GiveWell&#x27;s most recent open</a> <a href="https://blog.givewell.org/2019/03/11/march-2019-open-thread/#comment-952810">thread</a>, Josh <a href="https://blog.givewell.org/2019/03/11/march-2019-open-thread/#comment-952879">replied</a>:</p><blockquote>Hi Milan,</blockquote><blockquote>We’ve considered this issue and discussed it internally; we spent some time last year exploring ways in which we might potentially adjust our models for it, but did not come up with any promising solutions (and, as the post notes, an explicit quantitative adjustment factor is not Chris’s recommended solution at this time).</blockquote><blockquote>So, we are left in a difficult spot: the optimizer’s curse (and related issues) seems like a real threat, but we do not see high-return ways to address it other than continuing to broadly deepen and question our research. In the case that Chris highlights most — our recommendation of deworming — we have put substantial effort into working along the lines that he recommends and we continue to do so. Examples of the kind of additional scrutiny that we have given to this recommendation includes:</blockquote><blockquote><br/>- Embracing model skepticism: We put weight on qualitative factors relevant to specific charities’ operations and specific uses of marginal funding (<strong><a href="https://blog.givewell.org/2018/11/26/our-updated-top-charities-for-giving-season-2018/">more</a></strong>). We generally try not to put too much weight on minor differences in cost-effectiveness analyses (<strong><a href="https://blog.givewell.org/2017/06/01/how-givewell-uses-cost-effectiveness-analyses/">more</a></strong>). We place substantial weight on cost-effectiveness analyses while doing what we can to recognize their limitations and bring in other forms of evidence.</blockquote><blockquote><br/>– Re-examining our assumptions through vetting: we asked Senior Advisor David Roodman to independently assess the evidence for deworming and he produced extensive reports with his thoughts: see <strong><a href="https://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/">here</a></strong> and <strong><a href="https://blog.givewell.org/2017/01/04/how-thin-the-reed-generalizing-from-worms-at-work/">here</a></strong>.</blockquote><blockquote><br/>– Having conversations and engaging with a variety of deworming researchers, particularly including skeptics. E.g., we’ve engaged with work from skeptical Cochrane researchers (e.g. 
<strong><a href="https://files.givewell.org/files/conversations/Aiken_Davey_Garner_Taylor-Robinson_2-26-16_(public).pdf">here</a></strong> and <strong><a href="https://blog.givewell.org/2015/07/24/new-deworming-reanalyses-and-cochrane-review/">here</a></strong>), epidemiologist <strong><a href="https://files.givewell.org/files/conversations/Nathan_Lo_04-24-17_(public).pdf">Nathan Lo</a></strong>, <strong><a href="https://www.givewell.org/charities/schistosomiasis-control-initiative/supplementary-information#AcademicPapers">Melissa Parker and Tim Allen</a></strong> (who looked at deworming through an anthropological perspective), etc.</blockquote><blockquote><br/>– Funding additional research with the goal of potentially falsifying our conclusions: see e.g. grants <strong><a href="https://www.givewell.org/research/incubation-grants/cega-uc-berkeley/june-2017-grant">here</a></strong> and <strong><a href="https://www.givewell.org/research/incubation-grants/uc-berkeley/april-2017-grant">here</a></strong>.</blockquote><blockquote></blockquote><blockquote>We will continue to take high-return steps to assess whether our recommendations are justified. For example, this year we are deepening our assessment of how we should expect deworming’s effectiveness to vary in contexts with different levels of worm infection. It is also on our list to consider quantitative adjustments for the optimizer’s curse further at some point in the future, but given the challenges we encountered in our work so far, we are unlikely to prioritize it soon.</blockquote><blockquote>Finally, we hope to continue to follow discussions on the optimizer’s curse and would be interested if theoretical progress or other practical suggestions are made. As Chris notes, this seems to be a cross-cutting theoretical issue that applies to cause prioritization researchers outside of GiveWell, as well.</blockquote><p></p> milan_griffes nZCSxr5LRmzq5B2j5 2019-04-15T20:58:39.637Z Comment by Milan_Griffes on Berkeley REACH EA Weekend Workshop https://forum.effectivealtruism.org/posts/ptTMSdi9BReWSFAth/berkeley-reach-ea-weekend-workshop#aP4ZGC6fGkQRZ5H7u <p>This is awesome! So exciting to see the REACH organizing stuff like this :-) </p> milan_griffes aP4ZGC6fGkQRZ5H7u 2019-04-15T20:53:47.588Z Who in EA enjoys managing people? https://forum.effectivealtruism.org/posts/FtJpESC9C3RfsKfJc/who-in-ea-enjoys-managing-people <p>Sparked by <a href="https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#PxY3nc8hC9T89aP9h">this comment</a> over on the Longterm Future April 2019 grants thread:</p><blockquote>This is combined with an environment that is starved on management capacity, and so has very little room to give people feedback on their plans and actions.</blockquote><p></p><ul><li>Who in the EA community actively enjoys managing people?</li><ul><li>Who in EA actively enjoys managing research teams?</li><li>Who in EA actively enjoys managing operations teams?</li><li>Who in EA actively enjoys organizing events &amp; conferences?</li></ul></ul><p>I feel like I don&#x27;t have a very clear map of this space, and it seems like an important limiting factor for the community.</p><p></p> milan_griffes FtJpESC9C3RfsKfJc 2019-04-10T23:49:16.862Z Who is working on finding "Cause X"? 
https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x <p>As a community, EA sometimes talks about finding &quot;Cause X&quot; (<a href="https://www.effectivealtruism.org/articles/three-heuristics-for-finding-cause-x/">example 1</a>, <a href="https://www.effectivealtruism.org/articles/moral-progress-and-cause-x/">example</a> <a href="https://www.effectivealtruism.org/articles/moral-progress-and-cause-x/">2</a>). </p><p>The search for &quot;Cause X&quot; featured prominently in <a href="https://twitter.com/Effect_Altruism/status/985895759739531264">the billing for last year&#x27;s EA</a> <a href="https://twitter.com/Effect_Altruism/status/985895759739531264">Global</a> (<a href="https://web.archive.org/web/20190410230527/https://twitter.com/Effect_Altruism/status/985895759739531264">a</a>).</p><p>I understand &quot;Cause X&quot; to mean &quot;new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar.&quot;</p><p>This afternoon, I realized I don&#x27;t really know how many people in EA are actively pursuing the &quot;search for cause X.&quot; (I thought of a couple people, who I&#x27;ll note in comments to this thread. But my map feels very incomplete.)</p> milan_griffes AChFG9AiNKkpr3Z3e 2019-04-10T23:09:23.892Z Why did three GiveWell board members resign in April 2019? https://forum.effectivealtruism.org/posts/LhRGBERBQ79gbToDz/why-did-three-givewell-board-members-resign-in-april-2019 <p>See here: <a href="https://www.givewell.org/changes-in-board-membership">https://www.givewell.org/changes-in-board-membership</a> (<a href="https://web.archive.org/web/20190403213126/https://www.givewell.org/changes-in-board-membership">a</a>)</p> milan_griffes LhRGBERBQ79gbToDz 2019-04-03T21:32:23.408Z Is visiting North Korea effective? https://forum.effectivealtruism.org/posts/Qa3AkbX9mqWZ3XuKp/is-visiting-north-korea-effective <p><em>Epistemic status: exploring, probably a bad idea. This is definitely not a recommendation. </em></p><p><em>If you find yourself seriously considering a trip to North Korea, please <a href="https://flightfromperfection.com/pages/about.html">reach out </a>so I can try to talk you out of it.</em></p><hr class="dividerBlock"/><p>Here&#x27;s a paraphrased quote from South Korean journalist Choi Hak-Rae (given around minute 37:00 of the third episode of <a href="https://www.nationalgeographic.com/tv/inside-north-koreas-dynasty/">National Geographic&#x27;s mini-doc on North Korea</a>):</p><blockquote>They offer the best of the best when foreigners visit their country, even if they are starving to death.</blockquote><p>It&#x27;s possible for (some?) Westerners to visit North Korea, though the <a href="https://travel.state.gov/content/travel/en/international-travel/International-Travel-Country-Information-Pages/KoreaDemocraticPeoplesRepublicof.html">US State Department <strong>strongly</strong> discourages it</a>:</p><ul><li><a href="https://www.youtube.com/watch?v=OtuFaEy4jzE">Youtube documentary 1</a></li><li><a href="https://www.youtube.com/watch?v=reEZn3mJ-Fo">Youtube documentary 2</a></li><li><a href="https://www.youtube.com/watch?v=6f7hSAlMVB8">Youtube documentary 3</a></li></ul><p>Given that Western tourists seem to receive &quot;tour guides&quot; &amp; elaborate guided tours for the duration of their stay, it seems possible that more Western visits to North Korea would strain the resources of the Kim regime. 
(Though it&#x27;s hard to assess what this expenditure would be fungible with – it seems roughly as likely that resources would be redirected from food programs as they would be from other regime priorities.)</p><p>The visit would be relatively easy to put together, logistically (there are flights to Pyongyang from China). The intervention is definitely neglected.</p><p>Experience in North Korea would probably also build one&#x27;s career capital (by boosting reputation, signaling risk tolerance, and (perhaps) establishing relationships in the DPRK &amp; China). This seems especially salient for folks who are aiming at diplomatic &amp; public policy careers.</p><p>A visitor who is detained in-country (not the norm but not unheard of, from my very rough understanding) could also have positive altruistic impacts, by generating international pressure on the regime. But being detained would in all likelihood be extremely unpleasant &amp; perhaps fatal, so maybe it&#x27;s better not to include possible altruistic upsides from detention in the calculus. </p> milan_griffes Qa3AkbX9mqWZ3XuKp 2019-04-02T20:50:23.521Z Altruistic action is dispassionate https://forum.effectivealtruism.org/posts/YFiLMu9osLm3c3G8k/altruistic-action-is-dispassionate <p><em>Epistemic status: speculating, hypothesizing</em></p><p>At first approximation, there are two types of motivation for acting – egoistic &amp; altruistic. </p><p>Almost immediately, someone will come along and say &quot;Wait! In fact, there&#x27;s only one type of motivation for acting – egoistic motivation. All that &#x27;altruistic&#x27; stuff you see is just people acting in their own self-interest along some dimension, and those actions happen to help out others as a side effect.&quot; </p><p>(cf. <a href="https://www.goodreads.com/book/show/28820444-the-elephant-in-the-brain">The Elephant in the Brain</a>, which doesn&#x27;t say exactly this but does say something like this.)</p><p>In response, many people are moved to defend the altruistic type of motivation (because they want to believe in altruism as a thing, because it better matches their internal experience, because of idealistic attachments; motivations vary). </p><p>I&#x27;m definitely one of these people – I think the altruistic motivation is a thing, distinct from the egoistic motivation. Less fancily – I think that people sometimes work to genuinely help other people, without trying to maximize some aspect of their self-interest.</p><p>Admittedly, it can be difficult to suss out a person&#x27;s motivations. There are strong incentives for appearing to act altruistically when in fact one is acting egoistically. And beyond that, there&#x27;s a fair bit of self-deception – people believing / rationalizing that they&#x27;re acting altruistically when in reality their motivations are self-serving (this gets confusing to think about, as it&#x27;s not clear when to disbelieve self-reports about a person&#x27;s internal state). </p><p>Here&#x27;s a potential heuristic to help determine whether you&#x27;re acting altruistically or egoistically – altruistic action tends to be <em>dispassionate</em>. The altruist tends not to care very much about their altruistic actions. They are <em>unattached</em> to them.</p><p>It&#x27;s a bit subtle – an altruistic actor still wants things to go well for the situation they&#x27;re acting upon. They&#x27;re motivated to act, after all. 
But that care seems distinct from caring about their actions themselves – considerations about how they will be received &amp; perceived. </p><p>The locus of their care is in the other people involved in the situation – if things go better for those people, the altruist is happy. If things go worse, the altruist is sad. It doesn&#x27;t matter who helped those people, or what third parties thought of the situation. It doesn&#x27;t matter who got the credit. Those considerations are immaterial to the altruist. They aren&#x27;t the criteria by which the altruist is judging their success.</p><p>This heuristic doesn&#x27;t help very much in determining whether other people are acting from altruistic or egoistic bases (though if you see someone paying particular attention to optics, PR, etc., that may be a sign that they are being more moved by egoistic considerations in that particular instance). </p><p>I think this heuristic does help introspectively – I find that it helps me sort out the things I do for (mostly) altruistic reasons from the things I do for (mostly) egoistic reasons. (I do a large measure of both.)</p><hr class="dividerBlock"/><p><em>Cross-posted to <a href="https://flightfromperfection.com/altruistic-action-is-dispassionate.html">my blog</a></em>.</p> milan_griffes YFiLMu9osLm3c3G8k 2019-03-30T17:33:19.136Z Why is the EA Hotel having trouble fundraising? https://forum.effectivealtruism.org/posts/BNQbxX7bFRrgv8Yds/why-is-the-ea-hotel-having-trouble-fundraising <p>As far as I can tell, the EA Hotel hasn&#x27;t pulled in much money during its present fundraising drive (see its <a href="https://www.patreon.com/eahotel">Patreon</a> &amp; its <a href="https://www.gofundme.com/ea-hotel">GoFundMe</a>).</p><p>I&#x27;m curious about why this is, and whether it&#x27;s indicative of a broader dynamic operating in the community. (It reminds me of the situation the Berkeley REACH was in last year: <a href="https://forum.effectivealtruism.org/posts/pdcDDC4eFr3xNXJmm/reflections-on-berkeley-reach">1</a>, <a href="https://www.lesswrong.com/posts/D3zK3anJcdW9RuZr8/last-chance-to-fund-the-berkeley-reach">2</a>.)</p><p>For reference, the recent EA Hotel fundraising posts: <a href="https://forum.effectivealtruism.org/posts/MeTrGqRXJQzoLoDia/ea-hotel-fundraiser-1-the-story">1</a>, <a href="https://forum.effectivealtruism.org/posts/j7xz4rQidfRgFo6KC/ea-hotel-fundraiser-2-current-guests-and-their-projects">2</a>, <a href="https://forum.effectivealtruism.org/posts/Q5G8NgfBXTwsmFsxK/ea-hotel-fundraiser-3-estimating-the-relative-expected-value">3</a> </p> milan_griffes BNQbxX7bFRrgv8Yds 2019-03-26T23:20:16.794Z Will the EA Forum continue to have cash prizes?
https://forum.effectivealtruism.org/posts/X6Tv2vTiPBmdNmeFa/will-the-ea-forum-continue-to-have-cash-prizes <p>Each month for the top three posts, as piloted; or in some other configuration?</p> milan_griffes X6Tv2vTiPBmdNmeFa 2019-03-25T17:37:30.519Z EA jobs provide scarce non-monetary goods https://forum.effectivealtruism.org/posts/vMpuXz2zqS8iHya7i/ea-jobs-provide-scarce-non-monetary-goods <p><em>Epistemic status: hypothesizing</em></p><p>Related: <a href="https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really">It is really, really hard to get hired by an EA organisation</a></p><p>Also related: <a href="https://forum.effectivealtruism.org/posts/pN5qwnruujugRY4AC/the-career-coordination-problem">The career coordination problem</a>, <a href="https://forum.effectivealtruism.org/posts/btgGxzFaHaPZr7RQm/a-guide-to-improving-your-odds-at-getting-a-job-in-ea">A guide to improving your odds at getting a job in EA</a>, <a href="https://forum.effectivealtruism.org/posts/HPpZvPEHF2Nw2YECN/effective-altruism-and-meaning-in-life">EA and meaning in life</a>, <a href="https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained">EA is vetting-constrained</a>, <a href="https://forum.effectivealtruism.org/posts/oNY76m8DDWFiLo7nH/what-to-do-with-people">What to do with people?</a>, <a href="https://forum.effectivealtruism.org/posts/HwBsTZEzGyAjQLoe7/identifying-talent-without-credentialing-in-ea">Identifying talent without credentialing in EA</a>, <a href="https://forum.effectivealtruism.org/posts/EP6X362Q3ziibA99e/show-a-framework-for-shaping-your-talent-for-direct-work">SHOW: A framework for shaping your talent for direct</a> <a href="https://forum.effectivealtruism.org/posts/EP6X362Q3ziibA99e/show-a-framework-for-shaping-your-talent-for-direct-work">work</a>, <a href="https://forum.effectivealtruism.org/posts/Lms9WjQawfqERwjBS/the-career-and-the-community">The career and the community</a></p><hr class="dividerBlock"/><p>As the <a href="https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really">recent catalyzing EA jobs post</a> was blowing up, a friend of mine observed that from a simple economics perspective, the obvious response to this state of affairs would be to pay labor less (i.e. lower compensation for the professional EA roles being hired for).</p><p>Following a simple economics framework, lowering salaries would in turn lower demand for the jobs, resulting in fewer applications and less competition. Fewer people would want professional EA jobs, but those who did would find them easier to get.</p><p>When my friend said this, it seemed clear to me that lowering salaries wouldn&#x27;t have the proposed effect. </p><p>After some discussion, I arrived at a hypothesis for why not: <strong>professional EA organizations provision scarce, non-monetary goods to their employees</strong>.</p><p>Specifically, working at a professional EA organization can provide the following non-monetary benefits (in no particular order):</p><ul><li><strong>Social status.</strong> In the EA &amp; rationality subcultures, working at professional EA organizations is high status (e.g. 
when I started at GiveWell, I was surprised at how people in these circles treated me when they found out I was working there, even though I was an entry-level employee).</li><li><strong>Meaning-making / life orientation.</strong> At least for me, working at EA organizations can provide a sense of resolution to existentialist questions of purpose &amp; meaning, at least at first. (e.g. &quot;What&#x27;s the point? The point is to help as many people as possible with the limited resources at hand.&quot;)</li><li><strong>A sense of having a near-maximal altruistic impact.</strong> Working at EA organizations can provide reassurance that you&#x27;re doing the best you can / working on a near-optimal project (or at least a project that&#x27;s near-optimal modulo the current set of projects that can be worked on). I think this sense is stronger for working on more meta-level stuff, like career-advising or grant-making. (Also note that social status seems to correlate with how meta the project is; e.g. compare the demand to work at <a href="https://www.openphilanthropy.org/">Open Phil</a> or <a href="https://80000hours.org/">80,000 Hours</a> with the demand to work at <a href="http://www.newincentives.org/">New Incentives</a>.)</li><li><strong>Being part of a value-aligned, elite tribe.</strong> Somewhat mixed in with the above points, I think there&#x27;s a lot of value to be had from feeling like a member of a tribe, especially a tribe that you think is awesome. I think working at a professional EA organization is the closest thing there is to a royal road to tribal membership in the EA community (as well as the rationality community, to a lesser extent).</li></ul><p>Because these goods are non-monetary, it&#x27;d be difficult for EA organizations to reduce their quantity even if they wanted to (and for the most part, they probably don&#x27;t want to, as degrading these goods would also degrade large parts of what makes EA worthwhile).</p><p>This leads me to think that demand for jobs at professional EA organizations will continue to be very high for the foreseeable future, and especially so for meta-level EA organizations.</p><hr class="dividerBlock"/><p>For reference, I spent two years as a research analyst at <a href="https://www.givewell.org/">GiveWell</a>, then two years as head of risk at <a href="https://www.wave.com/">Wave</a>. These days I&#x27;m doing independent research, occasional contract work for <a href="https://ought.org/">Ought</a> &amp; toying with a startup idea.</p><p></p> milan_griffes vMpuXz2zqS8iHya7i 2019-03-20T20:56:46.817Z Is EA a community of elites?
https://forum.effectivealtruism.org/posts/dqEYjTkKeydk9sDgh/is-ea-a-community-of-elites <p>More precisely: to what extent is EA a community of elites?</p><p>I mean &quot;elite&quot; in the &quot;coastal elite&quot; sense, not in the &quot;extremely good at a certain task&quot; sense.</p><p><a href="https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really#5Hp6TtNTTscSCCS55">This comment</a> on the giant <a href="https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really">EA jobs are really hard to get</a> (<a href="https://web.archive.org/web/20190301062156/https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really">a</a>) thread has me pondering...</p> milan_griffes dqEYjTkKeydk9sDgh 2019-03-01T06:24:31.846Z What type of Master's is best for AI policy work? https://forum.effectivealtruism.org/posts/cjNkvXSFPxTuYBaaZ/what-type-of-master-s-is-best-for-ai-policy-work <p><a href="https://80000hours.org/articles/us-ai-policy/">80,000 Hours recommends</a> a few different flavors of Master&#x27;s as entry points into working on US-oriented AI policy: security studies, international relations, public policy, and machine learning.</p><p>Does anyone have opinions on which of these types of programs is the best to focus on?</p><p>(Clearly a large part of this revolves around personal fit, but perhaps some of these are much more relevant than others in a way that dominates personal fit considerations.)</p> milan_griffes cjNkvXSFPxTuYBaaZ 2019-02-22T20:04:47.502Z What's the best Security Studies Master's program? https://forum.effectivealtruism.org/posts/9ijZGySijwzsHwQed/what-s-the-best-security-studies-master-s-program <p><a href="https://80000hours.org/articles/us-ai-policy/">80,000 Hours recommends a Master&#x27;s in Security Studies</a> as one entry point towards a US-oriented AI policy career.</p><p>Does anyone have opinions on the best Security Studies Master&#x27;s, where &quot;best&quot; is some combination of:</p><ul><li>Affordability</li><li>Time to complete</li><li>Prestige + access to important networks</li><li>Quality of instruction</li></ul><p></p> milan_griffes 9ijZGySijwzsHwQed 2019-02-22T20:01:37.670Z Time-series data for income & happiness? https://forum.effectivealtruism.org/posts/DtSXXdZnEb2mH3jGs/time-series-data-for-income-and-happiness <p><em>Previous: <a href="https://forum.effectivealtruism.org/posts/dvHpzesXtZAMFSTk3/giving-more-won-t-make-you-happier">Giving more won&#x27;t make you happier</a></em></p><p>This evening, I was listening to <a href="http://www.econtalk.org/richard-epstein-on-happiness-inequality-and-envy/">this old episode of EconTalk</a> (<a href="https://web.archive.org/web/20190220052705/http://www.econtalk.org/richard-epstein-on-happiness-inequality-and-envy/">a</a>), wherein Richard Epstein discusses income, wealth, and happiness:</p><blockquote>Theory of revealed preferences. We see people working hard to get raises. The happiness literature suggests that everyone is under a deep delusion about what makes them happy, and the guys running the survey know better. </blockquote><blockquote>Methodological fallacy: data seem to suggest that when you have higher incomes you don&#x27;t necessarily have a whole lot higher level of happiness... People make a pact – I&#x27;ll be miserable for a few years if you&#x27;ll make me rich in the longer run. 
</blockquote><blockquote>So, in the short run they report being less happy. They don&#x27;t want to be unhappy forever, so eventually they&#x27;ll take a lower paying job – and report being happier.</blockquote><p>This consideration – people with intense, high-paying jobs being less happy when surveyed but (knowingly) doing this for increased future happiness – seems important when thinking about the happiness&lt;&gt;income relationship. We totally overlooked it in <a href="https://forum.effectivealtruism.org/posts/dvHpzesXtZAMFSTk3/giving-more-won-t-make-you-happier">the recent Forum post</a>.</p><p>Does anyone know of longitudinal studies that look at happiness of people over time, as they move into and out of high-intensity jobs? </p><p>I&#x27;d like to learn more about this.</p> milan_griffes DtSXXdZnEb2mH3jGs 2019-02-20T05:38:23.800Z What we talk about when we talk about life satisfaction https://forum.effectivealtruism.org/posts/zAPWr9eGtWkc8nuyH/what-we-talk-about-when-we-talk-about-life-satisfaction <p><em>Epistemic status: exploring. Previous <a href="https://forum.effectivealtruism.org/posts/dvHpzesXtZAMFSTk3/giving-more-won-t-make-you-happier#fYKCwAEhRsNT8gYdL">related discussion</a>.</em></p><p>I feel confused about what people are talking about when they talk about life satisfaction scales.</p><p>You know, this kind of question: &quot;how satisfied are you with your life, on a scale of 0 to 10?&quot;</p><p>(Actual life satisfaction scales are <u><a href="https://backend.fetzer.org/sites/default/files/images/stories/pdf/selfmeasures/SATISFACTION-SatisfactionWithLife.pdf">somewhat more nuanced</a></u> (<u><a href="https://web.archive.org/web/20190204230559/https://backend.fetzer.org/sites/default/files/images/stories/pdf/selfmeasures/SATISFACTION-SatisfactionWithLife.pdf">a</a></u>), but the confusion I&#x27;m pointing to persists.)</p><p><strong>The most satisfying life imaginable</strong></p><p>On a 0-to-10 scale, does 10 mean &quot;the most satisfying life I can imagine?&quot;</p><p>But given how poor our introspective access is, why should we trust our judgments about what possible life-shape would be most satisfying?</p><p>The difficulty here sharpens when reflecting on how satisfaction preferences morph over time: my 5-year-old self had a very different preference-set than my 20-something self, and I&#x27;d expect my middle-aged self to have quite a different preference-set than my 20-something self.</p><p>Perhaps we mean something like &quot;the most satisfying life I can imagine for myself at this point in my life, given what I know about myself &amp; my preferences.&quot; But this is problematic – if someone was extremely satisfied (such that they&#x27;d rate themselves a 10), but would become even more satisfied if Improvement <em>X</em> were introduced, shouldn&#x27;t the scale be able to accommodate their perceived increase in satisfaction? (i.e. They weren&#x27;t really at a 10 before receiving Improvement <em>X</em> after all, if their satisfaction improved upon receiving it. 
But under this definition, the extremely satisfied person was appropriately rating themselves a 10 beforehand.)</p><p><strong>The most satisfying life, objectively</strong></p><p>On a 0-to-10 scale, does 10 mean &quot;the most satisfying life, objectively?&quot;</p><p>But given the enormous <u><a href="https://en.wikipedia.org/wiki/State-space_representation">state-space</a></u> of reality (which remains truly enormous even after being reduced by qualifiers like &quot;reality ordered such that humans exist&quot;), why should we be confident that the states we&#x27;re familiar with overlap with the states that are objectively most satisfying?</p><p>The difficulty here sharpens when we factor in reports of extremely satisfying states unlocked by esoteric practices. (Sex! Drugs! Enlightenment!) Reports like this crop up frequently enough that it seems hasty to dismiss them out of hand without first investigating (e.g. reports of enlightenment states from this neighborhood of the social graph: <u><a href="https://www.lesswrong.com/posts/tMhEv28KJYWsu6Wdo/kensh">1</a></u>, <u><a href="https://www.lesswrong.com/posts/mELQFMi9egPn5EAjK/my-attempt-to-explain-looking-insight-meditation-and">2</a></u>, <u><a href="https://www.goodreads.com/book/show/25942786-the-mind-illuminated">3</a></u>, <u><a href="https://www.mctb.org/">4</a></u>, <u><a href="https://www.meetup.com/Stream-Entry-and-Intentional-Community/">5</a></u>).</p><p>The difficulty sharpens even further given the lack of consensus around what life satisfaction is – the Evangelical model of a satisfying life is very different than the Buddhist.</p><p><strong>The most satisfying life, in practice</strong></p><p>I think that in practice, a 10 on a 0-to-10 scale means something like &quot;the most satisfying my life can be, benchmarked on all the ways my life has been so far plus the nearest neighbors of those.&quot;</p><p>This seems okay, but plausibly forecloses on a large space of awesomely satisfying lives that look very different than one&#x27;s current benchmark.</p><p>So I don&#x27;t really know what we&#x27;re talking about when we talk about life satisfaction scales.</p><hr class="dividerBlock"/><p><em>Cross-posted to <a href="https://www.lesswrong.com/posts/bbpG2Qzg7zTJ48836/what-we-talk-about-when-we-talk-about-life-satisfaction">LessWrong</a> &amp; <a href="https://flightfromperfection.com/what-we-talk-about-when-we-talk-about-life-satisfaction.html">my blog</a>.</em></p> milan_griffes zAPWr9eGtWkc8nuyH 2019-02-04T23:51:06.245Z Is intellectual work better construed as exploration or performance? https://forum.effectivealtruism.org/posts/QwLKTcte8LgTNmCM2/is-intellectual-work-better-construed-as-exploration-or <p><em>Cross-posted to <a href="https://www.lesswrong.com/posts/pafPiytMM4HkPGZ5x/is-intellectual-work-better-construed-as-an-exploration-or-a">LessWrong</a>.</em></p><p>I notice I rely on two metaphors of intellectual work:</p><p>1. <strong>intellectual work as exploration – </strong>intellectual work is an expedition through unknown territory (a la <a href="https://en.wikipedia.org/wiki/Meru_(film)">Meru</a>, a la <a href="https://en.wikipedia.org/wiki/Amundsen%27s_South_Pole_expedition">Amundsen &amp; the South Pole</a>). 
It&#x27;s unclear whether the expedition will be successful; the explorers band together to function as one unit &amp; support each other; the value of the work is largely &quot;in the moment&quot; / &quot;because it&#x27;s there&quot;, the success of the exploration is mostly determined by objective criteria.</p><p>Examples: Andrew Wiles spending six years in secrecy <a href="https://en.wikipedia.org/wiki/Andrew_Wiles#Proof_of_Fermat's_Last_Theorem">to prove Fermat&#x27;s Last Theorem</a>, Distill&#x27;s <a href="https://distill.pub/">essays on machine learning</a>, Robert Caro&#x27;s <a href="https://en.wikipedia.org/wiki/Robert_Caro#Work">books</a>, Donne Martin&#x27;s <a href="https://github.com/donnemartin/data-science-ipython-notebooks">data science portfolio</a> (clearly a labor of love)</p><p>2. <strong>intellectual work as performance</strong> – intellectual work is a performative act with an audience (a la <a href="https://en.wikipedia.org/wiki/Black_Swan_(film)">Black Swan</a>, a la Super Bowl halftime shows). It&#x27;s not clear that any given performance will succeed, but there will always be a &quot;best performance&quot;; performers tend to compete &amp; form factions; the value of the work accrues afterward / the work itself is instrumental; the success of the performance is mostly determined by subjective criteria.</p><p>Examples: journal <a href="https://en.wikipedia.org/wiki/Impact_factor">impact factors</a>, any social science result that&#x27;s <a href="https://80000hours.org/psychology-replication-quiz/">published but fails to replicate</a>, academic dogfights on Twitter, <a href="https://www.ted.com/">TED talks</a></p><hr class="dividerBlock"/><p>Clearly both metaphors do work – I&#x27;m wondering which is better to cultivate on the margin. 
</p><p>My intuition is that it&#x27;s better to lean on the image of intellectual work as exploration; curious what folks here think.</p> milan_griffes QwLKTcte8LgTNmCM2 2019-01-25T22:00:52.792Z If slow-takeoff AGI is somewhat likely, don't give now https://forum.effectivealtruism.org/posts/JimLnG3sbYqPF8rKJ/if-slow-takeoff-agi-is-somewhat-likely-don-t-give-now <p>There&#x27;s a longstanding debate in EA about whether to emphasize giving now or giving later – see <a href="https://blog.givewell.org/2007/01/22/more-thoughts-on-responsible-investing/">Holden in 2007</a> (<a href="https://web.archive.org/web/20160527170938/https://blog.givewell.org/2007/01/22/more-thoughts-on-responsible-investing/">a</a>), <a href="http://www.overcomingbias.com/2011/09/let-us-give-to-future.html">Robin Hanson in 2011</a> (<a href="https://web.archive.org/web/20180214090356/http://www.overcomingbias.com/2011/09/let-us-give-to-future.html">a</a>), <a href="https://blog.givewell.org/2011/12/20/give-now-or-give-later/">Holden in 2011 (updated 2016)</a> (<a href="https://web.archive.org/web/20181223232653/https://blog.givewell.org/2011/12/20/give-now-or-give-later/">a</a>), <a href="https://rationalaltruist.com/2013/03/12/giving-now-vs-later/">Paul Christiano in 2013</a> (<a href="https://web.archive.org/web/20190103213255/https://rationalaltruist.com/2013/03/12/giving-now-vs-later/">a</a>), <a href="http://www.overcomingbias.com/2013/04/more-now-means-less-later.html">Robin Hanson in 2013</a> (<a href="https://web.archive.org/web/20180207203314/http://www.overcomingbias.com/2013/04/more-now-means-less-later.html">a</a>), <a href="https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/giving-now-vs-later-a-summary">Julia Wise in 2013</a> (<a href="https://web.archive.org/web/20190123204330/https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/giving-now-vs-later-a-summary">a</a>), <a href="https://mdickens.me/2019/01/21/should_global_poverty_donors_give_now_or_later">Michael Dickens in 2019</a> (<a href="https://web.archive.org/web/20190123204441/https://mdickens.me/2019/01/21/should_global_poverty_donors_give_now_or_later/">a</a>). </p><p>I think answers to the &quot;give now vs. give later&quot; question rest on deep worldview assumptions, which makes the question fairly insoluble (though <a href="https://mdickens.me/2019/01/21/should_global_poverty_donors_give_now_or_later">Michael Dickens&#x27; recent post</a> (<a href="https://web.archive.org/web/20190123204441/https://mdickens.me/2019/01/21/should_global_poverty_donors_give_now_or_later/">a</a>) is a nice example of someone changing their mind about the issue). So here, I&#x27;m not trying to answer the question once and for all. Instead, I just want to make an argument that seems fairly obvious but I haven&#x27;t seen laid out anywhere.</p><p>Here&#x27;s a sketch of the argument –</p><p><strong>Premise 1:</strong> If AGI happens, it will happen via a slow takeoff.</p><ul><li>Here&#x27;s <a href="https://sideways-view.com/2018/02/24/takeoff-speeds/">Paul Christiano on slow vs. fast takeoff</a> (<a href="https://web.archive.org/web/20190110143143/https://sideways-view.com/2018/02/24/takeoff-speeds/">a</a>) – the following doesn&#x27;t hold if you think AGI is more likely to happen via a fast takeoff.</li></ul><p><strong>Premise 2:</strong> The frontier of AI capability research will be pushed forward by research labs at publicly-traded companies that can be invested in. </p><ul><li>e.g.
<a href="https://ai.google/research/teams/brain">Google Brain</a>, <a href="https://deepmind.com/">Google DeepMind</a>, <a href="https://research.fb.com/category/facebook-ai-research/">Facebook AI</a>, <a href="https://www.aboutamazon.com/research">Amazon AI</a>, <a href="https://www.microsoft.com/en-us/ai">Microsoft AI</a>, <a href="http://research.baidu.com">Baidu AI</a>, <a href="https://www.ibm.com/watson/">IBM Watson</a></li><li><a href="https://openai.com/">OpenAI</a> is a confounder here – it&#x27;s unclear who will control the benefits realized by the OpenAI capabilities research team. </li><ul><li>From the <a href="https://blog.openai.com/openai-charter/">OpenAI charter</a> (<a href="https://web.archive.org/web/20190110155821/https://blog.openai.com/openai-charter/">a</a>): &quot;Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.&quot;</li></ul><li>Chinese companies that can&#x27;t be accessed by foreign investment are another confounder – I don&#x27;t know much about that space yet.</li></ul><p><strong>Premise 3:</strong> A large share of the returns unlocked by advances in AI will accrue to shareholders of the companies that invent &amp; deploy the new capabilities. </p><p><strong>Premise 4:</strong> Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI.</p><ul><li>It&#x27;d be difficult to identify the particular company that will achieve a particular advance in AI capabilities, but relatively simple to hold a basket of the companies most likely to achieve an advance (similar to an index fund).</li><li>If you&#x27;re skeptical of being able to select a basket of AI companies that will track AI progress, investing in a broader index fund (e.g. <a href="https://investor.vanguard.com/mutual-funds/profile/VTSAX">VTSAX</a>) could be about as good. During a slow takeoff the returns to AI may well ripple through the whole economy. </li></ul><p><strong>Conclusion:</strong> If you&#x27;re interested in maximizing your altruistic impact, and think slow-takeoff AGI is somewhat likely (and more likely than fast-takeoff AGI), then investing your current capital is better than donating it now, because you may achieve (very) outsized returns that can later be deployed to greater altruistic effect as AI research progresses.</p><ul><li>Note that this conclusion holds for both <a href="https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health#YSRDoxjyyehZS3oyj">person-affecting and longtermist views</a>. All you need to believe for it to hold is that a slow takeoff is somewhat likely, and more likely than a fast takeoff. 
</li><li>If you think a fast takeoff is more likely, it probably makes more sense either to invest your current capital in tooling up as an AI alignment researcher, or to donate now to your favorite AI alignment organization (<a href="https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison">Larks&#x27; 2018 review</a> (<a href="https://web.archive.org/web/20181220104842/https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison">a</a>) is a good starting point here).</li></ul><hr class="dividerBlock"/><p><em>Cross-posted to <a href="https://flightfromperfection.com/if-slow-takeoff-agi-dont-give-now.html">my blog</a>. I&#x27;m not an investment advisor, and the above isn&#x27;t investment advice.</em></p><p></p> milan_griffes JimLnG3sbYqPF8rKJ 2019-01-23T20:54:58.944Z Giving more won't make you happier https://forum.effectivealtruism.org/posts/dvHpzesXtZAMFSTk3/giving-more-won-t-make-you-happier <p><em>See also: <a href="https://forum.effectivealtruism.org/posts/DtSXXdZnEb2mH3jGs/time-series-data-for-income-and-happiness">Time-series data for income &amp; happiness?</a></em></p><hr class="dividerBlock"/><p>At first approximation, there are two motivations for donating money – egoistic &amp; altruistic. </p><p>The egoistic motivation relates to the personal benefit you accrue from giving your money away. The altruistic motivation relates to the benefits that other people receive from your donations. (This roughly maps to the <a href="https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/purchase-fuzzies-and-utilons-separately">fuzzies vs. utilons</a> (<a href="https://web.archive.org/web/20181204220400/https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/purchase-fuzzies-and-utilons-separately">a</a>) distinction.)</p><h2>The egoistic motivation for donating is scope insensitive</h2><p>The egoistic motivation for donating is highly <a href="https://en.wikipedia.org/wiki/Scope_neglect">scope insensitive</a> – giving away $500 feels roughly as good as giving away $50,000. I haven’t found any academic evidence on this, but it’s been robustly true in my experience.</p><p>This scope insensitivity seems pretty baked in – knowing about it doesn’t make it go away. I can remind myself that I’m having 100x the impact when I donate $50,000 as when I donate $500, but I find that when I reflect casually about my donations, I feel about as satisfied with my small donations as I do with my large ones, even after repeatedly reminding myself about the 100x differential.</p><p>We’re probably also scope insensitive qualitatively – giving $5,000 to a low-impact charity feels about as good as giving $5,000 to an effective charity (especially if you don’t reflect very much about the impact of the donation, and especially especially if the low-impact charity tells you a compelling story about the particular people your donation is helping).</p><h2>Effective giving increases happiness, but so does low-impact giving</h2><p>EA sometimes advocates that giving will increase your happiness. Here’s an <a href="https://80000hours.org/articles/money-and-happiness/#if-you-gave-money-to-charity-would-it-make-you-more-satisfied-or-less">80,000 Hours article</a> (<a href="https://web.archive.org/web/20181204181048/https://80000hours.org/articles/money-and-happiness/">a</a>) to that effect.
Here’s a <a href="https://www.givingwhatwecan.org/get-involved/giving-and-happiness/">piece by Giving What We Can</a> (<a href="https://web.archive.org/web/20180808162121/https://www.givingwhatwecan.org/get-involved/giving-and-happiness/">a</a>).</p><p>I think sometimes implicit here is the claim that giving <em>effectively </em>will increase your happiness (I think this because almost all other discussion of giving in EA spaces is about effective giving, and why effective giving is something to get excited about).</p><p>It seems pretty clear that donating some money to charity will increase your happiness. It’s less clear that donating to an effective charity will make you happier than donating to a low-impact charity. </p><p>Given the scope insensitivity of the egoistic motivation, it’s also unclear that giving away a lot of money will make you happier than giving away a small amount of money. </p><p>It seems especially unclear that the donation-to-happiness link scales anywhere linearly. Perhaps donating $1,000 makes you happier than donating $100, but does it make you 10x as happy? Does donating $2,000 make you 2x as happy as donating $1,000? My intuition is that it doesn’t.</p><h2>Income increases happiness, up to a point </h2><p>Okay, so that’s a bunch of discussion from intuition &amp; lived experience. Now let’s look at a paper.</p><p><a href="https://flightfromperfection.com/files/post_attachments/jebb_et_al_2018.pdf">Jebb et al. 2018</a> analyzed Gallup Worldwide Poll survey data on income &amp; happiness. This dataset had responses from about 1.7 million people in 164 countries, so we don’t have to worry about small sample size.</p><p>Jebb et al. were curious about the income satiation effect – is there a point at which additional income no longer contributes to subjective well-being? And if there is, where is it?</p><p>From the Gallup data, Jebb et al. found that there is indeed an income satiation effect: </p><p></p><span><figure><img src="https://flightfromperfection.com/files/post_attachments/jebb_2018_table_1.jpg" class="draft-image center" style="" /></figure></span><p></p><p>Globally, happiness stopped increasing alongside income after $95,000 USD / year.</p><p>For Western European respondents, happiness stopped increasing alongside income after $100,000 USD / year. For North American respondents, the satiation point was $105,000 USD / year.</p><h2>An aside on terminology</h2><p>&quot;<a href="https://en.wikipedia.org/wiki/Subjective_well-being">Subjective well-being</a>&quot; is the term social scientists use to think about happiness. Researchers usually break subjective well-being down into two components – life evaluation &amp; emotional well-being. Here are heavyweights Daniel Kahneman &amp; Angus Deaton on <a href="https://www.pnas.org/content/107/38/16489">how those two things are different</a> (<a href="https://web.archive.org/web/20181208205826/https://www.pnas.org/content/107/38/16489">a</a>):</p><blockquote>Emotional well-being (sometimes called hedonic well-being or experienced happiness) refers to the emotional quality of an individual&#x27;s everyday experience – the frequency and intensity of experiences of joy, fascination, anxiety, sadness, anger, and affection that make one&#x27;s life pleasant or unpleasant. Life evaluation refers to a person&#x27;s thoughts about his or her life. Surveys of subjective well-being have traditionally emphasized life evaluation.
The most commonly asked question in these surveys is the life satisfaction question: “How satisfied are you with your life as a whole these days?” ... Emotional well-being is assessed by questions about the presence of various emotions in the experience of yesterday (e.g., enjoyment, happiness, anger, sadness, stress, worry).</blockquote><p><a href="https://flightfromperfection.com/files/post_attachments/jebb_et_al_2018.pdf">Jebb et al.</a> break down emotional well-being further into positive affect &amp; negative affect, which roughly correspond to experiencing positive &amp; negative emotive states.</p><p>Life evaluation seems like the more intuitive metric for our purposes here. (It’s also the more conservative choice due to its higher satiation points.) So when I talk about &quot;happiness,&quot; I&#x27;m actually talking about &quot;subjective well-being as assessed by life evaluation scores.&quot; My main points would still hold if we focused on emotional well-being instead.</p><h2>Income increases happiness up to $115,000 / year</h2><p>Returning to <a href="https://flightfromperfection.com/files/post_attachments/jebb_et_al_2018.pdf">Table 1</a>, we can pull out a couple of takeaways: </p><ul><li>The income satiation point for most EAs is at least $100,000 USD / year.</li><ul><li>Most EAs are in North America and Western Europe. </li><ul><li>The satiation point for life evaluation in Western Europe is about $100,000 USD / year.</li><li>The life evaluation satiation point in North America is about $105,000 USD / year.</li></ul></ul><li>Almost all EAs fall into Jebb et al.’s &quot;high education&quot; bracket: 16+ years of education, i.e. on track to complete a Bachelor’s. </li><ul><li>High-education populations have higher satiation points than low-education populations, an effect that the authors attribute to &quot;income aspirations or social comparisons with different groups.&quot;</li><li>The &quot;high education&quot; satiation point is $115,000 USD / year. </li><ul><li>That’s a global figure. The paper doesn’t give a region-by-region breakout of the &quot;high education&quot; cohort; it’s likely that the figure is even higher in the Western Europe &amp; North American regions, which have higher satiation points than the global average.</li></ul></ul></ul><p>Essentially, all income earned up to $115,000 USD / year (for college-educated folks living in North America &amp; Western Europe) contributes to one’s happiness.</p><h2>Putting it all together </h2><p>We can use the <a href="https://flightfromperfection.com/files/post_attachments/jebb_et_al_2018.pdf">Jebb et al. paper</a> to infer that donations which put your annual income below $115k will probably make you less happy. (And if you’re giving substantial amounts while earning a total income of less than $115k, those donations will probably contribute to a decrease in your happiness.) </p><p>Correspondingly, donating amounts such that your annual income remains above $115k probably won’t affect your happiness.</p><p>There’s a wrinkle here: it’s possible that much of the happiness benefit of earning a high income comes from the knowledge that you earn a high income, not what you use the money for materially. If this is the case, donating large amounts out of an income above $115k shouldn’t ding your happiness. 
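To make this inference concrete, here&#x27;s a minimal sketch – my own toy model, not anything estimated by Jebb et al. The dollar thresholds are from Table 1; the decision rule is just the inference drawn above:

```python
# Toy model: treat the Table 1 life-evaluation satiation points as hard
# thresholds. Everything below is illustrative, not from the paper.

SATIATION_USD = {
    "global": 95_000,
    "western_europe": 100_000,
    "north_america": 105_000,
    "high_education_global": 115_000,
}

def donation_likely_costs_happiness(income, donation, cohort="high_education_global"):
    """True if post-donation income falls below the cohort's satiation
    point, i.e. the donation trades away dollars that (per Table 1)
    still buy life-evaluation gains."""
    return (income - donation) < SATIATION_USD[cohort]

# A college-educated donor earning $130k / year:
print(donation_likely_costs_happiness(130_000, 10_000))  # False: income stays at $120k
print(donation_likely_costs_happiness(130_000, 30_000))  # True: income drops to $100k
```

(The sketch ignores the wrinkle just mentioned – if the happy-making benefit comes mainly from knowing that you earn a high income, pre-donation income is what matters, not post-donation income.)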
</p><p>So perhaps only a weaker version of the claim holds: once you achieve an annual income above $115,000, you can give away large portions of it without incurring a happiness penalty (having already realized the happy-making benefit of your earnings). But even in this case, donating large amounts out of an income less than $115k still lowers your happiness (because you never benefit from the knowledge that you earn at least $115k). </p><p>It’s true that the act of donating will generate some personal happiness. But given the <a href="https://en.wikipedia.org/wiki/Scope_neglect">scope insensitivity</a> at play here, you can realize a lot of this benefit by donating small amounts (and thus keeping a lot more of your money, which can then be deployed in other happy-making ways).</p><p>From a purely egoistic viewpoint, scope insensitivity lets us have our cake &amp; eat it too – we can feel good about our donating behavior while keeping most of our money.</p><h2>Conclusion: EA shouldn’t say that effective giving will make you happy</h2><p>My provisional conclusion here is that EA shouldn&#x27;t recommend effective giving on egoistic grounds.</p><p>There remains a strong altruistic case to be made for effective giving, but I think it’s worth acknowledging the real tradeoff between giving away large amounts of money and one’s personal happiness, at least for people earning less than $115,000 USD / year (on average, for college-educated people in Western Europe &amp; North America). If you want to give large amounts while avoiding this tradeoff, you should achieve a stable annual income of at least $115k before making substantial donations.</p><p>Further, EA should actively discourage people from effective giving if they&#x27;re mainly considering it as a way to become happier. Effective giving probably won&#x27;t make you happier than low-impact giving, and donating large amounts won&#x27;t make you happier than donating small amounts. Saying otherwise would be a false promise.</p><hr class="dividerBlock"/><p><em>Thanks to Gregory Lewis, Howie Lempel, Helen Toner, Benjamin Pence, and an anonymous collaborator for feedback on drafts of this essay. Cross-posted to <a href="https://flightfromperfection.com/giving-more-wont-make-you-happier.html">my blog</a>.</em></p> milan_griffes dvHpzesXtZAMFSTk3 2018-12-10T18:15:16.663Z Open Thread #42 https://forum.effectivealtruism.org/posts/hZxs7cDJ7JcHMH8GZ/open-thread-42 <p>Use this thread to post things that are awesome, but not awesome enough to be full posts. 
This is also a great place to post if you don&apos;t have enough karma to post on the main forum.</p> <p>Consider giving your post a brief title to improve readability.</p> milan_griffes hZxs7cDJ7JcHMH8GZ 2018-10-17T20:10:00.472Z Doing good while clueless https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless <p>This is the fourth (and final) post in a series exploring <a href="https://flightfromperfection.com/cluelessness-what-to-do.html">consequentialist cluelessness</a> and its implications for effective altruism:</p><ul><li>The <a href="https://forum.effectivealtruism.org/ea/1hh/what_consequences/">first post</a> describes cluelessness &amp; its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.</li><li>The <a href="https://forum.effectivealtruism.org/ea/1ix/just_take_the_expected_value_a_possible_reply_to/">second post</a> considers a potential reply to concerns about cluelessness.</li><li>The <a href="https://forum.effectivealtruism.org/ea/1j4/how_tractable_is_cluelessness/">third post</a> examines how tractable cluelessness is – to what extent can we grow more clueful about an intervention through intentional effort?</li><li><strong>This post</strong> discusses how we might do good while being clueless to an important extent.</li></ul><p>Consider reading the previous posts (<a href="https://forum.effectivealtruism.org/ea/1hh/what_consequences/">1</a>, <a href="https://forum.effectivealtruism.org/ea/1ix/just_take_the_expected_value_a_possible_reply_to/">2</a>, <a href="https://forum.effectivealtruism.org/ea/1j4/how_tractable_is_cluelessness/">3</a>) first.</p><hr class="dividerBlock"/><p>The <a href="https://forum.effectivealtruism.org/ea/1j4/how_tractable_is_cluelessness/">last post</a> looked at whether we could grow more clueful by intentional effort. It concluded that, for the foreseeable future, we will probably remain clueless about the long-run impacts of our actions to a meaningful extent, even after taking measures to improve our understanding and foresight.</p><p>Given this state of affairs, we should act cautiously when trying to do good. This post outlines a framework for doing good while being clueless, then looks at what this framework implies about current EA cause prioritization.</p><p>The following only makes sense if you already believe that the far future matters a lot; this argument has been made <a href="https://forum.effectivealtruism.org/ea/6l/a_relatively_atheoretical_perspective_on/">elegantly elsewhere</a> so we won’t rehash it here.[1]</p><h1>An analogy: interstellar travel</h1><p>Consider a spacecraft, journeying out into space. The occupants of the craft are searching for a star system to settle. Promising destination systems are all very far away, and the voyagers don’t have a complete map of how to get to any of them. Indeed, they know very little about the space they will travel through.</p><p>To have a good journey, the voyagers will have to successfully steer their ship (both literally &amp; metaphorically). Let&#x27;s use &quot;steering capacity&quot; as an umbrella term that refers to the capacity needed to have a successful journey.[2] &quot;Steering capacity&quot; can be broken down into the following five attributes:[3]</p><ul><li>The voyagers must have a clear idea of what they are looking for. (<strong>Intent</strong>)</li><li>The voyagers must be able to reach agreement about where to go.
(<strong>Coordination</strong>)</li><li>The voyagers must be discerning enough to identify promising systems as promising, when they encounter them. Similarly, they must be discerning enough to accurately identify threats &amp; obstacles. (<strong>Wisdom</strong>)</li><li>Their craft must be powerful enough to reach the destinations they choose. (<strong>Capability</strong>)</li><li>Because the voyagers travel through unmapped territory, they must be able to see far enough ahead to avoid obstacles they encounter. (<strong>Predictive power</strong>)</li></ul><p>This spacecraft is a useful analogy for thinking about our civilization’s trajectory. Like us, the space voyagers are somewhat clueless – they don’t know quite where they should go (though they can make guesses), and they don’t know how to get there (though they can plot a course and make adjustments along the way).</p><p>The five attributes given above – intent, coordination, wisdom, capability, and predictive power – determine how successful the space voyagers will be in arriving at a suitable destination system. These same attributes can also serve as a useful framework for considering which altruistic interventions we should prioritize, given our present situation.  </p><h1>The basic point</h1><p>The basic point here is that interventions whose main known effects do not improve our steering capacity (i.e. our intent, wisdom, coordination, capability, and predictive power) are not as important as interventions whose main known effects do improve these attributes.</p><p>An implication of this is that interventions whose effectiveness is driven mainly by their <a href="https://forum.effectivealtruism.org/ea/1hh/what_consequences/">proximate impacts</a> are less important than interventions whose effectiveness is driven mainly by increasing our steering capacity.</p><p>This is because any action we take is going to have indirect &amp; long-run consequences that bear on our civilization’s trajectory. Many of the long-run consequences of our actions are unknown, so the future is unpredictable. Therefore, we ought to prioritize interventions that improve the wisdom, capability, and coordination of future actors, so that they are better positioned to address future problems that we did not foresee.</p><h1>What being clueless means for altruistic prioritization</h1><p>I think the steering capacity framework implies a portfolio approach to doing good – simultaneously pursuing a large number of diverse hypotheses about how to do good, provided that each approach maintains <a href="https://www.centreforeffectivealtruism.org/blog/hard-to-reverse-decisions-destroy-option-value/">reversibility</a>.[4]</p><p>This approach is similar to the Open Philanthropy Project’s <a href="https://www.openphilanthropy.org/blog/hits-based-giving">hits-based giving framework</a> – invest in many promising initiatives with the expectation that most will fail.</p><p>Below, I look at how this framework interacts with focus areas that effective altruists are already working on. Other causes that EA has not looked into closely (e.g. 
improving education) may also perform well under this framework; assessing causes of this sort is beyond the scope of this essay.</p><p>My thinking here is preliminary, and very probably contains errors &amp; oversights.</p><h1>EA focus areas to prioritize</h1><p>Broadly speaking, the steering capacity framework suggests prioritizing interventions that:[5]</p><ul><li>Further our understanding of what matters</li><li>Improve governance</li><li>Improve prediction-making &amp; foresight</li><li>Reduce existential risk</li><li>Increase the number of well-intentioned, highly capable people</li></ul><p></p><h3>To prioritize – better understanding what matters</h3><p>Increasing our understanding of what’s worth caring about is important for clarifying our intentions about what trajectories to aim for. For many moral questions, there is already broad agreement in the EA community (e.g. the view that all currently existing human lives matter is uncontroversial within EA). On other questions, further thinking would be valuable (e.g. how best to compare human lives to the lives of animals).</p><p>Myriad thinkers have done valuable work on this question. Particularly worth mentioning is the work of the <a href="https://foundational-research.org/">Foundational Research Institute</a>, the <a href="http://globalprioritiesproject.org/">Global Priorities Project</a>, the <a href="https://qualiaresearchinstitute.org/">Qualia Research Institute</a>, as well as the <a href="https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood">Open Philanthropy Project’s work on consciousness &amp; moral patienthood</a>.</p><p></p><h3>To prioritize – improving governance</h3><p>Improving governance is largely aimed at improving coordination – our ability to mediate diverse preferences, decide on collectively held goals, and work together towards those goals.</p><p>Efficient governance institutions are robustly useful in that they keep focus oriented on solving important problems &amp; minimize resource expenditure on zero-sum competitive signaling.</p><p>Two routes towards improved governance seem promising: (1) improving the functioning of existing institutions, and (2) experimenting with alternative institutional structures (Robin Hanson’s <a href="http://mason.gmu.edu/~rhanson/futarchy.html">futarchy proposal</a> and <a href="https://www.seasteading.org/">seasteading</a> initiatives are examples here).</p><p></p><h3>To prioritize – improving foresight</h3><p>Improving foresight &amp; prediction-making ability is important for informing our decisions.
The further we can see down the path, the more information we can incorporate into our decision-making, which in turn leads to higher-quality outcomes with fewer surprises.</p><p>Forecasting ability can definitely be improved from baseline, but there are probably hard limits on how far into the future we can extend our predictions while remaining believable.</p><p>Philip Tetlock’s <a href="https://www.goodjudgment.com/">Good Judgment Project</a> is a promising forecasting intervention, as are prediction markets like <a href="https://www.predictit.org/">PredictIt</a> and polling aggregators like <a href="http://fivethirtyeight.com/">538</a>.</p><p></p><h3>To prioritize – reducing existential risk</h3><p>Reducing existential risk can be framed as “avoiding large obstacles that lie ahead.” Avoiding extinction and “lock-in” of suboptimal states is necessary for realizing the full potential benefit of the future.</p><p>Many initiatives are underway in the x-risk reduction cause area. <a href="https://forum.effectivealtruism.org/ea/1iu/2018_ai_safety_literature_review_and_charity/">Larks’ annual review of AI safety work</a> is excellent; Open Phil has good material about <a href="https://www.openphilanthropy.org/focus/global-catastrophic-risks">projects focused on other x-risks</a>.</p><p></p><h3>To prioritize – increase the number of well-intentioned, highly capable people</h3><p>Well-intentioned, highly capable people are a scarce resource, and will almost certainly continue to be highly useful going forward. Increasing the number of well-intentioned, highly capable people seems robustly good, as such people are able to diagnose &amp; coordinate together on future problems as they arise.</p><p>Projects like <a href="http://rationality.org/">CFAR</a> and <a href="https://sparc-camp.org/">SPARC</a> are in this category.</p><p>In a different vein, <a href="https://www.enthea.net/">psychedelic experiences hold promise as a treatment</a> for treatment-resistant depression, and may also improve the intentions of highly capable people who have not reflected much about what matters (“the betterment of well people”).</p><p></p><h1>EA focus areas to deprioritize, maybe</h1><p>The steering capacity framework suggests deprioritizing animal welfare &amp; global health interventions, to the extent that these interventions’ effectiveness is driven by their proximate impacts.</p><p>Under this framework, prioritizing animal welfare &amp; global health interventions may be justified, but only on the basis of improving our intent, wisdom, coordination, capability, or predictive power.</p><h3>To deprioritize, maybe – animal welfare</h3><p>To the extent that animal welfare interventions expand our civilization’s <a href="http://www.stafforini.com/docs/Singer%20-%20The%20expanding%20circle.pdf">moral circle</a>, they may hold promise as interventions that improve our intentions &amp; understanding of what matters (the <a href="https://www.sentienceinstitute.org/">Sentience Institute</a> is doing work along this line).</p><p>However, following this framework, the case for animal welfare interventions has to be made on these grounds, not on the basis of cost-effectively reducing animal suffering in the present.</p><p>This is because the animals that are helped in such interventions cannot help “steer the ship” – they cannot contribute to making sure that our civilization’s trajectory is headed in a good direction.</p><p></p><h3>To deprioritize, maybe – global health</h3><p>To the extent that global health
interventions improve coordination, or reduce x-risk by increasing socio-political stability, they may hold promise under the steering capacity framework.</p><p>However, the case for global health interventions would have to be made on the grounds of increasing coordination, reducing x-risk, or improving another steering capacity attribute. Arguments for global health interventions on the grounds that they cost-effectively help people in the present day (without consideration of how this bears on our future trajectory) are not competitive under this framework.</p><p></p><h1>Conclusion</h1><p>In sum, I think the fact that we are intractably clueless implies a portfolio approach to doing good – pursuing, in parallel, a large number of diverse hypotheses about how to do good.</p><p>Interventions that improve our understanding of what matters, improve governance, improve prediction-making ability, reduce existential risk, and increase the number of well-intentioned, highly capable people are all promising. Global health &amp; animal welfare interventions may hold promise as well, but the case for these cause areas needs to be made on the basis of improving our steering capacity, not on the basis of their proximate impacts.</p><p></p><p><em>Thanks to members of the Mather essay discussion group and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to <a href="https://www.lesserwrong.com/posts/iZBK6PboA66Zb6dmt/doing-good-while-clueless">LessWrong</a> &amp; my <a href="https://flightfromperfection.com/doing-good-while-clueless.html">personal blog</a>.</em></p><hr class="dividerBlock"/><h2>Footnotes</h2><p>[1]: Nick Beckstead has done the best work I know of on the topic of why the far future matters. <a href="https://forum.effectivealtruism.org/ea/6l/a_relatively_atheoretical_perspective_on/">This post</a> is a good introduction; for a more in-depth treatment see his PhD thesis, <a href="https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxuYmVja3N0ZWFkfGd4OjExNDBjZTcwNjMxMzRmZGE">On the Overwhelming Importance of Shaping the Far Future</a>.</p><p>[2]: I&#x27;m grateful to Ben Hoffman for discussion that fleshed out the &quot;steering capacity&quot; concept; see <a href="http://benjaminrosshoffman.com/seeding-productive-culture/">this comment thread</a>. </p><p>[3]: Note that this list of attributes is not exhaustive &amp; this metaphor isn&#x27;t perfect. I&#x27;ve found the space travel metaphor useful for thinking about cause prioritization given our uncertainty about the far future, so am deploying it here.</p><p>[4]: Maintaining reversibility is important because given our cluelessness, we are unsure of the net impact of any action. When uncertain about overall impact, it’s important to be able to walk back actions that we come to view as net negative.</p><p>[5]: I&#x27;m not sure of how to prioritize these things amongst themselves. Probably improving our understanding of what matters &amp; our predictive power are highest priority, but that&#x27;s a very weakly held view.</p><p></p> milan_griffes X2n6pt3uzZtxGT9Lm 2018-02-15T05:04:25.291Z How tractable is cluelessness? 
https://forum.effectivealtruism.org/posts/Q8isNAMsFxny5N37Y/how-tractable-is-cluelessness <p>This is the third in a series of posts exploring <a href="https://flightfromperfection.com/cluelessness-what-to-do.html">consequentialist cluelessness</a> and its implications for effective altruism:</p><ul><li>The <a href="https://forum.effectivealtruism.org/ea/1hh/what_consequences/">first post</a> describes cluelessness &amp; its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.</li><li>The <a href="https://forum.effectivealtruism.org/ea/1ix/just_take_the_expected_value_a_possible_reply_to/">second post</a> considers a potential reply to concerns about cluelessness.</li><li><strong>This post</strong> examines how tractable cluelessness is – to what extent can we grow more clueful about an intervention through intentional effort?</li><li>The <a href="https://forum.effectivealtruism.org/ea/1kv/doing_good_while_clueless/">fourth post</a> discusses what being clueless implies about doing good.</li></ul><p></p><p>Consider reading the <a href="https://forum.effectivealtruism.org/ea/1hh/what_consequences/">first</a> and <a href="https://forum.effectivealtruism.org/ea/1ix/just_take_the_expected_value_a_possible_reply_to/">second</a> posts first.</p><hr class="dividerBlock"/><p>Let&#x27;s consider the <a href="https://concepts.effectivealtruism.org/concepts/importance-neglectedness-tractability/">tractability</a> of cluelessness in two parts:</p><ol><li>How clueful do we need to be before deciding on a course of action? (i.e. how much effort should we spend contemplating &amp; exploring before committing resources to an intervention?)</li><li>How clueful can we become by contemplation &amp; exploration?</li></ol><p></p><h2>How clueful do we need to be before deciding on a course of action?</h2><p>In his talk <a href="http://www.stafforini.com/blog/bostrom/">Crucial Considerations and Wise Philanthropy</a>, Nick Bostrom defines a crucial consideration as “a consideration such that if it were taken into account it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors but a major change of direction or priority.”</p><p>A plausible reply to “how clueful do we need to be before deciding on a course of action?” might be: “as clueful as is needed to uncover all the crucial considerations relevant to the decision.”</p><p>Deciding to act before uncovering all the crucial considerations relevant to the decision is potentially disastrous, as even one unknown crucial consideration could bear on the consequences of the decision in a way that would entirely revise the moral calculus.</p><p>In contrast, deciding to act before uncovering all non-crucial (“normal”) considerations is by definition not disastrous, as unknown normal considerations might imply a minor course adjustment but not a radically different direction.</p><p></p><h2>How clueful can we become by contemplation &amp; exploration?</h2><p>Under this framing, our second tractability question can be rephrased as “by contemplation and exploration, can we uncover all the crucial considerations relevant to a decision?”</p><p>For cases where the answer is “yes”, we can become clueful enough to make a good decision – we can uncover and consider everything that would necessitate a radical change of
direction.</p><p>Conversely, in cases where the answer is “no”, we can’t become clueful enough to make a good decision – despite our efforts there will remain unknown considerations that, if known, would radically change our decision-making.</p><p>There is a difference here between long-run consequences and indirect consequences (see definitions in <a href="https://forum.effectivealtruism.org/ea/1hh/what_consequences/">the first post</a>). By careful investigation, we can uncover more &amp; more of the indirect, temporally near consequences of an intervention. It’s plausible that for many interventions, we could uncover all the indirect consequences that relate to the intervention’s crucial considerations.</p><p>But we probably can’t uncover most of the long-run consequences of an intervention by investigation. We can improve our forecasting ability, but because of the complexity of reality, the fidelity of real-world forecasts declines as they extend into the future. It seems unlikely that our forecasting will be able to generate believable predictions of impacts more than 30 years out anytime soon.</p><p>Because many of the consequences of an intervention unfold on a long time horizon (one that’s much longer than our forecasting horizon), it’s implausible that we could uncover all the long-run consequences that relate to the intervention’s crucial considerations.</p><p></p><h2>Ethical precautionary principle</h2><p>Then, for any decision whose consequences are distributed over a long time horizon (i.e. most decisions), it’s difficult to be sure that we are operating in the “yes we can become clueful enough” category. More precisely, we can only become sufficiently clueful for decisions where there are no unknown crucial considerations that lie past our forecasting horizon.</p><p>Due to <a href="https://nickbostrom.com/astronomical/waste.html">the vast size of the future</a>, even a small probability of an unknown, temporally distant crucial consideration should give us pause.</p><p>I think this implies operating under an <em>ethical precautionary principle:</em> acting as if there were always an unknown crucial consideration that would strongly affect our decision-making, if only we knew it (i.e. always acting as if we are in the “no we can’t become clueful enough” category).</p><p>Does always following this precautionary principle imply <a href="https://en.wikipedia.org/wiki/Analysis_paralysis">analysis paralysis</a>, such that we never take any action at all? I don’t think so. We find ourselves in the middle of a process that’s underway, and devoting all of our resources to analysis &amp; contemplation is itself a decision (“<a href="https://genius.com/Rush-freewill-lyrics">If you choose not to decide, you still have made a choice</a>”).</p><p>Instead of paralyzing us, I think the ethical precautionary principle implies that we should focus our efforts in some areas and avoid others.
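</p><p>(As an aside on the forecasting point above: here’s a toy model, entirely my own construction with made-up numbers, of why forecast fidelity might decay so quickly with horizon. If each additional year of horizon preserves only a fraction p of a forecast’s fidelity, the loss compounds geometrically, and even a generous p leaves almost nothing at 30 years.)</p><pre><code># Toy model, not a real forecasting result: assume each additional year
# of horizon multiplies forecast fidelity by a constant p (made-up).
p = 0.9  # hypothetical, generous per-year fidelity
for years in (1, 5, 10, 30):
    print(years, round(p ** years, 3))
# prints: 1 0.9 / 5 0.59 / 10 0.349 / 30 0.042
</code></pre><p>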
I’ll explore this further in the <a href="https://forum.effectivealtruism.org/ea/1kv/doing_good_while_clueless/">next post</a>.</p> milan_griffes Q8isNAMsFxny5N37Y 2017-12-29T18:52:56.369Z “Just take the expected value” – a possible reply to concerns about cluelessness https://forum.effectivealtruism.org/posts/MWquqEMMZ4WXCrsug/just-take-the-expected-value-a-possible-reply-to-concerns <p>This is the second in a series of posts exploring <a href="https://flightfromperfection.com/cluelessness-what-to-do.html">consequentialist cluelessness</a> and its implications for effective altruism:</p><ul><li>The <a href="https://forum.effectivealtruism.org/ea/1hh/what_consequences/">first post</a> describes cluelessness &amp; its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.</li><li><strong>This post</strong> considers a potential reply to concerns about cluelessness – maybe when we are uncertain about a decision, we should just choose the option with the highest expected value.</li><li>Following posts discuss <a href="https://forum.effectivealtruism.org/ea/1j4/how_tractable_is_cluelessness/">how tractable cluelessness is</a>, and what <a href="https://forum.effectivealtruism.org/ea/1kv/doing_good_while_clueless/">being clueless implies about doing good</a>.</li></ul><p>Consider reading the <a href="https://forum.effectivealtruism.org/ea/1hh/what_consequences/">first post</a> first.</p><hr class="dividerBlock"/><p>A rationalist’s reply to concerns about cluelessness could be as follows:</p><ul><li>Cluelessness is just a special case of empirical uncertainty.[1]</li><li>We have a framework for dealing with empirical uncertainty – <a href="https://en.wikipedia.org/wiki/Expected_value">expected value</a>.</li><li>So for decisions where we are uncertain, we can determine the best course of action by multiplying our best-guess probability by our best-guess utility for each option, then choosing the option with the highest expected value.</li></ul><p>While this approach makes sense in the abstract, it doesn’t work well in real-world cases. The difficulty is that it’s unclear what “best-guess” probabilities &amp; utilities we should assign, and unclear to what extent we should believe our best guesses.</p><p>Consider this passage from <a href="https://flightfromperfection.com/files/post_attachments/cluelessness_greaves_2016.pdf">Greaves 2016</a> (“credence function” can be read roughly as “probability”):</p><blockquote>The alternative line I will explore here begins from the suggestion that in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’). Intuitively, the idea here is that when the evidence fails conclusively to recommend any particular credence function above certain others, agents are rationally required to remain neutral between the credence functions in question: to include all such equally-recommended credence functions in their representor.</blockquote><p></p><p>To translate a little, Greaves is saying that real-world agents don’t assign precise probabilities to outcomes; instead, they consider multiple possible probabilities for each outcome (taken together, these candidate probability functions make up the agent’s “representor”).
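</p><p>For concreteness, here’s a minimal sketch of how a representor blocks a straightforward expected value calculation (this is my gloss, not Greaves’ formalism, and every number is invented for illustration):</p><pre><code># Two credence functions an agent might find equally reasonable,
# over the possible outcomes of funding some intervention.
utilities = {&quot;great&quot;: 100, &quot;neutral&quot;: 0, &quot;bad&quot;: -50}  # made-up utilities

representor = [
    {&quot;great&quot;: 0.30, &quot;neutral&quot;: 0.60, &quot;bad&quot;: 0.10},  # credence function A
    {&quot;great&quot;: 0.10, &quot;neutral&quot;: 0.60, &quot;bad&quot;: 0.30},  # credence function B
]

def expected_value(credences):
    return sum(credences[o] * u for o, u in utilities.items())

for credences in representor:
    print(round(expected_value(credences), 2))
# A: 25.0 (funding looks good); B: -5.0 (funding looks bad)
</code></pre><p>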
Because an agent holds multiple probabilities for each outcome, and has no way by which to arbitrate between its multiple probabilities, it cannot use a straightforward expected value calculation to determine the best outcome.</p><p>Intuitively, this makes sense. Probabilities can only be formally assigned when the <a href="https://en.wikipedia.org/wiki/Sample_space">sample space</a> is fully mapped out, and for most real-world decisions we can’t map the full sample space (in part because the world is very complicated, and in part because we can’t predict the long-run consequences of an action).[2] We can make subjective probability estimates, but if a probability estimate does not flow out of a clearly articulated model of the world, its believability is suspect.[3]</p><p>Furthermore, because multiple probability estimates can seem sensible, agents can hold multiple estimates simultaneously (i.e. their representor). For decisions where the full sample space isn’t mapped out (i.e. most real-world decisions), the method by which human decision-makers convert their multi-value representor into a single-value, “best-guess” estimate is opaque.</p><p>The next time you encounter someone making a subjective probability estimate, ask “how did you arrive at that number?” The answer will frequently be along the lines of “it seems about right” or “I would be surprised if it were higher.” Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate.</p><p>So we have believability problems on two levels:</p><ol><li>Whenever we make a probability estimate that doesn’t flow from a clear world-model, the believability of that estimate is questionable.</li><li>And if we attempt to reconcile multiple probability estimates into a single best-guess, the believability of that best-guess is questionable because our method of reconciling multiple estimates into a single value is opaque.[4]</li></ol><p></p><p>By now it should be clear that simply following the expected value is not a sufficient response to concerns of cluelessness. However, it’s possible that cluelessness can be addressed by other routes – perhaps by diligent investigation, we can grow clueful enough to make believable decisions about how to do good. </p><p>The <a href="https://forum.effectivealtruism.org/ea/1j4/how_tractable_is_cluelessness/">next post</a> will consider this further.</p><p></p><p><em>Thanks to Jesse Clifton and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to <a href="https://flightfromperfection.com/just-take-the-expected-value.html">my personal blog</a>.</em></p><hr class="dividerBlock"/><h2>Footnotes</h2><p>[1]: This is separate from normative uncertainty – uncertainty about what criterion of moral betterness to use when comparing options. Empirical uncertainty is uncertainty about the overall impact of an action, given a criterion of betterness. In general, cluelessness is a subset of empirical uncertainty. </p><p>[2]: Leonard Savage, who worked out much of the foundations of Bayesian statistics, considered Bayesian decision theory to only apply in &quot;small world&quot; settings. See p. 16 &amp; p. 
82 of the second edition of his <a href="https://books.google.com/books/about/The_Foundations_of_Statistics.html?id=zSv6dBWneMEC">Foundations of Statistics</a> for further discussion of this point.</p><p>[3]: Thanks to Jesse Clifton for making this point.</p><p>[4]: This problem persists even if each input estimate flows from a clear world-model.</p> milan_griffes MWquqEMMZ4WXCrsug 2017-12-21T19:37:07.709Z What consequences? https://forum.effectivealtruism.org/posts/LPMtTvfZvhZqy25Jw/what-consequences <p>This is the first in a series of posts exploring <a href="https://flightfromperfection.com/cluelessness-what-to-do.html">consequentialist cluelessness</a> and its implications for effective altruism:</p><ul><li><strong>This post </strong>describes cluelessness &amp; its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.</li><li>The <a href="https://forum.effectivealtruism.org/ea/1ix/just_take_the_expected_value_a_possible_reply_to/">second post</a> considers a potential reply to concerns about cluelessness.</li><li>The <a href="https://forum.effectivealtruism.org/ea/1j4/how_tractable_is_cluelessness/">third post</a> examines how tractable cluelessness is – to what extent can we grow more clueful about an intervention through intentional effort?</li><li>The <a href="https://forum.effectivealtruism.org/ea/1kv/doing_good_while_clueless/">fourth post</a> discusses how we might do good while being clueless to an important extent.</li></ul><p></p><p><strong>My prior</strong> is that cluelessness presents a profound challenge to effective altruism in its current instantiation, and that we need to radically revise our beliefs about doing good such that we prioritize activities that are robust to moral &amp; empirical uncertainty.</p><p><strong>My goal</strong> in writing this piece is to elucidate this position, or to discover why it’s mistaken. I’m posting in serial form to allow more opportunity for forum readers to change my mind about cluelessness and its implications.</p><hr class="dividerBlock"/><p>By “cluelessness”, I mean the possibility that we don’t have a clue about the overall net impact of our actions.[1] Another way of framing this concern: when we think about the consequences of our actions, how do we determine what consequences we should consider?</p><p>First, some definitions. The consequences of an action can be divided into three categories:</p><ul><li><strong>Proximate consequences</strong> – the immediate effects that occur soon afterward to intended object(s) of an action. Relatively easy to observe and measure.</li></ul><p></p><ul><li><strong>Indirect consequences</strong> – the effects that occur soon afterward to unintended object(s) of an action. These could also be termed “cross-stream” effects. Relatively difficult to observe and measure.</li></ul><p></p><ul><li><strong>Long-run consequences</strong> – the effects of an action that occur much later, including effects on both intended and unintended objects. These could also be termed “downstream” effects. Impossible to observe and measure; most long-run consequences can only be estimated.[2]</li></ul><hr class="dividerBlock"/><h2>Effective altruist approaches towards consequences</h2><p>EA-style reasoning addresses consequentialist cluelessness in one of two ways:</p><p><strong>1.
The brute-good approach</strong> – collapsing the consequences of an action into a proximate “brute-good” unit, then comparing the aggregate “brute-good” consequences of multiple interventions to determine the intervention with the best (brute good) consequences.</p><ul><ul><li>For example, GiveWell uses “deaths averted” as a brute-good unit, then converts other impacts of the intervention being considered into “deaths-averted equivalents”, then compares interventions to each other using this common unit.</li><li>This approach is common among the cause areas of animal welfare, global development, and EA coalition-building.</li></ul></ul><p><strong>2. The x-risk reduction approach</strong> – simplifying “do the actions with the best consequences” into “do the actions that yield the most existential-risk reduction.” Proximate &amp; indirect consequences are only considered insofar as they bear on x-risk; the main focus is on the long-run: whether or not humanity will survive into the far future.</p><ul><ul><li>Nick Bostrom makes this explicit in his essay, <em><a href="https://nickbostrom.com/astronomical/waste.html">Astronomical Waste</a>:</em> “The utilitarian imperative ‘Maximize expected aggregate utility!’ can be simplified to the maxim ‘Minimize existential risk!’”</li><li>This approach is common among the x-risk reduction cause area.</li></ul></ul><p>EA focus can be imagined as a bimodal distribution – EA either considers only the proximate effects of an intervention, ignoring its indirect &amp; long-run consequences; or considers only the very long-run effects of an intervention (i.e. to what extent the intervention reduces x-risk), considering all proximate &amp; indirect effects only insofar as they bear on x-risk reduction.[3]</p><p>Consequences that fall between these two peaks of attention are not included in EA’s moral calculus, nor are they explicitly determined to be of negligible importance. Instead, they are mentioned in passing, or ignored entirely.</p><p>This is problematic. It’s likely that for most interventions, these consequences compose a substantial portion of the intervention’s overall impact.</p><hr class="dividerBlock"/><h2>Cluelessness and the brute-good approach</h2><p>The cluelessness problem for the brute-good approach can be stated as follows:</p><blockquote>Due to the difficulty of observing and measuring indirect &amp; long-run consequences of interventions, we do not know the bulk of the consequences of any intervention, and so cannot confidently compare the consequences of one intervention to another. Comparing only the proximate effects of interventions assumes that proximate effects compose the majority of interventions’ impact, whereas in reality the bulk of an intervention’s impact is composed of indirect &amp; long-run effects which are difficult to observe and difficult to estimate.[4]</blockquote><p></p><p>The brute-good approach often implicitly assumes symmetry of non-proximate consequences (i.e. for every indirect &amp; long-run consequence, there is an equal and opposite consequence such that indirect &amp; long-run consequences cancel out and only proximate consequences matter). This assumption seems poorly supported.[5]</p><p>It might be thought that indirect &amp; long-run consequences can be surfaced as part of the decision-making process, then included in the decision-maker’s calculus. This seems very difficult to do in a believable way (i.e. a way in which we feel confident that we’ve uncovered all crucial considerations). 
I will consider this issue further in the <a href="https://forum.effectivealtruism.org/ea/1ix/just_take_the_expected_value_a_possible_reply_to/">next post</a> of this series.</p><p>Some examples follow, to make the cluelessness problem for the brute-good approach salient.</p><p></p><h3>Example: baby Hitler</h3><p>Consider the position of an Austrian physician in the 1890s who was called to tend to a sick infant, Adolf Hitler.</p><p>Considering only proximate effects, the physician should clearly have treated baby Hitler and made efforts to ensure his survival. But the picture is clouded when indirect &amp; long-run consequences are added to the calculus. Perhaps letting baby Hitler die (or even committing infanticide) would have been better in the long run. Or perhaps the German zeitgeist of the 1920s and 30s was such that the terrors of Nazism would have been unleashed even absent Hitler’s leadership. Regardless, the decision to minister to Hitler as a sick infant is not straightforward when indirect &amp; long-run consequences are considered.</p><p>A potential objection here is that the Austrian physician could in no way have foreseen that the infant they were called to tend to would later become a terrible dictator, so the physician should have done what seemed best given the information they could uncover. But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best. Assessing only proximate consequences would provide some guidance about what action to take, but this guidance would not necessarily point to the action with the best consequences in the long run.</p><p></p><h3>Example: bed net distributions in unstable regions</h3><p>The <a href="https://www.againstmalaria.com/Default.aspx">Against Malaria Foundation (AMF)</a> funds bed net distributions in developing countries, with the goal of reducing malaria incidence. In 2017, AMF funded its largest distribution to date, <a href="https://www.againstmalaria.com/Distribution1.aspx?ProposalID=207">over 12 million nets in Uganda</a>.</p><p>Uganda has had <a href="https://en.wikipedia.org/wiki/Terrorism_in_Uganda">a chronic problem with terror groups</a>, notably the <a href="https://en.wikipedia.org/wiki/Lord%27s_Resistance_Army">Lord’s Resistance Army</a> operating in the north and <a href="https://en.wikipedia.org/wiki/Al-Shabaab_(militant_group)">Al-Shabab</a> carrying out attacks in the capital. Though the country is believed to be relatively stable at present, there remain non-negligible risks of civil war or government overthrow.</p><p>Considering only the proximate consequences, distributing bed nets in Uganda is probably a highly cost-effective method of reducing malaria incidence and saving lives.
But this assessment is muddied when indirect and long-run effects are also considered.</p><p>Perhaps saving the lives of young children results in increasing the supply of child-soldier recruits for rebel groups, leading to increased regional instability.</p><p>Perhaps importing &amp; distributing millions of foreign-made bed nets disrupts local supply chains and breeds Ugandan resentment toward foreign aid.</p><p>Perhaps stabilizing the child mortality rate during <a href="https://en.wikipedia.org/wiki/God_Loves_Uganda">a period of fundamentalist-Christian revival</a> increases the probability of a fundamentalist-Christian value system becoming locked in, which could prove problematic further down the road.</p><p>I’m not claiming that any of the above are likely outcomes of large-scale bed net distributions. The claim is that the above are all possible effects of a large-scale bed net distribution (each with a non-negligible, unknown probability), and that due to many possible effects like this, we are prospectively clueless about the overall impact of a large-scale bed net distribution.</p><p></p><h3>Example: direct-action animal-welfare interventions</h3><p>Some animal welfare activists advocate <a href="https://www.directactioneverywhere.com/why-direct-action/">direct action</a>, the practice of directly confronting problematic food industry practices.</p><p>In 2013, animal-welfare activists organized <a href="https://www.directactioneverywhere.com/chipotle/">a “die-in” at a San Francisco Chipotle</a>. At the die-in, activists confronted Chipotle consumers with claims about the harm inflicted on farm animals by Chipotle’s supply chain.</p><p>The die-in likely had the proximate effect of raising awareness of animal welfare among the Chipotle consumers and employees who were present during the demonstration. Increasing social awareness of animal welfare is probably positive according to consequentialist perspectives that give moral consideration to animals.</p><p>However, if considering indirect and long-run consequences as well, the overall impact of direct action demonstrations like the die-in is unclear. Highly confrontational demonstrations may result in the animal welfare movement being labeled “radical” or “dangerous” by the mainstream, thus limiting the movement’s influence.</p><p>Confrontational tactics may also be controversial within the animal welfare movement, causing divisiveness and potentially leading to a schism, which could harm the movement’s efficacy.</p><p>Again, I’m not claiming that the above are likely effects of direct-action animal-welfare interventions. The claim is that indirect &amp; long-run effects like this each have a non-negligible, unknown probability, such that we are prospectively clueless regarding the overall impact of the intervention.</p><hr class="dividerBlock"/><h2>Cluelessness and the existential risk reduction approach</h2><p>Unlike the brute-good approach, which tends to overweight the impact of proximate effects and underweight that of indirect &amp; long-run effects, the x-risk reduction approach focuses almost exclusively on the long-run consequences of actions (i.e. how they effect the probability that humanity survives into the far future). Interventions can be compared according to a common criterion: the amount by which they are expected to reduce existential risk.</p><p>While I think cluelessness poses less difficulty for the x-risk reduction approach, it remains problematic. 
The cluelessness problem for the x-risk reduction approach can be stated as follows:</p><blockquote>Interventions aimed at reducing existential risk have a clear criterion by which to make comparisons: “which intervention yields a larger reduction in existential risk?” However, because the indirect &amp; long-run consequences of any specific x-risk intervention are difficult to observe, measure, and estimate, it is hard to arrive at a believable estimate of the amount of x-risk reduction an intervention yields. And because believable estimates are hard to come by, we are somewhat clueless when trying to compare the impact of one x-risk intervention to another.</blockquote><p>An example follows to make this salient.</p><p></p><h3>Example: stratospheric aerosol injection to blunt impacts of climate change</h3><p><a href="https://www.openphilanthropy.org/research/cause-reports/geoengineering#Background_on_stratospheric_aerosol_injection">Injecting sulfate aerosols into the stratosphere</a> has been put forward as an intervention that could reduce the impact of climate change (by reflecting sunlight away from the earth, thus cooling the planet).</p><p>However, it’s possible that stratospheric aerosol injection could have unintended consequences, such as cooling the planet so much that the surface is rendered uninhabitable (incidentally, this is the background story of the film <em><a href="https://en.wikipedia.org/wiki/Snowpiercer">Snowpiercer</a></em>). Because aerosol injection is relatively cheap to do (on the order of <a href="https://www.openphilanthropy.org/research/cause-reports/geoengineering#footnote5">tens of billions USD</a>), there is concern that small nation-states, especially those disproportionately affected by climate change, might deploy aerosol injection programs without the consent or foreknowledge of other countries.</p><p>Given this strategic landscape, the effects of calling attention to stratospheric aerosol injection as a cause are unclear. It’s possible that further public-facing work on the intervention results in international agreements governing the use of the technology. This would most likely be a reduction in existential risk along this vector.</p><p>However, it’s also possible that further public-facing work on aerosol injection makes the technology more discoverable, revealing the technology to decision-makers who were previously ignorant of its promise. Some of these decision-makers might be inclined to pursue research programs aimed at developing a stratospheric aerosol injection capability, which would most likely increase existential risk along this vector.</p><p>It is difficult to arrive at believable estimates of the probability that further work on aerosol injection yields an x-risk reduction, and of the probability that further work yields an x-risk increase (though more granular mapping of the game-theoretic and strategic landscape here would increase the believability of our estimates).</p><p>Taken together, then, it’s unclear whether public-facing work on aerosol injection yields an x-risk reduction on net.
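</p><p>To make this sign-uncertainty concrete, here’s a toy calculation (every probability and magnitude below is invented for illustration; none of these are anyone’s actual estimates):</p><pre><code># Net expected change in x-risk from further public-facing work, under
# two equally defensible (and equally made-up) parameterizations.
def net_xrisk_change(p_agreements, p_discovery,
                     reduction=0.01, increase=0.02):
    return -p_agreements * reduction + p_discovery * increase

print(round(net_xrisk_change(0.6, 0.2), 4))  # -0.002: net x-risk reduction
print(round(net_xrisk_change(0.3, 0.4), 4))  # +0.005: net x-risk increase
</code></pre><p>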
(Note too that keeping work on the intervention secret may not straightforwardly reduce x-risk either, as no secret research program can guarantee 100% leak prevention, and leaked knowledge may have a more negative effect than the same knowledge made freely available.)</p><p>We are, to some extent, clueless regarding the net impact of further work on the intervention.</p><hr class="dividerBlock"/><h2>Where to, from here?</h2><p>It might be claimed that, although we start out being clueless about the consequences of our actions, we can grow more clueful by way of intentional effort &amp; investigation. Unknown unknowns can be uncovered and incorporated into expected-value estimates. Plans can be adjusted in light of new information. Organizations can pivot as their approaches run into unexpected hurdles.</p><p>Cluelessness, in other words, might be very tractable.</p><p>This is the claim I will consider in the <a href="https://forum.effectivealtruism.org/ea/1ix/just_take_the_expected_value_a_possible_reply_to/">next post</a>. My prior is that cluelessness is quite intractable, and that despite best efforts we will remain clueless to an important extent.</p><p>The topic definitely deserves careful examination.</p><p></p><p></p><p><em>Thanks to members of the Mather essay discussion group for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to <a href="https://flightfromperfection.com/what-consequences.html">my personal blog</a>.</em></p><hr class="dividerBlock"/><h2>Footnotes</h2><p>[1]: The term &quot;cluelessness&quot; is not my coinage; I am borrowing it from academic philosophy. See in particular <a href="https://flightfromperfection.com/files/post_attachments/cluelessness_greaves_2016.pdf">Greaves 2016</a>.</p><p>[2]: Indirect &amp; long-run consequences are sometimes referred to as “<a href="https://blog.givewell.org/2013/05/15/flow-through-effects/">flow-through effects</a>,” which, as far as I can tell, does not make a clean distinction between temporally near effects (“indirect consequences”) and temporally distant effects (“long-run consequences”). This distinction seems interesting, so I will use “indirect” &amp; “long-run” instead of “flow-through effects.”</p><p>[3]: Thanks to Daniel Berman for making this point.</p><p>[4]: More precisely, the brute-good approach assumes that indirect &amp; long-run consequences will either:</p><ul><li>Be negligible</li><li>Cancel each other out via symmetry (see footnote 5)</li><li>On net point in the same direction as the proximate consequences (see <a href="http://globalprioritiesproject.org/2014/06/human-and-animal-interventions/">Cotton-Barratt 2014</a>: &quot;The upshot of this is that it is likely interventions in human welfare, as well as being immediately effective to relieve suffering and improve lives, also tend to have a significant long-term impact. This is often more difficult to measure, but the short-term impact can generally be used as a reasonable proxy.&quot;)</li></ul><p></p><p>[5]: See <a href="https://flightfromperfection.com/files/post_attachments/cluelessness_greaves_2016.pdf">Greaves 2016</a> for discussion of the symmetry argument, and in particular p. 9 for discussion of why it&#x27;s insufficient for cases of &quot;complex cluelessness.&quot; </p> milan_griffes LPMtTvfZvhZqy25Jw 2017-11-23T18:27:21.894Z Reading recommendations for the problem of consequentialist scope? 
https://forum.effectivealtruism.org/posts/4NnvA2sXbB87CqArF/reading-recommendations-for-the-problem-of-consequentialist <p>Determining which&#xA0;scope of outcomes to consider when making a decision seems like a difficult problem for consequentialism. By &quot;scope of outcomes&quot; I mean how far into the future and how many links in the causal chain to incorporate into decision-making. For example, if I&apos;m assessing the comparative goodness of two charities, I&apos;ll need to have some method of comparing&#xA0;future impacts (perhaps &quot;consider impacts that occur in the next 20 years&quot;) and flow-through contemporaneous impacts (perhaps &quot;consider the actions of the charitable recipient, but not the actions of those they interact with&quot;).<br><br>I&apos;m using&#xA0;&quot;consequentialist scope&quot; as a shorthand for this type of determination because I&apos;m not aware of a common-usage word for it.<br><br>Consequentialist scope seems both (a) important and (b) difficult to think about clearly, so I want to learn more about it. <br><br>Does anyone have&#xA0;reading recommendations for this? Philosophy papers, blog posts, books, whatever.&#xA0;I didn&apos;t encounter it in <em>Reasons and Persons</em>, but I&apos;ve only read the first third so far.</p> milan_griffes 4NnvA2sXbB87CqArF 2017-08-02T02:07:46.769Z Should Good Ventures focus on current giving opportunities, or save for future giving opportunities? https://forum.effectivealtruism.org/posts/CdS9JSRLYchehTMhN/should-good-ventures-focus-on-current-giving-opportunities <p>Around this time of year, <a href="http://www.givewell.org/">GiveWell</a>&#xA0;traditionally spends a lot of time thinking about game theoretic considerations &#x2013; specifically, what funding recommendation it ought to make to <a href="http://www.goodventures.org/">Good Ventures</a>&#xA0;so that Good Ventures allocates&#xA0;its resources wisely. (Here are GiveWell&apos;s game theoretic posts from <a href="http://blog.givewell.org/2014/12/02/donor-coordination-and-the-givers-dilemma/">2014</a>&#xA0;&amp; <a href="http://blog.givewell.org/2015/11/25/good-ventures-and-giving-now-vs-later/">2015</a>.)</p> <p>The main considerations here are:</p> <ol> <li>How should Good Ventures act in an environment where individual donors &amp; other foundations are also giving money?</li> <li>How should Good Ventures value its current giving opportunities compared to the giving opportunities it will have in the future?</li> </ol> <p>I&apos;m more interested in the second consideration, so that&apos;s what I&apos;ll engage with here. If present-day opportunities seem better than expected future opportunities, Good Ventures should fully take advantage of its current opportunities, because they are the best giving opportunities it will ever encounter. Conversely, if present-day opportunities seem worse than expected future opportunities, Good Ventures should give sparingly now, preserving its resources for the superior upcoming opportunities.</p> <p>Personally, I&apos;m bullish on present-day opportunities. 
Present-day opportunities seem more attractive than future ones for a couple reasons:</p> <ol> <li>The world is improving, so giving opportunities will get worse if current trends continue.</li> <li>There&apos;s a non-negligible chance that a global catastrophic risk (GCR) occurs within Good Ventures&apos; lifetime (it&apos;s a <a href="http://www.vox.com/2015/4/24/8457895/givewell-open-philanthropy-charity">&quot;burn-down&quot; foundation</a>), thus nullifying any future giving opportunities.</li> <li>Strong AI might emerge sometime in the next 30 years. This could be a global catastrophe, or it could ferry humanity into a post-scarcity environment, wherein philanthropic giving opportunities are either dramatically reduced or entirely absent.</li> </ol> <p>So far, my reasoning has been qualitative, and <a href="http://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/">if it&apos;s worth doing, it&apos;s worth doing with made-up numbers</a>, so let&apos;s assign some subjective probabilities to the different scenarios we could encounter (in the next 30 years):</p> <ul> <li>P(current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs) = 30%</li> <li>P(current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn&apos;t lead to post-scarcity) = 56%</li> <li>P(strong AI leads to a post-scarcity economy) = 5%</li> <li>P(strong AI leads to a global catastrophe) = 2%</li> <li>P(a different GCR occurs) = 7%</li> </ul> <p>To assess the expected value of these scenarios, we also have to assign a utility score to each scenario (obviously, the following is incredibly rough):</p> <ul> <li>Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs = Baseline</li> <li>Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn&apos;t lead to post-scarcity = 2x as good as baseline</li> <li>Strong AI leads to a post-scarcity economy = 100x as good as baseline</li> <li>Strong AI leads to a global catastrophe = 0x as good as baseline</li> <li>A different GCR occurs = 0x as good as baseline</li> </ul> <p>Before calculating the expected value of each scenario, let&apos;s unpack my assessments a bit. I&apos;m imagining &quot;baseline&quot; goodness as essentially things as they are right now, with no dramatic changes to human happiness in the next 30 years. If quality of life broadly construed continues to improve over the next 30 years, I assess that as twice as good as the baseline scenario.</p> <p>Achieving post-scarcity in the next 30 years is assessed as 100x as good as the baseline scenario of no improvement. 
(Arguably this could be nearly infinitely better than baseline, but to avoid Pascal&apos;s mugging we&apos;ll cap it at 100x.)</p> <p>A global catastrophe in the next 30 years is assessed as 0x as good as baseline.</p> <p>Again, this is all very rough.</p> <p>Now, calculating the expected value of each outcome is straightforward:</p> <ul> <li>Expected value of <em>current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs</em> = 0.3 x 1 = 0.3</li> <li>Expected value of <em>current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn&apos;t lead to post-scarcity</em> = 0.56 x 2 = 1.12</li> <li>Expected value of <em>strong AI leads to a post-scarcity economy</em> = 0.05 x 100 = 5</li> <li>Expected value of <em>strong AI leads to a global catastrophe</em> = 0.02 x 0 = 0</li> <li>Expected value of <em>a different GCR occurs</em> = 0.07 x 0 = 0</li> </ul> <p>And each scenario maps to a now-or-later giving decision:</p> <ul> <li>Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs &#x2013;&gt; Give later (because new opportunities may be discovered)</li> <li>Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn&apos;t lead to post-scarcity &#x2013;&gt; Give now (because the best giving opportunities are the ones we&apos;re currently aware of)</li> <li>Strong AI leads to a post-scarcity economy &#x2013;&gt; Give now (because philanthropy is obsolete in post-scarcity)</li> <li>Strong AI leads to a global catastrophe (GCR) &#x2013;&gt; Give now (because philanthropy is nullified by a global catastrophe)</li> <li>A different GCR occurs &#x2013;&gt; Give now (because philanthropy is nullified by a global catastrophe)</li> </ul> <p>So, we can add up the expected values of all the &quot;give now&quot; scenarios and all the &quot;give later&quot; scenarios, and see which sum is higher:</p> <ul> <li><em>Give now</em> total expected value = 1.12 + 5 + 0 + 0 = 6.12</li> <li><em>Give later</em> total expected value = 0.3</li> </ul> <p>This is a little strange because GCR outcomes are given no weight, but in reality if we were faced with a substantial risk of a global catastrophe, that would strongly influence our decision-making. Maybe the proper way to do this is to assign a negative value to GCR outcomes and include them in the &quot;give later&quot; bucket, but that pushes even further in the direction of &quot;give now&quot; so I&apos;m not going to fiddle with it here.</p> <p>Comparing the sums&#xA0;shows that, in expectation, giving now will lead to substantially more value. Most of this is driven by the post-scarcity variable, but even with post-scarcity outcomes excluded, I still assess &quot;give now&quot; scenarios to have about 4x the expected value of &quot;give later&quot; scenarios.</p> <p>Yes, this exercise is ad-hoc and a little silly. Others&#xA0;could&#xA0;assign&#xA0;different probabilities &amp; utilities, which would&#xA0;lead them to different conclusions. 
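</p> <p>For anyone who wants to plug in their own numbers, here&apos;s a minimal sketch of the arithmetic above (the probabilities &amp; utilities are the made-up values from this post; the names and structure are mine):</p> <pre><code># Made-up scenario probabilities, utilities, and now/later mappings
# from the exercise above; swap in your own to see how the answer shifts.
scenarios = {
    # name: (probability, utility, decision)
    &quot;improvement stalls or reverses&quot;: (0.30, 1, &quot;later&quot;),
    &quot;improvement continues&quot;: (0.56, 2, &quot;now&quot;),
    &quot;strong AI, post-scarcity&quot;: (0.05, 100, &quot;now&quot;),
    &quot;strong AI, catastrophe&quot;: (0.02, 0, &quot;now&quot;),
    &quot;other GCR&quot;: (0.07, 0, &quot;now&quot;),
}

totals = {&quot;now&quot;: 0.0, &quot;later&quot;: 0.0}
for probability, utility, decision in scenarios.values():
    totals[decision] += probability * utility

print(totals)  # comes out to roughly 6.12 for now vs. 0.3 for later
</code></pre> <p>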
But the point the exercise illustrates is important: if you&apos;re like me in thinking that, over the next 30 years, things are most likely going to continue slowly improving with some chance of a trend reversal and a tail risk of major societal disruption, then in expectation, present-day giving opportunities are a better bet than future giving opportunities.</p> <p>&#xA0;---</p> <p><strong>Disclosure:</strong>&#xA0;I used to work at GiveWell.</p> <p>A version of this post appeared on <a href="http://flightfromperfection.com/">my personal blog</a>.</p> milan_griffes CdS9JSRLYchehTMhN 2016-11-07T16:10:29.709Z