aarongertler feed - EA Forum Reader aarongertler’s posts and comments on the Effective Altruism Forum en-us EA Forum Prize: Winners for January 2019 https://forum.effectivealtruism.org/posts/k7j7oxMcHsun2nC5H/ea-forum-prize-winners-for-january-2019 <p> CEA is pleased to announce the winners of the January 2019 EA Forum Prize!</p><p>In first place (for a prize of $999): &quot;<u><a href="https://forum.effectivealtruism.org/posts/hP6oEXurLrDXyEzcT/ea-survey-2018-series-cause-selections">EA Survey 2018 Series: Cause Selections</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/david_moss">David_Moss</a></u>, <u><a href="https://forum.effectivealtruism.org/users/incogneilo18">Neil_Dullaghan</a></u>, and Kim Cuddington.</p><p>In second place (for a prize of $500): &quot;<u><a href="https://forum.effectivealtruism.org/posts/Ns3h8rCtsTMgFZ9eH/ea-giving-tuesday-donation-matching-initiative-2018">EA Giving Tuesday Donation Matching Initiative 2018 Retrospective</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/avin">AviNorowitz</a></u>.</p><p>In third place (for a prize of $250): &quot;<u><a href="https://forum.effectivealtruism.org/posts/jn7TwAtFsHLW3jnQK/eagx-boston-2018-postmortem">EAGx Boston 2018 Postmortem</a></u>”, by <u><a href="https://forum.effectivealtruism.org/users/mjreard">Mjreard</a></u>.</p><p>We also awarded prizes in <u><a href="https://forum.effectivealtruism.org/posts/k4SLFn74Nsbn4sbMA/ea-forum-prize-winners-for-november-2018">November</a></u> and <u><a href="https://forum.effectivealtruism.org/posts/gsNDoqpB2pWq5yYLv/ea-forum-prize-winners-for-december-2018">December</a></u>.</p><h2>What is the EA Forum Prize?</h2><p>Certain posts exemplify the kind of content we <u><a href="https://forum.effectivealtruism.org/about">most want to see</a></u> on the EA Forum. 
They are well-researched and well-organized; they care about <u><a href="https://ideas.ted.com/why-you-think-youre-right-even-when-youre-wrong/">informing readers, not just persuading them</a></u>.</p><p>The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum&#x27;s users.</p><h2>The voting process</h2><p>All posts published in the month of January qualified for voting, save for those written by CEA staff and Prize judges.</p><p>Prizes were chosen by five people. Three of them are the Forum&#x27;s moderators (<u><a href="https://forum.effectivealtruism.org/users/aarongertler">Aaron Gertler</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>).</p><p>The others were two of the three highest-karma users at the time the new Forum was launched (<u><a href="https://forum.effectivealtruism.org/users/peter_hurford">Peter Hurford</a></u> and <u><a href="https://forum.effectivealtruism.org/users/joey">Joey Savoie</a></u> — <u><a href="https://forum.effectivealtruism.org/users/robert_wiblin">Rob Wiblin</a></u> took this month off).</p><p>Voters recused themselves from voting for content written by their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.</p><p>Winners were chosen by an initial round of <u><a href="https://en.wikipedia.org/wiki/Approval_voting">approval voting</a></u>, followed by a runoff vote to resolve ties.</p><h2>About the January winners</h2><p>“<u><a href="https://forum.effectivealtruism.org/posts/hP6oEXurLrDXyEzcT/ea-survey-2018-series-cause-selections">EA Survey 2018 Series: Cause Selections</a></u>”, like the other posts in that series, makes important data from the EA Survey much easier to find. 
The summary and use of descriptive headings both increase readability, and the methodological details help to put the post’s numbers in context.</p><p>As a movement, we collect a lot of information about ourselves, and it’s really helpful when authors report that information in a way that makes it easier to understand. All the posts in this series are worth reading if you want to learn about the EA community.</p><p>--</p><p>The EA Giving Tuesday program shows what a team of volunteers can do when they notice an opportunity — and how much more good can be done when those volunteers actively work to improve their project (in this case, they raised the matching funds they obtained by a factor of 10 between 2017 and 2018). </p><p>“<u><a href="https://forum.effectivealtruism.org/posts/Ns3h8rCtsTMgFZ9eH/ea-giving-tuesday-donation-matching-initiative-2018">EA Giving Tuesday Donation Matching Initiative 2018 Retrospective</a></u>” illustrates this well, taking readers through the setup and self-improvement processes of the Initiative, in a way that offers lessons for any number of other projects. </p><p>Documentation like this is important for keeping a project going even if a key contributor stops being available to work on it. We hope that others will learn from the EA Giving Tuesday example to create such documents (and Lessons Learned sections) for their own projects.</p><p>—</p><p>“<u><a href="https://forum.effectivealtruism.org/posts/jn7TwAtFsHLW3jnQK/eagx-boston-2018-postmortem">EAGx Boston 2018 Postmortem</a></u>” is a well-designed guide to running a small EA conference, which explains many important concepts in a clear and practical way using stories from a particular event. 
</p><p>Notable features of the post:</p><ul><li>The author links directly to materials they used for the event (like a template for inviting speakers), helping other organizers save time by giving them something to build on.</li><li>The takeaway section for each subtopic helps readers find the knowledge they want, whether they&#x27;re planning a full event or are just curious to see how another conference handled food.</li></ul><p>I personally expect to share this postmortem whenever someone asks me about running an EA event (whether with 20 people or 200), and I hope to see an updated version after this year’s EAGx Boston!</p><h2>The future of the Prize</h2><p>When we launched the EA Forum Prize, we planned on running the program for three months before deciding whether to keep awarding monthly prizes. We still aren’t sure whether we’ll do so. Our goals for the program were as follows:</p><ol><li>Create an incentive for authors to put more time and care into writing posts.</li><li>Collect especially well-written posts to serve as an example for other authors.</li><li>Offer readers a selection of curated posts (especially those who don’t have time to read most of the content published on the Forum).</li></ol><p><strong>If you have thoughts on whether the program should continue, please let us know in the comments, or by contacting <u><a href="mailto:aaron@centreforeffectivealtruism.org">Aaron Gertler</a></u>. </strong>We’d be especially interested to hear whether the existence of the Prize has led you to write anything you might not have written otherwise, or to spend more time on a piece of writing.</p> aarongertler k7j7oxMcHsun2nC5H 2019-02-22T22:27:50.161Z Comment by aarongertler on Humane insecticides - 4 month update https://forum.effectivealtruism.org/posts/4RGWGQsNsgckbqqGr/humane-insecticides-4-month-update#iEYGgJyGWL4NcDwKR <p>A couple of formatting notes:</p><p>1. 
If you want to make a linkpost in the &quot;standard&quot; format, you can click the small hyperlink symbol underneath the post title in the editor to open a text field where you can paste the link.</p><p>This will add the following text:</p><p><em>This is a linkpost for (LINK APPEARS HERE)</em></p><p>2. If you don&#x27;t add all the text from the article into the Forum post, I&#x27;d at least recommend including a brief summary of the main points in the post. This makes it easier for people to get the main ideas behind the update without reading something long, and helps them decide whether they want to click through and read the full article.</p> aarongertler iEYGgJyGWL4NcDwKR 2019-02-22T02:38:05.254Z Comment by aarongertler on Can the EA community copy Teach for America? https://forum.effectivealtruism.org/posts/uWWsiBdnHXcpr7kWm/can-the-ea-community-copy-teach-for-america#mXdPkrxQ27yoktqHq <p>Strong-upvoted for raising an important question, providing a relevant example from within EA, quoting directly from sources you wanted to reference, and giving a good definition of the kind of thing you&#x27;re looking for. I really loved your formatting and &quot;content design&quot;; my only suggestion there would be to add some headers.</p><p>--</p><p>I don&#x27;t know of any single &quot;Cause Y&quot; that can easily absorb hundreds of people with a standardized training protocol, but I suspect that there are dozens of small projects that would be worth trying and wouldn&#x27;t take much individual research to prepare for. </p><p>For example, <a href="https://forum.effectivealtruism.org/posts/Ns3h8rCtsTMgFZ9eH/ea-giving-tuesday-donation-matching-initiative-2018">EA Giving Tuesday</a> was an independent project run by a couple of people who noticed an opportunity and took it, in turn giving hundreds of other people a chance to boost their own impact. 
(The EA Project for Awesome example you listed here is similar.)</p><p>There are also various lists of <a href="https://forum.effectivealtruism.org/posts/LG6gwxhrw48Dvteej/concrete-project-lists">project</a> and <a href="https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism">research</a> ideas online. No one person will be suitable for all of these, and perhaps no one training program could reliably prepare anyone for a particular project, but any given person may be able to find at least one project idea that &quot;fits&quot;, even if their role is market sizing or design or copyediting rather than direct research. </p><p>There&#x27;s also lots of volunteer work available. EA Global and <a href="https://rtcharity.org/volunteer/">Rethink Charity</a> use quite a few volunteers, for example, and plenty of other EA projects would benefit from more eyes/hands/minds:</p><ul><li>If you speak another language, you can translate something important for a new audience. </li><li>If you&#x27;re a good editor, you can help someone with an unpolished paper on <a href="https://www.facebook.com/groups/effective-altruism-editing-and-review-458111434360997/">Effective Altruism Editing and Review</a>. </li><li>If you have design skills, you can ask an author if they&#x27;d like you to create an infographic based on a paper or Forum post. <a href="https://mindlevelup.wordpress.com/">Owen Shen</a> does something similar for EA San Diego, creating flyers and graphics for upcoming events.</li><li>There&#x27;s an <a href="https://www.facebook.com/groups/1392613437498240/">EA Volunteering</a> Facebook group with lots of other opportunities and ideas.</li></ul><p>While we don&#x27;t have a single Task Y, there are a lot of ways to get involved, many of which could help you qualify for an <a href="https://www.effectivealtruism.org/grants/">EA Grant</a> or find a job down the line. 
Anyone who reads this and wants ideas beyond what I&#x27;ve listed here is welcome to <a href="mailto:aaron@centreforeffectivealtruism.org">reach out to me</a>.</p><p></p> aarongertler mXdPkrxQ27yoktqHq 2019-02-21T20:25:46.916Z Comment by aarongertler on Ben Garfinkel: How sure are we about this AI stuff? https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff#T9X3R6C8G8cbb6vDk <p>Currently, co-authorship only produces karma for the &quot;lead author&quot;. The same is true on LessWrong, where most of the Forum&#x27;s code comes from, and they&#x27;re interested in changing that at some point (I submitted a Github request <a href="https://github.com/LessWrong2/Lesswrong2/issues/1568">here</a>), but it would require a more-than-trivial infrastructure change, so I don&#x27;t know how highly they&#x27;ll prioritize it.</p> aarongertler T9X3R6C8G8cbb6vDk 2019-02-21T20:21:20.277Z Comment by aarongertler on Rodents farmed for pet snake food https://forum.effectivealtruism.org/posts/pGwR2xc39PMSPa6qv/rodents-farmed-for-pet-snake-food#MATn39LucbXty4Gep <p>Edited my reply to reflect the correct organization, thanks!</p> aarongertler MATn39LucbXty4Gep 2019-02-21T01:37:36.024Z Comment by aarongertler on Rodents farmed for pet snake food https://forum.effectivealtruism.org/posts/pGwR2xc39PMSPa6qv/rodents-farmed-for-pet-snake-food#J3uzvuc5dzCj5mQAi <p>This is a really good analysis! Thanks for posting.</p> <p>A few notes I thought of as I read:</p> <ul> <li> <p>You state that you're worried about campaigns to stop or prevent snake ownership possibly increasing publicity around pet snakes and increasing their numbers. 
I think you could try to estimate this effect by looking at similar cases of "negative publicity against a certain pet".</p> </li> <li> <p>For example, when a pet dog <a href="https://en.wikipedia.org/wiki/Fatal_dog_attacks_in_the_United_States">kills someone</a> in a way that gets widely reported, do sales of that dog breed tend to go down, or up? Did <a href="https://news.nationalgeographic.com/2018/01/snake-owner-killed-pet-python-aspyxiated-spd/">this story</a> lead to less python ownership in the UK? (These numbers may not be possible to find, but since this question may apply to other CE analyses around pet predators, seems worth a shot!)</p> </li> <li> <p>Since RP is considering interventions to prevent mouse suffering, are there plans to look at changing agricultural policy to protect field mice? <a href="http://www.anthropocenemagazine.org/2018/07/how-many-animals-killed-in-agriculture/">This article</a> estimates 6-40 animals killed per acre of grain, per year (seems to be mostly mice), but notes high uncertainty around both the number and the counterfactual outcome for these animals.</p> </li> <li> <p>I didn't see you mention "recommending alternative snakes" as a possible intervention. Even if all the most popular snakes are whole-animal carnivores, I wonder how many people who want to buy a snake would be open to choosing one that <a href="https://pethelpful.com/reptiles-amphibians/Pet-Snakes-You-Dont-Need-to-Feed-Rodents">eats insects or eggs</a>, rather than whole mice? 
(I'm not sure how insect/chicken suffering would be affected by this choice, but intuitively it seems less bad than raising so many mice in such poor conditions.)</p> </li> </ul> aarongertler J3uzvuc5dzCj5mQAi 2019-02-20T23:59:02.642Z Comment by aarongertler on [Link] Surveying US College and University Dining Services for Potential Collaboration on Diet Change Research 2017-2018 https://forum.effectivealtruism.org/posts/gGT9EuSsoDa3ichqr/link-surveying-us-college-and-university-dining-services-for#TpAFjg6aKBwjZDLBG <p>Thanks for sharing this solid (and concise) paper! I really like research on ways to collect better data.</p><p>Compared to grocery stores, &quot;restaurant-style&quot; areas like a college cafeteria seem likely to have artificially inflated prices on meat and meat-free dishes, such that the proportional price difference is smaller (example with fake numbers: chicken is $5/pound and veggies are $2/pound, while a chicken sandwich is $5 and a veggie sandwich $4). I wonder if that difference exists, and if so, whether it inflates meat consumption relative to what students would buy in a grocery store. (Sorry if this was addressed in the paper; if so, I didn&#x27;t see it.)</p><p></p> aarongertler TpAFjg6aKBwjZDLBG 2019-02-20T23:37:08.071Z Comment by aarongertler on EA Survey 2018 Series: Geographic Differences in EA https://forum.effectivealtruism.org/posts/t2Wqszc4wpKxMinSs/ea-survey-2018-series-geographic-differences-in-ea#mboGfhAEhSFhNkd8i <p>Thanks for the new post -- I&#x27;d thought the series was over, and it&#x27;s fantastic to have even more data (plus charts!).</p><p>I&#x27;m confused by the group layout in North America. It portrays a large group of EAs somewhere in southern California, with no large group in the Bay Area, which doesn&#x27;t match the data tables in the post. 
Were some of the circles misplaced?</p> aarongertler mboGfhAEhSFhNkd8i 2019-02-19T20:11:12.707Z Comment by aarongertler on Pre-announcement and call for feedback: Operations Camp 2019 https://forum.effectivealtruism.org/posts/o5P86PR7HGt3nXjKw/pre-announcement-and-call-for-feedback-operations-camp-2019#4ypGwKcZv9jtATB8t <p>When I see projects that involve teaching someone a skill or set of skills (especially for something nebulous and difficult-to-summarize like &quot;operations&quot;), I want to hear about people who successfully learned the skill, and which aspects of their experience they believe are replicable.</p><p>Are there any people currently working in an EA operations position who you think are a good example of this? That is, people who didn&#x27;t have much in the way of innate operations talent, but managed to teach themselves one or more new skills and got hired as a result?</p><p>I did some EA ops work before starting my current role at CEA, but found that most of what I was doing had been honed over years and years of making to-do lists, managing a variety of small projects, etc. 
-- I can&#x27;t point to any single short-term experience that significantly &quot;upgraded&quot; my ops ability, so I&#x27;m especially curious to hear from people who <em>did </em>have an experience like that.</p> aarongertler 4ypGwKcZv9jtATB8t 2019-02-19T20:06:43.828Z Comment by aarongertler on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post https://forum.effectivealtruism.org/posts/W94KjunX3hXAtZvXJ/evidence-on-good-forecasting-practices-from-the-good#vzaPHwH656iHZp532 <p>If anyone wants to read about a non-superforecaster who still took the tournament moderately seriously, I wrote up <a href="https://aarongertler.net/good-judgment-project/">my experience participating in Season 4</a> (and getting a good-but-not-great score by relying on a couple of basic heuristics).</p> aarongertler vzaPHwH656iHZp532 2019-02-19T19:59:49.101Z Comment by aarongertler on Tech volunteering: market failure? https://forum.effectivealtruism.org/posts/9Rj82MgZMDaqmkYsC/tech-volunteering-market-failure#KmctoXRm9ZBDQRfoF <p>Technical people interested in doing good with their time should consider making open-source contributions to LessWrong! </p><p>Almost all of the EA Forum&#x27;s code comes from the LessWrong database, and we regularly adopt their most recent changes, so we&#x27;re really enthusiastic about people making those contributions (they&#x27;ll usually reach us after a few weeks).</p><p>Here&#x27;s their <a href="https://github.com/LessWrong2/Lesswrong2#contributing">guide to helping out</a>, and their Github tag for <a href="https://github.com/Lesswrong2/Lesswrong2/issues?q=is%3Aissue+is%3Aopen+label%3A%221.+Important+%28Easy%29%22">important issues that seem easy to fix</a>. </p><p>--</p><p>Also related: MIRI used to use a platform called Youtopia to organize volunteers, until they moved in a more technical/less outreach-focused direction. 
Not sure how well it would work for tech projects, but it was apparently pretty helpful in general. </p><p>From an email sent to the volunteer list after MIRI stopped using Youtopia:</p><blockquote>Many of the projects you contributed to through Youtopia were very valuable. From internet research, to translation and transcription efforts, to helping promote MIRI online, and proofreading what became “Rationality: From AI to Zombies”, your help was invaluable. </blockquote> aarongertler KmctoXRm9ZBDQRfoF 2019-02-19T19:46:39.039Z Comment by aarongertler on One for the World: update after 6 months of our first staff member https://forum.effectivealtruism.org/posts/3wDm3FagqGCnkFvrg/one-for-the-world-update-after-6-months-of-our-first-staff#pZZ47XPP4EKpPz4MS <p>Good point: &quot;Average giving&quot; =/= &quot;what a typical American gives&quot;, and the latter is a better reference point. I&#x27;ve had the title of an article called &quot;<a href="https://www.philanthropy.com/article/The-Stubborn-2-Giving-Rate/154691">The Stubborn 2% Giving Rate</a>&quot; stuck in my head for too long, but the number doesn&#x27;t really apply here.</p> aarongertler pZZ47XPP4EKpPz4MS 2019-02-19T14:05:09.394Z Comment by aarongertler on Ben Garfinkel: How sure are we about this AI stuff? https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff#LKAQvym87zioEXoFT <p>A note on leverage: One clear difference between AI and non-leverage-able things like wheels or steam engines is that there are many different ways to build AI. </p><p>Someone who tried to create a triangular wheel wouldn&#x27;t have gotten far, but it seems plausible that many different kinds of AI system could become very powerful, with no particular kind of system &quot;guaranteed&quot; to arise even if it happens to be the most effective kind -- there are switching costs and market factors and branding to consider. 
(I assume that switching between AI systems/paradigms for a project will be harder than switching between models of steam engine). </p><p>This makes me think that it is possible, at least in principle, for our actions now to influence what future AI systems look like.</p> aarongertler LKAQvym87zioEXoFT 2019-02-15T21:31:39.430Z Comment by aarongertler on Rob Mather: Against Malaria Foundation — What we do, How we do it, and the Challenges https://forum.effectivealtruism.org/posts/C8pNyigL9pCh6ufkt/rob-mather-against-malaria-foundation-what-we-do-how-we-do#DghN7PgPdmY8SgRN8 <p>I&#x27;m really glad to have this talk available as a written transcript! I&#x27;ve seen versions of it presented in two other locations, but the Q&amp;A adds a lot.</p><p>Anecdotes like this still confuse me:</p><blockquote>When we wanted to translate the website into German, the thinking was we could go to a professional company and they&#x27;d charge us five grand to do it, or we could go to a lot of other human beings and say, &quot;You&#x27;re a professional translator. Who do we talk to in your industry such that we could get four people who would translate two and a half thousand words each?&quot; That&#x27;s 10,000 words. We can now put our website in a language and show people the courtesy in Germany of being able to read the website in their own language. </blockquote><blockquote>So I sent out 48 e-mails, not dear all because that doesn&#x27;t work, but Dear Claudia and Dear Claus and Dear Matthew and so on. And in 24 hours I had 44 positive responses out of 48 [...] And the same thing happened in every other language. </blockquote><p>How in the world does anyone get a <em>consistent </em>90% <em>positive </em>response rate, <em>within 24 hours</em>, from a group of working professionals asked to provide free services for a charity campaign? 
</p><p>If Mather gets these results consistently, he may be one of the best copywriters in the nonprofit sector, and I&#x27;d really like to know his secrets. Maybe I&#x27;ll ask him the next time he comes to EA Global.</p> aarongertler DghN7PgPdmY8SgRN8 2019-02-15T21:20:11.834Z Comment by aarongertler on Two (huge) EA health hypotheses https://forum.effectivealtruism.org/posts/Fmw8hMQty4rQYAjfq/two-huge-ea-health-hypotheses#aZ3TjDPfNptsHNizw <p>Speaking as someone with no way to approach this argument aside from the usual &quot;assume the authorities have a point&quot;: </p><ul><li>What makes the &quot;depression is infectious&quot; viewpoint so controversial? Is it closer to &quot;something many people know about and disagree with&quot; or &quot;something almost no one knows about&quot;.</li><li>Are there medical professionals who have strong counterarguments to Canli, or have strong arguments of their own for why depression is not infectious? (The classic serotonin-related explanations seem like they&#x27;d rule out infection, but I don&#x27;t know how certain we are that those explanations account for any/most depression.)</li></ul> aarongertler aZ3TjDPfNptsHNizw 2019-02-15T21:10:44.816Z Comment by aarongertler on Small animals have enormous brains for their size https://forum.effectivealtruism.org/posts/5k6mJFBpstjkjv2SJ/small-animals-have-enormous-brains-for-their-size#YgqYm7zexRTBWuL7x <p>Upvoted! Thanks for the clear argument, descriptive title, and amusing cartoon. Ideally, good Forum posts will also be <em>memorable</em>, so that people remember them in future discussions, and images are a good way to achieve this.</p> aarongertler YgqYm7zexRTBWuL7x 2019-02-15T20:39:28.507Z Comment by aarongertler on The Need for and Viability of an Effective Altruism Academy https://forum.effectivealtruism.org/posts/c4GXKkgkdP44LdKwM/the-need-for-and-viability-of-an-effective-altruism-academy#YwS9vRWSWdTAMviCg <p>I&#x27;m glad you posted this! 
It&#x27;s an interesting idea, and the kind of thing that deserves to be discussed, because there&#x27;s a lot to be learned from different proposals for &quot;EA education&quot;. </p><p>Projects you might find valuable to look up, because they had/have certain features in common with your idea: <a href="https://forum.effectivealtruism.org/posts/9nyFLa4QsuMEpfnNt/shic-workshop-experiment-and-revised-impact-strategy-2018#cg2Doqkj5SqG5tRP2">Students for High-Impact Charity</a>, <a href="https://forum.effectivealtruism.org/posts/suGcEobbHZZ4Gspeh/a-guide-to-effective-altruism-fellowships">collegiate EA Fellowship programs</a>, and the <a href="https://forum.effectivealtruism.org/posts/JdqHvyy2Tjcj3nKoD/ea-hotel-with-free-accommodation-and-board-for-two-years">EA Hotel</a>.</p><p>--</p><p>Where I agree with you: I think it&#x27;s true that many people are out there somewhere in the world who agree with the general principles of EA (even if they don&#x27;t know what it is yet) and could be shifted onto a high-impact career path with the right nudge.</p><p>The question is: How can we find these people, and what&#x27;s the right nudge?</p><p>--</p><p>I don&#x27;t think something like an academy would be impossible to run, but it would be very difficult.</p><p>Before I lay out more detail, I&#x27;ll try to sum up my main concern in one sentence: <strong>&quot;It&#x27;s really hard to teach people to do the kinds of EA work that are most in-demand.&quot;</strong></p><p>If you look at the <a href="https://80000hours.org/job-board/">80,000 Hours job board</a>, you&#x27;ll find that most of the highest-impact jobs they know about require a pretty specific background and/or set of skills -- more than can be obtained in a few months. 
If we have money and time to spend on the education of people who want to make an impact, my best guess is that trying to help them obtain the requisite background/skills directly will be better than teaching EA principles and concepts more generally.</p><p>--</p><p>Some other points related to risk and difficulty:</p><ul><li>Even at schools focused on a simple and easily marketable skillset (programming), many students drop out of the program or graduate and still can&#x27;t find a job. </li><ul><li>Lambda may be an exception, but for every exception, there is a rule, and the rule of &quot;programming bootcamps&quot; seems to be that many things can go wrong on the road from novice to professional. </li></ul><li>Some of the most valuable &quot;skills&quot; for EA organizations are either not teachable (policy experience) or are very difficult to teach (research, general execution skills). </li><ul><li>My impression is that the most successful research organizations try to avoid teaching research skills by filtering heavily on that skill ahead of time, even though this greatly increases their hiring costs. </li><li>If Open Phil had a way to generate five new Research Analysts to work for two years each using, say, $500,000 and six total months of staff time (&quot;two instructors&quot;), I strongly suspect that they would do so. The fact that they don&#x27;t do this makes me think that teaching EA research is probably hard (though I don&#x27;t work at Open Phil, and this is speculative).</li></ul><li>The two instructors will be hard to find. </li><ul><li>They&#x27;ll need to have a very strong grasp on EA concepts, be reasonably good at teaching, be reasonably good at curriculum-writing (unless that&#x27;s a job for a third talented person with spare time), and not be working on something with higher EV.</li></ul><li>The students will be hard to find. 
</li><ul><li>If people apply from within the EA community, they probably know quite a bit of the material already and don&#x27;t need the &quot;nudge&quot; (instead, they might be better served joining a school like Lambda, doing a research internship, or something else more skill-building for their specific goals). </li><li>If people apply from elsewhere, and have valuable skills, is it better for them to enter an &quot;academy&quot; than to work with 80,000 Hours? </li></ul><li>This seems like a bad deal for anyone who plans to enter the private sector, since it charges a lot of money and doesn&#x27;t teach any &quot;marketable&quot; skills. </li><ul><li>As something like a college course, it might hold appeal for people who like learning for its own sake. But I think most people who would be good candidates for the class are also likely to be capable of learning the material through some combination of reading, participating in discussion online, traveling to an event or two, and maybe Skyping with more experienced people once in a while. (We don&#x27;t have an easy way to set up Skype calls like that right now, but building one would be a lot easier than building an academy.)</li></ul></ul><p>-- </p><p>This is me trying to play Devil&#x27;s advocate, so I apologize if I sound harsh! There are certainly positive aspects to the plan: It&#x27;s possible that finding good students would be easier than I envision, and if the program only succeeded half the time, that would still be a good outcome.</p><p>Questions I think you&#x27;ll want to answer going forward: How did current employees of EA organizations (or people running independent projects) get to their positions? If they could go back to the end of college (for example) and try to teach themselves to do their own jobs in a few months, how far would they get? What <em>would </em>they try to teach themselves? 
How do those answers map onto what you think an &quot;EA Academy&quot; should teach?</p> aarongertler YwS9vRWSWdTAMviCg 2019-02-15T19:58:13.354Z Comment by aarongertler on What are the easiest highly positive and effective things that people can do? https://forum.effectivealtruism.org/posts/6ZnrQZzpwWst9f7eG/what-are-the-easiest-highly-positive-and-effective-things#dk5jRoX6GADxNkfrQ <p>Suggestions that take some emotional energy and restraint, but don&#x27;t require much time or any money (and might actually <em>save</em> you time): </p><ul><li>Be kind to yourself and others. </li><ul><li>Don&#x27;t self-criticize because you aren&#x27;t moving &quot;fast enough&quot; in the direction you think is best; it&#x27;s okay to be proud of progress, even minor progress. </li><li>Don&#x27;t criticize others unless you really believe it will be helpful (imagine how helpful the average piece of online criticism is, and remember that you might be closer to &quot;average&quot; than you think, especially if your response is fueled by anger and you don&#x27;t care very much about the other person&#x27;s welfare).</li></ul><li>Try to think about measures of effectiveness, especially scale, in your daily life. Getting used to &quot;seeing through an EA lens&quot; can help to guide you toward other ways of helping the world. </li><ul><li>This may be most applicable when it comes to news and social media. If you see a huge argument erupt over a small incident, consider whether your efforts will be helpful before you jump in to contribute. And consider, before reading the whole thread, whether there are other things you could be reading that concern a larger number of people, with more at stake.</li><li>This doesn&#x27;t mean completely abandoning local issues or your social circle, of course -- but it does mean remembering that what news outlets prioritize is not inherently <em>important </em>just because it is &quot;news&quot;.</li></ul><li>Clean up your systems. 
</li><ul><li>No matter what you want to work on in the future, no matter how long it will take, you&#x27;ll want to have certain resources available when you arrive. These include things like &quot;a coherent to-do list&quot;, &quot;a reasonably well-organized physical space&quot;, &quot;a nutritious diet&quot;, and &quot;<a href="https://thezvi.wordpress.com/2017/09/30/slack/">slack</a>&quot;. </li><li>If you plan to help the world once your life is &quot;in order&quot; (something I&#x27;ve heard many times), <em>putting your life in order is helping the world.</em></li><li>I won&#x27;t turn this answer into a post about personal productivity. Things that have been helpful to many people I know include <em>Getting Things Done</em> and <em>The Life-Changing Magic of Tidying Up</em>, but reading some personal productivity websites and doing simple, commonsense things will help a lot for people who aren&#x27;t yet doing said things. </li></ul></ul> aarongertler dk5jRoX6GADxNkfrQ 2019-02-15T19:17:57.776Z Comment by aarongertler on One for the World: update after 6 months of our first staff member https://forum.effectivealtruism.org/posts/3wDm3FagqGCnkFvrg/one-for-the-world-update-after-6-months-of-our-first-staff#aa7x7asx8GJ4YTqpy <p>It&#x27;s really encouraging to see this kind of growth in overall giving!</p><p>I looked through your annual report and didn&#x27;t see these statistics (but please pardon me if I missed them): </p><p>1. What is the average % of income pledged by One for the World members so far? </p><p>2. Have you seen many cases of people signing up for a small pledge, but planning to increase it at some set point later // at a steady rate? 
(For example, &quot;1% more per year until retirement&quot;.)</p><p>One percent seems low for an initial pledge, given that the &quot;average American&quot; donates ~2% of income, but as a starting point that gets people to keep following OFTW, or a suggestion that most people surpass, I can see it fulfilling a strategic purpose.</p> aarongertler aa7x7asx8GJ4YTqpy 2019-02-15T18:50:08.326Z Comment by aarongertler on kbog did an oopsie! (new meat eater problem numbers) https://forum.effectivealtruism.org/posts/EztxZhPQ8Qv8xWe3v/kbog-did-an-oopsie-new-meat-eater-problem-numbers#dbStaM46MhsCv9buL <p>Upvote for noticing an error in your model and announcing the update -- that&#x27;s a good habit, and I like the idea of posts on the Forum which encourage said habit.</p><p>I&#x27;m sure you had a strategy in mind for the current title, but I&#x27;ll still suggest choosing something more descriptive; this seems like it might be confusing to link to in a few months (especially for people who don&#x27;t recognize the name &quot;kbog&quot;).</p> aarongertler dbStaM46MhsCv9buL 2019-02-15T18:20:54.114Z Comment by aarongertler on 2018 AI Alignment Literature Review and Charity Comparison https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison#cg94F4zQ7NZRWmDYX <p>Have you seen any study/analysis (even a solid Fermi estimate) showing that AI bias, either similar to variants identified so far or hypothetical future variants, could plausibly be sufficiently large*tractable to be worthy of further investigation? </p><p>I&#x27;ve always grouped this issue in the large category of &quot;issues that are bad and should be worked on by someone, but that get plenty of coverage in the non-EA world and don&#x27;t seem especially compelling for our tiny community to look at&quot;. 
AI bias gets a <em>lot </em>of attention from large tech firms and large media companies relative to long-term concerns about safety/alignment.</p> aarongertler cg94F4zQ7NZRWmDYX 2019-02-12T07:33:45.562Z Comment by aarongertler on Administration costs when giving to more than one organisation https://forum.effectivealtruism.org/posts/iCdXBhcdFru4itMJw/administration-costs-when-giving-to-more-than-one#nDszYgKtb6v3AdqWp <p>These are good questions to ask! They can be quite complex, since tax law is different in every country (and I don&#x27;t know anything about German tax law). </p><p>Fortunately, the EA Foundation (a European organization that works in several countries) offers a way to <a href="https://ea-foundation.org/donate-ea/">make tax-deductible donations</a> from Germany to many different effective charities around the world. I&#x27;m not entirely certain of how their system works, but they have a page with more details <a href="https://ea-foundation.org/donate/faq/">here</a>. I&#x27;d recommend looking into them, and determining whether the slight complication of donating through the Foundation is worth the money you&#x27;ll save on taxes. (The first link in my post also includes an email address where you can reach the Foundation.)</p><p>As for fees: most credit card fees are almost entirely percentage-based, so I generally wouldn&#x27;t worry much about splitting up a donation between two charities vs. giving to one charity. 
Depending on the amount you plan to give, you may want to consider something like a check rather than a credit card, to avoid fees, but if convenience makes you more likely to give, that&#x27;s important!</p> aarongertler nDszYgKtb6v3AdqWp 2019-02-12T04:26:51.058Z Comment by aarongertler on How a lazy eater went vegan https://forum.effectivealtruism.org/posts/sm3axpxJLtphvF6Jj/how-a-lazy-eater-went-vegan#vJjcktrLc7FaiEoPF <p>These products aren&#x27;t vegan, since they contain whey, but I&#x27;ll note that Optimum Nutrition routinely puts out very good powders. I&#x27;d recommend trying them first. Don&#x27;t worry too much about finding the &quot;best&quot; powder in a nutritional sense; most candidates are likely to be so similar that it won&#x27;t really matter (especially since nutrition labels usually aren&#x27;t quite 100% accurate).</p> aarongertler vJjcktrLc7FaiEoPF 2019-02-12T00:07:36.143Z Comment by aarongertler on The Narrowing Circle (Gwern) https://forum.effectivealtruism.org/posts/WF5GDjLQgLMjaXW6B/the-narrowing-circle-gwern#bRmny33LfHdi7NtJd <p><strong>Question: </strong>What are some other &quot;categories&quot; of people or animals that seem to have seen the circle shift away from them?</p><p>Some of my ideas:</p><ul><li>Various groups who are popular targets for discrimination in their respective regions. They may have lived in (relative) peace for a long time before things became very bad very quickly (the Tutsis, the Rohingya, the Uighurs)</li><ul><li>Jewish people may be the most prominent example throughout history. The story of my people can be seen as a story of shifting circles, with wildly varying levels of violence culminating in the Holocaust (and, some say, rising again today, though I&#x27;m not sure how 2019 compares to, say, 1979). </li></ul><li>Immigrants. There have been eras when it was much easier to move between countries and find opportunity (though there was less opportunity to go around). 
Nearly every Ellis Island applicant became a U.S. citizen; that&#x27;s much harder today.</li></ul> aarongertler bRmny33LfHdi7NtJd 2019-02-12T00:01:51.915Z The Narrowing Circle (Gwern) https://forum.effectivealtruism.org/posts/WF5GDjLQgLMjaXW6B/the-narrowing-circle-gwern <p><em>Content note: Discussion of infanticide and sexual violence.</em></p><p><em>Views I express in this essay are my own, unrelated to CEA.</em></p><hr class="dividerBlock"/><p><strong>Summary: </strong>Have our moral &quot;circles&quot; really expanded over time? While some groups get more moral consideration than they once did, others get less, or see their moral status shift back and forth. Gwern questions how much &quot;progress&quot; we&#x27;ve really made over the years, as opposed to mere shifts between the groups we care about. </p><hr class="dividerBlock"/><p>In <em><a href="https://www.gwern.net/The-Narrowing-Circle">The Narrowing Circle</a></em>, Gwern speculates that what we see as broad moral progress may instead be a series of moral <em>shifts</em>, embracing new beings/ideas and rejecting old ones in a way that isn&#x27;t as predictable or linear as &quot;expanding circle&quot; theory might hold.</p><p>I highly recommend reading the original essay, but here&#x27;s a brief summary of Gwern&#x27;s main points.</p><h2>Is there an expanding circle?</h2><ul><li><a href="http://www.amazon.com/The-Expanding-Circle-Evolution-Progress/dp/0691150699/?tag=gwernnet-20">Peter Singer</a> proposed that people tend to include more and more beings in their &quot;circle&quot; of moral regard over time. <a href="https://quoteinvestigator.com/2012/11/15/arc-of-universe/">Many others</a> hold a similar view (&quot;the arc of the moral universe is long, but it bends toward justice&quot;)</li><li>However, it&#x27;s easy to see patterns appear in random data. 
Between that phenomenon and confirmation bias, we should be careful not to jump too eagerly to an &quot;expanding circle&quot; explanation without considering that we could be ignoring beings that have been <em>excluded </em>from moral regard, perhaps because we no longer even <em>consider </em>those beings as potential inclusions.</li><li>Another question (not explored too deeply in this essay): Have we become more moral, or do we simply live in a world that is less morally challenging? It may be easier to feel compassion when we are rich and at peace, but if a truly threatening war broke out, would we become as bloodthirsty as ever? (We may not <em>believe </em>in witches, but if we did believe in witches, as our ancestors did, would we still execute<em> </em>them?)</li></ul><h2>How have we narrowed the circle?</h2><p><strong>Religion</strong></p><p>Compared to people in the past, people in the present hold very little regard (on average) for supernatural entities. This isn&#x27;t always because of atheism or agnosticism; many people claim to be religious but also make little or no effort to &quot;keep the faith&quot;. Has our disregard for the gods outpaced our disbelief?</p><p>This disregard extends to the case of &quot;sacred animals&quot;. Not only have we dramatically scaled up factory farming; we have also (on a smaller scale) removed &quot;protected&quot; status from certain categories of animals that had holy significance in the past. (We&#x27;ve also <a href="https://daily.jstor.org/when-societies-put-animals-on-trial/">stopped putting animals on trial</a>, though this seems to me like a separate phenomenon.)</p><p><strong>Infanticide</strong></p><p>Infants and the unborn have seen their moral status shift back and forth around the world and through the centuries. 
Some societies regularly cast out unwanted infants (or even mandated the killing of infants in some cases); others banned abortion from the time of conception.</p><blockquote>If one accepts the basic premise that a fetus is human, then the annual rate (as pro-life activists never tire of pointing out) of millions of abortions worldwide would negate centuries of moral progress. If one does not accept the premise, then per C.S. Lewis, we have change in facts as to what is human, but nothing one could call an expanding circle. </blockquote><p><strong>Disability</strong></p><p>In many ways, we take much better care of people with disabilities than we did in past eras. In other ways, we&#x27;ve come up with new reasons to exclude people; modern society may discriminate more viciously than past societies on the basis of weight or facial appearance. (I&#x27;ll add a quote from Aeon: &quot;<a href="https://aeon.co/essays/there-is-no-shame-worse-than-poor-teeth-in-a-rich-world">There is no shame worse than poor teeth in a rich world.</a>&quot;)</p><p><strong>Judicial Torture</strong></p><p>Many states, in both the East and West, have moved back and forth on policies related to the torture of prisoners and dissidents. We no longer hang prisoners in front of cheering crowds, but we lock tens of thousands of people in solitary confinement and make jokes about the sexual abuse of prisoners. (I&#x27;ll also note that society constantly redefines what a &quot;crime&quot; is; we&#x27;re much nicer to thieves than we once were, and probably harsher toward drug users.)</p><blockquote>Let’s not talk about how one is sentenced to jail in the first place; <a href="https://twitter.com/HunterFelt/status/317495942829965313">Hunter Felt</a>: Your third arrest, you <a href="https://en.wikipedia.org/wiki/Three-strikes%20law">go to jail for life</a>. Why the third? Because in <a href="https://en.wikipedia.org/wiki/Baseball">a game</a> a guy gets three times to swing a stick at a ball. 
</blockquote><p><strong>Ancestors</strong></p><p>We do a poor job of respecting the wishes of the dead, even when those people have made reasonable and non-harmful plans for the use of their assets (many trusts put away for charity are torn apart by lawyers and heirs).</p><blockquote>More dramatically, we dishonor our ancestors by neglecting their graves, by not offering any sacrifices or even performing any rituals, by forgetting their names (can you name your great-grandparents?), by selling off the family estate when we think the market has hit the peak, and so on. </blockquote><p>Gwern argues, convincingly, that people in the past were much more respectful in this sense (perhaps a useless gesture to those no longer able to receive it, but might it not have been a comfort to those who died long ago to know that they would be remembered, respected, even revered?).</p><p><strong>Descendants</strong></p><p>This is fairly standard EA material about planning for the long term, and is as such slightly out of date (&quot;there are no explicit advocates for futurity&quot;). But we are a tiny group within society, and when I think about the majority of living people outside of EA, this rings true:</p><blockquote>Has the living’s concern for their descendants, the inclusion of the future into the circle of moral concern, increased or decreased over time? Whichever one’s opinion, I submit that the answer is shaky and not supported by excellent evidence. </blockquote><h2>My thoughts</h2><p><em>I make no claim that any of these views are original, but I&#x27;m trying to note things I didn&#x27;t see in Gwern&#x27;s essay.</em></p><p>When we cease to grant moral regard to certain groups, it seems to happen for one or more of the following reasons:</p><p>1. We no longer view them as &quot;possible&quot; targets for moral regard (e.g. the gods, to an atheist)</p><p>2. 
While we acknowledge that they are &quot;possible&quot; targets, our modern morality doesn&#x27;t really &quot;cover&quot; them (e.g. fetuses, to some in the pro-choice movement, though this issue is complicated, nearly everyone wants fewer abortions, and any &quot;side&quot; in the debate holds a wide range of views about what to do and why)</p><p>3. We&#x27;ve learned new ways to take advantage of them (e.g. animals, in the case of factory farming)</p><p>4. We&#x27;ve genuinely become more antagonistic toward them (e.g. the view of Muslims by certain groups since 2001; the treatment of American prisoners)</p><hr class="dividerBlock"/><p>It seems to me as though (1) generally doesn&#x27;t interfere with the notion of the expanding circle. Neither does (3), necessarily; if our ancestors knew how to establish factory farms, I assume they would have done so, since they were no strangers to animal cruelty (e.g. bear-baiting, gladiatorial combat).</p><p>(2) does complicate things, and while I favor expanding abortion rights, I&#x27;m not sure I&#x27;d think of them as a facet of the &quot;expanding circle&quot; in the same way as I do the expansion of civil rights for certain groups. And (4) implies that the expanding circle can, under the right circumstances, <em>shrink</em>, due to the same kinds of mass movements and meme-spreading that characterize expansion of the circle.</p><p>For example, it&#x27;s often argued that knowing a gay person makes you more likely to favor gay rights; as more people come out of the closet, more people know that they have gay friends and relatives, and support for gay rights spreads rapidly. </p><p>Could the opposite be true for prisoners? As the crime rate shrinks, and people with criminal records become less likely to re-integrate into society, perhaps fewer people know someone who&#x27;s been to prison. Would that make it easier to think of criminals as &quot;the other&quot;, people you&#x27;d never love or befriend? 
</p><p>(On the one hand, incarceration rose in the U.S. during a time of large increases in the crime rate; on the other hand, prison reform seems to have lagged substantially behind reduction in the crime rate, implying that some factor other than a direct &quot;fear of criminals&quot; is in play. Do we simply <em>care less </em>nowadays?)</p><p>This also makes me rethink my position on certain kinds of animal cruelty; as fewer and fewer people live on farms, might we care less and less about the way farm animals are treated?</p> aarongertler WF5GDjLQgLMjaXW6B 2019-02-11T23:50:45.093Z Comment by aarongertler on How to use the Forum https://forum.effectivealtruism.org/posts/Y2iqhjAHbXNkwcS8F/how-to-use-the-forum#ciwE6gtk9YuYpHDjx <p>I&#x27;d generally side with &quot;comment&quot;, since that lets anyone else who reads the old posts see your comment, and avoids having lots of unlinked posts spring up over the years around a single central post.</p><p>If you think an older post is worth revisiting (with or without your new comment as context), you can try sharing it on social media (here&#x27;s a <a href="https://www.facebook.com/EffectiveGroups/">list of EA Facebook groups</a>) or in the next <a href="https://forum.effectivealtruism.org/posts/jrN4CHJooBm3KCfBK/open-thread-43#ENzgozvgXGoHkqSSi">Open Thread</a> (I&#x27;m trying to encourage these to happen more often, since they&#x27;re useful for cases like the one you mention).</p><p>Of course, sometimes a comment winds up having enough material to be its own post, and that&#x27;s fine! I&#x27;d err on the side of &quot;comment&quot;, but new ideas related to an old post may well demand posts of their own.</p><p>(I work with Julia at CEA, and I help to moderate the Forum.)</p> aarongertler ciwE6gtk9YuYpHDjx 2019-02-11T20:23:28.816Z Comment by aarongertler on Ben Garfinkel: How sure are we about this AI stuff? 
https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff#QZukpKz3zGkGn2BvX <p>An update: The previous name on this account was &quot;Centre for Effective Altruism&quot;. Since the account was originally made for the purpose of posting transcripts from EA Global, I&#x27;ve renamed it to &quot;EA Global Transcripts&quot; to avert further confusion.</p> aarongertler QZukpKz3zGkGn2BvX 2019-02-11T07:09:57.525Z Comment by aarongertler on Ben Garfinkel: How sure are we about this AI stuff? https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff#tQNfrFKAYHPCq8d7i <p>Thanks for this suggestion, Misha. I&#x27;ve changed the headline to include Ben&#x27;s name, and I&#x27;m reviewing our transcript-publishing process to see how we can be clearer in the future (e.g. by posting under authors&#x27; names if they have an EA Forum account, as we do when we crosspost from a user&#x27;s blog).</p> aarongertler tQNfrFKAYHPCq8d7i 2019-02-11T06:09:30.436Z Comment by aarongertler on EA Funds - An update from CEA https://forum.effectivealtruism.org/posts/oC2cJM5aiJHnMtvmE/ea-funds-an-update-from-cea#SgfbkrQQiGyqujiZe <p>These are good questions! 
I don&#x27;t work directly with EA Funds, but I wanted to mention a few things that might interest you:</p><ul><li>Since this post was produced, new fund management teams have been <a href="https://forum.effectivealtruism.org/posts/yYHKRgLk9ufjJZn23/announcing-new-ea-funds-management-teams">announced</a> (substantially increasing the number/diversity of people helping to make grants), and the new teams have run an <a href="https://forum.effectivealtruism.org/posts/2CvraxEYBc8qBPhK2/announcing-ea-funds-management-amas-20th-december">Ask Me Anything session</a> on the Forum.</li><li>GiveWell just <a href="https://forum.effectivealtruism.org/posts/xSBSojpb8L5xjTzbZ/how-givewell-s-research-is-evolving">announced</a> that it will be doubling the size of its research team and looking at a much broader set of global development interventions than it has in the past. If the Global Health and Development Fund continues to follow their recommendations, I wouldn&#x27;t be surprised to see new organizations in future years. </li><ul><li>Also, GiveWell recently used EA Funds to make a <a href="https://www.givewell.org/research/incubation-grants/innovation-in-government-initiative/december-2018-grant">large grant</a> to a development research program working to help governments adopt better policy: <em>&quot;Our understanding is some donors give to that fund because they want to signal support for GiveWell making grants which are more difficult to justify and rely on more subjective judgment calls, but have the potential for greater impact than our top charities. &quot;</em></li></ul><li>On the question of whether AMF and others are &quot;still the best use of funds&quot; -- &quot;best&quot; is always difficult to define, but one reason these charities continue to appear on the Top Charities list is that they continue to do a good job of finding opportunities and spending money quickly. 
In a year when GiveWell believed that AMF temporarily didn&#x27;t have much &quot;room for more funding&quot;, they <a href="https://blog.givewell.org/2013/11/26/change-in-against-malaria-foundation-recommendation-status-room-for-more-funding-related/">removed it from their Top Charities list</a> until more opportunities for net distribution opened up. They <a href="https://blog.givewell.org/2018/11/26/our-updated-top-charities-for-giving-season-2018/">barely allocated any money to AMF</a> this year compared to several charities they hadn&#x27;t supported as much or at all in past years, and <a href="https://blog.givewell.org/2018/11/19/update-on-no-lean-seasons-top-charity-status/">removed a program that seemed less effective after a new RCT came out</a>. I don&#x27;t necessarily agree with every point in GiveWell&#x27;s models, but I think they have a solid track record of changing their views as evidence changes.</li></ul> aarongertler SgfbkrQQiGyqujiZe 2019-02-09T01:57:45.739Z Comment by aarongertler on Open Thread #43 https://forum.effectivealtruism.org/posts/jrN4CHJooBm3KCfBK/open-thread-43#DgWdSPgoyGMuTqF9D <p>John: For future open threads, I&#x27;d recommend removing this line:</p><p>&quot;This is also a great place to post if you don&#x27;t have enough karma to post on the main forum.&quot;</p><p>The new Forum doesn&#x27;t have a karma restriction for creating your first post.</p> aarongertler DgWdSPgoyGMuTqF9D 2019-02-08T20:10:06.030Z Comment by aarongertler on Introducing Sparrow: a user-friendly app to simplify effective giving https://forum.effectivealtruism.org/posts/ZcXqD9AMhcEMRgKxy/introducing-sparrow-a-user-friendly-app-to-simplify#nvXzwAeMph86eqAEJ <p>It also seems risky as a feature to develop when we can&#x27;t predict what Facebook will do in the future. 
(I&#x27;m a huge advocate for EA Giving Tuesday as a project, but only in the context of &quot;we&#x27;re pretty sure Facebook will have a match&quot;, and I think it&#x27;s still too new to be very confident that things will keep working in the same way.)</p> aarongertler nvXzwAeMph86eqAEJ 2019-02-08T19:02:34.916Z Comment by aarongertler on EA Boston 2018 Year in Review https://forum.effectivealtruism.org/posts/FTfmAeLBhHxBQYoJ8/ea-boston-2018-year-in-review#vPFKGbYpcNWNTQfHK <p>Thanks for sharing! I probably spelled it wrong on my initial search (and didn&#x27;t realize because someone on LinkedIn made the same spelling error). Gave up too soon.</p> aarongertler vPFKGbYpcNWNTQfHK 2019-02-08T02:06:36.816Z Comment by aarongertler on Hit Based Giving for Global Development https://forum.effectivealtruism.org/posts/ZfD4cZgAcgQc5oRNN/hit-based-giving-for-global-development#SNWZqMZr6AzXXqy95 <p>I work part-time with a foundation that sometimes uses a &quot;hits-based&quot; approach to global development support. Here are some of the things we&#x27;ve looked at, though we didn&#x27;t end up funding all of them:</p><ul><li>Funding fundraising (especially fundraising &quot;experiments&quot;) for some of GiveWell&#x27;s recommended charities</li><li>Funding the Innovation in Government Initiative</li><li>Funding various studies through J-PAL&#x27;s regional offices</li><li>Funding IPA program officers to work alongside governments</li></ul><p>My impression from this experience, though I&#x27;m certainly not an expert in the area:</p><p>For a donor who is giving sub-$10,000 amounts, many of these approaches are difficult to take; there isn&#x27;t a lot of public information on individual research projects, the state of a project might change quickly, and it&#x27;s hard to get much additional information by talking to researchers. 
Because the foundation gives larger amounts, we&#x27;re able to routinely arrange phone calls with charities and even with outside researchers who are interested in an opportunity to educate a large donor.</p><p>Something like a crowdfunding platform (or some other running list of &quot;underfunded projects&quot;) might help to close the gap, but it would take a lot of work from J-PAL, IPA, and other development research organizations to put this together, and I don&#x27;t know how high the demand would be outside a fraction of the EA community.</p> aarongertler SNWZqMZr6AzXXqy95 2019-02-08T01:42:26.205Z Comment by aarongertler on What are some lists of open questions in effective altruism? https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism#a3iDwJyekGAeZ6HLy <p>One of CEA&#x27;s future goals for the Forum is to have an &quot;Open Questions&quot; page which is the most comprehensive source for such questions. Having interactivity around question posts (e.g. commenters adding papers that contribute to a question, with summaries of what progress was made) seems potentially really good.</p><p>I can&#x27;t promise a particular timeline for this, and I&#x27;m not sure what form it will take, but I do think the Forum is a good place for questions to exist. If anyone sees this and has thoughts/questions, I&#x27;d be very happy to talk to them about what might make an &quot;Open Questions&quot; list especially good/helpful.</p> aarongertler a3iDwJyekGAeZ6HLy 2019-02-07T00:18:08.782Z Comment by aarongertler on EA Boston 2018 Year in Review https://forum.effectivealtruism.org/posts/FTfmAeLBhHxBQYoJ8/ea-boston-2018-year-in-review#6fKDRekNHyhj7Ea7N <p>By &quot;fundraiser for Google&quot;, I meant &quot;fundraiser within Google&quot;. Thanks for noting how that was confusing, and for more clearly explaining how the operation works! 
The bake-off system in particular may not be easy to replicate (you&#x27;d have to find EAs who baked), but the general &quot;join us in funding&quot; idea seems promising as a base.</p> aarongertler 6fKDRekNHyhj7Ea7N 2019-02-06T18:31:11.925Z Comment by aarongertler on EA Website Search Optimization https://forum.effectivealtruism.org/posts/ukCyMczbXfMxj3KXG/ea-website-search-optimization#5QZjt6tKFy88Brf9n <p>GiveWell began to <a href="https://blog.givewell.org/2017/12/19/update-on-our-work-on-outreach/">focus more on marketing</a> in late 2017. This particular post doesn&#x27;t mention SEO, but others might (Googling &quot;GiveWell marketing&quot; brings up quite a few results for me). </p><p>The most important point from that post: Almost all of GiveWell&#x27;s funding comes from large donors ($2000 and up), who are probably less likely to start with a search engine compared to other sources. The audience they can reach through Ezra Klein&#x27;s podcast is probably much more promising than the Google search audience.</p><p>I&#x27;d also suspect that Google ads are more competitive than podcast ads; Charity Watch might outbid GiveWell if they go after the #1 result for &quot;best charities&quot;.</p><p>As for 80,000 Hours, I&#x27;d guess that they are like GiveWell but even more so; almost all the people who might find their site from a search like the one you describe are likely not to be a very good fit for their coaching, and perhaps not even for their advice (since many of the jobs they recommend are easiest to obtain for people with degrees from highly-ranked colleges -- people who are perhaps less likely than the average person to use Google for this sort of thing).</p><p>But this is just a guess, since I&#x27;m not sure whether 80K has published data on their marketing strategy or where they hear about the people they end up coaching.</p> aarongertler 5QZjt6tKFy88Brf9n 2019-02-06T07:32:48.981Z Comment by aarongertler on EA Boston 2018 Year in Review 
https://forum.effectivealtruism.org/posts/FTfmAeLBhHxBQYoJ8/ea-boston-2018-year-in-review#uQaPh95fW8Be7oMzE <p>Excellent writeup! I was surprised never to have heard of several of these projects (including Pinker&#x27;s class on EA -- I hope he settles on a &quot;share both sides&quot; position for X-risk by then).</p><p>I couldn&#x27;t find anything about the &quot;Philanthropic Advisory Fellowship&quot; online, outside of one person&#x27;s LinkedIn. Would the folks behind that be willing to share their experience? (I&#x27;d understand if the work was best kept private for now.)</p><p>Jeff&#x27;s fundraiser for Google is inspiring, and makes me wonder whether it would be possible to get similar ones set up at other Google offices -- after all, there&#x27;s no shortage of EA-aligned people working for Google. It takes a lot of skill to run a fundraiser <em>that </em>well, but maybe a clear instruction guide/shared folder for materials could still help to bring in a few extra tens of thousands of dollars?</p><p>(I&#x27;ll contact both of the groups I mentioned here separately, but ask them to leave a public comment if they&#x27;d be comfortable doing so.)</p> aarongertler uQaPh95fW8Be7oMzE 2019-02-06T01:37:22.234Z Comment by aarongertler on Effective Altruism & Slate Star Codex Readership https://forum.effectivealtruism.org/posts/xonCKbCfRaCGEC2uM/effective-altruism-and-slate-star-codex-readership#Evt7FHKiP2sBEsB5D <p>The link is currently broken. But I&#x27;d love to read the analysis, if you make it available again!</p> aarongertler Evt7FHKiP2sBEsB5D 2019-02-05T21:05:55.321Z Comment by aarongertler on How to assess employment impact https://forum.effectivealtruism.org/posts/MMHobD3YSR3XebXPb/how-to-assess-employment-impact#aKjmhNZK7yAQzDAw3 <p>Try asking some of the same questions 80,000 Hours does when they look at careers for themselves! 
</p><p>(I don&#x27;t claim to reflect their views perfectly here -- this is a quick answer that aims to sum up the basics without any major mistakes.)</p><p>For example, you can see that their list of career reviews uses five elements to &quot;score&quot; each career path. They are:</p><ul><li><strong>Direct impact: </strong>This is the hardest thing to calculate, but an easy substitute question is: &quot;How much am I helping the world, compared to if I didn&#x27;t exist and someone else had taken this job?&quot; </li><ul><li>For some jobs, the personal views and strategies of the employee matter a lot (an election choosing which politician gets a &quot;job&quot; could have a huge effect on how much good the person in that &quot;job&quot; can do). Other jobs aren&#x27;t this way (which of two accountants gets a job probably won&#x27;t matter very much to the world, unless one of them was grossly incompetent). </li><li>For your engineering position, you might think: &quot;I was chosen over someone. How good would that person have been if I didn&#x27;t take the job? What are some unusual, unlikely, or particularly skilled things I&#x27;ve done on the job?&quot; Even if your job itself has a lot of high-impact features, your direct impact may not be as high unless the person you &quot;replaced&quot; wouldn&#x27;t have done a very good job.</li><li>Questions about this idea (known as &quot;replaceability&quot;) are <a href="https://80000hours.org/2015/07/replaceability-isnt-as-important-as-you-might-think-or-weve-suggested/">complicated to figure out</a>, since you can never really know who would have taken your job (or what that person is doing now, since they <em>didn&#x27;t </em>take it), but it still provides a useful starting point.</li></ul><li><strong>Advocacy potential: </strong>Does your job put you in a good position to reach a lot of people, or some very important people? 
(Some media or other &quot;public&quot; positions are good for this; most engineering positions don&#x27;t seem especially good, since the work tends to be done privately or in small teams.)</li><li><strong>Earnings: </strong>Engineering tends to do well on this criterion, but that still depends on what you do with the money you earn.</li><li><strong>Career capital: </strong>How well does your job set you up to do high-impact work later? Some ways that engineering might create this &quot;capital&quot;: You rise to an executive position later on, you start your own company using your experience, you consult for governments and help them set up better water policy than they would have otherwise, etc.</li><li><strong>Ease of competition: </strong>This factor only really matters for <em>choosing </em>a job, so it doesn&#x27;t seem relevant here.</li></ul><p>Your last question (about how your skillset might transfer to a more impactful domain) seems really important. Have you looked at open engineering positions on the <a href="https://80000hours.org/job-board/">80,000 Hours Job Board</a> or in the <a href="https://www.facebook.com/groups/1062957250383195/">EA Job Postings Facebook group</a>? Those positions are likely to have few &quot;competitors&quot; (since most EA orgs are small), and thus, high &quot;replaceability&quot; value (if you don&#x27;t take the job, they might not find anyone, or find a weaker candidate). </p><p>Let me know if you have questions about any of this!</p> aarongertler aKjmhNZK7yAQzDAw3 2019-02-05T02:54:34.684Z What are some lists of open questions in effective altruism? https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism <p>One very good way to inspire research is to create a list of open questions. 
I&#x27;m aware of a few resources like this:</p><ul><li><a href="https://forum.effectivealtruism.org/posts/LG6gwxhrw48Dvteej/concrete-project-lists">Richard Batty&#x27;s Concrete Projects List</a> (and some of the comments).</li><li><a href="https://www.openphilanthropy.org/blog/technical-and-philosophical-questions-might-affect-our-grantmaking">Open Phil&#x27;s &quot;questions that might affect our grantmaking&quot; list</a>.</li><li><a href="https://foundational-research.org/open-research-questions/">FRI&#x27;s open research questions</a>.</li><li><a href="https://www.lesswrong.com/posts/kphJvksj5TndGapuh/directions-and-desiderata-for-ai-alignment">Paul Christiano&#x27;s sequence on iterated amplification</a>, which talks about open questions but never quite lists them as such.</li></ul><p>Are there other sets of EA-related open questions that I&#x27;ve left out of this list, and that aren&#x27;t on Richard&#x27;s list?</p><p>Specifically, I&#x27;m looking for questions that could be solved through research or experimentation, rather than &quot;projects&quot; that require competitive execution (&quot;could someone create an Amazon for charitable donations?&quot; doesn&#x27;t count, but &quot;what factors lead to someone repeatedly using a donation website?&quot; could).</p> aarongertler dRXugrXDwfcj8C2Pv 2019-02-05T02:23:03.345Z Are there more papers on dung beetles than human extinction? https://forum.effectivealtruism.org/posts/dvCuqKS825AqSm7fN/are-there-more-papers-on-dung-beetles-than-human-extinction <p><strong>Summary: </strong>Yes. 
But extinction can probably catch up if we put our minds to it.</p><hr class="dividerBlock"/><p>From a <a href="https://www.vox.com/future-perfect/2019/1/3/18165541/extinction-risks-humanity-asteroids-supervolcanos-gamma-rays"><em>Vox</em> article</a> by the wonderful Kelsey Piper:</p><blockquote>&quot;There are <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12002">more academic papers on dung beetles than the fate of <em>H. sapiens</em></a>,&quot; Sandberg writes. That’s a bizarre state of affairs. What’s going on?</blockquote><p>And:</p><blockquote>Mitigating risks requires more careful and thoughtful development of new technology, measures to avoid deployment of unsafe systems, and international coordination to enforce agreements that reduce risk. All of those are uphill battles. No wonder it’s more rewarding to study dung beetles.</blockquote><p>I&#x27;m often suspicious of claims about the state of the world (e.g. the resources devoted to different academic subjects) based on imperfect signals about the state of the world (e.g. search results for paper topics). </p><p>&quot;A Google search for X returns Y results&quot; is a lazy cliche, almost totally uninformative in most places I&#x27;ve seen it used. &quot;Google auto-completes X with Y&quot; <a href="http://slatestarcodex.com/2013/04/04/lies-damned-lies-and-facebook-part-2-of-∞/">isn&#x27;t much better</a>. </p><p>But when Anders Sandberg makes a similar claim, I pay attention. Here&#x27;s a graph from the paper Piper picked:</p><span><figure><img src="http://aarongertler.net/wp-content/uploads/2019/02/Dung-beetles.png" class="draft-image " style="" /></figure></span><p>I&#x27;m too lazy to register for Scopus right now, but Google Scholar gives me similar results for 2012. 
For the rest of this post, I&#x27;ll use results &quot;since 2018&quot; as my baseline, rather than 2012 -- with EA&#x27;s influence, maybe X-risk is catching up to beetles?</p><hr class="dividerBlock"/><h2>Google Scholar results since 2018</h2><p><em>Data collected on 4 February, 2019.</em></p><p><strong>&quot;Dung beetle&quot; OR &quot;Dung beetles&quot;: </strong>1830 results</p><p><strong>&quot;Human extinction&quot;:</strong> 449</p><p><strong>&quot;Human extinction&quot; OR &quot;existential risk&quot; OR &quot;global catastrophic risk&quot;:</strong> 940 </p><p>Okay, the beetles are winning. What if I add some of the natural threats to humanity mentioned by Sandberg and Piper?</p><p><strong>&quot;Asteroid detection&quot; OR &quot;existential risk&quot; OR &quot;human extinction&quot; OR &quot;global catastrophic risk&quot; OR &quot;supervolcano&quot; OR (&quot;gamma ray burst&quot; AND (&quot;human&quot; OR &quot;injury&quot; OR &quot;danger&quot;)): </strong>1180</p><p>Is this a decisive victory? A &quot;dung deal&quot;, as it were? </p><h2>Possible caveats</h2><p>I tried removing &quot;global&quot; from &quot;global catastrophic risk&quot; and <em>almost</em> beat the beetles, but I discovered that &quot;catastrophic risk&quot; is insurance lingo which usually refers to hurricanes and droughts and other non-global hazards.</p><p>I tried adding &quot;mass extinction&quot; and more than <em>doubled </em>the beetles&#x27; score, but nearly all papers using that term are about natural history rather than future risk.</p><p>I tried removing &quot;include citations?&quot; for my searches. This cut down on beetle numbers (from 1830 to 1080) and didn&#x27;t really affect my longest X-risk search term. Suddenly, risk came out ahead!</p><p>But then I read a sample of the papers returned by both searches. The dung beetle papers were all about, well, dung beetles.
The X-risk papers referred to a motley array of topics, many of which had nothing to do with X-risk:</p><ul><li>The psychology concept of &quot;human <a href="https://www.hartleylab.org/uploads/5/3/1/0/53101939/extinction_learning.pdf">extinction learning</a>&quot;.</li><li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0006320717322024">&quot;Human extinction <em>of</em>&quot; other animals</a> (the real X-risk was inside us all along).</li><li>Random physics papers summoned by my reckless use of gamma rays.</li></ul><p>...and so on. Take out all of that, and the beetles are still pushing us around.</p><h2>Do search term numbers matter?</h2><p>Some things this comparison probably tells us:</p><ul><li>More people are currently being paid to study biological topics to which dung beetles are relevant than to study human extinction.</li><li>It&#x27;s easier to publish papers about said topics than about human extinction.</li><li>Not enough people are <a href="https://vkrakovna.wordpress.com/2015/05/17/hamming-questions-and-bottlenecks/">Hamming themselves</a> before choosing research topics. (There may be more people in the world who could become solid beetle scholars than those who could become solid X-risk scholars, but still.)</li></ul><p>Some things this comparison can&#x27;t really tell us:</p><ul><li>How much work/investment goes toward each topic, in total. People working outside the scientific community, including governments, probably spend a lot more money and time on human extinction than they do on dung beetles.</li><li>How successful the average human extinction researcher is, compared to the average dung beetle researcher. (I actually don&#x27;t have a good guess as to which topic is more likely to get you a grant if you have a good idea.)</li></ul><p>The actual problem this post made me think about:</p><ul><li>How easy it is to get involved in human extinction research compared to dung beetle research. 
</li></ul><p>The former could be a vast field with innumerable open questions, but it still seems difficult for most people to contribute in any reliable way; few of those open questions are <em>listed </em>anywhere, few classes teach these subjects, few reliable methods exist for making progress, etc. </p><p>By contrast, if you are a college student and want to start studying dung beetles, you can <a href="https://www.researchgate.net/profile/Fabien_Muhirwa/publication/324594158_Dung_beetle_distribution_abundance_and_diversity_along_an_elevation_gradient_in_Nyungwe_National_Park_Rwanda_A_preliminary_survey/links/5ad75e19aca272fdaf7ed574/Dung-beetle-distribution-abundance-and-diversity-along-an-elevation-gradient-in-Nyungwe-National-Park-Rwanda-A-preliminary-survey.pdf">grab some bug traps, follow your professors to a nearby national park, and start taking samples</a>. </p><p>...I suppose it&#x27;s time for me to start soliciting lists of open questions. That post will be linked <a href="https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism">here</a> in about 15 minutes. </p><hr class="dividerBlock"/><p>Please let me know if I missed anything, of course, from a more apt search term to a philosophical consideration. It would be nice to take the Dung Beetle Question from &quot;open&quot; to &quot;closed&quot;.</p><p></p> aarongertler dvCuqKS825AqSm7fN 2019-02-05T02:09:58.568Z Comment by aarongertler on Will companies meet their animal welfare commitments? https://forum.effectivealtruism.org/posts/XdekdWJWkkhur9gvr/will-companies-meet-their-animal-welfare-commitments#Etbo2twKZyEnWeXJP <p>This is an excellent post! 
Thanks for sharing it here -- this let me find out about it in time to include it in the February edition of the Effective Altruism Newsletter.</p> aarongertler Etbo2twKZyEnWeXJP 2019-02-02T00:27:01.620Z Comment by aarongertler on Podcast Discussion Meetings - A Potentially High-Value Event Template For Local Groups https://forum.effectivealtruism.org/posts/4d5zErfDfcovBgod3/podcast-discussion-meetings-a-potentially-high-value-event#3NLXF3sLSsswNDzGq <p>I love this idea! Thanks for cross-posting it from Facebook to a place where CEA can more easily link to it (as I suspect we will in future lists of resources for groups), and where it&#x27;s more likely to keep getting comments. </p><p>Having seen EA reading groups in action, I like that podcasts are:</p><p>1) A shorter time commitment.</p><p>2) Much easier to consume (people can listen while walking, or in the car, and be able to get through it even if they aren&#x27;t in a position to take notes).</p><p>3) Easier to annotate (having a document or notebook open while you listen &gt; having to switch back and forth between a book and a screen, or two book-shaped things).</p> aarongertler 3NLXF3sLSsswNDzGq 2019-02-01T21:13:24.076Z Comment by aarongertler on EA Forum Prize: Winners for December 2018 https://forum.effectivealtruism.org/posts/gsNDoqpB2pWq5yYLv/ea-forum-prize-winners-for-december-2018#4kHuYmWDL82zd5uYs <p>Thanks for noting this concern! We gather candidates using an extract directly from the Forum&#x27;s database, which contains every post. 
(There have been issues with certain posts not appearing in certain views, but we haven&#x27;t seen posts vanish from the site completely -- I&#x27;ve checked each post reported this way and have been able to find it with a quick Google search.)</p> aarongertler 4kHuYmWDL82zd5uYs 2019-02-01T21:04:36.274Z You Should Write a Forum Bio https://forum.effectivealtruism.org/posts/2j8ERGPu68L5Bd95y/you-should-write-a-forum-bio <p><em>I work for CEA and help to run the Forum, but this is my personal opinion as a person who likes having information about the EA community.</em></p><hr class="dividerBlock"/><h2>Why write a bio?</h2><p>If you click someone&#x27;s username, you can see a personal bio in their profile.</p><p>The EA Forum will be a slightly better place if more of its users write a bio. If you don&#x27;t mind sharing information about yourself, I recommend writing one.</p><p>Here are a few reasons why:</p><p><strong>1. It makes it easy to see your affiliations.</strong> If I&#x27;m talking to someone about a charity, it helps to know whether they work, or have worked, at said charity. Transparency!</p><p><strong>2. It lets you link to your other content</strong>. If you link to a blog/personal site in your bio, someone who likes your post or comment can read more of your work. They can also learn more about your favorite causes, or anything else you want to share.</p><p><strong>3. It can be especially helpful to newer community members.</strong> When you&#x27;re new (or otherwise aren&#x27;t very connected to the EA “social scene”), effective altruism can feel like a small club of people who already know each other. Helping people learn about you, if you don’t mind sharing that information, makes EA more welcoming.</p><p>This list is non-exhaustive, because bios are flexible and can serve many purposes.</p><hr class="dividerBlock"/><p><strong>If you prefer not to share personal information on the Forum, please don&#x27;t feel pressured to do so by this post. 
</strong>My intention is to remind people who want to share, or wouldn&#x27;t mind sharing, that bios exist and are helpful.</p><h2>How to write a bio</h2><p>1. Click your username, then click &quot;Edit Account&quot;.</p><p>2. Type your bio in the &quot;Bio&quot; box below your email address. For now, we only support plain text. You can use line breaks while editing, but the published bio will show on your profile without line breaks.</p><p>3. Click &quot;Submit&quot; at the bottom of the page, so that your bio will be saved.</p><p>Not sure what to write? Try some of these:</p><ul><li>Your name (if you don&#x27;t mind sharing).</li><li>Your EA affiliations (e.g. employment, group membership), or whatever else you&#x27;re working on. </li><li>Your favorite causes/organizations.</li><li>A link to any other writing you&#x27;d like to share.</li><li>Fun facts?</li></ul><p>For example, here&#x27;s my bio:</p><p><em>Aaron is a full-time content writer at CEA. He started Yale&#x27;s student EA group, and has volunteered for CFAR and MIRI. He also works for a small, un-Googleable private foundation that makes EA-adjacent donations. Before joining CEA, he was a tutor, a freelance writer, a tech support agent, and a music journalist. He blogs, and keeps a public list of his donations, at aarongertler.net.</em></p><p>No need to make yours this long, of course.</p><hr class="dividerBlock"/><p><strong>Suggestion: </strong>If you decide to write a bio after reading this post, leave a comment so that other people know to read it! (At the very least, I will read it, since I’m always curious about the Forum’s users.)</p> aarongertler 2j8ERGPu68L5Bd95y 2019-02-01T03:32:29.453Z Comment by aarongertler on Research on Effective Strategies for Equity and Inclusion in Movement-Building https://forum.effectivealtruism.org/posts/gWj6ikhTZp372uobf/research-on-effective-strategies-for-equity-and-inclusion-in#HHSEPMe4dRdwboxBq <p>Good post! 
Regarding casebash&#x27;s concern about tradeoffs: I think there are clear net benefits to many of these techniques, including matters of basic politeness (e.g. letting people know they are encouraged to bring partners of any gender to events, remembering an &quot;other&quot; option for gender on your forms) and sound business strategy (e.g. only listing actual requirements on your application form, defaulting to flexible hours when that&#x27;s feasible). If presenting these as &quot;strategies for equity and inclusion&quot; means they&#x27;re more likely to be adopted, that&#x27;s a promising development.</p><p>Of course, not every organization will benefit from every suggestion, but I like these kinds of &quot;toolbox&quot; posts, which offer a set of options (of varying degrees of implementation complexity) for organizations that want to accomplish something. Almost anyone trying to hire for an EA org is likely to find at least one useful idea here.</p><p>(I will note that, while literally every decision a business could make has &quot;tradeoffs&quot;, some of these ideas appear especially costly for certain kinds of organizations -- for example, committing to hiring criteria ahead of time might be dangerous if an organization has a lot of work that needs doing and meets someone who is capable of doing A and B, but who applied for a position that does C and D. That said, smaller organizations with more flexible roles and processes can probably work around issues of this nature without much trouble.)</p> aarongertler HHSEPMe4dRdwboxBq 2019-02-01T00:54:07.040Z Comment by aarongertler on Cost-Effectiveness of Aging Research https://forum.effectivealtruism.org/posts/JsL2kPWJYRxn9rCWR/cost-effectiveness-of-aging-research#vt9kaecLjTGrmuetB <p>Thanks for including a model! I hope that future Forum posts with cost-effectiveness models do the same (some other posts have, but not all).</p><p>I&#x27;m confused about the &quot;ten years&quot; figure you chose. 
I didn&#x27;t see it mentioned in the Longevity Panel&#x27;s report, or in de Grey&#x27;s (though I may have missed something). Why start with that number for the DALY estimation, rather than one year?</p> aarongertler vt9kaecLjTGrmuetB 2019-01-31T23:42:25.091Z EA Forum Prize: Winners for December 2018 https://forum.effectivealtruism.org/posts/gsNDoqpB2pWq5yYLv/ea-forum-prize-winners-for-december-2018 <p> CEA is pleased to announce the winners of the December 2018 EA Forum Prize!</p><p>In first place (for a prize of $999): &quot;<u><a href="https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison">2018 AI Alignment Literature Review and Charity Comparison</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/larks">Larks</a></u>.</p><p>In second place (for a prize of $500): &quot;<u><a href="https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health">Cause Profile: Mental Health</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/michaelplant">Michael Plant</a></u>.</p><p>In third place (for a prize of $250): &quot;<u><a href="https://forum.effectivealtruism.org/posts/PbnvjtTFnPiaT5ZJQ/lessons-learned-from-a-prospective-alternative-meat-startup">Lessons Learned from a Prospective Alternative Meat Startup Team</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/scottweathers">Scott Weathers</a></u>, <u><a href="https://forum.effectivealtruism.org/users/joangass">Joan Gass</a></u>, and an anonymous co-author.</p><p>You can see November’s winning posts <u><a href="https://forum.effectivealtruism.org/posts/k4SLFn74Nsbn4sbMA/ea-forum-prize-winners-for-november-2018">here</a></u>.</p><h2>What is the EA Forum Prize?</h2><p>Certain posts exemplify the kind of content we <u><a href="https://forum.effectivealtruism.org/about">most want to see</a></u> on the EA Forum. 
They are well-researched and well-organized; they care about <u><a href="https://ideas.ted.com/why-you-think-youre-right-even-when-youre-wrong/">informing readers, not just persuading them</a></u>.</p><p>The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum&#x27;s users.</p><h2>About the December winners</h2><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison">2018 AI Alignment Literature Review and Charity Comparison</a></u>&quot; is an elegant summary of a complicated cause area. It should serve as a useful resource for people who want to learn about the field of AI alignment; we hope it also sets an example for other authors who want to summarize research.</p><p>The post isn’t only well-written, but also well-organized, with several features that make it easier to read and understand. 
The author: </p><ul><li>Offers suggestions on how to effectively read the post.</li><li>Hides their conclusions, encouraging readers to draw their own first.</li><li>Discloses relevant information about their background, including the standards by which they evaluate research and their connections with AI organizations.</li></ul><p>These features all fit with the Forum’s goal of “information before persuasion”, letting readers gain value from the post even if they disagree with some of the author’s beliefs.</p><hr class="dividerBlock"/><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health">Cause Profile: Mental Health</a></u>&quot; is a strong investigation of a cause which hasn’t gotten very much attention from the EA movement.</p><p>Especially good features of the post:</p><ul><li>An introduction which serves as a useful guide to a long analysis.</li><li>Summaries of each section placed under the section headers, making navigation and comprehension even easier.</li><li>Endnotes which help readers verify information for themselves.</li><li>The use of a classic framework for impact analysis (<u><a href="https://80000hours.org/articles/problem-framework/">scale, neglectedness, and tractability</a></u>), which helps readers compare mental health to other cause areas that have been evaluated using the same framework.</li></ul><p>We hope to see more such investigations in the future for other promising causes. </p><hr class="dividerBlock"/><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/PbnvjtTFnPiaT5ZJQ/lessons-learned-from-a-prospective-alternative-meat-startup">Lessons Learned from a Prospective Alternative Meat Startup Team</a></u>&quot; is a well-organized and highly informative discussion from a team that tried to start a high-impact company. The authors provide useful advice about entrepreneurship and summarize the state of alternative-meat research, a key topic within animal welfare. 
While they decided not to move forward with a startup, the team learned from the experience and also produced value for the EA community by sharing their story on the Forum.</p><p>We’ve been impressed by similar “postmortem” articles published on the Forum in the past. Going forward, we hope to see other people share lessons from the projects they pursue, whether or not they “complete” those projects.</p><h2>The voting process</h2><p>All posts made in the month of December qualified for voting, save for those written by CEA staff and Prize judges.</p><p>Prizes were chosen by six people. Three of them are the Forum&#x27;s moderators (<u><a href="https://forum.effectivealtruism.org/users/maxdalton">Max Dalton</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>).</p><p>The other three are the EA Forum users who had the most karma at the time the new Forum was launched (<u><a href="https://forum.effectivealtruism.org/users/peter_hurford">Peter Hurford</a></u>, <u><a href="https://forum.effectivealtruism.org/users/joey">Joey Savoie</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/robert_wiblin">Rob Wiblin</a></u>).</p><p>Voters recused themselves from voting for content written by their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agree with the goals outlined above.</p><p>Winners were chosen by an initial round of <u><a href="https://en.wikipedia.org/wiki/Approval_voting">approval voting</a></u>, followed by a runoff vote to resolve ties.</p><h2>Next month</h2><p>The Prize will continue with a round for January’s posts! After that, we’ll evaluate whether we plan to keep running it (or perhaps change it in some fashion). We hope that the Forum’s many excellent December posts will provide inspiration for more great work in the coming months. 
</p><h2>Feedback on the Prize</h2><p>We&#x27;d love to hear any feedback you have about the Prize. Leave a comment or contact <a href="https://forum.effectivealtruism.org/users/aarongertler">Aaron Gertler</a> with questions or suggestions. </p> aarongertler gsNDoqpB2pWq5yYLv 2019-01-30T21:05:05.254Z Comment by aarongertler on Introducing Sparrow: a user-friendly app to simplify effective giving https://forum.effectivealtruism.org/posts/ZcXqD9AMhcEMRgKxy/introducing-sparrow-a-user-friendly-app-to-simplify#crRjnFa6ZouvHoYNB <p>You write:</p><blockquote>Or you can give 10% of your income to a fund for the future, and it’ll automatically adjust with your salary.</blockquote><p>Does this imply that you hope to somehow integrate Sparrow with EA Funds? (Right now, EA Funds doesn&#x27;t have an API that would allow this to be automated.) Is the &quot;fund for the future&quot; something separate that you built within Sparrow?</p> aarongertler crRjnFa6ZouvHoYNB 2019-01-29T22:51:48.033Z Comment by aarongertler on [deleted post] https://forum.effectivealtruism.org/posts/2qBsGXJMYphPYGNL7/anyone-has-a-practical-guide-for-calculating-carbon#24jBi8nD77kFThHea <p>This is a good question! I appreciate how many relevant resources you collected in the course of putting it together.</p><p>In the future, I&#x27;d recommend trying out the &quot;Ask Question&quot; feature for posts like this -- it creates a post that&#x27;s a bit better-formatted for Q&amp;A, by separating out &quot;answers&quot; from other comments. You can find the &quot;Ask Question&quot; option right above &quot;New Post&quot; in your drop-down menu.</p><p>(If you want to do this now, you can move this question back to a draft and re-post using &quot;Ask Question&quot;, but no worries if you&#x27;d rather just leave it up. 
We don&#x27;t enforce the Post/Question distinction right now; I just want people to know the Question feature exists.)</p> aarongertler 24jBi8nD77kFThHea 2019-01-29T20:32:33.099Z Comment by aarongertler on Disentangling arguments for the importance of AI safety https://forum.effectivealtruism.org/posts/LprnaEj3uhkmYtmat/disentangling-arguments-for-the-importance-of-ai-safety#hiKXi8jHeheJupioS <p>Strong upvote. This is exactly the kind of post I&#x27;d like to see more often on the Forum: It summarizes many different points of view without trying to persuade anyone, points out some core areas of agreement, and names people who seem to believe different things (perhaps opening lines for productive discussion in the process). Work like this will be critical for EA&#x27;s future intellectual progress.</p> aarongertler hiKXi8jHeheJupioS 2019-01-29T01:33:16.512Z Comment by aarongertler on We should make academic knowledge easier https://forum.effectivealtruism.org/posts/fJrBHDMdba2s7jfmZ/we-should-make-academic-knowledge-easier#KxjduSG6ZDGrMsS4K <p>For causes that are especially complicated and difficult, like &quot;knowledge simplification&quot;, I recommend this technique to avoid getting lost in the weeds:</p><ul><li>Find examples of projects in this field that you think were highly effective, and see if you can think of ways to create similar projects that would also be effective. For example, if you think that StackOverflow simplified programming knowledge and helped more people become good programmers, producing a lot of value for every dollar that went into it:</li><ul><li>Is there some other field that needs its own StackOverflow, but doesn&#x27;t have one? (Nothing immediately comes to mind for me.)</li><li>Is there some way in which more people can be made to learn about, and use, StackOverflow-like projects? (It seems like people very frequently find StackOverflow and its ilk when they search for answers, but... 
maybe?)</li><li>How much should we value more people getting access to this kind of project? How would we track whether the thing we thought was valuable (e.g. &quot;more good programmers exist&quot;) was actually happening, and caused by our project?</li><li>How many other people have tried something like this? What was their success rate? Can we avoid the most common reasons that these projects fail? (Most StackOverflow and Wikipedia-like projects never get anywhere, especially those focused on hard things like &quot;simplifying knowledge&quot; rather than easy things like &quot;writing summaries of TV episodes for your favorite fandom&#x27;s wiki&quot;.)</li></ul></ul><p>Unless the above exercise gives you something useful, you should strongly consider the idea that another cause is more worthwhile.</p><p>There are exceptions to this (for example, fields like AI safety are novel and focused on future problems, so we wouldn&#x27;t expect to see past projects we knew had been highly effective). But in the case of &quot;knowledge simplification&quot;, I see a vast graveyard of doomed projects, with a few bright spots that were successful by chance and/or <em>so </em>successful that they&#x27;ve dominated their niche and no further work should be done (as far as I know, we don&#x27;t need any more StackOverflows for programming, or any more Wikipedias for general knowledge).</p> aarongertler KxjduSG6ZDGrMsS4K 2019-01-29T01:24:44.615Z Comment by aarongertler on Why we look at the limiting factor instead of the problem scale https://forum.effectivealtruism.org/posts/W2YChXz4f4nGzZMaE/why-we-look-at-the-limiting-factor-instead-of-the-problem#2eABYFEHPKRHuQmWu <p>Good piece! </p><p>When you use phrases like &quot;we have found&quot; in pieces on the Forum, I&#x27;d recommend you identify your organization right away.
Someone who joins the Forum and then reads this without knowing that you work for Charity Entrepreneurship might be quite confused.</p><p>(I think it&#x27;s fine to write very technical pieces for the Forum, even if they risk confusing people, because it&#x27;s important to have high-fidelity work that isn&#x27;t constrained by a need to re-explain the basics. Noting which organizations we represent seems not to have this downside, though, especially since the names and staff members of EA orgs change pretty often.)</p> aarongertler 2eABYFEHPKRHuQmWu 2019-01-29T01:11:20.935Z Comment by aarongertler on Is intellectual work better construed as exploration or performance? https://forum.effectivealtruism.org/posts/QwLKTcte8LgTNmCM2/is-intellectual-work-better-construed-as-exploration-or#E6asrWwp8i26obqMN <p>My intuition is that becoming skillful is difficult, as it would be for most performance skills, but that it&#x27;s quite possible to do so without getting worse at intellectual work, as long as you continue to value that work and have a social circle that won&#x27;t let you slack off on truthseeking. Many intellectual &quot;performers&quot; who get a bit epistemically lazy may have been prevented from doing so if they&#x27;d had friends around to check their worst impulses.</p> aarongertler E6asrWwp8i26obqMN 2019-01-29T01:06:15.016Z Comment by aarongertler on Vox's "Future Perfect" column frequently has flawed journalism https://forum.effectivealtruism.org/posts/gBXyH9LHEdWKzeyjG/vox-s-future-perfect-column-frequently-has-flawed-journalism#5RQznHhHEegootiMZ <p>I agree with the other respondent that Dylan Matthews and Ezra Klein genuinely seem to care about EA causes (Dylan on just about everything, even AI risk [a change from his previous position], and Ezra at least on veganism). Hiring Kelsey Piper is one clear example of this -- she had no prior journalism experience, as far as I&#x27;m aware, but had very strong domain knowledge and a commitment to EA goals. 
Likewise, the section&#x27;s Community Manager, Sammy Fries, also had a background in the EA community. </p><p>It would have been easy for Vox to hire people with non-EA backgrounds who had more direct media experience, but they did something that probably made their jobs a bit harder (from a training standpoint). This seems like information we shouldn&#x27;t ignore (though of course, for all I know, Sammy and Kelsey may have been the best candidates even without their past EA experience).</p><p>Really good journalism is hard to produce, and just like any other outlet, Vox often succumbs to the desire to publish more pieces than it can fact-check. And some of their staff writers aren&#x27;t very good, at least in the sense that we wish they were good. </p><p>But still, because of Future Perfect, there has been more good journalism about EA causes in the last few months than in perhaps the entirety of journalism before that time. The ratio of good EA journalism to bad is certainly higher than it was before. </p><p>There is a model you could adopt under which the raw <em>amount </em>of bad journalism matters more than the good/bad ratio, because one bad piece can cause far more damage than any good piece can undo, but you don&#x27;t seem to have argued that Vox is going to damage us in that sense, and it seems like their most important/central pieces about core EA causes generally come from Kelsey Piper, who I trust a lot. </p><p>I agree that some of Vox&#x27;s work is flawed and systematically biased, but they&#x27;ve also produced enough good work that I hope to see them stick around. 
What&#x27;s more, the existence of <em>Future Perfect </em>may lead to certain good consequences, perhaps including:</p><ul><li>Other news outlets hiring people with EA backgrounds to write on similar topics, following in Vox&#x27;s footsteps.</li><li>News outlets using <em>Future Perfect </em>as a source when they write about EA issues (I&#x27;d much prefer a journalist learning about AI risk start with Piper than other mass-media articles on the subject).</li><li>Other EA people working with Vox and gaining valuable insight into how the media works; even if it turns out that we should try not to engage with the media whenever possible, at least having a few people who <em>understand </em>it seems good. </li></ul> aarongertler 5RQznHhHEegootiMZ 2019-01-29T01:00:50.407Z Comment by aarongertler on Talking about EA at an investors' summit https://forum.effectivealtruism.org/posts/oqztNrHZLMNNZX8md/talking-about-ea-at-an-investors-summit#oWQ7ijkgC3gNxsGRZ <p>Kit&#x27;s answer was very good and I agree with all of it, especially &quot;make sure they have something to remember you by&quot;. A physical thing they can put in their pocket, like a business-card-sized-thing or a small/foldable pamphlet, seems good to have available.</p><p>I&#x27;d recommend editing your answer to make your relationship with the conference more clear. Are you involved in venture capital or some other branch of finance? If so, it&#x27;s probably good to start off with that in conversations, rather than leading on the fact that you want to talk about charity. If you want a fun venture fact for these conversations, you could mention that 80,000 Hours and CEA <a href="https://docs.google.com/document/d/1_VuERKGdeFU_VnsJpybvpwuFW6KSkQUFG4XV5rCB_us/edit">both went through Y Combinator</a> and/or talk about the rapid growth of <a href="https://founderspledge.com/research">Founders&#x27; Pledge</a>.</p><p>Don&#x27;t be afraid to let people go if they don&#x27;t seem interested. 
The card/pamphlet idea I mentioned above is good for this -- allows a conversation to break up &quot;politely&quot; as they take something and you leave -- or you can think of a good &quot;script&quot; for what to say when you don&#x27;t think the other person is engaged. You won&#x27;t have time to talk to everyone even if you move fast (I assume), so there&#x27;s no loss in cycling through people quickly until you hit someone who &quot;gets it&quot; and wants to hear more.</p> aarongertler oWQ7ijkgC3gNxsGRZ 2019-01-28T07:20:55.280Z The Meetup Cookbook (Fantastic Group Resource) https://forum.effectivealtruism.org/posts/cAnYmiNDzCoDWsGtJ/the-meetup-cookbook-fantastic-group-resource <p>(This was also posted on <a href="https://www.lesswrong.com/posts/ousockk82npcPLqDc/meetup-cookbook">LessWrong</a> a few months ago, and has comments there.)</p><p>I love single-page websites. A fire still burns in my heart for <a href="https://web.archive.org/web/20181204175011/http://whatiseffectivealtruism.com/">What Is Effective Altruism</a>?, even if it&#x27;s a bit old-fashioned.</p><p>Today, The Meetup Cookbook lit another one of those fires. It&#x27;s almost everything you need to run a meetup, in a box. (The authors run rationality rather than EA meetups, but those are pretty similar on the level of &quot;planning and logistics&quot;.) </p><p>Here are some of my favorite excerpts:</p><blockquote>I make a schedule of the planned topics about six months in advance in a spreadsheet [...] This makes it extremely easy to post the meetups every week. Reducing friction for ourselves means that the meetup happens more reliably. </blockquote><p>As a former organizer for two different EA groups, just looking at that spreadsheet (photo on website) makes me feel calmer than I ever did when I was planning events week by week.</p><blockquote><strong>Should I ask for RSVPs, so I know how many people are coming?</strong> No. Probably don&#x27;t bother, it never works [...] 
most people seem to like to be able to decide day-of whether they&#x27;re going to come [...] RSVPs are usually poorly correlated with attendance.</blockquote><blockquote>Another strategy is to say &quot;I&#x27;m going to be at the location from X-Y PM, guaranteed,&quot; and hang out the entire time to see if anyone shows up. This way you catch people even if they show up very, very late - which does happen, in our experience. This is more useful if you have very low attendance, or you&#x27;re starting a new meetup and are not sure what to expect.</blockquote><p>The &quot;guaranteed location&quot; strategy is also the best one I&#x27;ve found. Schedules are hard; people miss trains, lose their keys, get out of work late, get caught up in a conversation on the way over... and in all those cases, they sometimes turn around and go home rather than show up late. &quot;Stop by whenever&quot; won&#x27;t work for all meetups (sometimes you need to prep in advance based on attendance, etc.), but it&#x27;s a great way to get started.</p><blockquote>You might feel awkward about taking charge of a group. That&#x27;s okay, and if you feel really uncomfortable, you can lampshade it by saying something like &quot;Hey, so I guess I&#x27;m running this thing.&quot; But you don&#x27;t really <em>need</em> to say things like that. Meetups are low-stakes. It&#x27;s not a dominance move to set up and run one; it&#x27;s a gift you give to other people. You may not be the best person possible to lead this group of people, but you&#x27;re the one who showed up and is doing your best, and that&#x27;s what matters. </blockquote><p>Yes! As it turns out, people actually tend to like other people who set up cool things for them, and give them a chance to sit back and relax and listen. Even if you make a mistake somewhere, there&#x27;s a good chance no one but you will notice. If someone notices, there&#x27;s a good chance they won&#x27;t mind. 
If they mind, there&#x27;s a good chance they&#x27;ll ask to help instead of getting mad. If they get mad, the most likely result is that they just don&#x27;t show up next time. Which really isn&#x27;t so bad.</p><h2>Other notes</h2><ul><li>When I reflect on my organizing experience, I remember one major problem not covered by the guide: I&#x27;m not very good at talking to strangers. I get anxious at the thought of a room filling with people I have to quickly befriend. Some ways to get around this:</li><ul><li><a href="https://www.benkuhn.net/twopeople">Have two people</a>. That is, even if you&#x27;re the one doing most or all of the planning, having someone you know come along and share the social duties relieves a lot of pressure. When I was struggling to start the Yale University group, my co-founder was really helpful in this way.</li><li>Message people ahead of time. This doesn&#x27;t have to mean taking RSVPs (as noted above, those are of limited value). It can also mean asking people to join a Facebook group if they want to <em>hear </em>about events (less pressure than promising to <em>attend</em> an event) and then sending a friendly message to every new member, introducing yourself and asking an icebreaker question. 
(The Cookbook offers some good questions for this.)</li></ul><li>You&#x27;re missing out if you don&#x27;t look at one of the Cookbook&#x27;s other links: Spencer Greenberg&#x27;s <em><a href="http://www.spencergreenberg.com/2017/03/better-formats-for-group-interaction-going-beyond-lectures-group-discussions-panels-and-mixers/">Better Formats for Group Interaction.</a> </em>If some of the Cookbook&#x27;s activities don&#x27;t feel like they&#x27;d apply to your EA group, maybe you&#x27;ll find inspiration here!</li></ul><p></p> aarongertler cAnYmiNDzCoDWsGtJ 2019-01-24T01:28:00.600Z The Global Priorities of the Copenhagen Consensus https://forum.effectivealtruism.org/posts/YReJJ8MZdASANojrT/the-global-priorities-of-the-copenhagen-consensus <p> </p><p>The Copenhagen Consensus is one of the few organizations outside the EA community which conducts cause prioritization research on a global scale.</p><p>Nearly everything on their &quot;<u><a href="https://www.copenhagenconsensus.com/post-2015-consensus">Post-2015 Consensus</a></u>&quot; list, which covers every cause they&#x27;ve looked at, fits into &quot;global development&quot;; they don&#x27;t examine animal causes or global catastrophic risks aside from climate change (though they do discuss population ethics in the case of <a href="https://www.copenhagenconsensus.com/post-2015-consensus/populationanddemography">demographic interventions</a>).</p><p>Still, given the depth of the research, and the sheer number of experts who worked on this project, it seems like their list ought to be worth reading. 
On the page I linked, you can find links to all of the different cause areas they examined; <a href="https://www.copenhagenconsensus.com/sites/default/files/post2015brochure_m.pdf">here&#x27;s a PDF</a> with just cost-effectiveness estimates for every goal across all of their causes.</p><p>I didn&#x27;t have the time to examine a full report for any of the cause areas, but I wanted to open a thread by noting numbers and priorities which I found interesting or surprising:</p><ul><li>The most valuable types of intervention, according to CC: </li><ul><li>Reduce restrictions on trade (10-20 times as valuable per-dollar as anything else on the list)</li><li>Increase access to contraception (CC says &quot;universal&quot; access, but I don&#x27;t see why we wouldn&#x27;t get roughly the same value-per-dollar, if not more, by getting half the distance from where we are to the goal of universal access)</li><li>Aspirin therapy for people at the onset of a heart attack</li><li>Increase immunization rates (their estimates on the value of this don&#x27;t seem too far off from GiveWell&#x27;s if I compare to their numbers on malaria)</li><li>&quot;Make beneficial ownership info public&quot; (making it clear who actually owns companies, trusts, and foundations, making it harder to transfer money illegally between jurisdictions). 
Notably, CC argues justifiably for reducing hidden information to zero, since &quot;a partial solution to the transparency issue would simply allow alternative jurisdictions to continue to be used&quot;.</li><li>Allow more migration</li><li>Two interventions within food security: Working to reduce child malnutrition (a common EA cause) and research into increasing crop yields (something EA has barely touched on, though The Life You Can Save does <a href="https://www.thelifeyoucansave.org/where-to-donate/one-acre-fund">recommend</a> One Acre Fund)</li></ul><li>Areas that CC found surprisingly weak, compared to what I&#x27;d expected:</li><ul><li>Cut outdoor air pollution (about 3% as valuable as cutting indoor air pollution)</li><li>Data collection on how well UN Millennium Development Goals are being met (<a href="https://www.copenhagenconsensus.com/post-2015-consensus/datafordevelopment">measurement is very expensive</a>, and could cost more than actual development assistance)</li><li>Social protection system coverage (helping more people access government benefits); CC estimates that this is less than one-fifth as valuable as cash transfers</li></ul></ul><p>Reading the full position papers for some interventions could be a really valuable exercise for anyone who cares a lot about global development (particularly if you think EA may be neglecting certain opportunities in that space). 
If you spot anything interesting (and/or anything that seems wrong), leave a comment!</p><p></p> aarongertler YReJJ8MZdASANojrT 2019-01-07T19:53:01.080Z Forum Update: New Features, Seeking New Moderators https://forum.effectivealtruism.org/posts/7etEYiorToG9KXGEw/forum-update-new-features-seeking-new-moderators <p> </p><p><em><strong>In this update: </strong>New features, moderation updates, and a call for new moderators.</em></p><h1>New features</h1><p>Since the new version of the EA Forum is a fork of <u><a href="https://www.lesswrong.com/">LessWrong</a></u>, it&#x27;s easy for us to pull new updates as they arrive on there. If we think that a new LessWrong feature is likely to also be a good fit for the Forum, we will likely merge it into our site. </p><h2>Floating table of contents </h2><p>This week, we are merging a major update which introduces a floating table of contents to the left of posts. This is a much-requested feature on the Forum which should help readers track the structure of longer posts, and we’re really pleased to have it. The table of contents tracks three levels of headers, and picks up on the header formats from the WYSIWYG (“what you see is what you get”) and Markdown editors. It interprets stand-alone bold text as a header. </p><h2>Comment Retraction</h2><p>Users can now retract their comments, which does not delete them, but will strike through the words while leaving them visible. (You can also un-retract a comment.)</p><p>This allows users to designate that they no longer endorse a past comment without deleting it entirely. We may implement more features related to retracted posts in the future (e.g. suppressing notifications for them, alerting users who replied to a comment that was later retracted).</p><h2>Question posts</h2><p>We have also released a new type of post: a Question post. </p><p>This allows users to pose questions, which can then be answered. 
Answers are shown below the question; there&#x27;s also a comment section for clarifying and interpreting the question, and for other thoughts which aren’t quite answers. You can see questions on <u><a href="https://forum.effectivealtruism.org/questions">this page</a></u> of the Forum, and they will also be posted to the Frontpage/Community sections as appropriate.</p><p>We think that this is an important feature for three reasons: First, this will give newer community members a place to receive high-quality answers to their questions. Second, those questions can encourage more knowledgeable members to write up content which is likely to be useful to the community. Finally, this can allow the community to make intellectual progress more reliably. The person who knows that a question is important is unlikely to be the best person to answer that question, and we hope this feature can match up people-with-questions to people-with-the-skill-to-answer-questions more reliably.</p><h2>Post page redesign</h2><p>In order to accommodate the question format, the Post page has been redesigned.</p><p>For more details on question posts and the table of contents feature, please see the <u><a href="https://www.lesswrong.com/posts/mrGeJ4Wt66PxN9RQh/lw-update-2018-12-06-table-of-contents-and-q-and-a">announcement post</a></u> on LessWrong. </p><h1>Moderation Updates</h1><h2>Cross-posting</h2><p>The Forum team has been in touch with lots of organizations and writers in the community, to ask for permission to cross-post their content. We encourage you to cross-post content from your blog or website, and to <u><a href="mailto:forum@effectivealtruism.org">get in touch</a></u> if you’d like us to do that for you. (We’ll post your content under your account, with formatting that works for the Forum — for example, by removing anchor links that only work in HTML.) We also encourage you to cross-post good EA-related content that you stumble across on the web. 
Cross-posting helps more people to find the best content and creates a space for moderated discussion.</p><h2>Personal blogs</h2><p>Until now, we have been moving all blog posts to the Frontpage or Community sections (see more on this distinction <u><a href="https://forum.effectivealtruism.org/posts/5TAwep4tohN7SGp3P/the-frontpage-community-distinction">here</a></u>). We are tentatively planning to relax this, and leave some types of post on people’s personal blogs. However, we are still likely to move the majority of posts to either Frontpage or Community.</p><p>Personal blogs are hosted on your user page (<u><a href="https://forum.effectivealtruism.org/users/">https://forum.effectivealtruism.org/users/[your username]</a></u>; for instance, <u><a href="https://forum.effectivealtruism.org/users/maxdalton">see here</a></u>). Other users can follow your blog if they wish, and they’ll see notifications when you post. Personal blog posts are also included in the “all posts” view of the Frontpage section.</p><p>We are more likely to move posts to Frontpage/Community if:</p><ul><li>They receive lots of upvotes, and a high ratio of upvotes to downvotes</li><li>They are of broader relevance to the community, pursuing more interesting and important questions</li><li>They are clearly written and engaging</li><li>The analysis is high-quality (though it might still be brief and/or incomplete)</li></ul><p>The reasoning for this change is:</p><ul><li><u><a href="https://forum.effectivealtruism.org/posts/vhKfHHnCYbgSNL3Ci/should-you-have-your-own-blog#febP2NdTT8PtBZbDq">Some users</a></u> have expressed that they would feel more comfortable posting to the Forum if some of their lower-quality/less broadly relevant content was kept on their personal blogs.
In general, we’d like to remove barriers to users posting content.</li><li>Some posts have met our Frontpage guidelines, but nevertheless not been posts that we would want to promote.</li><ul><li>For instance, there was a post (now deleted by the user) asking, hypothetically, why EA-type thinking couldn’t be applied to other areas of ethics, suggesting that violence might be justified for utilitarian reasons (without advocating for that violence). Whilst this post didn’t violate our guidelines for the Forum, and we think that it’s good that it was an opportunity for the poster to hear counterarguments, we don’t feel that this is the kind of content that we want to promote more broadly, or content that we want to endorse by actively moving it to Frontpage.</li></ul></ul><p>Our main worry about doing this is that it complicates the job of moderators, and introduces more judgement calls. We want to maintain the cause-neutral, community-driven atmosphere of the Forum, and we will try to do that in all of our decisions. We’re happy to talk about this more in the comments section.</p><h2>Call for new moderators</h2><p>Howie Lempel is stepping down from moderation duties in order to focus on his new full-time job. We’re very grateful for the work he’s been doing to moderate posts, and to shape the policies that we’re following.</p><p>This means that we’re looking for new moderators. This volunteer role takes 1-3 hours per week. Ideally, we’re looking for someone with a good knowledge of effective altruism, sound judgement, previous activity posting/commenting on the Forum, and some experience with building online or in-person communities. If you’re interested, please fill in this <u><a href="https://drive.google.com/forms/d/1qyaHCX924jRSDAU63ci6t5cQkfKSyF61TvVblaQ6TXc/edit">application form</a></u>. </p> aarongertler 7etEYiorToG9KXGEw 2018-12-20T22:02:46.459Z What's going on with the new Question feature? 
https://forum.effectivealtruism.org/posts/K3y8zNzMmkg8t5dbm/what-s-going-on-with-the-new-question-feature <p>I know it&#x27;s a new feature, but how does it work?</p> aarongertler K3y8zNzMmkg8t5dbm 2018-12-20T21:01:21.607Z EA Forum Prize: Winners for November 2018 https://forum.effectivealtruism.org/posts/k4SLFn74Nsbn4sbMA/ea-forum-prize-winners-for-november-2018 <p> </p><p>CEA is pleased to announce the winners of the November 2018 EA Forum Prize!</p><p>In first place (for a prize of $999*): stefan.torges, &quot;<u><a href="https://forum.effectivealtruism.org/posts/d3cupMrngEArCygNk/takeaways-from-eaf-s-hiring-round">Takeaways from EAF&#x27;s Hiring Round</a></u>&quot;.</p><p>In second place (for a prize of $500): Sanjay, &quot;<u><a href="https://forum.effectivealtruism.org/posts/RnmZ62kuuC8XzeTBq/why-we-have-over-rated-cool-earth">Why we have over-rated Cool Earth</a></u>&quot;.</p><p>In third place (for a prize of $250): AdamGleave, &quot;<u><a href="https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report">2017 Donor Lottery Report</a></u>&quot;.</p><p>*As it turns out, a prize of $1000 makes the accounting more difficult. Who knew?</p><p> </p><h2>What is the EA Forum Prize?</h2><p>Certain posts exemplify the kind of content we <u><a href="https://forum.effectivealtruism.org/about">most want to see</a></u> on the EA Forum. They are well-researched and well-organized; they care about <u><a href="https://ideas.ted.com/why-you-think-youre-right-even-when-youre-wrong/">informing readers, not just persuading them</a></u>.</p><p>The Prize is an incentive to create posts like this, but more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum&#x27;s users.</p><p>That said, the winning posts weren&#x27;t &quot;exclusively&quot; great. Our users published dozens of excellent posts in the month of November, and we had a hard time narrowing down to three winners. 
(There was even a three-way tie for third place this month, so we had to have a runoff vote!)</p><p> </p><h2>About the November winners</h2><p>While this wasn&#x27;t our express intent, November&#x27;s winners wound up representing an interesting cross-section of the ways the EA community creates content.</p><p><strong>&quot;<u><a href="https://forum.effectivealtruism.org/posts/d3cupMrngEArCygNk/takeaways-from-eaf-s-hiring-round">Takeaways from EAF&#x27;s Hiring Round</a></u>&quot;</strong> uses the experience of an established EA organization to draw lessons that could be useful to many other organizations and projects. The hiring process is documented so thoroughly that another person could follow it almost to the letter, from initial recruitment to a final decision. The author shares abundant data, and explains how EAF’s findings changed their own views on an important topic.</p><p><strong>&quot;<u><a href="https://forum.effectivealtruism.org/posts/RnmZ62kuuC8XzeTBq/why-we-have-over-rated-cool-earth">Why we have over-rated Cool Earth</a></u>&quot;</strong> is a classic example of independent EA research. The author consults public data, runs his own statistical analyses, and reaches out to a charity with direct questions, bringing light to a subject on which the EA community doesn&#x27;t have much knowledge or experience. 
He also offers alternative suggestions to fight climate change, all while providing enough numbers that any reader could double-check his work with their own assumptions.</p><p>To quote one comment on the post:</p><blockquote><em>This sort of evaluation, which has the potential to radically change the consensus view on a charity, seems significantly under-supplied in our community, even though individual instances are tractable for a lone individual to produce.</em></blockquote><p><strong>&quot;<u><a href="https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report">2017 Donor Lottery Report</a></u>&quot; </strong>is a different kind of research post, from an individual who briefly had resources comparable to an entire organization -- and used his fortunate position to collect information and share it with the community. He explains his philosophical background and search process to clarify the limits of his analysis, and shares the metrics he plans to use to evaluate his grants (which adds to the potential value of the post, since it opens the door for a follow-up post examining his results).</p><p> </p><p><strong>Qualities shared by all three winners:</strong></p><ul><li>Each post had a clear hierarchy of information, helping readers navigate the content and making discussion easier. Each author seems to have kept readers in mind as they wrote. This is crucial when posting on the Forum, since much of a post&#x27;s value relies on its being read, understood, and commented upon.</li><li>The authors didn&#x27;t overstate the strength of their data or analyses, but also weren&#x27;t afraid to make claims when they seemed to be warranted. 
We encourage Forum posts that prioritize information over opinion, but that doesn&#x27;t mean that informative posts need to <em>avoid</em> opinion: sometimes, findings point in the direction of an interesting conclusion.</li></ul><p> </p><h2>The voting process</h2><p>All posts made in the month of November, save for those made by CEA staff, qualified for voting.</p><p>Prizes were chosen by seven people. Four of them are the Forum&#x27;s moderators (<u><a href="https://forum.effectivealtruism.org/users/maxdalton">Max Dalton</a></u>, <u><a href="https://forum.effectivealtruism.org/users/HowieL">Howie Lempel</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>). The other three are the EA Forum users who had the most karma at the time the new Forum was launched (<u><a href="https://forum.effectivealtruism.org/users/peter_hurford">Peter Hurford</a></u>, <u><a href="https://forum.effectivealtruism.org/users/joey">Joey Savoie</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/robert_wiblin">Rob Wiblin</a></u>).</p><p>All voters abstained from voting for content written by themselves or by organizations they worked with. Otherwise, they used their own individual criteria for choosing posts, though they broadly agree with the goals outlined above.</p><p>Winners were chosen by an initial round of <u><a href="https://en.wikipedia.org/wiki/Approval_voting">approval voting</a></u>, followed by a runoff vote to resolve ties.</p><p> </p><h2>Next month</h2><p>The Prize will continue with rounds for December and January! After that, we’ll evaluate whether we plan to keep running it (or perhaps change it in some way).</p><p>We hope that the Forum’s many excellent November posts will provide inspiration for more great material in the coming months. 
</p><p> </p><h2>Feedback on the Prize</h2><p>We&#x27;d love to hear any feedback you have about the EA Forum Prize. Leave a comment or contact <a href="mailto:aaron@effectivealtruism.org">Aaron Gertler</a> with questions or suggestions.</p><p> </p> aarongertler k4SLFn74Nsbn4sbMA 2018-12-14T21:33:10.236Z Literature Review: Why Do People Give Money To Charity? https://forum.effectivealtruism.org/posts/gABGNBoSfvrkkqs9h/literature-review-why-do-people-give-money-to-charity <p><em>Notes: </em></p><ul><li><em>Cross-posting from <a href="https://aarongertler.net/thesis/">my blog </a>without much refinement. If you spot any non-typo errors, I will upvote you and correct the post on my website. </em></li><li><em>If you aren&#x27;t sure whether to read it, I&#x27;ll try to tip you over the edge by mentioning that someone at Charity Science did, and decided to add it to <a href="http://www.charityscience.com/outreach-research.html">their website</a>. </em></li><li><em>If you might be writing a thesis at some point in the future, consider <a href="http://effectivethesis.com/project/">picking a topic that could be helpful to the EA community</a>! And if you wrote an EA-ish thesis recently, consider writing a summary for the Forum! I&#x27;m really glad I wrote this summary; it helped ~300 hours of work not go to waste.</em></li></ul><p></p><p>In 2015, I wrote a senior thesis:</p><p><strong><a href="http://aarongertler.net/wp-content/uploads/2018/01/Aaron-Gertler-Senior-Thesis-full-bibliography-1.pdf">Charitable Fundraising and Smart Giving: How can charities use behavioral science to drive donations?</a></strong></p><p>It’s a very long paper, and you probably shouldn’t read the whole thing. I conducted my final round of editing over the course of 38 hours, during which I did not sleep. 
It’s kind of a slog.</p><p>Here’s a PDF of the five pages where I summarize everything I learned and make recommendations to charities:</p><p><strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Thesis-Conclusion-and-Advice.pdf">The Part of the Thesis You Should Actually Read</a></strong></p><p>In the rest of this post, I’ve explained my motivation for actually writing this thing, and squeezed my key findings into a pair of summaries: One that’s a hundred words long, one that’s quite a bit longer.</p><h1>Super Short Summary</h1><p>Americans only give about 2% of their income to charity, and most of that money goes to charities that don’t do an especially good job of helping people. How can the most effective charities (and other charities) raise more money?</p><p>There are many different techniques that have been shown to work well in multiple studies, but evidence on most techniques is still very mixed, and some popular techniques in the real world have no experimental evidence behind them. Charities really ought to run more experiments to figure out which techniques will work for them.</p><p>In the meantime, some general advice for all charities:</p><ul><li>Tell donors what their donation will accomplish (be specific!).</li><li>Tell stories about individual people you’ve helped.</li><li>Make donors feel like they’re on a winning team with lots of other cool donors, making real progress on an important problem.</li><li>Also, run experiments. 
I can’t emphasize that enough.</li></ul><h1>Regular Summary</h1><p>I began to study the nonprofit sector because I’m convinced that giving money to the <strong><a href="http://givewell.org/">right causes</a></strong> is one of the best ways for an average person to <strong><a href="http://effectivealtruism.org/">improve the world</a></strong>.</p><p>I’d seen a lot of studies on <strong><a href="http://amzn.to/1eTumAo">fundraising techniques</a></strong>, and on <strong><a href="http://www.amazon.com/gp/product/006124189X/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=006124189X&linkCode=as2&tag=aarongertlerw-20&linkId=PKZQKMTCA6OMR2FL">techniques for persuading people in general</a></strong>, but it wasn’t easy to find a lot of studies in one place, and it was especially tough to figure out whether any techniques <em>at all</em> had super-strong evidence behind them. It seemed like some were overvalued thanks to the results of a single study that <strong><a href="http://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/">wouldn’t necessarily generalize</a></strong> to most nonprofits.</p><p>So I did something foolish. I decided that <strong>my senior thesis would attempt to review every experimental study ever conducted on a charitable fundraising technique.</strong></p><p>To ensure that I was saying something original, I added a special section on techniques that would apply especially to “effective” charities: Those which could present strong evidence to donors that they were actually making the world a better place (and doing so more efficiently than most other charities).</p><h2>The Result</h2><p>This isn’t the best-written literature review of fundraising techniques, nor the most comprehensive. But it is probably the most comprehensive review of studies conducted specifically using participants who <em>actually gave money. 
</em></p><p>This is actually a major problem in the fundraising literature: <strong>About half the studies I found didn’t measure the impact of a technique on real donations.</strong> Instead, researchers measured how much money the participants claimed they would give if someone asked them, or whether they gave tokens to people playing an “economic game” with them, or whether they helped a research assistant clean up spilled coffee.</p><p>(To make an uncharitable comparison, it’s as though Stanley Milgram had conducted his famous <strong><a href="https://en.wikipedia.org/wiki/Milgram_experiment">obedience experiment</a></strong> by asking participants whether they would be willing to shock the person on the other side of the curtain if he asked nicely.)</p><p>I excluded any study that didn’t measure real monetary donations, unless it dealt in some way with evidence-based giving — very little has been written in that domain, so I had to be a bit less selective.</p><h2>Limitations</h2><p>Take everything I say with a grain of salt: I was an undergraduate when I wrote this, and I probably missed important points in some of these papers.</p><p>Almost every study here involves a single request for money, even though donor retention is more important for most charities than getting new donors who only give once. Including donor retention would have made this thesis almost impossible to write, but it’s still an important topic. (<strong><a href="https://web.archive.org/web/20160605153131/http://www.studyfundraising.com:80/about-us/professor-adrian-sargeant/">Adrian Sargeant</a></strong> has some great papers on building long-term relationships with donors.)</p><p>There’s not a lot of research on most of the techniques I covered, considering how popular they are. I found about five studies per technique, and many of those were methodologically flawed. 
Sample sizes and effect sizes varied drastically, and the sheer number of techniques meant that a meta-analysis wouldn’t have made sense.</p><p>For that matter, nearly everything about these studies varied drastically: The context in which a request was made, the relationship of the participants to the charity, the size of the charity, and so on. <strong>What I wound up with, in the end, were a few solid general rules and a lot of results hinting that certain approaches <em>might </em>be effective. </strong>Still, it’s better to have hints than to have nothing.</p><h1>The Actual Literature Review!</h1><p><em>Reminder: This is a very abridged summary of the paper. Citations available from the <a href="https://aarongertler.net/wp-content/uploads/2018/01/Aaron-Gertler-Senior-Thesis-full-bibliography-1.pdf">actual paper</a>.</em></p><h3>Introduction</h3><p>Charitable giving is probably a net positive, as far as social phenomena go. And even if it isn’t, the most efficient, data-driven forms of giving are certainly good. (This is my first thesis, so I’m defending even the most basic assertions.)</p><p>The latter form of giving, or “effective altruism”, clearly helps the recipients of donations, but it’s not entirely clear whether giving actually makes people happier. There’s a good chance that happy people give more, or that people claim to be happy after giving so experimenters will like them. But it’s also quite possible that giving money makes us happier than spending it, especially once we’ve spent a certain amount on ourselves.</p><p>But even though charitable giving is a very good thing, <strong>we don’t give very much, and the rate at which we give</strong> <strong><a href="https://philanthropy.com/article/The-Stubborn-2-Giving-Rate/154691">hasn’t really changed since 1970.</a></strong> For some reason, charities are struggling to convince people to give money away.</p><p>But science can help! 
This literature review aims to summarize research on the efficacy of various fundraising techniques — particularly those which could be useful to the most effective charities.</p><p>By the way, <strong><a href="http://opinionator.blogs.nytimes.com/2012/12/05/putting-charities-to-the-test/">some charities are more effective than others!</a></strong> (I&#x27;ll skip this bit for the EA Forum post, you&#x27;ve all heard it before.)</p><h3>Method</h3><p>I read hundreds of pages of Google Scholar results, even more pages in a few specialized databases, lots of books, and the reference sections of some truly epic literature reviews (which are linked at the end of this post).</p><p>Some of the techniques I reviewed could be used by just about any charity. Others should be especially useful for charities that have something in common with the most effective charities — that is, they help people in other countries, help lots of people, measure their results, etc.</p><p>With a few exceptions, I only reviewed studies where participants actually gave real money to charity, because most other ways of predicting giving behavior in the real world don’t seem very effective, and we really want to predict giving behavior! <strong>Prediction is the name of the game.</strong></p><p>I’m also not measuring religious gifts or gifts to colleges, because <a href="https://en.wikipedia.org/wiki/Reciprocity_(social_psychology)">“giving back”</a> to an institution that helped you isn’t quite the same as the giving I’d like to measure.</p><h3>Who Gives? Why?</h3><p>What motivates people to give money away? 
I’m not going to say “System One” and “System Two”, because that’s cliché, so instead I’ll say “warm giving” and “cool giving” to reflect the fact that giving is driven by a mix of “cool” motivations (an abstract desire to do good, careful calculation of your impact, strategic giving that will make people like you) and “warm” motivations (empathy toward the recipient, personal connections to the charity, a habit of giving a dollar to anyone who asks).</p><p>Yes, this is really just System One and System Two. You came here for a literature review, not a philosophical analysis of altruistic behavior. This section is lazy.</p><p>Anyway, who gives the most money away? Sometimes men give more than women, and sometimes the reverse is true. Older people give more until around the time they retire. Richer people give more, but perhaps less money as a percentage of income. Religious people might give more, but it’s really difficult to tell because 1/3 of all U.S. donations goes toward churches, which only spend a fraction of their income on traditional “charitable” activities.</p><h3>Fundraising Techniques that Probably Help</h3><p><strong>“Legitimizing paltry contributions”:</strong> Tell donors that “even a penny will help” (or something like that), and they’ll usually be more likely to give without giving much less. I have a bunch of theories about why this happens, but we have many more techniques to cover, so let’s move on!</p><p><strong><a href="https://en.wikipedia.org/wiki/Anchoring">Anchoring:</a></strong> Suggesting that donors give $20 tends to bring in more than suggesting they give $10, but a very high anchor scares donors away. Use experiments to figure out the optimal suggestion!</p><p><strong>Dialogue:</strong> Ask someone how they’re doing, wait for them to answer, and <em>then</em> ask for money. This is a much better idea than asking right away, but so far we’ve only seen it work in person, not over the phone. 
Bonus points if you mention having something in common with the donor!</p><p>(In my favorite “similarity” study, the experimenter lied about having the same birthday as the participant. I can’t believe an IRB <a href="http://slatestarcodex.com/2017/08/29/my-irb-nightmare/">let them get away with that</a>.)</p><p><strong>Publicity:</strong> When someone’s donations will be made public (or even seen by just a single experimenter), they tend to give more. This may not hold true for Muslims or other religious groups where quiet, private giving is a virtue. But that’s a minor exception hypothesized from a single study: Mostly, publicity is a good strategy in the context of these experiments.</p><p><strong>Photographs:</strong> Adding pictures to donation materials tends to make them more effective, though it’s unclear whether sad children are better than happy children. Especially sad or upsetting photos could backfire.</p><p><strong>Individuals:</strong> We really like helping individuals, possibly because it’s easier to empathize with one person than with a whole group of people. 
Rather than talking about the sheer scope of the problem your charity deals with, it’s generally best to talk about how a donation has helped, or could help, a single sympathetic person.</p><p>In fact, people will literally give more money to save the life of one child than to save the lives of eight, <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/KogutRitovIdentified.pdf">even when eight lives can be saved for the price of one!</a></strong></p><p>This is a troubling result, but one team of researchers may have discovered how to reverse it with something called <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Hsee_2013_unit_asking_fundraising_technique.pdf">“the unit asking effect”.</a></strong> (That paper might be my favorite in the entire thesis — check it out if you can.)</p><p><strong>Follow the Leader:</strong> Potential donors give more after they learn about the gifts of past donors, especially those who were very generous or who resembled the potential donor in some way. This also works if the potential donor sees another donation happen, or is told that the amount they donate will be known by another person (so that they have the chance to become “leaders” themselves).</p><p><strong>Matching donations:</strong> <strong><a href="http://www.benkuhn.net/matching">Ben Kuhn</a></strong> is better at statistics than I am, and his summary of the literature on matching is very rigorous. If you really care about donation-matching, you should read it.</p><p>My shorter summary: If you have some money lying around, you might be able to use it to increase donations by “matching” the gifts of future donors, so that people feel like they can do more good for the same “price” (as though your nonprofit were having a buy-one-get-one-free sale). Ben Kuhn points out that most of the research on matching is sketchy, but it’s no sketchier than the rest of the research on fundraising. 
Also, matching is “free”, since your charity gets the matching dollars either way, so you might as well experiment.</p><p><strong>Seed donations: </strong>Announce that you’d like to raise a set amount, then “seed” part of that amount so that “success” seems more likely. Donors like giving to specific campaigns that seem like they will meet their goals, and seed donations work about as well as matching in head-to-head experiments. On the other hand, if you have money you could use to seed a campaign or match donations, you could also try…</p><p><strong>Overhead coverage: </strong>When a charity announces that donors’ gifts will only cover “programs” (like giving mosquito nets to families) rather than “overhead” (like paying the salaries of <strong><a href="https://www.salsalabs.com/get-know-us/blog/so-you-want-to-hire-a-professional-fundraising-consultant">professional fundraisers</a></strong>), donors give quite a bit more. This phenomenon can be hacked if a charity uses leftover funds to “cover” its own overhead, or convinces one particular donor to cover <em>all </em>of the overhead so that most donors never have to think about it. </p><p>Donors seem to prefer charities with lower overhead even when the overhead is “covered”, but it’s unclear whether that’s true independent of donors’ fear that their own money will pay for overhead rather than programs.</p><p><strong><a href="http://acumen.org/blog/our-world/why-overhead-ratios-are-meaningless-for-kiva-and-acumen-fund/">Many nonprofits</a></strong> claim that <strong><a href="http://overheadmyth.com/faqs/">“overhead doesn’t matter”</a></strong>, because forcing charities not to spend on overhead keeps them from growing or innovating. This is partly true, though especially high overhead can be a warning sign that something weird is going on. Anyway, what really matters is how much good each dollar does, however the charity spends it. 
(Still, donors speak the language of overhead, so charities may have to do the same.)</p><h3>Other Fundraising Techniques</h3><p>This summary is long enough already, so I’ll skip talking about techniques that only work sporadically, or don’t seem to work at all.</p><p>With one exception: <strong>Offering gifts or prizes in exchange for donations works <em>very badly </em>in every study that tries to do it.</strong> This may not be the case for gifts “related” to the nonprofit (like a PBS tote bag), but telling people you’ll give them random chocolate if they donate is a terrible idea.</p><p>On the other hand, telling donors they’ll feel great after they give works pretty well, despite playing on the same selfish motivation. And giving people gifts <em>before </em>you ask them to give leads to amazing results (at least in the fourth study mentioned on page 20 of <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/BIT_Charitable_Giving_Paper.pdf">this paper</a></strong>).</p><h3>Really Obvious Helpful Techniques</h3><p>Simple, evidence-based things that all nonprofits should probably be doing:</p><p><strong>Talk about your beneficiaries a lot.</strong> Make them sound like nice, hardworking people who have a lot in common with the donor.</p><p><strong>Talk about the progress your organization has been making,</strong> not the enormous scope of the problem (or, at the very least, talk about both). People want to be on the winning team.</p><p><strong>Look good.</strong> Dress nicely. Be attractive and high-status (donors can be shallow, especially male donors talking to female fundraisers). It might even help to play catchy music and smell good, though that study has yet to be funded.</p><p><strong>If someone signs up for your mailing list, send them an email <em>right away,</em></strong> and ask them to donate soon after. As of 2014, some of the largest charities in the U.S. 
<strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Online-Fundraising-Scorecard.pdf">didn’t send a single email to new subscribers within 30 days</a></strong> — enough time for a potential donor to completely forget about them.</p><p><strong>Use simple, visual language.</strong> One <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Evangelidis_2013_fatalities_drive_aid-not-survivors.pdf">clever study</a></strong> took issue with the fact that newspapers tend to use the word “affected” to describe the people who survive natural disasters. Referring to these people as “homeless” (which is what “affected” really means in this context) substantially increases the amount donors are willing to give to them.</p><p>This isn’t surprising: <strong>I don’t know what an “affected” person looks like, but I can picture a “homeless” person without difficulty, and being able to imagine someone is an important step toward caring about them. Visual language is important.</strong></p><h3>Conclusion</h3><p>When we consider the size and sociological importance of the nonprofit sector, it becomes clear that we need more research on fundraising techniques!</p><p>Yes, like any person who researches a topic in great depth, I conclude that <strong><a href="https://instsci.org/supercut.html">more research is needed</a></strong>. On the other hand, I’m not going to grad school, so I’m not biased by the need to churn out more papers on things I already know about. You can trust me on this one.</p><p>There are a few topics I think would be especially neat to research in more depth, but I talk about those within the thesis. For the rest of this summary, all I’d like to say is that <strong>charities should be running more experiments, and publicizing their results.</strong></p><p><strong>Here’s why:</strong></p><p>One cool thing about the nonprofit sector is that it isn’t a zero-sum game. It’s true that charity money is limited. 
But if we somehow raise charitable spending from 2% to 3% of the U.S. GDP, the gains from that will dwarf the pain of charitable competition. And one of the ways charities <em>can </em>raise the national giving rate is to work together to figure out better fundraising techniques.</p><p>What would happen if charities with excellent websites — like Kiva or Acumen or charity: water — shared the results of their <strong><a href="https://vwo.com/ab-testing/">A/B testing</a></strong> with the rest of the nonprofit world?</p><p>What if the five largest charities in America pooled funds to hire a couple of full-time researchers who could run a dozen experimental replications of important studies over the next year, and begin to figure out which techniques <em>consistently</em> had <em>large </em>effects on charitable giving?</p><p>What if the hundred largest charities in America hired a couple of extra lobbyists to push for a U.S. version of <a href="https://www.gov.uk/donating-to-charity/gift-aid">Gift Aid</a>, which could push the giving rate from 2% to 2.5% within a couple of years?</p><p>I don’t know if any of this would help, but it seems like it would be worthwhile to try. Fundraising experiments are easy to run, and can even be profitable. Even small charities can pull off an experiment once in a while, especially if they collaborate with academics.</p><p>Many of the studies I examined found that some techniques can boost donations by 50% or more. Either these results don’t carry over to the real world, or charities can profit enormously from experimentation; I’d really like to know which one is true.</p><h3>Last Words</h3><p>It may be that no technique or set of techniques, however clever, is going to push the 2% giving rate to 3% or higher. If so, we’ll need to figure out other ways to do more good with our giving.</p><p>This is why I’m so excited about effective altruism: <em>And... skipping this, since y&#x27;all on the Forum have your own reasons to be excited. 
In retrospect, though, it&#x27;s funny that I went from writing this paper and speculating that I&#x27;d work for an EA charity someday to not even looking at open EA jobs, GiveWell excepted, for almost three years.</em></p><h2>Interesting Papers and Other Links</h2><p>As I discovered while writing this thesis, the science of fundraising isn’t very rigorous yet. The team at <strong><a href="http://www.charityscience.com/operations-details/there-is-no-good-science-on-fundraising">Charity Science</a></strong> explains why.</p><p>My favorite literature reviews on charitable giving, besides mine: <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Zagefka_2015_disaster_donation_insights.pdf">Zagefka &amp; James (2015)</a></strong> and <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Bekkers_2011_perfect_review.pdf">Bekkers &amp; Wiepking (2011)</a></strong> </p><p>The <strong><a href="http://www.behaviouralinsights.co.uk/publications">Behavioural Insights team</a></strong> designs very cool experiments, many of which use subconscious “nudges” to boost charitable giving.</p><p>One of the deepest threats to effective giving is “psychic numbing”: The more suffering we know about, the less likely we are to take an objective approach to dealing with it. When one person is in danger, we’ll make an enormous effort to save them; when hundreds of thousands are in danger, we often fall into despair and stop trying to help. <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Slovic_2007_psychic_numbing_genocide.pdf">Paul Slovic explains.</a></strong></p><p>In 2010, a group of nonprofit foundations published the <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Money20for20Good_Final.pdf">Money for Good</a></strong> study, which surveyed thousands of high-income donors in an attempt to figure out how people might be convinced to give more money, and give more to the “highest-performing nonprofits”. 
The results are fascinating, and a little sad: Only 35% of participants ever did research on their favorite charities, and only 10% of <em>those </em>people used neutral sources rather than the charities’ websites.</p><p>* * * * *</p><p>Also: I’d never have finished this thesis without the help of my wonderful advisor, <strong><a href="http://psychology.yale.edu/people/hedy-kober">Hedy Kober</a></strong>. I’d also like to thank <strong><a href="https://en.wikipedia.org/wiki/Dean_Karlan">Dean Karlan</a></strong>, my second reader, whose work helped inspire me to pursue this topic in the first place.</p> aarongertler gABGNBoSfvrkkqs9h 2018-11-21T04:09:30.271Z W-Risk and the Technological Wavefront (Nell Watson) https://forum.effectivealtruism.org/posts/HLiT2YbHBaLxqhNbY/w-risk-and-the-technological-wavefront-nell-watson <p>This is a linkpost for Nell Watson&#x27;s <a href="https://www.nellwatson.com/blog/technological-wavefront">&quot;The Technological Wavefront&quot;</a>.</p><p>Brief summary:</p><ul><li>Many ancient peoples made impressive discoveries (in some cases, better than what we have now) long before they discovered modern science.</li><li>Society generally becomes more advanced and complex over time as long as resources allow for this growth; this is the &quot;technological wavefront&quot;.</li><li>However, if we hit a resource bottleneck, the wave will break, and we will be forced to step back down the complexity ladder, losing access to some of our present technology.</li><li><strong>&quot;It is our momentum as a species that keeps the light of enlightenment burning steadily.&quot;</strong> If we lose momentum and &quot;step down&quot;, we may never recover the technology we lose, since much of our present knowledge exists either in memory or on media we won&#x27;t be able to access.<strong> This risk of permanent loss is W-risk (&quot;wavefront risk&quot;).</strong></li><li>&quot;The greatest <a 
href="https://en.wikipedia.org/wiki/Global_catastrophic_risk">existential risk</a> to the <em>meaningfulness and excellence </em>of the future of humanity may be something surprisingly benign, not to be experienced as a bang, but rather as a long drawn-out whimper.&quot;</li><li>W-risk seems more likely to the author than X-risk, so she recommends guarding against it by stockpiling documentation from multiple generations of tech and finding ways to rebuild our energy supply without much fossil fuel.</li></ul><p></p> aarongertler HLiT2YbHBaLxqhNbY 2018-11-11T23:22:24.712Z Welcome to the New Forum! https://forum.effectivealtruism.org/posts/h26Kx7uGfQfNewi7d/welcome-to-the-new-forum <p>Thanks for joining us as we launch the new EA Forum!</p><p>We&#x27;re thrilled to be sharing the Forum with you. We hope that it will become <a href="https://forum.effectivealtruism.org/posts/wrPacgwp3DsSJYbce/why-the-ea-forum">the online hub for figuring out how to do good</a>. </p><p><strong>We strongly recommend that you start by reading two posts, which <a href="https://forum.effectivealtruism.org/posts/wrPacgwp3DsSJYbce/why-the-ea-forum">set out the goals for the EA Forum</a>, and <a href="https://forum.effectivealtruism.org/posts/dMJ475rYzEaSvGDgP/what-s-changing-with-the-new-forum">explain what&#x27;s new</a>.</strong></p><p>The original EA Forum was built to foster intellectual progress and help the community coordinate. We&#x27;ve taken those ideas even further with the new version, adding <a href="https://forum.effectivealtruism.org/posts/dMJ475rYzEaSvGDgP/what-s-changing-with-the-new-forum">new features and moderation policies</a> to promote healthy discussion.</p><p>We hope you&#x27;ll explore everything the Forum has to offer! 
Let us know if there&#x27;s anything we can do to improve your experience; you can <a href="mailto:forum@effectivealtruism.org">email us</a> or use the blue speech box in the lower-right-hand corner.</p><p></p> aarongertler h26Kx7uGfQfNewi7d 2018-11-08T00:06:06.209Z What's Changing With the New Forum? https://forum.effectivealtruism.org/posts/dMJ475rYzEaSvGDgP/what-s-changing-with-the-new-forum <p> </p><p>This post is a guide to how the new Forum differs from the original. </p><p>Some of these topics -- for example, the new karma system and moderation standards -- are discussed in more detail in the Forum’s <u><a href="https://forum.effectivealtruism.org/posts/PoYi6fynNfHScf7qB/ea-forum-2-0-initial-announcement">initial announcement post.</a></u> The announcement is a few months old, so this guide adds up-to-date information on certain topics. </p><p>We also have a more general <u><a href="https://forum.effectivealtruism.org/about">guide to the new Forum</a></u>, which covers discussion norms and post creation. Some of the guide’s material is repeated in this post.</p><p>If you have questions or feedback about the forum, use the blue speech bubble in the lower-right-hand corner of the screen to get in touch!</p><p></p><h2>Categories</h2><p>Posts on the new Forum are split into two categories:</p><p><strong>Frontpage </strong>posts are timeless content covering the ideas of effective altruism. They&#x27;ll usually be posts that are useful or interesting to a wide range of readers, but they can also discuss more advanced ideas.</p><p><strong>Community </strong>posts include discussion of particular issues within the community, or updates from organizations. This content may not have ongoing relevance, but is useful for increasing coordination in the community in the short term, and discussing important community matters. 
</p><p>We’ve made this a separate category so that new users can learn about the ideas before they engage with the community, and so that people can select which types of content they want to engage with.</p><p></p><p>If your post is about applying EA methodology and perspectives to the world, it will be moved to Frontpage. It will go to Community if it is focused on the EA community. Keep in mind which section you’re writing for with each post. </p><p>If a post seems to fit both sections, it will be moved to Community by default, so that users around the world can discuss ideas on Frontpage without having to keep up-to-date on community issues.</p><p>You can view either category on its own page, or use the “All Posts” view to see everything. We may add more categories later, but these are the only active ones.</p><p></p><h2>Norms</h2><p>We’ve been talking with users about their experience engaging with the Forum, and have some suggestions for altered norms that will resolve some of the issues they raised.</p><p></p><p><strong>What sort of posts do we encourage?</strong></p><ul><li>We encourage <strong>summaries</strong> and <strong>explanations</strong>, and see them as the foundation of intellectual progress. Debate is important, but high-quality debate is difficult unless each side’s point of view has been clearly explained, with sources that support their claims. </li><li>We encourage <strong>original research</strong>. We hope that students, academics, and independent researchers will post their work on the Forum, even if it’s incomplete or unpublished. </li><li>We encourage<strong> unpolished and shorter-form posts</strong>. We’d rather hear an idea that’s presented imperfectly than not hear it at all. If you&#x27;re struggling to polish an idea, or a piece of writing, others in the community may be able to help -- but only if you share it! 
Our karma system pushes popular posts to the top of the page, so you don&#x27;t have to worry that your post will “crowd out” other content.</li><li>We encourage <strong>linkposts</strong>. You can contribute a lot to the Forum by sharing interesting material, whether or not you wrote it yourself. By sharing to the Forum, you make it easier for others to find the idea, and create a space to discuss it.</li></ul><p></p><p><strong>Discussion norms</strong></p><p>In the past, we’ve received feedback from some users who found posting on the Forum to be intimidating. Posts sometimes got a lot of criticism without many positive suggestions, which led to brief and unproductive discussion.</p><p>We want users to feel comfortable and secure about posting new content. To this end, we encourage the use of <u><a href="http://effective-altruism.com/ea/dy/supportive_scepticism_in_practice/">supportive skepticism</a></u>. It’s fine to criticize an idea, but it’s even better to support its strongest parts, do your best to patch the holes in it, and be kind when handing it back to its owner. The goal of the Forum isn’t to defeat bad ideas; it’s to find good ideas, even when they appear in the context of a flawed argument.</p><p>We also accept anonymity. Many users publish under their real names, but we’d rather you publish under a pseudonym than not publish at all.</p><p></p><p><strong>Moderation</strong></p><p>On the old EA Forum, moderators mostly focused on removing spam and offensive posts. We don’t want to have much stronger moderation, but we do want to be a little more active, mostly aiming to encourage the best users and maintain the norms we’ve set out above. </p><p>We will do this according to our <u><a href="https://forum.effectivealtruism.org/about">moderation guidelines</a></u>. Mostly, this will simply involve giving positive feedback to contributors, and we expect to use moderation powers (e.g. 
deleting comments) very rarely.</p><p></p><h2>Features</h2><p><strong>Reading and Commenting</strong></p><ul><li>When you view a list of posts, those you haven’t read (or that have new comments) are highlighted in blue.</li><li>When you view a list of posts, you’ll only see titles by default, but you can preview the content by mousing over a title and clicking “show highlight”.</li><li>You can click a user’s name to be taken to their profile; from there, you can see their past posts and comments, message them, and (new feature!) subscribe to their content (you’ll get a notification whenever they post).</li><li>You can turn your vote into a “strong vote”, which adds or subtracts more karma, by holding the vote button for an extra moment. Your upvotes and downvotes gain karma as you accumulate karma; see <u><a href="https://forum.effectivealtruism.org/posts/PoYi6fynNfHScf7qB/ea-forum-2-0-initial-announcement">this post</a></u> for detailed numbers, and <u><a href="https://www.lesswrong.com/posts/7Sx3CJXA7JHxY2yDG/strong-votes-update-deployed">this post</a></u> for suggestions on when to use strong votes.</li></ul><p></p><p><strong>Writing</strong></p><ul><li>The default post editor is a WYSIWYG (What You See Is What You Get), so posts will look the same on the Forum as they do in your editor. </li><li>You can use Ctrl-4 (Cmd-4 for Macs) to add LaTeX to a post; this is especially useful for formatting equations. We like <u><a href="https://en.wikibooks.org/wiki/LaTeX/Mathematics#Symbols">this guide to writing math in LaTeX</a></u>.</li><li>You can request automatic cross-posting from your personal blog to the EA Forum, as long as you write about EA-relevant topics. Please <u><a href="https://docs.google.com/forms/d/e/1FAIpQLSf0M-pbfwqKsRGWoojZ6i2KuCDTDtmlBQ5mF07W1Vj404yzew/viewform?usp=sf_link">fill in this form</a></u> if you would like us to do this for you.</li></ul><p></p><h2>Prizes</h2><p>CEA will fund monthly prizes for the best posts published on the EA Forum. 
(We will do this for 3 months, and will consider further funding based on results.)</p><p>The prize amounts are as follows:</p><ul><li><strong>First: </strong>$999</li><li><strong>Second: </strong>$500</li><li><strong>Third: </strong>$200</li></ul><p>The first contest covers any posts made in November.</p><p>The winning posts will be determined by a vote of the moderators (<u><a href="https://forum.effectivealtruism.org/users/maxdalton">Max Dalton</a></u>, <u><a href="https://forum.effectivealtruism.org/users/HowieL">Howie Lempel</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>) and the current top three Forum users (Peter Hurford, Joey Savoie, and Rob Wiblin). </p><p>The moderation team uses the email address <u><a href="mailto:forum@effectivealtruism.org">forum@effectivealtruism.org</a></u>; feel free to contact them with questions or feedback. </p><p></p> aarongertler dMJ475rYzEaSvGDgP 2018-11-07T23:09:57.464Z Book Review: Enlightenment Now, by Steven Pinker https://forum.effectivealtruism.org/posts/gQvaA9EbvmzQATHt3/book-review-enlightenment-now-by-steven-pinker <p>For most of history, it didn’t matter what century you lived in. With few exceptions, you would have suffered what we today consider “extreme poverty”:</p><ul><li>You’d spend your time hunting, gathering, or farming, using almost all your energy just to stay alive. Despite this effort, you’d eat the same food almost every day, and that food would barely be edible by modern standards. 
</li><li>Your only defenses against illness would be herbs, bed rest, or surgery performed with primitive tools and zero anesthesia.</li><li>You’d sleep when the sun went down -- <a href="https://lucept.com/2014/11/04/william-nordhaus-the-historic-cost-of-light/">light was expensive</a>.</li><li>And you’d probably die before the age of 60.</li></ul><p>But a few hundred years ago, things began to change. The world’s wealth exploded...</p><hr class="dividerBlock"/><span><figure><img src="http://aarongertler.net/wp-content/uploads/2018/10/Enlightenment-Now-GDP.png" class="draft-image center" style="" /></figure></span><p></p><p><strong>Source: </strong><em>Our World in Data, </em>Roser 2016, based on data from the World Bank and from Maddison Project 2014. </p><hr class="dividerBlock"/><p>...which gave us access to medicine, supermarkets, lightbulbs, and all sorts of other good things. Steven Pinker attributes this to the Enlightenment, an intellectual movement he breaks into four “themes”:</p><p><strong>Reason: </strong>Reason is our attempt to understand the world using evidence and logic, and to test our beliefs so that they evolve towards truth. During the Enlightenment, the spread of literacy and scholarship helped reason compete with its predecessors: “Faith, dogma, revelation, authority, [and] charisma.”</p><p><strong>Science: </strong>Science is the process of applying reason to understand the natural world. We’ve recently transitioned from near-universal superstition to an era when many people have a basic understanding of science. Millions of people work as <em>professional </em>scientists who expose new truths, or engineers who apply those truths to create wonders. 
Pinker sums up one of the greatest triumphs of science in two words: <u><a href="https://en.wikipedia.org/wiki/Smallpox#Eradication">“Smallpox was.”</a></u></p><p><strong>Humanism: </strong>The Enlightenment created a new system of morality: one which “privileges the well-being of individual men, women, and children over the glory of the tribe, race, nation, or religion.” This humanism has taught us to tolerate and care for each other to an <u><a href="https://press.princeton.edu/titles/9434.html">ever-greater degree</a></u>. In the process, war, slavery, and capital punishment have withered to husks of their former selves.</p><p><strong>Progress: </strong>In Pinker’s view, the Romantics of the 19th century (and the despots of the 20th) believed in twisting people to fit their ideals. But Enlightenment thinkers preferred twisting their ideals to fit people -- they tried to build a world more suitable for humans. In universities, governments, and markets, they created norms, laws, and machines that made our lives better in a thousand different ways. The Romantics sought “utopia”, but Pinker sees the goal of Enlightenment as “protopia”: we may not perfect the world, but we can always improve it.</p><hr class="dividerBlock"/><p>Though he discusses and defends the first three themes, Pinker’s main focus is progress, which he implies is driven by a virtuous cycle of increasing wealth, knowledge, and tolerance:</p><ul><li>New discoveries produce wealth, which can be used to fund more discoveries.</li><li>Some discoveries help us communicate globally, increasing our tolerance of “strangers” who no longer seem strange.</li><li>Wealth also makes us more tolerant. Nations with ample resources can afford social welfare programs and even the provision of aid to strangers in other nations.</li><li>Tolerance helps us produce wealth by trading, and gives us access to the ideas and discoveries of other people. 
(You get the idea.)</li></ul><hr class="dividerBlock"/><p>In a steady progression of strikingly similar graphs -- lines moving up for good things, down for bad -- Pinker shows that in the last few centuries, we finally escaped from stagnation. Human life has gotten better in almost every way, from a twenty-fold rise in average income since 1800 to a 50% reduction in young children killed by disease <em>since 2000</em>. </p><p>There are too many statistics to summarize, but some are especially surprising:</p><ul><li>Lethal lightning strikes in the U.S. are down 97% since 1900. In fact, there’s been a sharp decline in deaths from falls, fires, workplace injuries, and most other “accidents”. Our longer lives are due partly to medicine, but also to laws, regulations, and norms which promote safe behavior. </li><ul><li>Note: Pinker often focuses on the U.S., though historical trends are broadly similar for other developed countries (and many that are still developing).</li></ul><li>Deaths from natural disasters have also fallen drastically. Our wealth and knowledge give us innumerable small ways to defend ourselves (better hospitals, tougher structures, better early-warning systems, etc.).</li><ul><li>For an example of this, see Patrick McKenzie’s essay in “Further Reading”, at the end of this review.</li></ul><li>The average American has hundreds of hours of extra leisure time each year, compared to the early 20th century. This increase was driven both by shorter workweeks and by refrigerators, washing machines, and other appliances. Since 1900, we’ve cut weekly housework time in <em>half</em>.</li><li>Thanks to this extra time, American parents -- both mothers and fathers -- spend more time with their children than they did a century ago.</li></ul><p>Pinker holds that these improvements, while often grudgingly acknowledged, aren’t taken seriously enough by the modern counter-Enlightenment. Populist politicians attack every pillar of our present-day prosperity. 
Thinkers on the left and right criticize the “complacency” of modern society. And the media skips boring good news to promote negative stories.</p><p>Proposing a solution to these issues would require an additional book. Pinker mostly lets the numbers make his arguments for him, though he also addresses a few common counterarguments and pokes holes in his opponents’ logic. (When they even use logic, that is: one reviewer refers to Pinker’s numbers on violence reduction as <u><a href="https://www.theguardian.com/books/2015/mar/13/john-gray-steven-pinker-wrong-violence-war-declining">“amulets” and “sorcery”</a></u>).</p><hr class="dividerBlock"/><h2>Commentary</h2><p>Pinker is a stylish, entertaining writer whose book tells a number of important truths. His main claim -- that the world is getting better -- generally seems to be correct, and he backs up his best points with blistering prose. </p><p>But the claim isn’t universally true. And when the facts aren’t fully on his side, Pinker can descend into strawmanning and dodgy figures to justify his grand thesis.</p><p>One of the weakest chapters in the Progress section deals with existential risk -- which seems highly relevant, since even centuries of progress could be undone by a disaster of sufficient magnitude. As he tries to persuade us that we live in the best of times, Pinker undersells two problems that could endanger civilization: nuclear war and the development of artificial general intelligence. </p><p>On the nuclear side:</p><ul><li>He makes irrelevant points about the number of books using the words “nuclear war” and the political establishment’s current lack of interest in nuclear issues. (I don’t trust the political establishment to prioritize important problems, and I suspect that Pinker doesn’t, either.) 
</li><li>He also notes that “if we can reduce the annual chance of nuclear war to a tenth of a percent, the world’s odds of a catastrophe-free century are 90 percent”, but never acknowledges that a 10-percent chance of nuclear war is still uncomfortably high. </li><li>Finally, he points out the decline in nuclear danger since the end of the Cold War, but declines to mention new conflicts that could arise in the future; this is understandable, since he isn’t a military expert, but I’d have liked to see more evidence that our current low-risk state is stable. </li></ul><p>Still, he offers sensible proposals for reducing nuclear risk, and at least admits that the issue is worthy of attention. I left the chapter worrying slightly less about nuclear annihilation than I had before.</p><p>His discussion of artificial intelligence, on the other hand, felt perfunctory, as though he didn’t think the issue worthy of his full attention:</p><ul><li>As he did <u><a href="https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/">years ago</a></u>, he continues to state that the AI safety community fears intelligent systems that are malevolent or omniscient. But the expert consensus is more subtle and realistic. Safety researchers generally believe that a powerful AI doesn’t have to be evil or all-knowing to be <u><a href="https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity#Importance">dangerous</a></u>. It just has to be capable of pursuing goals that endanger humans with enough intelligence to accomplish those goals. </li><li>He declines to engage with intellectual arguments about central topics like <u><a href="https://wiki.lesswrong.com/wiki/Orthogonality_thesis">orthogonality</a></u> or the <u><a href="https://en.wikipedia.org/wiki/AI_control_problem">control problem</a></u>. 
Instead, he cites <em>2001: A Space Odyssey</em> (as well as <em>Get Smart</em>, a shlocky comedy from the Sixties) as a counterpoint to Nick Bostrom’s <u><em><a href="https://www.amazon.com/dp/B00LOOCGB2/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1">Superintelligence</a></em></u>. One of his few expert quotes is an out-of-context line from Stuart Russell, whose views on the topic are nearly opposite Pinker’s. </li><li>In general, throughout the section, he selects weak points (some of which I’ve never seen argued by anyone in the community) and attacks their obvious flaws. In many other chapters, he takes time to find strong opposing arguments and make data-driven counterpoints; by comparison, the pages on AI feel rushed.</li></ul><p>Writers with relevant expertise (<u><a href="https://www.scottaaronson.com/blog/?p=3654">Scott Aaronson</a></u>, <u><a href="https://www.lesswrong.com/posts/C3wqYsAtgZDzCug8b/a-detailed-critique-of-one-section-of-steven-pinker-s">Phil Torres</a></u>) have contested Pinker’s points at length. I will add only that, given Pinker’s belief that humans have achieved incredible power and wealth through the use of reason and cooperation, it seems odd that he thinks AI will never be similarly capable. (Especially when so many people stand to make money by building smart, flexible systems that work well together.)</p><hr class="dividerBlock"/><p>Even when Pinker writes about present progress instead of future problems, some of the same problems emerge. George Monbiot’s <u><a href="https://www.theguardian.com/commentisfree/2018/mar/07/environmental-calamity-facts-steven-pinker">deep dive on the environmental chapter</a></u> found sketchy data and further out-of-context quotes. And while the numbers I spot-checked myself were accurate, some of them still had an odd spin. For example, Pinker argues that the true U.S. poverty rate has dropped sharply because today’s poor Americans can afford to buy more than poor Americans in past eras. 
This is true and important, but skirts other aspects of poverty -- feelings of inferiority, harassment by police, a lack of self-determination -- that haven’t necessarily changed for the better.</p><p>That said, most of his statistics are solid and well-selected, and the data-heavy sections are by far the strongest. The book begins to flag when Pinker turns away from numbers and toward his critics; he’s not particularly charitable in the book’s more argumentative sections, rarely yielding to a single opposing point.</p><p>The arguments also suffer from a simple lack of space. His critique of religion is shallow by necessity, since he can spare it only a few pages; the same goes for his critique of Romanticism, his critique of leftist academics, and so on. These sections read like newspaper op-eds; they’re fine, but they don’t give Pinker time to exert his full strength as an academic.</p><p>I almost wish he’d turned the social criticism into a separate book. I’d prefer a version of <em>Enlightenment Now</em> that focused entirely on material and social progress, with complaints about Donald Trump replaced by deeper explanations of counterintuitive statistics.</p><hr class="dividerBlock"/><p>If I had to summarize all my complaints, I’d say that Pinker tends to over-argue his conclusion. Is <em>everything </em>really getting better? Are<em> all</em> risks truly decreasing? Is there really <em>nothing </em>of value in the Romantics and Postmoderns who followed the Enlightenment? </p><p>A few other points of note:</p><ul><li>Pinker makes a solid attempt to answer the troubling question at the heart of Yuval Harari’s <em><u><a href="https://www.amazon.com/dp/B00ICN066A/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1">Sapiens</a></u></em>: “For all of our progress, are we actually <em>happier</em>?” He finds some evidence that rising wealth has made most of us more satisfied with our lives. 
And while he avoids eras of the deeper past (knowing how the ancient Romans really felt is beyond us), he points out that our ancestors also suffered from boredom and ennui and a lack of time spent with family, all of which I’ve heard cited as issues specific to us moderns.</li><li>Animal welfare goes untouched, as it did in Pinker’s <em>The Better Angels of Our Nature, </em>which correctly noted a decline in human violence... against humans. It’s understandable that Pinker wants to focus on a single species, but graphs about the number of farm animals, many of whom live terrible lives, don’t look nearly as rosy:</li></ul><hr class="dividerBlock"/><span><figure><img src="http://aarongertler.net/wp-content/uploads/2018/10/Enlightenment-Now-meat-production.png" class="draft-image center" style="" /></figure></span><p></p><p><strong>Source: </strong>The Food and Agricultural Organization of the United Nations.</p><hr class="dividerBlock"/><ul><li>Pinker never tries to prove that economic growth will continue in the face of (theorized) <u><a href="https://en.wikipedia.org/wiki/The_Great_Stagnation">technological stagnation</a></u>, the aging of the developed world’s population, and the <u><a href="https://aarongertler.net/wp-content/uploads/2018/08/Are-Ideas-Getting-Harder-To-Find.pdf">ever-increasing cost of research</a></u>. This is harder to explain than the lack of animal welfare; economic decline could be just as dangerous as a nuclear exchange to Pinker’s “protopia”.</li><li>Along the same lines, while Pinker praises the modern regulatory system, he barely mentions the costs. Regulation certainly saves a lot of lives, but it can also <u><a href="https://ij.org/issues/economic-liberty/braiding/">become excessive</a></u> and <u><a href="https://www.mercatus.org/publication/cumulative-cost-regulations">slow down economic growth</a></u>. 
Like health and wealth, regulation tends to increase over time; unlike health and wealth, there is such a thing as <em>too much </em>regulation. </li></ul><hr class="dividerBlock"/><span><figure><img src="http://aarongertler.net/wp-content/uploads/2018/10/Enlightenment-Now-regulations.png" class="draft-image center" style="" /></figure></span><p></p><p><strong>Source: </strong>George Washington University Regulatory Studies Center.</p><hr class="dividerBlock"/><ul><li>Meanwhile, Pinker attributes the rise of <em>scientific</em> regulation, in the form of overly cautious ethics boards and bioethicists who slow medical progress, to the “stigmatization of science”. But another factor seems to be at work: have we not simply gotten carried away with our enlightened love of safety? Again, some of what Pinker defines as “anti-Enlightenment” just looks like an overabundance of progress.</li></ul><p>One last observation: <em>Enlightenment Now </em>has a lot more “now” than “Enlightenment”. As <u><a href="https://www.goodreads.com/review/show/2297689799?book_show_action=true&from_review_page=1">other reviewers</a></u> have noted, the book is light on intellectual history. Pinker gives a brief tour of names and ideas, but barely mentions how those ideas developed over the centuries, or how the Enlightenment’s philosophy influenced the Scientific and Industrial Revolutions. (Did we need Voltaire and Mill to get steam engines and assembly lines?) His most important points still hold without this material, but I wish he’d done more to connect his four “themes”.</p><p>In the end, I strongly endorse half of <em>Enlightenment Now</em>, tread with caution around a quarter, and would prefer the last quarter to have been published somewhere else. But the good material is often great, and Pinker’s occasional missteps shouldn’t obscure the beauty and joy of the facts he presents, which remain underrated. 
I’m glad we have him as a counterpoint to most of the media.</p><hr class="dividerBlock"/><h2>Who should read this book?</h2><ul><li>Pessimists who don’t think the world is getting better and want lots of counterarguments.</li><li>Optimists who like happy little graphs.</li><li>People of any outlook who want a brief tour of the last two centuries from a materialist perspective, with lots of citations for following up.</li></ul><hr class="dividerBlock"/><h2>Who shouldn’t read this book?</h2><ul><li>People who fundamentally distrust materialist perspectives.</li><li>People who prefer a few deep arguments to many surface-level arguments.</li><li>People who are familiar with this genre and don’t feel the need to remind themselves of all the ways things have gotten better.</li></ul><hr class="dividerBlock"/><h2>What questions does this book raise for the EA reader?</h2><p>Here are a few that were on my mind after I finished. Your questions might be entirely different; Pinker offers a lot to think about.</p><ul><li>Given the massive historical gains driven by economic growth, might it be worth putting more EA effort into research on growth and development?</li><li>If people really are more satisfied with their lives today than they were prior to the Industrial Revolution, how much of that satisfaction was dependent on material progress? Are there ways to capture similar life-satisfaction gains without an attendant order-of-magnitude increase in GDP?</li><li>How many of the lesser-known improvements cited by Pinker might help us think about new cause areas? </li><ul><li>For example, is there some form of technological progress that could, like the washing machine, save people multiple hours of tedium each week, and that EA could help bring into being? 
(Off-the-cuff example: Pushing forward full legal acceptance of self-driving cars by a few years might save billions of hours and many traffic deaths.)</li><li>Or are there particular safety regulations that could massively cut down on some obscure cause of death like industrial accidents, and might be relatively easy to push forward? What could we learn from an Open Phil <u><a href="https://www.openphilanthropy.org/research/history-of-philanthropy">“history of regulation”</a></u> case study?</li></ul></ul><hr class="dividerBlock"/><h2>Favorite Quotes</h2><ul><li>“Our greatest enemies are ultimately not our political adversaries but entropy, evolution (in the form of pestilence and the flaws in human nature), and most of all ignorance—a shortfall of knowledge of how best to solve our problems.”</li><li>“Bad things can happen quickly, but good things aren’t built in a day [...] if a newspaper came out once every fifty years, it would not report half a century of celebrity gossip and political scandals. It would report momentous global changes such as the increase in life expectancy.”</li><li>“Time spent on laundry alone fell from 11.5 hours a week in 1920 to 1.5 in 2014. For returning “washday” to our lives, Hans Rosling suggests, the washing machine deserves to be called the greatest invention of the Industrial Revolution.”</li><li>“In 1919, an average American wage earner had to work 1,800 hours to pay for a refrigerator; in 2014, he or she had to work fewer than 24 hours (and the new fridge was frost-free and came with an icemaker). Mindless consumerism? 
Not when you remember that food, clothing, and shelter are the three necessities of life, that entropy degrades all three, and that the time it takes to keep them usable is time that could be devoted to other pursuits.”</li><li>“On April 12, 1955, a team of scientists announced that Jonas Salk’s vaccine against polio—the disease that had killed thousands a year, paralyzed Franklin Roosevelt, and sent many children into iron lungs—was proven safe. According to Richard Carter’s history of the discovery, on that day, ‘people observed moments of silence, rang bells, honked horns, blew factory whistles, fired salutes, . . . took the rest of the day off, closed their schools or convoked fervid assemblies therein, drank toasts, hugged children, attended church, smiled at strangers, and forgave enemies.’”</li></ul><hr class="dividerBlock"/><h2>Further Reading</h2><ul><li><em>Civilization and Capitalism</em>, by Fernand Braudel, explores human material progress in meticulous detail. (Braudel spends as much time discussing improvements in bread quality as Pinker does improvements in GDP.) 
The full book is <u><a href="https://archive.org/stream/BraudelFernandCivilizationAndCapitalism/Braudel%2C%20Fernand%20-%20Civilization%20and%20Capitalism%2C%20Vol.%201#page/n27/mode/2up">free online</a></u>, but you should start with <u><a href="https://www.reddit.com/r/slatestarcodex/comments/8bypq0/reading_notes_civilization_capitalism_15th18th/">this excellent summary</a></u>.</li><li>MIT professor Scott Aaronson’s <u><a href="https://www.scottaaronson.com/blog/?p=3654">positive and pessimistic review</a></u> of <em>Enlightenment Now </em>(also linked above) includes a detailed critique of Pinker’s views on artificial intelligence.</li><li>Tyler Cowen, economist and champion book-reader, wrote a <u><a href="https://marginalrevolution.com/marginalrevolution/2018/02/enlightenment-now-new-steven-pinker-book.html">brief and thoughtful review </a></u>of <em>Enlightenment Now</em>.</li><li>Nathan J. Robinson offers a <u><a href="https://www.currentaffairs.org/2018/02/why-equality-is-indispensable">detailed rebuttal</a></u> of Pinker’s defense of inequality. 
(The rebuttal has its own flaws, of course, because <u><a href="https://quoteinvestigator.com/2015/06/15/complicated/">everything is more complicated than it seems</a></u>).</li><li>Patrick McKenzie produced a <u><a href="https://www.kalzumeus.com/2011/03/13/some-perspective-on-the-japan-earthquake/">stirring, detailed essay</a></u> about the effectiveness of modern disaster response (in the specific context of the 2011 Japanese earthquake).</li><li>One form of progress Pinker didn’t mention: The proportion of Wikipedia articles that meet a set of exacting quality standards has been <u><a href="https://en.wikipedia.org/wiki/Wikipedia:Good_article_statistics">steadily increasing</a></u> for years.</li><li>Many more forms of progress Pinker didn’t mention: <u><a href="https://www.gwern.net/Notes#ordinary-life-improvements">Gwern</a></u> lists the ways life has improved in the last three decades (the coffee has gotten better, for example).</li><li><u><a href="https://ourworldindata.org/wrong-about-the-world">Our World In Data</a></u> displays a set of surveys which show that most people are pessimistic about global development -- save for those in countries where the most development is happening, like China and Kenya. </li></ul> aarongertler gQvaA9EbvmzQATHt3 2018-10-21T23:12:43.485Z On Becoming World-Class https://forum.effectivealtruism.org/posts/4WwcNSGd3XcpBC72Y/on-becoming-world-class <p> In September, Bob Mueller posed an <a href="https://www.facebook.com/groups/437177563005273?view=permalink&id=1917381734984841">interesting question</a> on Facebook:</p><p><em>How does &quot;becoming (one of) the best in the world&quot; in some not-in-any-way effective field or niche compare to traditional EA careers?</em></p><p>Bob, who practices a niche, unspecified form of digital art, described himself as &quot;quite motivated about a lot of things&quot;. 
He didn&#x27;t seem perturbed by the prospect of switching to some more &quot;traditional&quot; career.</p><p>This made me wonder: If a person really does have equal access to both of these options -- possibly world-class at an unusual profession, or moderately talented at an EA-aligned profession -- and would be equally happy and satisfied in both places, how should they make the choice?</p><p>There are many skilled people in the community, so I suspect that Bob isn&#x27;t the only person who will end up thinking about this. I hope that my thoughts -- and those of the commenters! -- might come in handy.</p><hr class="dividerBlock"/><p><strong>Existing thoughts on this topic:</strong></p><p>80,000 Hours&#x27; latest career article notes that one high-impact career path involves <a href="https://80000hours.org/articles/high-impact-careers/#4-apply-an-unusual-strength-to-a-needed-niche">&quot;applying an unusual strength to a needed niche&quot;</a>.</p><p>Anthropologists, for example, helped the global health community contain Ebola by delivering critical information on local burial practices. And there are plenty of more common examples along these lines: Every organization needs someone who understands accounting, and someone who knows how to design a website.</p><p>That said, the EA community may be able to hire professionals in these fields as contractors, without needing to find accountants or designers who know anything about our particular beliefs. And while we need solid and competent people for these tasks, we can probably get by without anyone “world-class”. </p><p>Anyway, in Bob’s case, his work seems even less likely than anthropology to hold direct relevance for EA organizations. (As far as I can tell -- Bob, if you’re reading this, you’re welcome to provide more details in the comments!) 
So the 80,000 Hours advice isn’t too applicable for his situation, and the situation of other EAs with “not-in-any-way effective” talents.</p><hr class="dividerBlock"/><p>Given all of this, what value could someone like Bob bring to EA if he pursued a career in digital art, and rose to the top of his field?</p><p>Some Facebook commenters seemed a bit skeptical of the idea, with good reason:</p><ul><li>The top people in most fields, if those fields aren&#x27;t naturally practical or lucrative, probably won’t become wealthy or famous.</li><li>Even in fields where being at the top <em>does </em>mean fame or fortune, becoming a &quot;top person&quot; is a risky proposition. If Bob overestimates his skill, he may flounder somewhere below the peak of a <a href="http://createquity.com/2013/10/artists-not-alone-in-steep-climb-to-the-top/">winner-take-all market</a>.</li><li>Artistic skills in particular may not be very transferable. Someone who tries to become a top programmer in an obscure language could have more success falling back to a &quot;normal&quot; career in software than someone whose art goes out of style would have becoming a &quot;normal&quot; graphic designer. (Designers make a lot less money than programmers, and there aren&#x27;t nearly as many of them.)</li></ul><p>Every case is different, but these arguments do form a reasonable case that people like Bob should stick with a more standard EA-aligned career path.</p><hr class="dividerBlock"/><p>However, I suspect that there are serious potential upsides to becoming world-class even in an “irrelevant” field -- but that they&#x27;re harder to see or imagine than the downsides. 
Making art is risky, but when it&#x27;s done by someone with strong EA values, a lot of good possibilities open up.</p><p>For the rest of this piece, I&#x27;ll play Devil&#x27;s Advocate against the standard view, and consider what an EA who becomes a world-class artist -- or world-class in some other profession -- could do for the community.</p><hr class="dividerBlock"/><p><strong>1. Being world-class creates connections. </strong>Bob&#x27;s art has already been used by a classical music festival. That didn&#x27;t create any direct opportunity to advocate for EA, but let&#x27;s say his animations catch on -- and Daft Punk asks him to help out with their next tour. Suddenly, every EA who knows Bob is two degrees of separation from Daft Punk, and three degrees from much of the music industry.</p><p>&quot;But it&#x27;s not like Daft Punk is going to have an extended philosophical discussion with Bob, right? Why does this help?&quot;</p><p>This is true. But a personal connection, however brief, is a powerful thing, and can create opportunities well into the future.</p><p>If Thomas Bangalter decides to donate his electro-funk fortune to charity in a few years, and hits up his address book or reaches out on Twitter to ask for advice -- can we call this <a href="https://twitter.com/jeffbezos/status/875418348598603776?lang=en">&quot;pulling a Bezos&quot;</a>? -- he&#x27;ll be likely to notice a suggestion from his old friend Bob, who helped him out on his last tour.</p><p>And wow! Who knew that Bob was so into charity, or that he was personally acquainted with so many Oxford professors? Looks like Thomas was in luck; time to send a few emails...</p><p>In a more general sense, I&#x27;ve seen a lot of Facebook posts over the years from EAs whose wealthy friends wanted their advice on charitable donations. 
Any profession (choreographer, personal chef, stunt double) which exposes you to people with a lot of wealth and status -- even if you aren&#x27;t rich or famous yourself -- seems like it could have the same effect.</p><p>(Worth noting: Many professions with no obvious connection to wealth or status could <em>become </em>connected if you get good enough. <u><a href="https://hackernoon.com/im-32-and-spent-200k-on-biohacking-became-calmer-thinner-extroverted-healthier-happier-2a2e846ae113">Dr. Peter Attia</a></u> specializes in nutrition science, a rather neglected corner of medicine, but his work has led to his appearing on <u><a href="https://tim.blog/2014/12/18/peter-attia/">one of the world’s most popular podcasts</a></u> and serving as a personal health consultant for <u><a href="https://hackernoon.com/im-32-and-spent-200k-on-biohacking-became-calmer-thinner-extroverted-healthier-happier-2a2e846ae113">successful tech entrepreneurs</a></u>.)</p><hr class="dividerBlock"/><p><strong>2. Being world-class creates validity. </strong>I once worked for a recruiting agency that helped tech startups find programmers. But good programmers are inundated with recruiting messages, and even our most elite, experienced agents had low response rates when they reached out.</p><p>Once, we were in a desperate spot, and asked the CEO of a company we worked with to write the message themselves. His message, though it was dashed-off and totally un-optimized, performed at least twice as well as ours.</p><p>Being a CEO clearly helps you get attention, but I think it helped that this particular CEO, who was a programmer with a stacked LinkedIn page, had <em>validity</em>. He used technical jargon that we recruiters couldn&#x27;t use without sounding fake. 
He clearly had something in common with the people he wrote to, and could, by implication, understand their concerns in a way we couldn&#x27;t match.</p><p>Most professional communities contain a lot of people who would become interested in EA if they heard the right message -- which could mean hearing from the right <em>person</em>. If we want to convince digital artists to think about EA (say, if we want to see more art with EA themes), it will help to have digital artists in our community, even if they&#x27;re only &quot;well-known&quot; to other artists.</p><p>Same goes for advertisers and authors; lawyers and landlords; podcasters and pro gamers. My impression from working in a few different fields is that professional networks between &quot;elite&quot; members are very tight; even one or two people with EA leanings could have a surprising amount of direct influence.</p><p>(On &quot;connections&quot; vs. &quot;validity&quot;: The first relates to a world-class person meeting people in other fields, the second to meeting people in the same field.)</p><hr class="dividerBlock"/><p><strong>3. Being world-class creates diversity. </strong>Alice, a banker who read some articles about effective altruism and wanted to learn more, signs up for EA Global 2020.</p><p>The timeline splits, and two versions of Alice attend two different conferences:</p><p>a. At EA Global A, Alice talks on the first night to ten different people. Four of them are software developers. Two are philosophers. One is an economics PhD student. One studies some sort of esoteric computer science field that she doesn&#x27;t really understand. Two are undergraduates, who are respectively studying software development and biology, because something called &quot;pandemic risk&quot; is apparently a major concern. It&#x27;s all a bit overwhelming, and very abstract, and it sets the tone for the rest of her conference.</p><p>b. At EA Global B, Alice talks on the first night to ten different people. 
Most of them work on computer science or philosophy, but one is a professional poker player, which is very cool and not at all what she was expecting. And then there&#x27;s Bob, the digital artist, who helped out with Daft Punk&#x27;s last tour and shows her some <em>epic</em> animations on his phone. Alice goes back to her hotel feeling like the community is vibrant and diverse, has a few more interesting encounters over the course of the conference, and winds up reaching out to her local group when she gets home.</p><p>A community where almost everyone does the same few things might still thrive; there are many more ways to be diverse than just &quot;career&quot;. But having a wider range of professions has some advantages:</p><ul><li>We have a wider presence throughout our communities, such that more people are likely to run into an EA at some point in their lives.</li><li>We present a more balanced picture to a world that likes to stereotype us as a haven for tech nerds, weird philosophers, and no one else.</li><li>Within EA, we learn more from each other, and develop an internal understanding of more fields. An accountant hired from a random firm can keep the books for an EA organization, but they won’t write a post on the EA Forum about basic accounting principles that even small altruistic projects can use to save money and time.</li><ul><li>This post doesn&#x27;t actually exist, as far as I know, but if we had a few more EAs with backgrounds in accounting, perhaps it would!</li></ul></ul><p>(The &quot;career diversity&quot; consideration doesn&#x27;t just apply to people at the top of their fields. If you do something that isn&#x27;t common within the community, at even a reasonably skilled level, we may have a lot to learn from you!)</p><hr class="dividerBlock"/><p>None of these considerations apply universally. 
It may be the case that, between the risk of overestimating one&#x27;s skill and the difficulty of getting to the top of <em>any </em>profession, almost all EAs would be better off pursuing career paths with clear direct impact.</p><p>But it still seems important for our community to recognize and support someone who has a realistic chance of becoming, say, a <a href="https://www.npr.org/sections/deceptivecadence/2018/10/04/654327199/macarthur-fellow-matthew-aucoin-talks-composing-and-donating-his-genius-money">famous EA composer</a>. Or an EA bronze medalist in curling. Or the first EA in the U.S. House of Representatives...</p><p>...or even something <em>really </em>weird like &quot;the EA-aligned person who wrote the most popular piece of Harry Potter fanfiction ever&quot;. We can&#x27;t ignore the risk, but we also shouldn&#x27;t ignore the opportunity. </p><hr class="dividerBlock"/><p><strong>Questions for the comments: </strong>Have you ever tried to become world-class at something? Or had the chance to try, but opted to take another path? What were your results? Are you happy with the choice you made?</p> aarongertler 4WwcNSGd3XcpBC72Y 2018-10-19T01:35:18.898Z EA Concepts: Share Impressions Before Credences https://forum.effectivealtruism.org/posts/jhexFncC9KN76Z5ki/ea-concepts-share-impressions-before-credences <p>Hello, readers! I&#x27;m trialing for the Content position at CEA; as such, I&#x27;ve been asked to draft a couple of posts for the <a href="https://concepts.effectivealtruism.org/concepts/">concept map</a>. These are meant to be close to the current style (no links in body text, fairly concise). </p><p>I&#x27;d love to hear your feedback on this post. Specific questions:</p><p><strong>1. </strong>What are your favorite words for &quot;beliefs before updating on outside information&quot; and &quot;beliefs after updating on outside information&quot;? 
We&#x27;re trying to draw that distinction with &quot;impression&quot; and &quot;credence&quot;, but those may not be the best options.</p><p><strong>2.</strong> When you imagine this from the perspective of a reader who is newish to EA and has clicked on a link to read about the importance of &quot;sharing your impressions&quot;, does it make sense? Is it clear why this concept is useful?</p><p><strong>3.</strong> Are there any other links we should add to &quot;further reading&quot;? (In particular, I think that a link to the Soviet example of &quot;everyone hates the government but is afraid to say so&quot; might be relevant, but I couldn&#x27;t find a good article summarizing the example.)</p><p>Thanks for your help! The other concept draft is <a href="https://forum.effectivealtruism.org/posts/hNJsTFLWLFbHh8uke/ea-concepts-inside-view-outside-view">here</a>.</p><hr class="dividerBlock"/><p><strong>Share Impressions Before Credences</strong></p><p>When we think through a question by ourselves, we form an “impression” of the answer, based on the way we interpret our experiences. (Even if you experience something that others have also experienced, what you take away from that is unique to you.)</p><p>When we discuss a question with other people, we may revise our “impression” into a “credence” by updating on their views. But this can introduce bias into a discussion. If we update before speaking, then share our updated credences rather than our impressions, our conversation partners partly hear their own views reflected back to them, making them update less than they should.</p><p>Consider two friends, Aaron and Max, who are equally good weather forecasters. Aaron has the impression that there is a 60% chance of rain tomorrow. He tells Max about this. Max’s own impression was that there was an 80% chance of rain tomorrow, but he updates on Aaron’s words to reach a credence of 70%. </p><p>Aaron then asks Max for his view. 
Max tells him he thinks there’s a 70% chance of rain, so Aaron updates to reach a credence of 65%. Both friends used the same decision algorithm (average both probabilities), but because Max’s 70% already “reflected” Aaron’s impression, Aaron partly updated on his own view and stopped short of Max. Had Max shared his original 80% impression instead, both friends would have converged on a credence of 70%. </p><p>This dynamic explains why it can be important to share your initial impressions in group discussions, even if they no longer reflect your up-to-date credences. Doing so helps all participants obtain as much information as possible from each participant’s private experience.</p><hr class="dividerBlock"/><p><strong>Further Reading:</strong></p><p>Kawamura, Kohei, and Vasileios Vlaseros. 31 July 2014. <a href="http://homepages.econ.ed.ac.uk/~kawamura/Expert_and_Majority_4.5.pdf">“Expert Information and Majority Decisions”</a>. </p> aarongertler jhexFncC9KN76Z5ki 2018-09-18T22:47:13.721Z EA Concepts: Inside View, Outside View https://forum.effectivealtruism.org/posts/hNJsTFLWLFbHh8uke/ea-concepts-inside-view-outside-view <p>Hello, readers! I&#x27;m trialing for the Content position at CEA; as such, I&#x27;ve been asked to draft a couple of posts for the <a href="https://concepts.effectivealtruism.org/concepts/">concept map</a>.</p><p>These are meant to be close to the current style (no links in body text, fairly concise). I&#x27;d love to hear your feedback on this post. Specific questions:</p><ol><li>Do you prefer &quot;we&quot; or &quot;you&quot; as the pronoun of choice?</li><li>Are there any other links we should add to &quot;further reading&quot;?</li><li>When you imagine this from the perspective of a reader who is newish to EA and has clicked on a link to read about something called &quot;outside view&quot;, does it make sense? Is it clear why this dichotomy could be useful?</li></ol><p>Thanks for your help! 
The other concept draft is <a href="https://forum.effectivealtruism.org/posts/jhexFncC9KN76Z5ki/ea-concepts-share-impressions-before-credences">here</a>.</p><hr class="dividerBlock"/><p><strong>Forecasting with the Inside and Outside View</strong></p><p>When we make predictions, we often imagine the most subjectively “likely” way that something could happen, and then judge whether this scenario seems reasonable. This is called “inside view” thinking.</p><p>For example, if we want to predict whether we’ll get to work on time tomorrow, we might plan out a morning schedule, then judge whether we think we’ll be able to follow it.</p><p>However, another method -- &quot;outside view thinking&quot; -- tends to be more reliable. In the case of getting to work, we&#x27;re more likely to make an accurate prediction if we simply think about all the other mornings we’ve gone to work, and estimate how often we were late.</p><p>(This is most helpful if tomorrow is in the same “reference class” as those other days, without unusual factors -- like an important meeting or a blizzard -- that increase or decrease the odds of lateness.)</p><p>The outside view typically works better because it “automatically” includes data about unpredictable circumstances. If we’re late to work, it could be for many different reasons: oversleeping, missing the bus, forgetting our keys, etc. Calculating the odds for each <em>individual</em> reason would be very difficult, but looking at past workdays lets us avoid doing so: the overall lateness rate sums every cause into a single number. If we are late <em>for</em> <em>any reason</em> half the time, and today is a typical day, we probably have about half a chance of being late, <em>for any reason</em>, tomorrow.</p><p>If you ever find yourself trying to forecast something in a common “reference class” (that is, you have information about a group of similar things), try using the outside view. The history of past events is often more reliable than your own judgment.</p><hr class="dividerBlock"/><p><strong>Further Reading:</strong></p><p>Hanson, Robin. 2017. 
<a href="http://www.overcomingbias.com/2007/07/beware-the-insi.html">Beware the Inside View.</a></p><p>Wikipedia. 2018. <a href="https://en.wikipedia.org/wiki/Reference_class_forecasting">Reference Class Forecasting.</a></p> aarongertler hNJsTFLWLFbHh8uke 2018-09-18T22:33:08.618Z Talking About Effective Altruism At Parties https://forum.effectivealtruism.org/posts/vwDjfnJ9656fAQvsC/talking-about-effective-altruism-at-parties <p>(Cross-posted from my <a href="http://aarongertler.net/party-talk-ea/"><strong>blog</strong></a>, with a few edits.)</p> <p>Many of the effective altruists I&apos;ve known were first introduced to EA through some kind of interpersonal connection -- a friend got interested first, or they heard about it at a college event, or something along those lines.</p> <p>I&apos;ve introduced a few people to EA this way myself. But it&apos;s a tricky thing to get right -- as we all know, many EA ideas can sound very strange at first, especially if the explanation is just a few words off.</p> <p>So I&apos;m listing some of the best ways I&apos;ve found to explain EA quickly, plus a few additional ideas. My goal is to explore many different frames for EA, so that we can find the best ways to explain it to many different types of people. The goal of any one of these frames is to open a short conversation, or create enough interest that someone will click on a link you send them later.</p> <p>(Note: No frame is perfect for all conversations, and some may be terrible if used in the wrong circumstances.&#xA0;Be careful.)</p> <p>&#xA0;</p> <p><strong>After you read this, please add your favorite frame(s) in the comments! 
(If you have any.)&#xA0;</strong>I&apos;d love to use this page to start collecting lots of examples, so we can collectively figure out the best-sounding frames, even if we&apos;re a long way from an RCT of EA frames.</p> <p>&#xA0;</p> <h1>List of Frames</h1> <h2>Excited Altruism</h2> <p>Some people see charity largely as a way to avoid moral guilt. I think that&apos;s a fair interpretation, but when I give, most of what I feel is excitement! I may never get the chance to save a child from a burning building [<strong><a href="http://amahighlights.com/william-macaskill/">source for the example</a></strong>], but I can still make a child&apos;s life much better, and maybe even help to save a child who would otherwise have died a preventable death. Why not be excited about that?&#xA0;I&apos;m also <strong><a href="http://blog.givewell.org/2013/08/20/excited-altruism/">excited to live in a time</a></strong> when we&apos;ve started to have really good evidence&#xA0;around how to help people on the other side of the world, so that I can be really efficient in the way that I give. When I give, I feel much the same as when I volunteer -- glad that I&apos;ve done something positive, and hopeful about the results. Hence, &quot;excited altruism&quot;. &#xA0;</p> <p>&#xA0;</p> <h3>The Feeling of Relief</h3> <p>&quot;Has there ever been a time when you&#xA0;started to get sick, and you knew it was going to be bad? And you had a moment of &apos;oh,&#xA0;no, please, anything but...&apos;?</p> <p>&quot;When I think about the people who are helped by groups that fight malaria and parasitic worms, I think about those moments.</p> <p>&quot;I&apos;ve had those &apos;oh no&apos; moments a few times, but that usually meant a really bad fever or a case of strep throat -- something that would go away in a few days when I took the right pills. 
And meanwhile, I&apos;d be more or less okay -- I could call in sick to work, get homework from my professor, and&#xA0;catch up on life when I got better.</p> <p>&quot;But if I lived&#xA0;in a village where malaria is very common, I&#xA0;might have a higher-than-50-percent chance of getting it during the rainy season. So when I woke up and felt sick, my &apos;oh no&apos; moment might&#xA0;mean several weeks spent in bed. During this time, I wouldn&apos;t&#xA0;be fit to farm,&#xA0;and&#xA0;I&apos;d lose quite a bit of money as a result -- meaning&#xA0;I might be skipping meals later. If it were a parasitic worm infection, the symptoms wouldn&apos;t be the same, but the general principle holds true; I&apos;d be in bad shape.</p> <p>&quot;When I give money to buy deworming pills or a malaria bednet, I imagine someone who would be having one of those &apos;oh no&apos; moments instead being perfectly healthy and not having their life disrupted. I imagine how good I&apos;d feel if someone stopped one of my strep infections before it happened. It feels awesome,&#xA0;and that makes me excited to give someone a chance at one fewer &apos;oh no&apos; moment.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Global Inequality (Money)</h3> <p>(For use in conversations where people claim to favor local giving.)</p> <p>&quot;I&apos;ll grant that inequality causes problems in the United States. But U.S. inequality is minor compared to inequality on the planet Earth.</p> <p>&quot;Did you know that Earth has a higher Gini coefficient than any single country? A lot of us are part of the global &apos;99 percent&apos;, so to speak.</p> <p>&quot;The average American CEO makes about 200 times as much as me. And I make about 200 times as much as some of the poorest people in the world.</p> <p>&quot;One big project of effective altruism is to reduce global inequality. By letting more migrants into the U.S. 
(where they&apos;ll send money back to their families), by cutting down on illicit cash flows (when rich people in poor countries don&apos;t pay taxes and hide money abroad), and also by literally asking rich people to give cash to poor people. That seems to work pretty well.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Global Inequality (Attention)</h3> <p>(For use in conversations where people claim to favor local giving.)</p> <p>&quot;There are a lot of Americans we tend to ignore. The homeless, American Indians, ex-felons... plus&#xA0;a lot of other groups. And that&apos;s terrible.</p> <p>&quot;But I think that we ignore people in some other countries to an equal&#xA0;extent. That happens with charitable giving, too. For every dollar an American donates, we send about <strong><a href="https://www.charitynavigator.org/index.cfm?bay=content.view&amp;cpid=42">six cents</a></strong> to other countries.</p> <p>&quot;Intellectually, I understand the philosophical position behind giving locally. But on a personal level, I find it really hard to see nationality as a thing that should guide me, apart from any other factors. Maybe direct friendship, or shared membership in a small group, but not nationality.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Doubling Income</h3> <p>&quot;Major social problems in the U.S. are generally harder to solve with money than major social problems in the developing world.&#xA0;Still, there are obviously ways you could spend money to change the life of a fellow American. And that would often be a kind, world-improving thing to do.</p> <p>&quot;But money goes so much further abroad that it took me a long time to understand exactly how big the difference is.&#xA0;</p> <p>&quot;GiveDirectly lets me send money straight to someone in Kenya. If I give them about $700, they can use that money to double a family&apos;s income for the entire year. Can you imagine what the impact would be if you doubled someone&apos;s income in the U.S. 
for a year?</p> <p>&quot;But that would be about 20 times as expensive in the U.S. And the impact would be similar either way.&#xA0;$700 transforms a poor Kenyan&apos;s life in about the same way you&apos;d expect $14,000 to transform a poor American&apos;s life.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Shared Humanity</h3> <p>&quot;Does the cold, calculating part of EA mean that I lose some of my empathy?</p> <p>&quot;Well, maybe. It&apos;s definitely hard for me to feel empathy for someone who survives&#xA0;on a couple of dollars a day. It would feel arrogant to claim that I &apos;understand&apos; that person. Our lives are different in almost every way.</p> <p>&quot;But even if I&apos;ll never know what it&apos;s like to eat the same thing for almost every meal, I think there are basic human pleasures that I share with pretty much every other human who ever lived.</p> <p>&quot;I know what it&apos;s like to learn something new. I know what it&apos;s like to see an old friend after a long separation. I know what it&apos;s like&#xA0;to sit inside when the weather is bad and hear the raindrops on the roof and think &apos;yep, I&apos;m glad I&apos;m not outside right now&apos;. I know what it&apos;s like to fall in love with someone and wake up smiling just because that person&#xA0;<em>exists.</em></p> <p>&quot;So when I need an emotional boost, I imagine the person I&apos;m helping. And the way that, even though we are just about as different as two humans can be, we still share those awesome things. 
And I&apos;m hopefully freeing that person up to not feel as much stress, and to have more time to feel the kinds of happiness I think we&#xA0;<em>do&#xA0;</em>share.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Ridiculously Lucky</h3> <p>I keep my version of this frame on <strong><a href="http://aarongertler.net/donations/">the page where I track my giving.</a></strong> I might also whip out my phone and quote my idol,&#xA0;<strong><a href="http://theunitofcaring.tumblr.com">The Unit of Caring:</a></strong></p> <blockquote>I firmly refuse to feel guilty about my outrageous cosmic luck. I find it far more satisfying to pay it forward. See, luck, like pretty much everything else, can be bought with money [...] I was born by sheer chance into a country that has eradicated malaria already but I can buy a couple bednets towards the project of stamping it off this earth entirely [...] Almost every advantage I have, everyone ought to have, and giving them money is the closest I can come to putting a finger on the cosmic scales.</blockquote> <p>&#xA0;</p> <h3>Revenge</h3> <p>&quot;I have a hard time getting angry at people. I usually feel like people&apos;s reasons for doing bad things made sense to them at the time, and whenever I get mad at them I remember times people were mad at me for doing bad things, and then I feel kind of sick.</p> <p>&quot;But getting angry is really satisfying. So instead I get angry at problems. I&apos;m angry at meteors that have the sheer <em>nerve</em>&#xA0;to get within a billion miles of Earth. I&apos;m angry at mosquitoes, because they&apos;re always biting&#xA0;people. I&apos;m angry at the social systems, built on purpose or by accident, that ruin lives with no remorse, because they are abstract concepts that can&apos;t even <em>feel </em>remorse.</p> <p>&quot;EA is my way of saying &apos;screw you, problems!&apos; You want to keep people in jail? I&apos;ll bail them out. You want to make people sick? I&apos;ll&#xA0;<em>murder&#xA0;</em>you. 
You want to threaten my planet? I&apos;ll wipe the very&#xA0;<em>possibility&#xA0;</em>of you from existence.&#xA0;And I&apos;ll do it with the cold, brutal efficiency of an executioner.&quot;</p> <p>&#xA0;</p> <h3>Social Justice</h3> <p>&quot;I&apos;m generally a fan of the modern social justice movement. I think they&apos;ve done more good than harm, and will end up doing much more good than harm in the long run.</p> <p>&quot;But like any movement, they wind up focusing on some people more than others, to avoid being stretched really thin. I think EA does a good job of catching some of the groups&#xA0;SJ sometimes doesn&apos;t catch.</p> <p>&quot;A focus on local movements and protests means we don&apos;t always catch people who aren&apos;t from our country, and that&apos;s one of EA&apos;s major focuses. And while SJ has a lot of vegetarians and vegans, I haven&apos;t seen a lot of animal-rights rhetoric; that&apos;s another major EA focus.</p> <p>&quot;When I think &apos;all lives matter&apos;, I&apos;m not going for a counterpoint to &apos;black lives matter&apos;. I&apos;m going for &apos;don&apos;t forget the lives of people who live outside the classic U.S. race/gender/class spectrum!&apos; Even if &apos;racism&apos; manifests very differently in India or Nigeria or Myanmar, poverty and the lack of education still cause very familiar problems there.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Being Embarrassed in the Future</h3> <p>&quot;<strong><a href="http://paulgraham.com/say.html">Paul Graham</a></strong> wrote a great essay on &apos;what we can&apos;t say now&apos;. It&apos;s about how the world might change in the future.</p> <p>&quot;We can look back at every other era and find problems with how they lived. Segregation, slavery, wars of conquest... there&apos;s always something.</p> <p>&quot;What will our great-grandchildren be embarrassed about when they look at the year 2016? Probably the &apos;rights issues&apos; we&apos;re still struggling with. 
But I think they&apos;ll also be confused and angry about how many people we hung out to dry because they weren&apos;t in the same country, or because of generic arguments about how &apos;aid doesn&apos;t help&apos; or &apos;start helping in your own backyard&apos;.</p> <p>&quot;This extends to some uncomfortable opinions, too. Like the idea that some causes, or specific charities, are simply a waste of time and money. I wonder if our descendants will look&#xA0;at the amount we give to museums or symphony orchestras, and just be completely confused as to why so many people were dying for lack of really cheap medicine in other parts of the world.</p> <p>&quot;Basically, I&apos;m assuming my descendants will be smarter than I am, and I try&#xA0;to donate&#xA0;in part so that my giving will make sense to them.&quot;</p> aarongertler vwDjfnJ9656fAQvsC 2017-11-16T20:22:46.114Z Meetup : Yale Effective Altruists https://forum.effectivealtruism.org/posts/D32ujb3MbKS6DTCCZ/meetup-yale-effective-altruists <h2>Discussion article for the meetup : <a href="/meetups/s">Yale Effective Altruists</a></h2> <div> <p> <strong>WHEN:</strong> <span>11 October 2014 10:59:03PM (-0400)</span><br> </p> <p> <strong>WHERE:</strong> <span>New Haven, CT</span> </p> </div> <div> <div><p>Come hang out with Yale&apos;s student EA group! Contact 302-824-2026 or aaron.gertler@yale.edu for more information on location and details.</p></div> </div> aarongertler D32ujb3MbKS6DTCCZ 2014-10-07T02:59:35.605Z