aarongertler feed - EA Forum Reader aarongertler’s posts and comments on the Effective Altruism Forum en-us Comment by aarongertler on Reducing EA job search waste https://forum.effectivealtruism.org/posts/6Dqb8Fkh2AnhzbAAM/reducing-ea-job-search-waste#AKvFtEMoySG2TmRcr <p>There are serious legal risks to giving feedback of any kind, let alone feedback that is neither &quot;constructive&quot; nor &quot;easy to understand&quot;. I found <a href="https://www.amazon.com/Excuse-Factory-Walter-Olson/dp/1416576231">this book on U.S. employment law</a> to be an accessible introduction to legal restrictions around hiring with good citations (though it is written in an alarmist, of-the-moment tone).</p><p>We might hope that candidates with an EA mindset wouldn&#x27;t sue after getting feedback, but not all candidates will have strong EA ties, and even people with strong EA ties sometimes do surprising things.</p><p>Other difficulties with feedback include:</p><ul><li>Making it harder to implement work tests in the future (Open Phil tells me I didn&#x27;t do X on their test, so I do it next time and tell my friends to do it next time and everyone&#x27;s natural ability is now a bit murkier)</li><li>Creating arguments with disgruntled candidates (&quot;that&#x27;s not enough justification for not hiring me, I&#x27;m going to send you nasty emails now&quot;; &quot;you told me I didn&#x27;t have X, but I actually do and accidentally left it out of my resume, you&#x27;d better hire me now&quot;)</li><li>Creating a sense of bias/favoritism (person A is a really strong candidate on the cusp of getting hired and gets detailed feedback; person B is a really weak candidate and would be much less useful to provide with feedback; person B hears that person A got feedback and is angry)</li></ul><p></p><p>Personally, I love feedback, and I appreciate Ben West of Ought for giving the best feedback of any org I applied to in my last round of job-hunting, but I can understand why 
organizations often don&#x27;t give out very much.</p> aarongertler AKvFtEMoySG2TmRcr 2019-04-17T22:57:16.691Z Comment by aarongertler on EA Forum Prize: Winners for February 2019 https://forum.effectivealtruism.org/posts/b3oTGiMpEKY4MnAEB/ea-forum-prize-winners-for-february-2019#5kz6o2dTwy3DF67Ck <p>Do you think of this as an argument against the existence of the Prize? Do you like the Prize, but think we should have a different voting system?</p> aarongertler 5kz6o2dTwy3DF67Ck 2019-04-17T06:43:30.067Z Comment by aarongertler on Who is working on finding "Cause X"? https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#aj6SCJfrr2jcuwJdk <p><a href="https://blog.givewell.org/2019/02/07/how-givewells-research-is-evolving/">GiveWell</a> is searching for cost-competitive causes in many different areas (see the &quot;investigating opportunities&quot; table).</p> aarongertler aj6SCJfrr2jcuwJdk 2019-04-12T10:18:13.065Z Comment by aarongertler on EA Forum Prize: Winners for February 2019 https://forum.effectivealtruism.org/posts/b3oTGiMpEKY4MnAEB/ea-forum-prize-winners-for-february-2019#QYyLiYEjaAu5S5WNk <p>All of the topics you discussed are indeed useful, and posts about them are eligible for the Prize. The only ineligible posts are those written by voters or those that come from CEA. </p><p>I hope that &quot;being in the 97% of posts that don&#x27;t win a prize this time around&quot; isn&#x27;t a major disincentive, and I think it would be disingenuous to specifically favor critical pieces in the voting process. 
For now, we&#x27;re doing a really hands-off process with no formal guidelines for voters, which has led to a mix of research and meta posts winning the prize, some of which contained direct criticism of EA organizations and charity recommendations.</p> aarongertler QYyLiYEjaAu5S5WNk 2019-04-09T18:21:34.332Z Comment by aarongertler on Activism to Make Kidney Sales Legal https://forum.effectivealtruism.org/posts/Gh5vrW8ctw8QLcoXZ/activism-to-make-kidney-sales-legal#gwLcoviiPxBELn2YZ <p>I upvoted this post, because I thought the question was at least interesting enough to think about, and I liked that you didn&#x27;t come in with a confident declaration of the idea&#x27;s quality.</p><p>However, in my experience, posts that propose some major, novel form of action without any cost-effectiveness estimates (and without much other background) don&#x27;t tend to do well on the Forum. I personally don&#x27;t mind &quot;questioning&quot; posts where only the bare bones of an idea are brought forward, but I think the following would have helped with engagement:</p><ul><li>Some background on the impact of other protracted legal battles fought with the goal of changing a law: how often does this sort of thing actually work? What factors typically separate successful from unsuccessful battles?</li><li>An estimate of how much this &quot;battle&quot; might cost, and of how valuable it would be to pass a law legalizing kidney sales (with consideration of positives and negatives). Even if X is a plausible strategy for accomplishing Y, we need to know the magnitude of Y&#x27;s impact before deciding whether thinking about X is worthwhile. 
(For more on factors behind the impact of an activity, see <a href="https://80000hours.org/articles/problem-framework/">this 80,000 Hours post</a> on the &quot;scale/neglectedness/solvability&quot; framework.)</li></ul><p>My guess is that, while some people are concerned about the PR implications of the idea, many others have a view along the lines of: </p><p>&quot;Okay, this is one of a thousand things that might be better than AMF under some model or other. One thousand things is too many to evaluate, so posts like this aren&#x27;t very helpful unless they have information that lets us actually get a sense for the <em>likelihood </em>that this is better than AMF.&quot;</p> aarongertler gwLcoviiPxBELn2YZ 2019-04-09T10:12:30.910Z Comment by aarongertler on Can my filmmaking/songwriting skills be used more effectively in EA? https://forum.effectivealtruism.org/posts/3Sb8B7wBKpYiHzviy/can-my-filmmaking-songwriting-skills-be-used-more#vtniZEBfBHPwaEcYn <p>Thank you very much for writing this! It&#x27;s really inspiring to see people with a wide range of talents who want to help.</p><p>I&#x27;ll respond with my reactions and thoughts as I read:</p><p>1. We aren&#x27;t really all that smart, and most of us are quite friendly; I realize that just <em>telling </em>you this is of limited use, but I hope you feel less nervous over time :-)</p><p>2. Thanks for donating so much already; that&#x27;s more than most people ever get around to, and giving is at the heart of effective altruism. You&#x27;ve done a lot of good so far.</p><p>3. The work you&#x27;re doing is clearly high-quality enough that it matches or exceeds production values for most EA-related video content (there&#x27;s very little musical content to which I could compare your work).</p><p>4. Some EA organizations are very interested in sharing information about EA with the greater public in a very accessible way. 
Others still want new people to find EA, but are wary of doing anything that might look like &quot;mass marketing&quot;, for a few reasons:</p><ul><li>The more accessible an idea becomes, the more &quot;fidelity&quot; tends to be lost. (This is the point you mentioned; things get skewed through a lack of nuance.)</li><li>Even media that accurately reflect the core principles of EA may not go over well if they reach the wrong audience. Some magazines have written articles about EA that got a lot of the basics right, but put a very negative spin on them; we&#x27;d probably have preferred if the authors of those articles never heard about EA.</li><li>Similarly, even people who understand EA very well may come into the community with other goals. If a popular left-wing politician were to endorse EA to all their supporters, we might see a flood of new Forum users (great!), but also a flood of posts advocating that money be given to left-wing causes (not great) and possibly a flood of people sharing Facebook posts (or quotes with journalists) about the ways in which socialism is obviously the <em>real </em>effective altruism. They may understand EA quite well, but care less about digging into it than about using it to buffer their favorite positions. (I already see a lot of this on Twitter, from all over the political spectrum.)</li></ul><p>5. Given this, what can you do? I don&#x27;t immediately have any snappy video ideas, but other organizations might (The Life You Can Save and GiveDirectly come to mind as orgs that have done a lot of creative marketing). If you reach out to different EA charities with a link to this post, there&#x27;s a reasonable chance one of them will want to work with you on something.</p><p>The key thing is not to work alone. Checking your scripts and such with people who know the subject well should help you avoid mistakes. 
If you ever feel inspired to make something on your own, you can post about it on the Forum or in the <a href="https://www.facebook.com/groups/effective-altruism-editing-and-review-458111434360997/">EA Editing and Review</a> Facebook group, and it will be quite likely that at least one person will check for possible improvements.</p><p>I was really excited to read this, and I hope you find a way to contribute through your art!</p> aarongertler vtniZEBfBHPwaEcYn 2019-04-09T09:55:10.227Z Comment by aarongertler on Long Term Future Fund: April 2019 grant decisions https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#n4D4fZvdgpPJ6ohm7 <blockquote><strong>Robert Miles, video content on AI alignment</strong> - 39k. Isn&#x27;t this something you guys and/or MIRI should be doing, and could do quickly, for a lot less money, without having to trust that someone else will do it well enough? </blockquote><p>Creating good video scripts is a rare skill. So is being able to explain things on a video in a way many viewers find compelling. And a large audience of active viewers is a rare resource (one Miles already has through his previous work). </p><p>I share some of your questions and concerns about other grants here, but in this case, I think it makes a lot of sense to outsource this tricky task, which most organizations do badly, to someone with a track record of doing it well.</p><p>--</p><p><em>I work for CEA, but these views are my own.</em></p> aarongertler n4D4fZvdgpPJ6ohm7 2019-04-08T22:41:56.136Z Comment by aarongertler on Long Term Future Fund: April 2019 grant decisions https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#AE2tdGwmhkaG3eu3r <blockquote>The large gift you received should be used to expand the influence of EA as an entity, not as a one-off [...] 
and of course in general I support de-emphasizing AI risk in favor of actual charity.</blockquote><p>While I&#x27;m not involved in EA Funds donation processing or grantmaking decisions, I&#x27;d guess that anyone making a large gift to the Far Future Fund does, in fact, support emphasizing AI risk, and considers funding this branch of scientific research to be &quot;actual charity&quot;.</p><p>It could make sense for people with certain worldviews to recommend that people not donate to the fund for many reasons, but this particular criticism seems odd in context, since supporting AI risk work is one of the fund&#x27;s explicit purposes.</p><p>--</p><p><em>I work for CEA, but these views are my own.</em></p> aarongertler AE2tdGwmhkaG3eu3r 2019-04-08T22:38:19.513Z Comment by aarongertler on My Q1 2019 EA Hotel donation https://forum.effectivealtruism.org/posts/BM2DYWpM6rSxyZ7AS/my-q1-2019-ea-hotel-donation#mGM6gtW3jWXzgB9qT <p>Mike,</p><p>To clarify the difference between &quot;personal blog&quot; and other categories: If you&#x27;d prefer not to have a post marked as &quot;meta&quot; or &quot;frontpage&quot; (and thus displayed to more people), you can leave a note at the top of the post requesting that it be left as a &quot;personal blog&quot; post, or message me to let me know I shouldn&#x27;t add a meta/frontpage category. (I&#x27;m the Forum&#x27;s lead moderator.)</p> aarongertler mGM6gtW3jWXzgB9qT 2019-04-08T22:29:33.450Z Comment by aarongertler on Open Thread #44 https://forum.effectivealtruism.org/posts/j4ASyayDWFwgEEzid/open-thread-44#R9ksjF7BFYhzM2LzE <p>I&#x27;ve also noticed this occasionally; thanks for providing a specific example I hadn&#x27;t seen so far. The Forum&#x27;s search feature has improved since we launched, but I also use Google sometimes, and it seems like they should be able to index us well. 
</p><p>The tech team will look into what&#x27;s going on (especially with regard to the &quot;underline&quot; character response, which I agree is strange).</p> aarongertler R9ksjF7BFYhzM2LzE 2019-04-03T16:37:04.106Z Comment by aarongertler on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? https://forum.effectivealtruism.org/posts/FFrJC7TTQo5Se9HnM/is-any-ea-organization-using-or-considering-using-buterin-et#LGynxWXZoGcBbquSW <p>MIRI took part in a Liberal Radicalism-based fundraiser at the end of 2018 (see &quot;WeTrust Spring&quot; in <a href="https://intelligence.org/2019/02/11/our-2018-fundraiser-review/">this post</a>).</p> aarongertler LGynxWXZoGcBbquSW 2019-04-01T21:04:55.792Z Comment by aarongertler on Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions https://forum.effectivealtruism.org/posts/AbL4SMPdM3cbgmJMc/innovating-institutions-robin-hanson-arguing-for-conducting#P5zGeFyRSXrvw7Cwu <p>From the comments of the post:</p><blockquote><strong>Comment:</strong> My knowledge concerning Effective Altruism is pretty superficial, so this might be a naive question: Have you gotten feedback on this from folks in the EA community? If so (or if EA supporters reading this have such feedback) I&#x27;d love to hear about it.</blockquote><blockquote><strong>Robin Hanson: </strong>I haven&#x27;t gotten feedback, and would also like to hear.</blockquote> aarongertler P5zGeFyRSXrvw7Cwu 2019-04-01T11:35:13.479Z Comment by aarongertler on My Q1 2019 EA Hotel donation https://forum.effectivealtruism.org/posts/BM2DYWpM6rSxyZ7AS/my-q1-2019-ea-hotel-donation#G2BQSatsZAKyLnyBu <p>Strong upvote. This was beautifully written, and I love the categorical breakdown of each positive or negative consideration. 
I hope that other donation-related Forum posts are written with this level of care (and the associated donations made with this level of analysis, at least by people who have the time to do so and the desire to fund smaller projects).</p> aarongertler G2BQSatsZAKyLnyBu 2019-04-01T11:28:53.469Z Comment by aarongertler on EA's siren call to appease my longing for a just world https://forum.effectivealtruism.org/posts/5KkhSm5aYpBh2TBHq/ea-s-siren-call-to-appease-my-longing-for-a-just-world#FfTfpRyLGFmwct4hn <p>I came away from this post a bit confused. It seems like you wish the EA community had better or more consistent models, which is understandable. But when you say:</p><blockquote>I&#x27;ve also been disappointed again and again when meeting or hearing about prominent EAs in person or when hearing about what it&#x27;s like working in EA or EA related (=trying to do good effectively in x cause, e.g. effective animal advocacy) spaces. They&#x27;re all just.. human? </blockquote><p>What are some of the ways in which you are disappointed? Do the people you meet commonly make logical or factual errors? Are they not productive enough? Do they not communicate as effectively as you would have wished?</p><blockquote>I think I just always naively felt that <em>somewhere in the entire world</em>, there had to be superpeople who could reliably tell me exactly what to do with my life. </blockquote><p>I empathize tremendously with this. In my case, I began to lose this feeling as I read more and more stories of people who thought they&#x27;d found such a superperson, or that they <em>were </em>such a superperson, but turned out to be wrong. I gave up on the idea of finding my own superperson without needing to evaluate too many of those people in my own life. </p><p>On the other hand, I do think the EA community, and the organizations within it, have made a lot of progress over the last ten years. 
The &quot;prominent people&quot; you meet here have flaws, but there are also flaws they <em>don&#x27;t </em>have (mostly), making them different from almost every other prominent person in the history of the world. They care a lot more about the truth (seeking it and speaking it), they&#x27;ve <a href="https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/">noticed the skulls</a>, and they are actively trying to get better at things rather than basking in the community&#x27;s admiration. </p><p>(It helps that the EA community isn&#x27;t a very good place for said basking; someone is always ready to point out your mistakes.)</p><p>Ten years from now, I expect the community to have accomplished a lot more, and for its most prominent people to be even better at their jobs, even if they never reach the point where they can guide every community member with absolute reliability.</p> aarongertler FfTfpRyLGFmwct4hn 2019-04-01T11:19:17.861Z Comment by aarongertler on [Link] New Founders Pledge report on existential risk https://forum.effectivealtruism.org/posts/gzKexa8tCFwodAozH/link-new-founders-pledge-report-on-existential-risk#eezEQEeSbGoSjZ5Q2 <blockquote>We are also finding quite a high level of interest in the area among our members. </blockquote><p>Do you have a rough sense for how members&#x27; level of interest in the classic &quot;EA&quot; existential risks compares to their level of interest in climate change?</p> aarongertler eezEQEeSbGoSjZ5Q2 2019-04-01T11:06:18.930Z Comment by aarongertler on The Case for the EA Hotel https://forum.effectivealtruism.org/posts/sigun924gsxN4oZq2/the-case-for-the-ea-hotel#mtuEkGkXWyA5MFH7L <blockquote>Similarly, the EA hotel has weekly check-ins to gauge the progress of their participants, and is working on implementing more stringent feedback loops for the people who enter the hotel. 
The goal, instead of trying to vet the people and projects up front, is to use the process itself to vet the project and the individual. As they pass increasingly high bars, they eventually cross the bar where they achieve good evidence for their project, and can then move on to the next stage of the pyramid.</blockquote><p>Has anyone been asked to leave the EA Hotel because they weren&#x27;t making enough progress, or because their project didn&#x27;t turn out very well? </p><p>If not, do you think the people responsible for making that decision have some idea of when doing so would be correct?</p> aarongertler mtuEkGkXWyA5MFH7L 2019-04-01T10:50:44.417Z Comment by aarongertler on The Case for the EA Hotel https://forum.effectivealtruism.org/posts/sigun924gsxN4oZq2/the-case-for-the-ea-hotel#uu5L34KprZLgFHzsm <p>I&#x27;d like to push back slightly against the notion of &quot;apologizing&quot; for writing something that others found hard to understand. The EA Forum should be a place to try out different kinds of content, and even if some experiments don&#x27;t work out, it&#x27;s generally good that experiments happen.</p><p>(That said, if you&#x27;re feeling apologetic, there&#x27;s also no problem with apologizing! I just want others who see this to know that it&#x27;s okay when a post doesn&#x27;t work out.)</p> aarongertler uu5L34KprZLgFHzsm 2019-04-01T10:45:35.555Z Comment by aarongertler on EA Forum Prize: Winners for February 2019 https://forum.effectivealtruism.org/posts/b3oTGiMpEKY4MnAEB/ea-forum-prize-winners-for-february-2019#2tqrLtNjGpZGQvdwF <p>As Peter noted, while CEA provides funding for the prizes, only two of the six voters work for CEA. 
I&#x27;m one of those two, and I vote according to a personal standard that doesn&#x27;t have anything to do with &quot;what CEA wants&quot;, and is more related to some combination of &quot;average utility per reader&quot; + &quot;sets a good example for how to write good Forum posts&quot; + &quot;other minor factors too numerous to list&quot;.</p><p>One note on upvotes: They correlate heavily with &quot;number of people who read something&quot;. If posts A and B are equally high-quality, and post B is shared in a bunch of large Facebook groups, B will almost certainly get more upvotes, but that doesn&#x27;t mean it was more useful to the average reader. (I don&#x27;t think any kind of voting metric should be the sole standard for the Prize, but if we were thinking about such metrics, we could look for something like &quot;among posts with 100+ unique visitors, which had the highest karma-to-visitor ratio?&quot;)</p> aarongertler 2tqrLtNjGpZGQvdwF 2019-04-01T10:32:41.853Z Comment by aarongertler on The career and the community https://forum.effectivealtruism.org/posts/Lms9WjQawfqERwjBS/the-career-and-the-community#Zdz95vA2FQSgYjbiW <p>Thanks for this reply! </p><p>Sorry for not realizing you worked at DeepMind; my comment would have looked different had I known about our shared context. (Also, consider <a href="https://forum.effectivealtruism.org/posts/2j8ERGPu68L5Bd95y/you-should-write-a-forum-bio#M4LYfKnNBK4SBd6BG">writing a bio</a>!)</p><p>I think we&#x27;re aligned in our desire to see more early-career EAs apply to those roles (and on most other things). My post aimed to:</p><p>1. Provide some background on some of the more &quot;successful&quot; people associated with EA. </p><p>2. Point out that &quot;recruiting people with lots of career capital&quot; may be comparable to &quot;acquiring career capital&quot; as a strategy to maximize impact. Of course, the latter makes the former easier, if you actually succeed, but it also takes more time. 
</p><p>On point (2): What fraction of the money/social capital EA will someday acquire &quot;already exists&quot;? Is our future going to look more like &quot;lots of EA people succeeded&quot;, or &quot;lots of successful people found EA&quot;? </p><p>Historically, both strategies seem to have worked for different social movements; the most successful <a href="https://www.effectivealtruism.org/articles/ea-neoliberal/">neoliberals</a> grew into their influence, while the <a href="https://slatestarcodex.com/2018/04/30/book-review-history-of-the-fabian-society/">Fabian</a> <a href="https://slatestarcodex.com/2018/04/30/book-review-history-of-the-fabian-society/">Society</a> relied on recruiting top talent. (I&#x27;m not a history expert, and this could be far too simple.)</p><p>--</p><p>One concern I have about the &quot;maximize career capital&quot; strategy is that it has tricky social implications; it&#x27;s easy for a &quot;most people should do X&quot; message to become &quot;everyone who doesn&#x27;t do X is wrong&quot;, as Richard points out. But career capital acquisition doesn&#x27;t lead to as much direct competition between EAs, and could produce more skill-per-person in the process, so perhaps it&#x27;s actually just better for most people.</p><p>Some of my difficulty in grasping the big picture for the community as a whole is that I don&#x27;t have a sense for what early-career EAs are actually working on. Sometimes, it feels like everyone is a grad student or FAANG programmer (not much potential for outsize returns). At other times, it feels like everyone is trying to start a company or a charity (lots of potential, lots of risk). </p><p><strong>Is there any specific path you think not enough people in the community are taking from a &quot;big wins early&quot; perspective?</strong> Joining startups? 
Studying a particular field?</p><p>--</p><p>Finally, on the subject of risk, I think I&#x27;m going to take <a href="https://forum.effectivealtruism.org/posts/yAFXfuwsebEhNgLTf/getting-people-excited-about-more-ea-careers-a-new-community#2bsodpXRu84XW6Gqr">this comment</a> and turn it into a post. (Brief summary: Someday, when we look back on the impact of EA, we&#x27;ll have a good sense for whose work was &quot;most impactful&quot;, but that shouldn&#x27;t matter nearly as much to our future selves as the fact that many unsuccessful people still tried their best to do good, and were also part of the movement&#x27;s &quot;grand story&quot;.) I hope we keep respecting good strategy and careful thinking, whether those things are attached to high-risk or low-risk pursuits.</p> aarongertler Zdz95vA2FQSgYjbiW 2019-03-29T04:47:16.086Z Comment by aarongertler on Should EA Groups Run Organ Donor Registration Drives? https://forum.effectivealtruism.org/posts/eLSAyzmKGi252mTCk/should-ea-groups-run-organ-donor-registration-drives#wcxghqqzL94x4XHf9 <blockquote>I don&#x27;t know how good these things are compared to donor registration drives.</blockquote><p>As I mentioned before, I&#x27;m not claiming that any of these examples are necessarily better! I&#x27;m just trying to gesture to the number of other options groups might have, and the fact that it would be good to see even very rough cost-benefit analyses for different options. </p><p>For example, organ donation registration drives are indeed easy, discrete, and scalable, but even if promoting legislation isn&#x27;t quite as good on those fronts, perhaps the expected impact is good enough to make it a better bet in some places and times.</p><p>CPR is rarely successful, especially when performed by non-professionals outside of a hospital, so I doubt it beats organ donation, but it&#x27;s a plausible thing a group could look into. 
Other first-aid-ish interventions seem more promising, and EA groups could even consider expanding into other &quot;social intervention&quot; areas (e.g. a workshop on when it makes sense to call 911 if you see someone who looks to be very ill/unconscious). </p><p>All of that said, I&#x27;d have no objection whatsoever to a group running a well-organized donor registration drive, for the reasons I noted in my first post.</p> aarongertler wcxghqqzL94x4XHf9 2019-03-29T04:11:55.501Z EA Forum Prize: Winners for February 2019 https://forum.effectivealtruism.org/posts/b3oTGiMpEKY4MnAEB/ea-forum-prize-winners-for-february-2019 <p>CEA is pleased to announce the winners of the February 2019 EA Forum Prize! </p><p>In first place (for a prize of $999): &quot;<u><a href="https://forum.effectivealtruism.org/posts/W94KjunX3hXAtZvXJ/evidence-on-good-forecasting-practices-from-the-good">Evidence on good forecasting practices from the Good Judgment Project</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/kokotajlod">kokotajlod</a></u>.</p><p>In second place (for a prize of $500): &quot;<u><a href="https://forum.effectivealtruism.org/posts/5k6mJFBpstjkjv2SJ/small-animals-have-enormous-brains-for-their-size">Small animals have enormous brains for their size</a></u>”, by <u><a href="https://forum.effectivealtruism.org/users/eukaryote">eukaryote</a></u>.</p><p>In third place (for a prize of $250): &quot;<u><a href="https://forum.effectivealtruism.org/posts/XdekdWJWkkhur9gvr/will-companies-meet-their-animal-welfare-commitments">Will companies meet their animal welfare commitments?</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/saulius">saulius</a></u>.</p><p>We also awarded prizes in <u><a href="https://forum.effectivealtruism.org/posts/k4SLFn74Nsbn4sbMA/ea-forum-prize-winners-for-november-2018">November</a></u>, <u><a 
href="https://forum.effectivealtruism.org/posts/gsNDoqpB2pWq5yYLv/ea-forum-prize-winners-for-december-2018">December</a></u>, and <u><a href="https://forum.effectivealtruism.org/posts/k7j7oxMcHsun2nC5H/ea-forum-prize-winners-for-january-2019">January</a></u>.</p><h2>What is the EA Forum Prize?</h2><p>Certain posts exemplify the kind of content we <u><a href="https://forum.effectivealtruism.org/about">most want to see</a></u> on the EA Forum. They are well-researched and well-organized; they care about <u><a href="https://ideas.ted.com/why-you-think-youre-right-even-when-youre-wrong/">informing readers, not just persuading them</a></u>.</p><p>The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum&#x27;s users.</p><h2>About the winning posts</h2><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/W94KjunX3hXAtZvXJ/evidence-on-good-forecasting-practices-from-the-good">Evidence on good forecasting practices from the Good Judgment Project</a></u>&quot; is a thorough, well-organized summary of forecasting — a topic often discussed on the Forum, but rarely with this amount of data. </p><p>We may know that prediction markets are “useful”, but the author goes far beyond that, explaining how well different types of markets (and non-market mechanisms) have performed in prediction tournaments, and which characteristics the best forecasters tend to have. This research could be useful to any number of future forecasting projects in the community.</p><p>Additionally, the author:</p><ul><li>Uses numbered headers to separate sections.</li><li>Includes hyperlinked footnotes for all citations.</li><li>Notes cases where information from original sources is missing or uncertain, giving readers ideas for ways to contribute to his research. 
(For example, I’d love to learn more about Tetlock’s “perpetual beta” concept, if anyone cares to go and find it.)</li></ul><p>Overall, this is a remarkable post, and I hope that other Forum users create similarly excellent summaries of important concepts.</p><p>—</p><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/5k6mJFBpstjkjv2SJ/small-animals-have-enormous-brains-for-their-size">Small animals have enormous brains for their size</a></u>” makes a single, simple point (you can see it in the title), but does so with unusual elegance. </p><p>I still remember the core simile — &quot;you have as many neurons as a half-full bucket of ants&quot; — many weeks after I first read the article, and expect to remember it for years to come, thanks to the original art which enlivens the piece. Illustrations aren’t essential to Forum posts, but making good ideas memorable, however you choose to do it, amplifies their impact.</p><p>Additionally, the author:</p><ul><li>Recommends further reading for anyone who found the article interesting (this is surprisingly rare for EA Forum posts, despite the vast literature that informs many of our ideas).</li><li>Doesn’t overstate her point; instead, we get facts about neurons, plus a list of ways in which these facts could interact with certain beliefs to produce other beliefs, without advocacy <em>for </em>any of those beliefs. </li><ul><li>There’s nothing wrong with advocating beliefs, of course, but there can be major benefits to separating &quot;fact posts&quot; from &quot;belief posts&quot;. 
For example, a fact post is more likely to be cited by authors with a range of beliefs, making everyone’s belief posts more evidence-based in the process.</li></ul></ul><p>—</p><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/XdekdWJWkkhur9gvr/will-companies-meet-their-animal-welfare-commitments">Will companies meet their animal welfare commitments?</a></u>&quot; offers crucial context on one of the most popular causes in EA: animal-advocacy campaigns targeting corporations. </p><p>If companies don’t actually live up to their promises, we haven’t made an impact. The author pulls together dozens of different sources from inside and outside of the EA community to show that… well, these promises may not be as impactful as they first seemed. But he doesn’t just explain the issue; he also notes the high level of uncertainty around particular facts and figures (providing better information even at the risk of undercutting his “point”) and suggests ways to improve the situation. </p><p>Additionally, the author:</p><ul><li>Uses our built-in header system to separate sections (I&#x27;m repeating myself here, because this is a really useful feature and I strongly encourage authors to use it for anything longer than a page or so).</li><li>Proposes improvements that animal charities could make without harshly criticizing those charities (distinguishing between “things could be better” and “things are actively bad” is a good habit).</li><li>Points out the ways in which his findings might affect our cost-effectiveness estimates around animal advocacy. 
Explaining a crucial consideration is good; estimating its impact makes the explanation even better.</li></ul><h2>The voting process</h2><p>All posts published in the month of February qualified for voting, save for those written by CEA staff and Prize judges.</p><p>Prizes were chosen by six people:</p><ul><li>Three of them are the Forum&#x27;s moderators (<u><a href="https://forum.effectivealtruism.org/users/aarongertler">Aaron Gertler</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>). </li><li>The others were the three highest-karma users at the time the new Forum was launched (<u><a href="https://forum.effectivealtruism.org/users/peter_hurford">Peter Hurford</a></u>, <u><a href="https://forum.effectivealtruism.org/users/joey">Joey Savoie</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/robert_wiblin">Rob Wiblin</a></u>).</li></ul><p>Voters recused themselves from voting on posts written by their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.</p><p>Winners were chosen by an initial round of <u><a href="https://en.wikipedia.org/wiki/Approval_voting">approval voting</a></u>, followed by a runoff vote to resolve ties.</p><h2>The future of the Prize</h2><p>After reviewing feedback we’ve received about the Prize, we’ve decided to continue giving it out for another six months (February through July) before running a second round of review. We don’t have any current plans to change the format, but we won’t rule out potential changes in future months. 
</p><p>If you have thoughts on how the Prize has changed the way you read or write on the Forum, or about ways we should consider changing the current format, please let us know in the comments or contact <u><a href="mailto:aaron@centreforeffectivealtruism.org">Aaron Gertler</a></u>.</p> aarongertler b3oTGiMpEKY4MnAEB 2019-03-29T01:53:02.491Z Comment by aarongertler on Why is the EA Hotel having trouble fundraising? https://forum.effectivealtruism.org/posts/BNQbxX7bFRrgv8Yds/why-is-the-ea-hotel-having-trouble-fundraising#xsBSYTDFXMB77evN6 <p>While I liked seeing the reasons for your belief in your subsequent comment, I also really appreciate the meta-level point this comment evoked in me (though I don&#x27;t know whether this is what you meant):</p><p>&quot;In general, most causes are unlikely to be competitive with the very best causes. Thus, a simple explanation for an organization&#x27;s not getting funding is that potential funders didn&#x27;t think it was among the very best opportunities after carefully thinking about it. This may be a much more important factor than arguments like &#x27;too risky&#x27; or &#x27;not tax-deductible&#x27;.&quot;</p><p>(Of course, risk may be factored into calculations about &quot;the best opportunity&quot;, but I can also imagine some funders just looking at the portfolio of projects, estimating a reasonable &quot;helpfulness&quot; coefficient for how much value the Hotel adds, and deciding that the number didn&#x27;t add up, even without consideration of risk.)</p><p>Similarly, if AMF had trouble fundraising one year, and someone asked why, the explanation I&#x27;d think of immediately wouldn&#x27;t be &quot;risk of mosquito-net fishing&quot; or &quot;concerns about their deal with Country X&quot;. 
It would be &quot;their program&#x27;s EV slipped below the EV of several other global-health charities, and many donors chose other charities on the margin instead of AMF, even if AMF is still a great charity&quot;.</p><p>--</p><p><em>I work for CEA, but these views are my own.</em></p> aarongertler xsBSYTDFXMB77evN6 2019-03-28T01:24:12.882Z Comment by aarongertler on $100 Prize to Best Argument Against Donating to the EA Hotel https://forum.effectivealtruism.org/posts/ek299LpWZvWuoNeeg/usd100-prize-to-best-argument-against-donating-to-the-ea#nSgdmbwNMY5oto2kG <p>I&#x27;m going to break your rules a bit and instead start by critiquing the proposition:</p><p><em>$X to the EA Hotel has at least as much EV as $X to the most promising person at the Hotel.</em></p><p>It may not be easy to fund individuals, but if someone wanted to give, say, $10,000, what&#x27;s to stop them from looking at the Hotel&#x27;s guest list, picking the best-sounding project, and offering money directly to the person behind it? (Then, if that person doesn&#x27;t need/want the money, they move to the next-best person, and so on.)</p><p>This may burn time on vetting, but it&#x27;s at least easier than vetting everyone at the Hotel to get a sense for its average impact.</p><p>--</p><p>You could also try to estimate the Hotel&#x27;s value as a tool for creating networks -- boosting research productivity by giving people an easier way to start conversations and help with one another&#x27;s work. If that&#x27;s the case, the comparison to EA Meta grantees becomes more apt. </p><p>That said, there are a <em>lot </em>of Meta grantees, and trying to find the &quot;best&quot; of them is difficult by any measure. So people may end up wanting to fund organizations with longer histories (like LEAN or The Life You Can Save), or organizations with an extremely good &quot;best-case&quot; scenario (like the Center for Election Science or Sparrow). 
It&#x27;s hard to think of which sub-factor the EA Hotel is &quot;best at&quot; compared to all those other organizations. </p><p>Just to give one example: For $10,000, I could fund Giving Games where several hundred people are introduced to EA and make their first &quot;EA-aligned&quot; donation, or pay for 1.3 years of EA Hotel time. Those are very different things, and I could imagine at least 50% of potential meta donors thinking that the first option is better. </p><p>If the rest of those donors then compare the Hotel to the next project on the list, and the next... well, there aren&#x27;t many people who make large donations to individual meta projects, and it&#x27;s not surprising if only a small fraction of that already-small pool lands on the Hotel as their &quot;final answer&quot;. </p><p>(This model is too simple, of course, since many donors give to multiple organizations. The most important point is that the Hotel has a lot of competition, and may not stand out enough compared to all the other options.)</p><p>--</p><p><em>I work for CEA, but these views are my own.</em></p> aarongertler nSgdmbwNMY5oto2kG 2019-03-28T01:17:52.734Z Comment by aarongertler on The career and the community https://forum.effectivealtruism.org/posts/Lms9WjQawfqERwjBS/the-career-and-the-community#7t3NtBJwBETa2b2tq <p>Welcome to the EA community! I liked your first post on the Forum, and I hope you&#x27;ll come back to make many more. </p><p>Now that that&#x27;s been said, here&#x27;s my response, which may sound oppositional, but which I intend to be more along the lines of &quot;trying to get on the same page, since I think we actually agree on a lot of stuff&quot;. Overall, I think your vision of success is pretty close to what people at 80K might say (though I could be wrong and I certainly don&#x27;t speak for them).</p><blockquote>Where are the Elon Musks and Peter Thiels (early career trajectory-wise) in the EA community? 
Why are so few EAs making it into leadership positions at some of the most critical orgs?</blockquote><p>The thing about Elon Musk and Peter Thiel is that it was hard to tell that they would become Musk and Thiel. There are many more &quot;future Elon Musks&quot; than there are &quot;people who become Elon Musk after everything shakes out&quot;. </p><p>For all I know, we may have some of those people in the community; I think we certainly have a higher <em>expected </em>number of those people per-capita than almost any other community in the world, even if that number is something like &quot;0.3&quot;. (The tech-billionaire base rate is very low.)</p><p>I don&#x27;t really know what you mean by &quot;the most critical orgs&quot;, since EA seems to be doing well there already:</p><ul><li>The Open Philanthropy Project has made hundreds of millions of dollars in grants and is set to do hundreds of millions more -- they aren&#x27;t on the scale of Sequoia or Y Combinator, but they&#x27;re similar to a mid-size venture fund (if my estimates about those funds aren&#x27;t too off-base). </li><li>GiveWell is moving $40-50 million/year and doesn&#x27;t seem likely to slow down. In fact, they&#x27;re looking to <a href="https://forum.effectivealtruism.org/posts/xSBSojpb8L5xjTzbZ/how-givewell-s-research-is-evolving">double in size</a> and start funding lots of new projects in areas like &quot;changing national law&quot;. </li><li>DeepMind and OpenAI, both of which could become some of the most influential technical projects in history, have a lot of employees (including executives) who are familiar with EA or active participants in the community.</li><li>A former head of IARPA, the CIA&#x27;s R&amp;D department (roughly speaking), is now the head of an AI think tank in Washington DC whose other staffers also have <a href="https://cset.georgetown.edu/about-us/">really impressive resumes</a>. 
(Tantum Collins, a non-executive researcher who appears halfway down the page, is a &quot;Principal for Research and Strategy&quot; at DeepMind and co-authored a book with Stanley McChrystal.)</li><li>It&#x27;s true that we haven&#x27;t gotten the first EA senator, or the first EA CEO of a FAANG company (Zuckerberg isn&#x27;t quite there yet), but I think we&#x27;re making reasonable progress for a movement that was founded ten years ago in a philosopher&#x27;s house and didn&#x27;t really &quot;professionalize&quot; until 2013 or so.</li></ul><p>Meanwhile...</p><ul><li>EA philosophy seems to have influenced, or at least caught the attention of, many people who are already extremely successful (from Gates and Musk to Vitalik Buterin and Patrick Collison). </li><li>We have support from some of the world&#x27;s most prominent philosophers, quite a few other major-league academics (e.g. Philip Tetlock), and several of the world&#x27;s best poker players (who not only donate a portion of their tournament winnings, but also spend their spare time running <a href="http://doubleupdrive.com">fundraisers for cash grants and AI safety</a>).</li><li>We have a section that&#x27;s at least 50% <a href="https://www.vox.com/future-perfect/">devoted to EA causes </a>in a popular online publication. </li></ul><p>There&#x27;s definitely room to grow and improve, but the trajectory looks... well, pretty good. Anecdotally, I didn&#x27;t pay much attention to new developments in EA between mid-2016 and mid-2018. </p><blockquote>When talking to someone really talented graduating from university and deciding what to do next, I&#x27;d probably ask them why what they&#x27;re doing immediately might allow for outsize returns / unreasonably fast growth (in terms of skills, network, credibility, money, etc.). 
If no compelling answer, I&#x27;d say they&#x27;re setting themselves up for relative mediocrity / slow path to massive impact.</blockquote><p>I generally agree with this, though one should be careful with one&#x27;s rocket ship, lest it crash. Theranos is the most obvious example; Tesla may yet become another, and plenty of others burned up in the atmosphere without getting much public attention.</p><p>--</p><p><em>I work for CEA, but these views are my own.</em></p> aarongertler 7t3NtBJwBETa2b2tq 2019-03-28T00:32:03.594Z Comment by aarongertler on What open source projects should effective altruists contribute to? https://forum.effectivealtruism.org/posts/izZssKwg9smCz5qrb/what-open-source-projects-should-effective-altruists#48GX2a8Fw7sts3pLd <p>One good option: Contributing to the EA Forum!</p><p>Almost all of our code actually comes from the <a href="https://github.com/LessWrong2/Lesswrong2">LessWrong codebase</a>. LessWrong is very excited about open-source contributions; here&#x27;s their <a href="https://github.com/LessWrong2/Lesswrong2#contributing">guide to helping out</a>, and their Github tag for <a href="https://github.com/Lesswrong2/Lesswrong2/issues?q=is%3Aissue+is%3Aopen+label%3A%221.+Important+%28Easy%29%22">important issues that seem easy to fix</a>.</p><p>Changes made in LessWrong will appear on the Forum in ~2 weeks on average, unless the part of LessWrong you changed isn&#x27;t something we have on the Forum. </p><p>We really appreciate everyone who wants to help; let me know if you have questions, and I&#x27;d be happy to direct you to the programmers who can answer them.</p> aarongertler 48GX2a8Fw7sts3pLd 2019-03-28T00:06:35.469Z Comment by aarongertler on Should EA Groups Run Organ Donor Registration Drives? 
https://forum.effectivealtruism.org/posts/eLSAyzmKGi252mTCk/should-ea-groups-run-organ-donor-registration-drives#2oZczD3SH84YEtCL4 <p><strong>Summary: </strong>Maybe groups should do this, but there are a lot of things groups should &quot;maybe&quot; do, and this doesn&#x27;t seem like a clear best candidate.</p><blockquote>4. It takes approximately 333 donor registrations to counterfactually increase the number of actual organ donors by 1, which suggests low tractability. (<a href="https://www.organdonor.gov/statistics-stories/statistics.html">&quot;[O]nly</a> <a href="https://www.organdonor.gov/statistics-stories/statistics.html">3 in 1,000 people die in a way that allows for organ</a> <a href="https://www.organdonor.gov/statistics-stories/statistics.html">donation.&quot;</a>) </blockquote><p>I agree with others here that EA groups should try activities that:</p><blockquote>1. Do a significant, easily quantifiable amount of good;</blockquote><blockquote>2. Address important problems;</blockquote><blockquote>3. Have some EA motivation; and</blockquote><blockquote>4. 
Give people a chance to talk about their EA worldview with non-EAs.</blockquote><p>I think organ donor drives hit points 2 and 3 well, and aren&#x27;t too bad on 1 or 4 (the impact isn&#x27;t easy to quantify and organ donation isn&#x27;t an EA cause area, but the topic is at least clearly positive/altruistic).</p><p>But even if a donor registration drive is a reasonable candidate for group activity, is it really better than other available options?</p><p>Examples:</p><ul><li>Promoting some important piece of local legislation (there were Open Phil-related ballot initiatives in at least three states before the 2016 election).</li><li>Any of the other opportunities mentioned in the <a href="https://www.facebook.com/groups/1392613437498240/?hc_ref=ARTYOwQBJlpGDQH_MhZfABt-GTpoGT01SPk96dNFRXAEC3MbWF6ZjOZi45W93dfoSo8">EA Volunteering</a> group (including research and assistance with writing projects).</li><li>Other &quot;classic&quot; forms of volunteering that seem tractable/neglected (e.g. helping at a local shelter, manning an emotional support hotline)</li><li>Learning skills that might help them make a bigger impact later (not just skills related to research or other &quot;EA&quot; pursuits, but also things like CPR or &quot;how to administer Naloxone&quot;)</li></ul><p>This is far from a complete list. </p><p>I don&#x27;t know how good these things are compared to donor registration drives, but it seems like a group should examine a few different options before deciding to carry out an activity. Or, if they don&#x27;t have confidence in their ability to estimate impact (or time to do so), they may just want to choose an activity based on other features (e.g. 
how well it will help the group bond, how convenient it is, how appealing it will be to new members, how well it ties into EA).</p> aarongertler 2oZczD3SH84YEtCL4 2019-03-28T00:03:02.931Z Comment by aarongertler on Severe Depression and Effective Altruism https://forum.effectivealtruism.org/posts/ue47zyjMistdeBWo8/severe-depression-and-effective-altruism#ABgACbKec6f8NWuLu <p>Before reading the rest of this, please consider calling the <a href="https://suicidepreventionlifeline.org/">suicide prevention lifeline</a> for support. This may seem like generic advice, but the lifeline is a really valuable resource. </p><p>--</p><p>EA-specific notes:</p><p>You&#x27;re not alone. A lot of people involved in EA struggle with scrupulosity and feelings of guilt, and many have suffered from depression (sometimes related to the aforementioned feelings). </p><p><a href="https://www.facebook.com/groups/ea.peer.support/">EA Peer Support</a> is a Facebook group devoted to helping and supporting people through their personal problems. I&#x27;d really encourage you to check it out; there are a lot of warm-hearted, thoughtful people in the group.</p><p>Also, Kelsey Piper often writes about the feelings of guilt/scrupulosity that arise from EA thinking, and how to handle them. <a href="https://theunitofcaring.tumblr.com/post/179128921056/i-choose-not-to-donate-to-amf-thus-sitting-on-my">This is one good post about that</a>; there are many others, and frankly there are many worse ways to spend time than just reading her entire Tumblr to find all the things she&#x27;s written about self-care and emotional management.</p><p>--</p><p>I&#x27;m not a therapist or any other kind of counselor, but speaking from my own viewpoint/experience, the most important thing you can do is ensure that you are in a safe, stable position. No one is obliged to force themselves to suffer for the good of others; I can&#x27;t think of anyone I&#x27;ve ever met during my time in EA who would argue otherwise. 
</p><p>Even if you currently feel that there is tension between your personal comfort and your capacity to do good for others, remember that this tension needn&#x27;t be a permanent feature of your life. I&#x27;ve known other people in EA who once felt the same tension, but eventually resolved it, with help from their friends and the wider community.</p><p>It&#x27;s okay to care about yourself. It&#x27;s okay to care about your parents. <a href="https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine">You don&#x27;t need to make all of your life decisions according to a single unified framework about doing good through charity</a>; in the long run, some kind of balance that includes a regard for your own health is much better. </p><p>--</p><p>On top of everything else, I think there are compelling reasons not to betray your parents&#x27; trust even if it seems like that would have good consequences. To quote <a href="https://blog.givewell.org/2013/12/12/staff-members-personal-donations/">Holden Karnofsky</a>:</p><blockquote>I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it. I wouldn’t steal money to give it to our top charities.</blockquote><p>A similar point exists within the broadly-endorsed <a href="https://forum.effectivealtruism.org/posts/Zxuksovf23qWgs37J/introducing-cea-s-guiding-principles">EA Guiding Principles</a>:</p><blockquote>Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive.</blockquote>
https://forum.effectivealtruism.org/posts/X6Tv2vTiPBmdNmeFa/will-the-ea-forum-continue-to-have-cash-prizes#rXD5TabBHqDx4tsWr <p>Yes, for the next six months (as an extension of our original three-month test). We don&#x27;t have any active plans to change the system, but it&#x27;s possible that we might for some of those six months. Feedback has been highly positive so far, but we&#x27;d still like to collect more data on how the prize influences readership and authorship.</p><p>(The February prize will be announced around the end of this month; our process was a bit delayed from the usual timeline, since we spent the first two weeks of March collecting feedback and discussing our options.)</p> aarongertler rXD5TabBHqDx4tsWr 2019-03-26T00:36:04.849Z Comment by aarongertler on Why doesn't the EA forum have curated posts or sequences? https://forum.effectivealtruism.org/posts/ccc4fXMnQ63zkqGKo/why-doesn-t-the-ea-forum-have-curated-posts-or-sequences#2gtQxcinzjPxCHBkS <p>I&#x27;m sorry if my framing was misleading: When this feature goes live on the Forum, other users will be able to use it freely. CEA still wants to have its own &quot;collections&quot; be as close to &quot;definitive&quot; as we can reasonably get, with occasional updates/added material.</p><p>Meanwhile, until the feature goes live, I&#x27;m considering ways to more reliably expose Forum visitors to collections of introductory material that already exist, like the material compiled on <a href="https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/">EA.org</a>. 
Maybe a pinned post, or maybe a page that shows by default to non-logged-in users; that&#x27;s still in the works.</p> aarongertler 2gtQxcinzjPxCHBkS 2019-03-24T21:58:19.003Z Comment by aarongertler on The Home Base of EA https://forum.effectivealtruism.org/posts/rrkEWw8gg6jPS7Dw3/the-home-base-of-ea#754PyptRKpZNGbY9d <p>I really liked the visual/story description you gave for what joining a group could look like; I really appreciate how memorable an idea can be when presented in that style. In that story, I also recognized the way I&#x27;ve felt in many of my interactions with the EA community thus far, which makes me wonder whether I&#x27;ve gotten a skewed sense for what &quot;most EA circles&quot; spend time on.</p><p>I&#x27;ve been a part of four different EA groups, three of which were more focused around social activity than anything cerebral (Madison, San Diego, Yale). The exception (EA Epic, a corporate group) had members who lived far apart, mostly existed during a Wisconsin fall/winter, and always met after workdays, which made planning social activities a bit harder. But my general sense is that most EA groups actually are fairly social/inclusive in the way you propose. </p><p>(This may be part of why we&#x27;re seen as <a href="https://forum.effectivealtruism.org/posts/eoCexTGET3eFQz3w2/ea-survey-2018-series-how-welcoming-is-ea">quite welcoming</a>, though <a href="https://forum.effectivealtruism.org/posts/eoCexTGET3eFQz3w2/ea-survey-2018-series-how-welcoming-is-ea#QK5qrNKgd6au49aAD">survey bias</a> is likely a stronger factor in that case.)</p><p>How much time groups spend on cerebral/meritocratic vs. social/inclusive activities might be a good thing to figure out through the EA Survey; I&#x27;ll suggest it as a potential topic for this year.</p> aarongertler 754PyptRKpZNGbY9d 2019-03-23T03:16:48.642Z Comment by aarongertler on EA Survey 2018 Series: How welcoming is EA? 
https://forum.effectivealtruism.org/posts/eoCexTGET3eFQz3w2/ea-survey-2018-series-how-welcoming-is-ea#H2XjQZXjxuzKyxjeP <p><em>Note: I don&#x27;t know very much about mental health, and the first two paragraphs of this comment are highly speculative.</em></p><p>That would be my theory, though I might not use the word &quot;sensitive&quot;. I&#x27;d think that part of the effect (probably most of it) has something to do with lower average happiness and/or higher rates of depression/anxiety among people who prioritize mental health. </p><p>I&#x27;d guess that people who strongly support that cause are more likely to have direct experience with mental health issues than other people in EA. Having a lower level of happiness/life satisfaction could then translate into generally lower &quot;scores&quot; on surveys asking about many different positive feelings, including &quot;how welcome you feel&quot;.</p><p>Of course, mental health isn&#x27;t a very well-supported cause area within EA, so it could also be the case that people who favor it have a hard time finding other people in EA who share their level of support. It&#x27;s probably much easier to find someone who knows a lot about animal advocacy at an EA event than to find someone who knows a lot about mental health as a cause area, and 1-on-1 conversations are a big driver of &quot;feeling welcome&quot;.</p><p>(Anecdotally, experiencing intermittent mild-to-moderate depression over the last few years seems to have made me more likely to read about EA work in mental health. Empathy tends to influence the causes to which I am emotionally drawn, inside or outside of EA.)</p> aarongertler H2XjQZXjxuzKyxjeP 2019-03-23T03:14:41.453Z Comment by aarongertler on Why doesn't the EA forum have curated posts or sequences? 
https://forum.effectivealtruism.org/posts/ccc4fXMnQ63zkqGKo/why-doesn-t-the-ea-forum-have-curated-posts-or-sequences#5THirqy96p3y9Afw5 <p>My intuition, having seen proposals from people both inside and outside of CEA, is that this collation will almost certainly take longer than a week or two:</p><ul><li>A higher standard than &quot;broadly acceptable&quot; seems important, since whatever posts are chosen will be seen as having CEA&#x27;s endorsement (assuming CEA is the one doing the collation). A few critics can contribute a lot of negative public feedback, and even a single unfortunate line in a curated post may cause problems later. </li><li>I also think there&#x27;s a lot of value to publishing a really good collection the first time around: </li><ul><li>Making frequent revisions to a &quot;curated&quot; collection of posts makes them look a lot less curated, and removes comments from the public eye that authors may have worked on assuming they&#x27;d stick around. </li><li>It&#x27;s also not great if Post A is chosen for curation despite Post B being a much stronger take on the same subject; assembling a collection of posts that are roughly the best posts on their respective topics takes a lot of experience with EA content and consultation with other experienced people (no one has read everything, and even people who&#x27;ve read <em>almost </em>everything may differ in which pieces they consider &quot;best&quot;).</li></ul></ul><p>That said, the task is doable, and I&#x27;m consulting with other CEA staff who work on the Forum to draft a top-level answer about our plans for this feature.</p> aarongertler 5THirqy96p3y9Afw5 2019-03-22T01:51:33.098Z Comment by aarongertler on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? 
https://forum.effectivealtruism.org/posts/mBA6i5h2vbQzP4wQ7/i-ll-fund-you-to-give-away-doing-good-better-surprisingly#mhWxCkjpQEMJDDMH7 <p>I like that wording, and don&#x27;t have any changes to suggest.</p> aarongertler mhWxCkjpQEMJDDMH7 2019-03-21T23:48:35.421Z Comment by aarongertler on Why doesn't the EA forum have curated posts or sequences? https://forum.effectivealtruism.org/posts/ccc4fXMnQ63zkqGKo/why-doesn-t-the-ea-forum-have-curated-posts-or-sequences#Rfn8YD6P5wnfurtyW <p><a href="https://forum.effectivealtruism.org/posts/wrPacgwp3DsSJYbce/why-the-ea-forum">Here&#x27;s the post I believe Yannick was thinking of</a>. (Find the phrase &quot;core series of posts&quot;.) </p><p>This is still something we plan to do in the future; I&#x27;m consulting with other CEA staff who work on the Forum to draft a top-level answer to Richard&#x27;s question.</p> aarongertler Rfn8YD6P5wnfurtyW 2019-03-21T23:43:47.833Z Comment by aarongertler on Request for comments: EA Projects evaluation platform https://forum.effectivealtruism.org/posts/PagT8Fg6HZu6KjHDb/request-for-comments-ea-projects-evaluation-platform#CbXFmuSHARCMiRo2i <p>I share Habryka&#x27;s concern for the complexity of the project; each step clearly has a useful purpose, but it&#x27;s still the case that adding more steps to a process will tend to make it harder to finish that process in a reasonable amount of time. I think this system could work, but I also like the idea of running a quick, informal test of a simpler system to see what happens.</p><p>Habryka, if you create the &quot;discussion thread&quot; you&#x27;ve referenced here, I will commit to leaving at least one comment on every project idea; this seems like a really good way to test the capabilities of the Forum as a place where projects can be evaluated. 
</p><p>(It would be nice if participants shared a Google Doc or something similar for each of their ideas, since leaving in-line comments is much better than writing a long comment with many different points, but I&#x27;m not sure about the best way to turn &quot;comments on a doc&quot; into something that&#x27;s also visible on the Forum.)</p> aarongertler CbXFmuSHARCMiRo2i 2019-03-21T07:10:22.397Z Comment by aarongertler on EA jobs provide scarce non-monetary goods https://forum.effectivealtruism.org/posts/vMpuXz2zqS8iHya7i/ea-jobs-provide-scarce-non-monetary-goods#PSL9rn35vR3yJ2f8F <p>Good post! I share Greg&#x27;s doubts about the particular question of salaries (and think that lowering them would have several bad consequences), but I think you&#x27;ve summed up most of the major things that people get, or hope to get, from jobs at EA organizations. </p><p>Other than your reasons and &quot;money&quot;, I&#x27;d include &quot;training&quot;; if you want to learn to do Open Phil-style research, working at Open Phil is the most reliable way to do this.</p><blockquote>When I started at GiveWell, I was surprised at how people in these circles treated me when they found out I was working there, even though I was an entry-level employee.</blockquote><p>Are there any examples of this that stand out to you? I can certainly believe that it happened, but I&#x27;m having trouble picturing what it might look like. </p><p>(Since I began working at CEA five months ago, I haven&#x27;t noticed any difference in the way my interactions with people in EA have gone, save for cases where the interaction was directly related to my job. But perhaps there are effects for me, too, and I just haven&#x27;t spotted them yet.)</p><blockquote>Somewhat mixed in with the above points, I think there&#x27;s a lot of value to be had from feeling like a member of a tribe, especially a tribe that you think is awesome. 
I think working at a professional EA organization is the closest thing there is to a royal road to tribal membership in the EA community.</blockquote><p>I think you&#x27;re right that EA work is a quick way to <em>feel </em>like part of the tribe, and that&#x27;s something I&#x27;d like to change. </p><p>So I&#x27;ll repeat what I&#x27;ve said in the comments of other posts: <strong>If you believe in the principles of EA, and are taking action on them in some way (work, research, donations, advocacy, or taking steps to do any of those things in the future), I consider you a member of the EA &quot;tribe&quot;. </strong></p><p>I can&#x27;t speak for any other person in EA, but from what I&#x27;ve heard in conversations with people at many different organizations, I think that something like my view is fairly common.</p> aarongertler PSL9rn35vR3yJ2f8F 2019-03-21T06:59:44.226Z Comment by aarongertler on EA London Community Building Lessons Learnt - 2018 https://forum.effectivealtruism.org/posts/qGsAy8pEu6Stq4A3L/ea-london-community-building-lessons-learnt-2018#DRfwq65hhCBfZ9Sf6 <p><a href="https://80000hours.org/podcast/episodes/kelsey-piper-important-advocacy-in-journalism/">https://80000hours.org/podcast/episodes/kelsey-piper-important-advocacy-in-journalism/</a></p> aarongertler DRfwq65hhCBfZ9Sf6 2019-03-21T02:04:00.239Z Comment by aarongertler on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? https://forum.effectivealtruism.org/posts/mBA6i5h2vbQzP4wQ7/i-ll-fund-you-to-give-away-doing-good-better-surprisingly#oMgZCG9FQDr6gJMsL <p>That version does sound better. One more suggested version:</p><p><em>Thank you for taking the time to share what you&#x27;ve done. Since we also asked about your future plans, could we follow up with one more short survey a year from now, to see what happened?</em></p><p><em>If that&#x27;s alright with you, please enter your email address below - it will not be shared with anyone, or used for any other purpose. 
</em></p><p>I&#x27;m hoping this feels a bit less high-pressure than &quot;what you may still do&quot;, but you could also remove &quot;to see what happened&quot; to help with that. </p> aarongertler oMgZCG9FQDr6gJMsL 2019-03-21T02:02:08.241Z Comment by aarongertler on Sharing my experience on the EA forum https://forum.effectivealtruism.org/posts/9JuhtbeHLH3TvyHMR/sharing-my-experience-on-the-ea-forum#MyMsK6RFAc4M22akN <p>I agree that this doesn&#x27;t run into the first two problems, though it could make giving anonymous feedback even more tempting. More practically, it seems like it would be pretty annoying to code, and provide less value than similarly tech-intensive features that are being worked on now. If I hear a lot of other calls for an &quot;anonymous feedback&quot; option, I may consider it more seriously, but in the meantime, I&#x27;ll keep pushing for open, honest criticism. </p><p>I haven&#x27;t read every comment on every post, but so far, I&#x27;ve seen barely any posts or comments on the new version of the Forum where someone was criticized and reacted very negatively. Mostly, reactions were like this post (asking for more details) or showed someone updating their views/adding detail and nuance to their arguments.</p> aarongertler MyMsK6RFAc4M22akN 2019-03-21T01:55:51.911Z Comment by aarongertler on EA London Community Building Lessons Learnt - 2018 https://forum.effectivealtruism.org/posts/qGsAy8pEu6Stq4A3L/ea-london-community-building-lessons-learnt-2018#h3TLMe3FFyZoXq64T <blockquote>When you are events focused, you are competing with many things - family, friends, hobbies, Netflix, cinema, etc. If your focus is more on helping people doing good, it’s no longer about having people turn up to an event, it’s about keeping people up to date with relevant info that is helpful for them. 
When there is a relevant opportunity for them to do something in person, they might be more inclined to do so.</blockquote><p>I really like this point, and the related Kelsey Piper quote. EA, like any social movement, is likely to grow and succeed largely based on how helpful it is for its members. Having a &quot;what can I do for you?&quot; mindset has been really useful to me in my time running a couple of different EA groups (and working at CEA).</p><p>-- </p><p>When you say that Meetup.com &quot;gave a worse impression of effective altruism&quot;, do you mean that it actually seemed to have negative value, or just that it was worse than Facebook because it didn&#x27;t give you an easy way to contact people soon after they&#x27;d joined? If the former, can you talk about any specific negative effects you noticed? (One of the groups I&#x27;m affiliated with is still using Meetup, so I&#x27;m quite curious about this.)</p> aarongertler h3TLMe3FFyZoXq64T 2019-03-19T23:51:37.985Z Comment by aarongertler on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? https://forum.effectivealtruism.org/posts/mBA6i5h2vbQzP4wQ7/i-ll-fund-you-to-give-away-doing-good-better-surprisingly#mR9bDuGtj5wk4DY39 <p>Fantastic post, Jeremy! I&#x27;m a bit biased, since I had the chance to see earlier drafts, but I really like the generous spirit of this initiative, and it seems like a low-risk, high-potential way to grow the community. It&#x27;s very kind of you to offer funding to others who want to try their own giveaways.</p><p>In fact, I might just try this myself come Giving Season; I&#x27;ve set a reminder in my calendar to think about it on November 15th. 
Thanks for the idea.</p><p><strong>Regarding the survey: </strong>Consider changing the wording on question #9:</p><blockquote><strong>The bulk of the impact from introducing people to Effective Altruism probably happens over the long term.</strong> If you think you might make future changes, or you generally agree with the principles of the book, we&#x27;d love to be able to check in with you in a year, to see how things are going. </blockquote><p>I&#x27;d remove the section in bold. If people are really interested in EA, they&#x27;ll hopefully give you contact information either way; if they&#x27;re on the fence, they might feel a bit objectified being referred to as sources of impact, or guilty about donating once and planning not to do so in the future (I can imagine giving $100 to GiveWell, then seeing the survey and losing my warm glow because I haven&#x27;t had &quot;the bulk of my impact&quot;).</p><p>This is a highly speculative suggestion, though, and I don&#x27;t think it makes a big difference either way.</p> aarongertler mR9bDuGtj5wk4DY39 2019-03-19T23:39:46.918Z Comment by aarongertler on Sharing my experience on the EA forum https://forum.effectivealtruism.org/posts/9JuhtbeHLH3TvyHMR/sharing-my-experience-on-the-ea-forum#Z5K6amLsE5YcTjD6z <p>I don&#x27;t love the idea (suggested by one comment here) of having separate anonymous feedback, for these reasons:</p><ul><li>Public feedback allows people to upvote comments if they agree (very efficient for checking on how popular a view is)</li><li>Public feedback makes it easier for the author to respond</li><li>Most importantly, public feedback generally strengthens our norm of &quot;it&#x27;s okay to criticize and to be criticized, because no one is perfect and we&#x27;re all working together to improve our ideas&quot;.</li></ul><p>Of course, these factors have to be balanced against the likelihood that anonymous feedback mechanisms will allow for <em>more </em>and <em>more honest </em>feedback, which is 
a considerable upside. But I&#x27;d hope that the EA community, of all groups, can find a way to thrive under a norm of transparent feedback.</p> aarongertler Z5K6amLsE5YcTjD6z 2019-03-19T23:37:14.504Z Comment by aarongertler on Sharing my experience on the EA forum https://forum.effectivealtruism.org/posts/9JuhtbeHLH3TvyHMR/sharing-my-experience-on-the-ea-forum#cThGt9waXED2iJr3k <p>It looks like Jan&#x27;s comment on your other post was heavily upvoted, indicating general agreement with his concerns, but I&#x27;d hope that people with other concerns would have written about them. </p><p>I&#x27;ve <a href="https://forum.effectivealtruism.org/posts/Rkr2W8ADSGwWXfRBF/effective-impact-investing#BqdHEchNXXPfW58y4">recommended before</a> that people try to avoid downvoting without either explaining their reasoning or upvoting a response that matched their views. I&#x27;ve been happy to see how common this is, though there&#x27;s still room for improvement.</p><p>Please keep posting and sharing your ideas -- one of the Forum&#x27;s core purposes is &quot;helping new people with ideas get feedback&quot;, and no one entered the EA community with only good ideas to share. (As far as &quot;initial experience with forum use&quot; goes, you&#x27;re still doing a lot better than <a href="https://blog.givewell.org/2007/12/31/i-had-a-lapse-in-judgment-did-a-horrible-thing-and-i-apologize/">GiveWell&#x27;s Holden Karnofsky circa 2007</a>.)</p> aarongertler cThGt9waXED2iJr3k 2019-03-19T23:34:33.304Z Comment by aarongertler on Concept: EA Donor List. To enable EAs that are starting new projects to find seed donors, especially for people that aren’t well connected https://forum.effectivealtruism.org/posts/j3wJykgREb6muqvYh/concept-ea-donor-list-to-enable-eas-that-are-starting-new#LZeBDAipMfD4zi5xE <p>I agree with this point. Even in the startup world, where due diligence is common, most projects fail after spending a lot of money, achieving very little impact in the process. 
</p><p>In the case of EA projects, even a project that doesn&#x27;t have negative value can still lead to a lot of &quot;waste&quot;: There&#x27;s a project team that spent time working on something that failed (though perhaps they got useful experience) and one or more donors who didn&#x27;t get results. </p><p><a href="https://www.openphilanthropy.org/blog/hits-based-giving">Hits-based giving</a> (which focuses on big successes even at the cost of some failure) is a useful approach, but in order for that to work, you do need a project that can at least <em>plausibly </em>be a hit, and no idea is strong enough to create that level of credibility by itself. Someone needs to get to know the team&#x27;s background and skills, understand their goals, and consider the reasons that they might not reach those goals.</p><p>Side note: I hope that anyone who independently funds an EA project considers writing a post about their decision, as <a href="https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report">Adam Gleave</a> did after winning the 2017 donor lottery.</p> aarongertler LZeBDAipMfD4zi5xE 2019-03-19T23:17:33.281Z Comment by aarongertler on Justice, meritocracy, and triage https://forum.effectivealtruism.org/posts/zC7ZZNLzMcnC9uriR/justice-meritocracy-and-triage#oqMFcxHiQrXEaauTo <p>I like the use of the &quot;non-X&quot; concept (which is new to me) to explore post-scarcity, a topic that has been talked about a lot within EA. 
Something like a universal basic income has a lot of popular support among members of this community, and there&#x27;s a lot of writing on &quot;how good the world could be, if we do things right and don&#x27;t experience a catastrophe from which we can&#x27;t recover&quot;.</p><p>Some resources you might like, if you haven&#x27;t seen them yet:</p><ul><li>Eliezer Yudkowsky&#x27;s &quot;<a href="https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence">Fun Theory Sequence</a>&quot;</li><li>The Future of Life Institute&#x27;s &quot;<a href="https://futureoflife.org/2018/12/21/planning-for-existential-hope/">Planning for Existential Hope</a>&quot;</li></ul> aarongertler oqMFcxHiQrXEaauTo 2019-03-19T23:04:03.587Z Comment by aarongertler on A guide to improving your odds at getting a job in EA https://forum.effectivealtruism.org/posts/btgGxzFaHaPZr7RQm/a-guide-to-improving-your-odds-at-getting-a-job-in-ea#oBAYYJ75N64xRrAhE <p>I agree with Denise&#x27;s concerns about the time involved in following these suggestions, but I also think there are good lessons worth pointing out here. Some notes:</p><ul><li>Consider that &quot;EA organization&quot; refers to a very small group of nonprofits, which collectively hire... 50 people each year? Remove GiveWell and the Open Philanthropy Project (which have their own detailed guidelines on what they look for in applicants), and I&#x27;d guess that the number drops by half or more. Many of the positions recommended by 80,000 Hours require deep expertise in a particular topic; research and volunteering can help, but questions of general EA knowledge/experience aren&#x27;t likely to be as important. If you want to work on AI alignment, focus on reading <a href="https://humancompatible.ai/bibliography">CHAI&#x27;s bibliography</a> rather than, say, the EA Forum.</li><li>As far as volunteering, research, and other projects go, quality &gt; quantity. 
Years of reading casually about EA and posting on social media don&#x27;t hurt, but these factors aren&#x27;t nearly as important as a work reference who raves about your skills as a volunteer, or a Forum post that makes a strong contribution to the area you want to work on. </li></ul><blockquote>If you want an operations job and you wrote a blog post about the comparison of top online operational resource courses, then you are a person EA organisations are interested in talking to. </blockquote><p>This only holds true if the post was <em>useful</em>, helping EA orgs solve a problem they had or getting strong positive feedback from people who used it to select a course. There&#x27;s a lot of writing in the EA blogosphere; much of it is great, but some posts just never find an audience. Again, quality &gt; quantity; better to spend a lot of time figuring out which post idea is likely to have the most impact, then working on the best version you can produce, than to publish a lot of posts you didn&#x27;t have the time to think about as carefully. </p><p>(This doesn&#x27;t mean that the Forum itself doesn&#x27;t encourage unpolished work -- <a href="https://forum.effectivealtruism.org/about">we&#x27;re happy to see your ideas!</a> -- but that the writing most likely to <em>demonstrate your practical skills </em>is writing that you&#x27;ve polished.)</p><p>--</p><p>As an aside: I&#x27;m not a career coach by any means, but I&#x27;ve worked in EA operations and EA content, and I&#x27;ve talked to a lot of different organizations about what they look for in applicants. 
If you have particular questions about applying to an org in/adjacent to EA, you&#x27;re welcome to comment here or <a href="mailto:aaron.gertler@centreforeffectivealtruism.org">email me</a> (though it&#x27;s possible that my advice will consist of &quot;ask these questions to the organization&quot; or &quot;read this article they wrote about what they want&quot;).</p><p>--</p><p><em>I work for CEA, but these views are my own.</em></p> aarongertler oBAYYJ75N64xRrAhE 2019-03-19T22:44:58.909Z Comment by aarongertler on [Link] A Modest Proposal: Eliminate Email https://forum.effectivealtruism.org/posts/qCwyhBuAMbjCegjtC/link-a-modest-proposal-eliminate-email#AZtafNsvfD9oJr5Qk <p>Slack&#x27;s not perfect, but here are some features I like:</p><ul><li>Emotes let you &quot;respond&quot; to a message in less than a second with zero typing. At CEA, we have an &quot;eyes&quot; emote that means &quot;I&#x27;ve seen this message&quot;, which saves me 30 seconds over sending a &quot;thanks for sending this, I&#x27;ve read it&quot; email. We have lots of other emotes that stand in for other kinds of quick messages. I send a <em>lot </em>less email at CEA than I did in my most recent corporate job, at a tech firm with pretty standard messaging practices.</li><li>Channels act as a proactive sorting system. CEA has an &quot;important&quot; channel for time-sensitive things that everyone should read and a &quot;general&quot; channel for things that everyone should read, but that aren&#x27;t time-sensitive. If all the messages on those channels were emails, I&#x27;d wind up reading them all as they came in, but in Slack I can ignore most of them until I hit the time in my day when I want to catch up on messages, without spending any energy on sorting.</li></ul><p>Slack also has a feature that lets you set &quot;statuses&quot; in the same way the HBR article discusses (e.g. 
&quot;working on important thing, available after 4:00 pm&quot;), which takes less time than writing an auto-reply and also doesn&#x27;t add dozens of automated emails to other people&#x27;s inboxes when they try contacting you.</p> aarongertler AZtafNsvfD9oJr5Qk 2019-03-18T20:24:32.499Z Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA https://forum.effectivealtruism.org/posts/GqXQPy9duht3RL9pw/the-importance-of-truth-oriented-discussions-in-ea#4Lc8tQT3RgkdAuZs9 <p>1. I&#x27;d really recommend finding a different phrase than &quot;low levels of emotional control&quot;, which is both more insulting than seems ideal for conversations in an EA context and too vague to be a useful descriptor. (There are dozens of ways that &quot;controlling one&#x27;s emotions&quot; might be important within EA, and almost no one is &quot;high&quot; or &quot;low&quot; for all of them.) </p><p>2. &quot;Less welcoming for everyone else&quot; is too broad. Accommodating people who prefer some topics not be brought up certainly makes EA less welcoming for <em>some</em> people: Competing access needs are real, and a lot of people aren&#x27;t as comfortable with discussions where emotions aren&#x27;t as controlled, or where topics are somewhat limited. </p><p>But having &quot;high emotional control&quot; (again, I&#x27;d prefer a different term) doesn&#x27;t necessarily mean feeling unwelcome in discussions with people who are ideological or &quot;less controlled&quot; in some contexts. </p><p>One of the features I like most in a community is &quot;people try to handle social interaction in a way that has the best average result for everyone&quot;. </p><p>I&#x27;d consider &quot;we figure out true things&quot; to be the most important factor we should optimize for, and our discussions should aim for &quot;figuring stuff out&quot;. 
But that&#x27;s not the <em>only </em>important result; another factor is &quot;we all get along and treat each other well&quot;, because there&#x27;s value in EA being a well-functioning community of people who are happy to be around each other. If having a topic consistently come up in conversation is draining and isolating to some members of the community, I think it&#x27;s reasonable that we have a <em>higher bar</em> for that topic.</p><p>This doesn&#x27;t mean abandoning global poverty because people think it seems colonialist; it might mean deciding that someone&#x27;s <a href="https://forum.effectivealtruism.org/posts/GqXQPy9duht3RL9pw/the-importance-of-truth-oriented-discussions-in-ea#iJo5ybiF3ta6c2DqK">Mormon manifesto</a> doesn&#x27;t pass the bar for &quot;deserves careful, point-by-point discussion&quot;. That isn&#x27;t very inclusive to the manifesto&#x27;s author, but it seems very likely to increase EA&#x27;s overall inclusiveness.</p> aarongertler 4Lc8tQT3RgkdAuZs9 2019-03-18T20:16:37.799Z Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA https://forum.effectivealtruism.org/posts/GqXQPy9duht3RL9pw/the-importance-of-truth-oriented-discussions-in-ea#jQALmCMZuKwfRZwSJ <p><em>I work for CEA, but the following views are my own. I don&#x27;t have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.</em></p><p>--</p><blockquote>While we all have topics on which our emotions get the better of us, those who leave are likely to be overcome to a greater degree and on a wider variety of topics. This means that they will be less likely to be able to contribute productively by providing reasoned analysis. 
But further than this, they are more likely to contribute negatively by being dismissive, producing biased analysis or engaging in personal attacks.</blockquote><p>I don&#x27;t really care how likely someone is to be &quot;overcome&quot; by their emotions during an EA discussion, aside from the way in which this makes them feel (I want people in EA, like people everywhere, to flourish). </p><p>Being &quot;overcome&quot; and being able to reason productively seem almost orthogonal in my experience; some of the most productive people I&#x27;ve met in EA (and some of the nicest!) tend to have unusually strong emotional reactions to certain topics. There are quite a few EA blogs that alternate between &quot;this thing made me very angry/sad&quot; and &quot;here&#x27;s an incredibly sophisticated argument for doing X&quot;. There&#x27;s some validity to trying to increase the net percentage of conversation that isn&#x27;t too emotionally inflected, but my preference would be to accommodate as many productive/devoted people as we can until it begins to trade off with discussion quality. I&#x27;ve seen no evidence that we&#x27;re hitting this trade-off to an extent that demands we become less accommodating. </p><p>(And of course, biased analysis and personal attacks can be handled when they arise, without our needing to worry about being too inclusive of people who are &quot;more likely&quot; to contribute those things.)</p><blockquote>The people who leave are likely to be more ideological. This is generally an association between being more radical and more ideological, even though there are also people who are radical without being ideological. People who are more ideological are less able to update in the face of new evidence and are also less likely to be able to provide the kind of reasoned analysis that would cause other EAs to update more towards their views.</blockquote><p>See the previous point. 
I don&#x27;t mind having ideological people in EA if they share the community&#x27;s core values. If their commitment to an ideology leads them to stop upholding those values, we can respond to that separately. If they can provide reasoned analysis on Subject A while remaining incorrigibly biased on Subject B, I&#x27;ll gladly update on the former and ignore the latter. (Steven Pinker disagrees with many EAs quite sharply on X-risk, but most of his last book was<a href="https://forum.effectivealtruism.org/posts/gQvaA9EbvmzQATHt3/book-review-enlightenment-now-by-steven-pinker"> <u>great</u></a>!) </p> aarongertler jQALmCMZuKwfRZwSJ 2019-03-18T08:07:08.377Z Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA https://forum.effectivealtruism.org/posts/GqXQPy9duht3RL9pw/the-importance-of-truth-oriented-discussions-in-ea#FnpHwmiymf9JY9M69 <p><em>I work for CEA, but the following views are my own. I don&#x27;t have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.</em></p><p>-- </p><blockquote>Even when there is a cost to participating, someone who considers the topic important enough can choose to bear it.</blockquote><p>This isn&#x27;t always true, unless you use a circular definition of &quot;important&quot;. As written, it implies that anyone who can&#x27;t bear to participate must not consider the topic &quot;important enough&quot;, which is empirically false. Our capacity to do any form of work (physical or mental) is never <em>fully</em> within our control. The way we react to certain stimuli (sights, sounds, ideas) is never <em>fully </em>within our control. If we decided to render all the text on the EA Forum at a 40-degree angle, we&#x27;d see our traffic drop, and the people who left wouldn&#x27;t just be people who didn&#x27;t think EA was sufficiently &quot;important&quot;. 
</p><p>In a similar vein:</p><blockquote>The more committed [you are] to a cause, the more you are willing to endure for it. We agree with CEA that committed EAs are several times more valuable than those who are vaguely aligned, so that we should [be] optimising the movement for attracting more committed members.</blockquote><p>Again, this is too simplistic. If we could have 100 members who committed 40 hours/week or 1000 members who committed 35 hours/week, we might want to pursue the second option, even if we weren&#x27;t &quot;optimizing for attracting more committed members&quot;. (I don&#x27;t speak for CEA here, but it seems to me like &quot;optimize the amount of total high-fidelity and productive hours directed at EA work&quot; is closer to what the movement wants, and even that is only partly correlated with &quot;create the best world we can&quot;.) </p><p>You could also argue that &quot;better&quot; EAs tend to take ideas more seriously, that having a strong negative reaction to a dangerous idea is a sign of seriousness, and that we should therefore be trying very hard to accommodate people who have reportedly had very negative reactions to particular ideas within EA. This would also be too simplistic, but there&#x27;s a kernel of truth there, just as there is in your statement about commitment.</p><blockquote>Even if limiting particular discussions would clearly be good, once we’ve decided to limit discussions at all, we’ve opened the door to endless discussion and debate about what is or is not unwelcoming (see<a href="https://www.lesswrong.com/posts/7xGcyB7RNdfDe5vxL/moderator-s-dilemma-the-risks-of-partial-intervention"> <u>Moderator’s Dilemma</u></a>). And ironically, these kinds of discussions tend to be highly partisan, political and emotional. </blockquote><p>The door is already open. 
There are dozens of preexisting questions about which forms of discussion we should permit within EA, on specifically the EA Forum, within any given EA cause area, and so on. Should we limit fundraising posts? Posts about personal productivity? Posts that use obscene language? Posts written in a non-English language? Posts that give investing advice? Posts with graphic images of dying animals? I see &quot;posts that discuss Idea X&quot; as another set of examples in this very long list. They may be more popular to argue about, but that doesn&#x27;t mean we should agree never to limit them just to reduce the incidence of arguments.</p><blockquote>We note that such a conclusion would depend on an exceptionally high quantity of alienating discussions, and is prima facie incompatible with the generally high rating for welcomingness reported in the<a href="https://forum.effectivealtruism.org/posts/eoCexTGET3eFQz3w2/ea-survey-2018-series-how-welcoming-is-ea"> <u>EA survey</u></a>. We note that there are several possible other theories.</blockquote><p>I don&#x27;t think the authors of the<a href="https://forum.effectivealtruism.org/posts/nqgE6cR72kyyfwZNL/making-discussions-in-ea-groups-inclusive"> <u>Making Discussions Inclusive</u></a> post would disagree. I don&#x27;t see any conclusion in that post that alienating discussions are the main factor in the EA gender gap; all I see is the claim, with some evidence from a poll, that alienating discussions are <em>one </em>factor, along with suggestions for reducing the impact of that particular factor.</p><blockquote>It is worthwhile considering the example of Atheism Plus, an attempt to insist that atheists also accept the principles of social justice. 
This was incredibly damaging and destructive to the atheist movement due to the infighting that it led to and was perhaps partly responsible for the movement’s decline.</blockquote><p>I don&#x27;t have any background on Atheism Plus, but as a more general point: Did the atheism movement actually decline? While the r/atheism subreddit is now ranked #57 by subscriber count (as of 13 March 2019) rather than #38 (<u><a href="https://web.archive.org/web/20150704000940/http://redditlist.com/">4 July 2015</a></u>), the American atheist population seems to have been<a href="https://en.wikipedia.org/wiki/Atheism_in_the_United_States"> <u>fairly flat since 1991</u></a>, and British irreligion is at an<a href="https://www.independent.co.uk/news/uk/home-news/british-people-atheist-no-religion-uk-christianity-islam-sikism-judaism-jewish-muslims-a7928896.html"> <u>all-time high</u></a>. Are there particular incidents (organizations shutting down, public figures renouncing, etc.) that back up the &quot;decline&quot; narrative? (I would assume so, I&#x27;m just unfamiliar with this topic.)</p> aarongertler FnpHwmiymf9JY9M69 2019-03-18T08:04:57.545Z Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA https://forum.effectivealtruism.org/posts/GqXQPy9duht3RL9pw/the-importance-of-truth-oriented-discussions-in-ea#vS6q9WFTtzv6ZcZyE <p><em>I work for CEA, but the following views are my own. I don&#x27;t have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.</em></p><p>--</p><p>There were some things I liked about this post, but my comments here will mostly involve areas where I disagree with something. 
Still, criticism notwithstanding:</p><ul><li>I appreciate the moves the post makes toward being considerate (the content note, the emphasis on not calling out individuals).</li><li>Two points from the post that I think are generally correct and somewhat underrated in debates around moderation policy: You can&#x27;t please everyone, and power relations within particular spaces can look very different than power relations outside of those spaces. This also rang true (though I consider it a good thing for certain &quot;groups&quot; to be disempowered in public discussion spaces):</li></ul><blockquote>There is a negative selection effect in that the more that a group is disempowered and could benefit from having its views being given more consideration, the less likely it is to have to power to make this happen.</blockquote><ul><li>The claim that we should not have &quot;limited discussions&quot; is closing the barn door after the horse is already out. The EA Forum, like almost every other discussion space, has limits already. Even spaces that don&#x27;t limit &quot;worldly&quot; topics may still have meta-limits on style/discourse norms (no personal attacks, serious posts only, etc.). Aside from (maybe?) 4Chan, it&#x27;s hard to think of well-known discussion spaces that truly have no limits. 
For example, posts on the EA Forum:</li><ul><li>Can&#x27;t advocate the use of violence.</li><li>Are restricted in the types of criticism they can apply: &quot;We should remove Cause X from EA because its followers tend to smell bad&quot; wouldn&#x27;t get moderator approval, even if no individually smelly people were named.</li></ul></ul><p>--</p><p>While I don&#x27;t fully agree with every claim in<a href="https://forum.effectivealtruism.org/posts/nqgE6cR72kyyfwZNL/making-discussions-in-ea-groups-inclusive"> <u>Making Discussions Inclusive</u></a>, I appreciated the way that its authors didn&#x27;t call for an outright ban on any particular form of speech -- instead, they highlighted the ways that speech permissions may influence other elements of group discussion, and noted that groups are making trade-offs when they figure out how to handle speech.</p><p>This post also mostly did this, but occasionally slipped into more absolute statements that don&#x27;t quite square with reality (though I assume one is meant to read the full post while keeping the word &quot;usually&quot; in mind, to insert in various places). An example:</p><blockquote>We believe that someone is excluded to a greater degree when they are not allowed to share their sincerely held beliefs than when they are merely exposed to beliefs that they disagree with.</blockquote><p>This seems simplistic. The reality of &quot;exclusion&quot; depends on which beliefs are held, which beliefs are exposed, and the overall context of the conversation. I&#x27;ve seen conversations where someone shoehorned their &quot;sincerely held beliefs&quot; into a discussion to which they weren&#x27;t relevant, in such an odious way that many people who were strained on various resources (including &quot;time&quot; and &quot;patience&quot;) were effectively forced out. Perhaps banning the shoehorning user would have excluded them to a &quot;greater degree&quot;, but their actions excluded a lot of people, even if to a &quot;lesser degree&quot;.
Which outcome would have been worse? It&#x27;s a complicated question.</p><p>I&#x27;d argue that keeping things civil and on-topic is frequently <em>less</em> exclusionary than allowing total free expression, especially as conversations grow, because some ideas/styles are repellent to almost everyone. If someone insists on leaving multi-page comments with Caps Lock on in every conversation within a Facebook group, I&#x27;d rather ask them to leave than ask the annoyed masses to grit their teeth and bear it.</p><p>This is an extreme example, of course, so I&#x27;ll follow it with a real-world example from another discussion space I frequent: Reddit.</p><p>On the main Magic: The Gathering subreddit, conversations about a recent tournament winner (a non-binary person) were frequently interrupted by people with strong opinions about the pronoun &quot;they&quot; being &quot;confusing&quot; or &quot;weird&quot; to use for a single person.</p><p>This is an intellectual position that may be worth discussing in other contexts, but in the context of these threads, it appeared hundreds of times and made it much more tedious to pick out actual Magic: The Gathering content. Within days, these users were being kicked out by moderators, and the forum became more readable as a result, to what I&#x27;d guess was the collective relief of a large majority of users.</p><p>--</p><p>The general point I&#x27;m trying to make: </p><p><em><strong>&quot;Something nearly everyone dislikes&quot; is often going to be worth excluding even from the most popular, mainstream discussion venues.</strong></em></p><p>In the context of EA, conversations that are genuinely about effective do-gooding should be protected, but I don&#x27;t think several of your examples really fit that pattern:</p><ul><li>Corruption in poor countries being caused by &quot;character flaws&quot; seems like a non sequitur. 
</li><ul><li>When discussing ways to reduce corruption, we can talk about history, RCT results, and economic theory -- but why personal characteristics? </li><li>Even if it were the case that people in Country A were somehow more &quot;flawed&quot; than people in Country B, this only matters if it shows up in our data, and at that point, it’s just a set of facts about the world (e.g. “government officials in A are more likely to demand bribes than officials in B, and bribery demands are inversely correlated with transfer impact, which means we should prefer to fund transfers in B”). I don&#x27;t see the point of discussing the venality of the A-lish compared to the B-nians separately from actual data.</li></ul><li>I think honest advocates for cash-transfer RCTs could quite truthfully state that they aren&#x27;t trying to study whether poor people are &quot;lazy&quot;. Someone&#x27;s choice not to work doesn&#x27;t have to be the target of criticism, even if it influences the estimated benefit of a cash transfer to that person. 
It&#x27;s also possible to conclude that poor people discount the future without attaching the &quot;character flaw&quot; label.</li><ul><li>Frankly, labels like this tend to muddy discussion more than they help, obscuring actual data and creating fake explanations (&quot;poor people don&#x27;t care as much about the future, which is bad&quot; &lt; &quot;poor people don&#x27;t care as much about the future, but this is moderated by factors A and B, and is economically rational if we factor in C, and here&#x27;s a model for how we can encourage financial planning by people at different income levels&quot;).</li><li>The same problem applies to your discussion of female influence and power; whether or not a person&#x27;s choices have led them to have less power seems immaterial to understanding which distributions of power tend to produce the best outcomes, and how particular policies might move us toward the best distributions.</li></ul></ul><p>To summarize the list of points above: In general, discussions of whether a state of the world is &quot;right&quot;, or whether a person is &quot;good&quot; or &quot;deserving&quot;, don&#x27;t make for great EA content. While I wouldn&#x27;t prohibit them, I think they are far more tempting than they are useful, and that we should almost always try to use &quot;if A, then B&quot; reasoning rather than &quot;hooray, B!&quot; reasoning.</p><p>Of course, &quot;this reasoning style tends to be bad&quot; doesn&#x27;t mean &quot;prohibit it entirely&quot;. But it makes the consequences of limiting speech topics seem a bit less damaging, compared to what we could gain by being more inclusive. (Again, I don’t actually think we <em>should </em>add more limits in any particular place, including the EA Forum. I’m just pointing out considerations that other EAs might want to weigh when they think about these topics.) 
</p> aarongertler vS6q9WFTtzv6ZcZyE 2019-03-18T08:03:05.226Z Open Thread #44 https://forum.effectivealtruism.org/posts/j4ASyayDWFwgEEzid/open-thread-44 <p>Use this thread to post things that are awesome, but not awesome enough to be full posts. Consider giving your comment a brief title to improve readability. </p><p>(<a href="https://forum.effectivealtruism.org/posts/jrN4CHJooBm3KCfBK/open-thread-43">Here&#x27;s the last Open Thread, for reference.</a>)</p> aarongertler j4ASyayDWFwgEEzid 2019-03-06T09:27:58.701Z EA Forum Prize: Winners for January 2019 https://forum.effectivealtruism.org/posts/k7j7oxMcHsun2nC5H/ea-forum-prize-winners-for-january-2019 <p> CEA is pleased to announce the winners of the January 2019 EA Forum Prize!</p><p>In first place (for a prize of $999): &quot;<u><a href="https://forum.effectivealtruism.org/posts/hP6oEXurLrDXyEzcT/ea-survey-2018-series-cause-selections">EA Survey 2018 Series: Cause Selections</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/david_moss">David_Moss</a></u>, <u><a href="https://forum.effectivealtruism.org/users/incogneilo18">Neil_Dullaghan</a></u>, and Kim Cuddington.</p><p>In second place (for a prize of $500): &quot;<u><a href="https://forum.effectivealtruism.org/posts/Ns3h8rCtsTMgFZ9eH/ea-giving-tuesday-donation-matching-initiative-2018">EA Giving Tuesday Donation Matching Initiative 2018 Retrospective</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/avin">AviNorowitz</a></u>.</p><p>In third place (for a prize of $250): &quot;<u><a href="https://forum.effectivealtruism.org/posts/jn7TwAtFsHLW3jnQK/eagx-boston-2018-postmortem">EAGx Boston 2018 Postmortem</a></u>”, by <u><a href="https://forum.effectivealtruism.org/users/mjreard">Mjreard</a></u>.</p><p>We also awarded prizes in <u><a href="https://forum.effectivealtruism.org/posts/k4SLFn74Nsbn4sbMA/ea-forum-prize-winners-for-november-2018">November</a></u> and <u><a 
href="https://forum.effectivealtruism.org/posts/gsNDoqpB2pWq5yYLv/ea-forum-prize-winners-for-december-2018">December</a></u>.</p><h2>What is the EA Forum Prize?</h2><p>Certain posts exemplify the kind of content we <u><a href="https://forum.effectivealtruism.org/about">most want to see</a></u> on the EA Forum. They are well-researched and well-organized; they care about <u><a href="https://ideas.ted.com/why-you-think-youre-right-even-when-youre-wrong/">informing readers, not just persuading them</a></u>.</p><p>The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum&#x27;s users.</p><h2>The voting process</h2><p>All posts published in the month of January qualified for voting, save for those written by CEA staff and Prize judges.</p><p>Prizes were chosen by five people. Three of them are the Forum&#x27;s moderators (<u><a href="https://forum.effectivealtruism.org/users/aarongertler">Aaron Gertler</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>).</p><p>The others were two of the three highest-karma users at the time the new Forum was launched (<u><a href="https://forum.effectivealtruism.org/users/peter_hurford">Peter Hurford</a></u> and <u><a href="https://forum.effectivealtruism.org/users/joey">Joey Savoie</a></u> — <u><a href="https://forum.effectivealtruism.org/users/robert_wiblin">Rob Wiblin</a></u> took this month off).</p><p>Voters recused themselves from voting for content written by their colleagues. 
Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.</p><p>Winners were chosen by an initial round of <u><a href="https://en.wikipedia.org/wiki/Approval_voting">approval voting</a></u>, followed by a runoff vote to resolve ties.</p><h2>About the January winners</h2><p>“<u><a href="https://forum.effectivealtruism.org/posts/hP6oEXurLrDXyEzcT/ea-survey-2018-series-cause-selections">EA Survey 2018 Series: Cause Selections</a></u>”, like the other posts in that series, makes important data from the EA Survey much easier to find. The summary and use of descriptive headings both increase readability, and the methodological details help to put the post’s numbers in context.</p><p>As a movement, we collect a lot of information about ourselves, and it’s really helpful when authors report that information in a way that makes it easier to understand. All the posts in this series are worth reading if you want to learn about the EA community.</p><p>--</p><p>The EA Giving Tuesday program shows what a team of volunteers can do when they notice an opportunity — and how much more good can be done when those volunteers actively work to improve their project (in this case, they increased the matching funds they obtained by a factor of 10 between 2017 and 2018). </p><p>“<u><a href="https://forum.effectivealtruism.org/posts/Ns3h8rCtsTMgFZ9eH/ea-giving-tuesday-donation-matching-initiative-2018">EA Giving Tuesday Donation Matching Initiative 2018 Retrospective</a></u>” illustrates this well, taking readers through the setup and self-improvement processes of the Initiative, in a way that offers lessons for any number of other projects. </p><p>Documentation like this is important for keeping a project going even if a key contributor stops being available to work on it. 
We hope that others will learn from the EA Giving Tuesday example to create such documents (and Lessons Learned sections) for their own projects.</p><p>—</p><p>“<u><a href="https://forum.effectivealtruism.org/posts/jn7TwAtFsHLW3jnQK/eagx-boston-2018-postmortem">EAGx Boston 2018 Postmortem</a></u>” is a well-designed guide to running a small EA conference, which explains many important concepts in a clear and practical way using stories from a particular event. </p><p>Notable features of the post:</p><ul><li>The author links directly to materials they used for the event (like a template for inviting speakers), helping other organizers save time by giving them something to build on.</li><li>The takeaway section for each subtopic helps readers find the knowledge they want, whether they&#x27;re planning a full event or are just curious to see how another conference handled food.</li></ul><p>I personally expect to share this postmortem whenever someone asks me about running an EA event (whether with 20 people or 200), and I hope to see an updated version after this year’s EAGx Boston!</p><h2>The future of the Prize</h2><p>When we launched the EA Forum Prize, we planned on running the program for three months before deciding whether to keep awarding monthly prizes. We still aren’t sure whether we’ll do so. Our goals for the program were as follows:</p><ol><li>Create an incentive for authors to put more time and care into writing posts.</li><li>Collect especially well-written posts to serve as an example for other authors.</li><li>Offer readers a selection of curated posts (especially those who don’t have time to read most of the content published on the Forum).</li></ol><p><strong>If you have thoughts on whether the program should continue, please let us know in the comments, or by contacting <u><a href="mailto:aaron@centreforeffectivealtruism.org">Aaron Gertler</a></u>. 
</strong>We’d be especially interested to hear whether the existence of the Prize has led you to write anything you might not have written otherwise, or to spend more time on a piece of writing.</p> aarongertler k7j7oxMcHsun2nC5H 2019-02-22T22:27:50.161Z The Narrowing Circle (Gwern) https://forum.effectivealtruism.org/posts/WF5GDjLQgLMjaXW6B/the-narrowing-circle-gwern <p><em>Content note: Discussion of infanticide and sexual violence.</em></p><p><em>Views I express in this essay are my own, unrelated to CEA.</em></p><hr class="dividerBlock"/><p><strong>Summary: </strong>Have our moral &quot;circles&quot; really expanded over time? While some groups get more moral consideration than they once did, others get less, or see their moral status shift back and forth. Gwern questions how much &quot;progress&quot; we&#x27;ve really made over the years, as opposed to mere shifts between the groups we care about. </p><hr class="dividerBlock"/><p>In <em><a href="https://www.gwern.net/The-Narrowing-Circle">The Narrowing Circle</a></em>, Gwern speculates that what we see as broad moral progress may instead be a series of moral <em>shifts</em>, embracing new beings/ideas and rejecting old ones in a way that isn&#x27;t as predictable or linear as &quot;expanding circle&quot; theory might hold.</p><p>I highly recommend reading the original essay, but here&#x27;s a brief summary of Gwern&#x27;s main points.</p><h2>Is there an expanding circle?</h2><ul><li><a href="http://www.amazon.com/The-Expanding-Circle-Evolution-Progress/dp/0691150699/?tag=gwernnet-20">Peter Singer</a> proposed that people tend to include more and more beings in their &quot;circle&quot; of moral regard over time. <a href="https://quoteinvestigator.com/2012/11/15/arc-of-universe/">Many others</a> hold a similar view (&quot;the arc of the moral universe is long, but it bends toward justice&quot;)</li><li>However, it&#x27;s easy to see patterns appear in random data. 
Between that phenomenon and confirmation bias, we should be careful not to jump too eagerly to an &quot;expanding circle&quot; explanation without considering that we could be ignoring beings that have been <em>excluded </em>from moral regard, perhaps because we no longer even <em>consider </em>those beings as potential inclusions.</li><li>Another question (not explored too deeply in this essay): Have we become more moral, or do we simply live in a world that is less morally challenging? It may be easier to feel compassion when we are rich and at peace, but if a truly threatening war broke out, would we become as bloodthirsty as ever? (We may not <em>believe </em>in witches, but if we did believe in witches, as our ancestors did, would we still execute<em> </em>them?)</li></ul><h2>How have we narrowed the circle?</h2><p><strong>Religion</strong></p><p>Compared to people in the past, people in the present hold very little regard (on average) for supernatural entities. This isn&#x27;t always because of atheism or agnosticism; many people claim to be religious but also make little or no effort to &quot;keep the faith&quot;. Has our disregard for the gods outpaced our disbelief?</p><p>This disregard extends to the case of &quot;sacred animals&quot;. Not only have we dramatically scaled up factory farming; we have also (on a smaller scale) removed &quot;protected&quot; status from certain categories of animals that had holy significance in the past. (We&#x27;ve also <a href="https://daily.jstor.org/when-societies-put-animals-on-trial/">stopped putting animals on trial</a>, though this seems to me like a separate phenomenon.)</p><p><strong>Infanticide</strong></p><p>Infants and the unborn have seen their moral status shift back and forth around the world and through the centuries. 
Some societies regularly cast out unwanted infants (or even mandated the killing of infants in some cases); others banned abortion from the time of conception.</p><blockquote>If one accepts the basic premise that a fetus is human, then the annual rate (as pro-life activists never tire of pointing out) of millions of abortions worldwide would negate centuries of moral progress. If one does not accept the premise, then per C.S. Lewis, we have change in facts as to what is human, but nothing one could call an expanding circle. </blockquote><p><strong>Disability</strong></p><p>In many ways, we take much better care of people with disabilities than we did in past eras. In other ways, we&#x27;ve come up with new reasons to exclude people; modern society may discriminate more viciously than past societies on the basis of weight or facial appearance. (I&#x27;ll add a quote from Aeon: &quot;<a href="https://aeon.co/essays/there-is-no-shame-worse-than-poor-teeth-in-a-rich-world">There is no shame worse than poor teeth in a rich world.</a>&quot;)</p><p><strong>Judicial Torture</strong></p><p>Many states, in both the East and West, have moved back and forth on policies related to the torture of prisoners and dissidents. We no longer hang prisoners in front of cheering crowds, but we lock tens of thousands of people in solitary confinement and make jokes about the sexual abuse of prisoners. (I&#x27;ll also note that society constantly redefines what a &quot;crime&quot; is; we&#x27;re much nicer to thieves than we once were, and probably harsher toward drug users.)</p><blockquote>Let’s not talk about how one is sentenced to jail in the first place; <a href="https://twitter.com/HunterFelt/status/317495942829965313">Hunter Felt</a>: Your third arrest, you <a href="https://en.wikipedia.org/wiki/Three-strikes%20law">go to jail for life</a>. Why the third? Because in <a href="https://en.wikipedia.org/wiki/Baseball">a game</a> a guy gets three times to swing a stick at a ball. 
</blockquote><p><strong>Ancestors</strong></p><p>We do a poor job of respecting the wishes of the dead, even when those people have made reasonable and non-harmful plans for the use of their assets (many trusts put away for charity are torn apart by lawyers and heirs).</p><blockquote>More dramatically, we dishonor our ancestors by neglecting their graves, by not offering any sacrifices or even performing any rituals, by forgetting their names (can you name your great-grandparents?), by selling off the family estate when we think the market has hit the peak, and so on. </blockquote><p>Gwern argues, convincingly, that people in the past were much more respectful in this sense (perhaps a useless gesture to those no longer able to receive it, but might it not have been a comfort to those who died long ago to know that they would be remembered, respected, even revered?).</p><p><strong>Descendants</strong></p><p>This is fairly standard EA material about planning for the long term, and as such is slightly out of date (&quot;there are no explicit advocates for futurity&quot;). But we are a tiny group within society, and when I think about the majority of living people outside of EA, this rings true:</p><blockquote>Has the living’s concern for their descendants, the inclusion of the future into the circle of moral concern, increased or decreased over time? Whichever one’s opinion, I submit that the answer is shaky and not supported by excellent evidence. </blockquote><h2>My thoughts</h2><p><em>I make no claim that any of these views are original, but I&#x27;m trying to note things I didn&#x27;t see in Gwern&#x27;s essay.</em></p><p>When we cease to grant moral regard to certain groups, it seems to happen for one or more of the following reasons:</p><p>1. We no longer view them as &quot;possible&quot; targets for moral regard (e.g. the gods, to an atheist)</p><p>2. 
While we acknowledge that they are &quot;possible&quot; targets, our modern morality doesn&#x27;t really &quot;cover&quot; them (e.g. fetuses, to some in the pro-choice movement, though this issue is complicated, nearly everyone wants fewer abortions, and each &quot;side&quot; in the debate holds a wide range of views about what to do and why)</p><p>3. We&#x27;ve learned new ways to take advantage of them (e.g. animals, in the case of factory farming)</p><p>4. We&#x27;ve genuinely become more antagonistic toward them (e.g. the view of Muslims by certain groups since 2001; the treatment of American prisoners)</p><hr class="dividerBlock"/><p>It seems to me as though (1) generally doesn&#x27;t interfere with the notion of the expanding circle. Neither does (3), necessarily; if our ancestors knew how to establish factory farms, I assume they would have done so, since they were no strangers to animal cruelty (e.g. bear-baiting, gladiatorial combat).</p><p>(2) does complicate things, and while I favor expanding abortion rights, I&#x27;m not sure I&#x27;d think of them as a facet of the &quot;expanding circle&quot; in the same way as I do the expansion of civil rights for certain groups. And (4) implies that the expanding circle can, under the right circumstances, <em>shrink</em>, due to the same kinds of mass movements and meme-spreading that characterize expansion of the circle.</p><p>For example, it&#x27;s often argued that knowing a gay person makes you more likely to favor gay rights; as more people come out of the closet, more people know that they have gay friends and relatives, and support for gay rights spreads rapidly. </p><p>Could the opposite be true for prisoners? As the crime rate shrinks, and people with criminal records become less likely to re-integrate into society, perhaps fewer people know someone who&#x27;s been to prison. Would that make it easier to think of criminals as &quot;the other&quot;, people you&#x27;d never love or befriend? 
</p><p>(On the one hand, incarceration rose in the U.S. during a time of large increases in the crime rate; on the other hand, prison reform seems to have lagged substantially behind reduction in the crime rate, implying that some factor other than a direct &quot;fear of criminals&quot; is in play. Do we simply <em>care less </em>nowadays?)</p><p>This also makes me rethink my position on certain kinds of animal cruelty; as fewer and fewer people live on farms, might we care less and less about the way farm animals are treated?</p> aarongertler WF5GDjLQgLMjaXW6B 2019-02-11T23:50:45.093Z What are some lists of open questions in effective altruism? https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism <p>One very good way to inspire research is to create a list of open questions. I&#x27;m aware of a few resources like this:</p><ul><li><a href="https://forum.effectivealtruism.org/posts/LG6gwxhrw48Dvteej/concrete-project-lists">Richard Batty&#x27;s Concrete Projects List</a> (and some of the comments).</li><li><a href="https://www.openphilanthropy.org/blog/technical-and-philosophical-questions-might-affect-our-grantmaking">Open Phil&#x27;s &quot;questions that might affect our grantmaking&quot; list</a>.</li><li><a href="https://foundational-research.org/open-research-questions/">FRI&#x27;s open research questions</a>.</li><li><a href="https://www.lesswrong.com/posts/kphJvksj5TndGapuh/directions-and-desiderata-for-ai-alignment">Paul Christiano&#x27;s sequence on iterated amplification</a>, which talks about open questions but never quite lists them as such.</li></ul><p>Are there other sets of EA-related open questions that I&#x27;ve left out of this list, and that aren&#x27;t on Richard&#x27;s list?</p><p>Specifically, I&#x27;m looking for questions that could be solved through research or experimentation, rather than &quot;projects&quot; that require competitive execution (&quot;could someone create an 
Amazon for charitable donations?&quot; doesn&#x27;t count, but &quot;what factors lead to someone repeatedly using a donation website?&quot; could).</p> aarongertler dRXugrXDwfcj8C2Pv 2019-02-05T02:23:03.345Z Are there more papers on dung beetles than human extinction? https://forum.effectivealtruism.org/posts/dvCuqKS825AqSm7fN/are-there-more-papers-on-dung-beetles-than-human-extinction <p><strong>Summary: </strong>Yes. But extinction can probably catch up if we put our minds to it.</p><hr class="dividerBlock"/><p>From a <a href="https://www.vox.com/future-perfect/2019/1/3/18165541/extinction-risks-humanity-asteroids-supervolcanos-gamma-rays"><em>Vox</em> article</a> by the wonderful Kelsey Piper:</p><blockquote>&quot;There are <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12002">more academic papers on dung beetles than the fate of <em>H. sapiens</em></a>,&quot; Sandberg writes. That’s a bizarre state of affairs. What’s going on?</blockquote><p>And:</p><blockquote>Mitigating risks requires more careful and thoughtful development of new technology, measures to avoid deployment of unsafe systems, and international coordination to enforce agreements that reduce risk. All of those are uphill battles. No wonder it’s more rewarding to study dung beetles.</blockquote><p>I&#x27;m often suspicious of claims about the state of the world (e.g. the resources devoted to different academic subjects) based on imperfect signals about the state of the world (e.g. search results for paper topics). </p><p>&quot;A Google search for X returns Y results&quot; is a lazy cliche, almost totally uninformative in most places I&#x27;ve seen it used. &quot;Google auto-completes X with Y&quot; <a href="http://slatestarcodex.com/2013/04/04/lies-damned-lies-and-facebook-part-2-of-∞/">isn&#x27;t much better</a>. </p><p>But when Anders Sandberg makes a similar claim, I pay attention. 
Here&#x27;s a graph from the paper Piper picked:</p><span><figure><img src="http://aarongertler.net/wp-content/uploads/2019/02/Dung-beetles.png" class="draft-image " style="" /></figure></span><p>I&#x27;m too lazy to register for Scopus right now, but Google Scholar gives me similar results for 2012. For the rest of this post, I&#x27;ll use &quot;since 2018&quot; as my search window, rather than 2012 -- with EA&#x27;s influence, maybe X-risk is catching up to beetles?</p><hr class="dividerBlock"/><h2>Google Scholar results since 2018</h2><p><em>Data collected on 4 February, 2019.</em></p><p><strong>&quot;Dung beetle&quot; OR &quot;Dung beetles&quot;: </strong>1830 results</p><p><strong>&quot;Human extinction&quot;:</strong> 449</p><p><strong>&quot;Human extinction&quot; OR &quot;existential risk&quot; OR &quot;global catastrophic risk&quot;:</strong> 940 </p><p>Okay, the beetles are winning. What if I add some of the natural threats to humanity mentioned by Sandberg and Piper?</p><p><strong>&quot;Asteroid detection&quot; OR &quot;existential risk&quot; OR &quot;human extinction&quot; OR &quot;global catastrophic risk&quot; OR &quot;supervolcano&quot; OR (&quot;gamma ray burst&quot; AND (&quot;human&quot; OR &quot;injury&quot; OR &quot;danger&quot;)): </strong>1180</p><p>Is this a decisive victory? A &quot;dung deal&quot;, as it were? </p><h2>Possible caveats</h2><p>I tried removing &quot;global&quot; from &quot;global catastrophic risk&quot; and <em>almost</em> beat the beetles, but I discovered that &quot;catastrophic risk&quot; is insurance lingo which usually refers to hurricanes and droughts and other non-global hazards.</p><p>I tried adding &quot;mass extinction&quot; and more than <em>doubled </em>the beetles&#x27; score, but nearly all papers using that term are about natural history rather than future risk.</p><p>I tried removing &quot;include citations?&quot; for my searches. 
This cut down on beetle numbers (from 1830 to 1080) and didn&#x27;t really affect my longest X-risk search term. Suddenly, risk came out ahead!</p><p>But then I read a sample of the papers returned by both searches. The dung beetle papers were all about, well, dung beetles. The X-risk papers referred to a motley array of topics, many of which had nothing to do with X-risk:</p><ul><li>The psychology concept of &quot;human <a href="https://www.hartleylab.org/uploads/5/3/1/0/53101939/extinction_learning.pdf">extinction learning</a>&quot;.</li><li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0006320717322024">&quot;Human extinction <em>of</em>&quot; other animals</a> (the real X-risk was inside us all along).</li><li>Random physics papers summoned by my reckless use of gamma rays.</li></ul><p>...and so on. Take out all of that, and the beetles are still pushing us around.</p><h2>Do search term numbers matter?</h2><p>Some things this comparison probably tells us:</p><ul><li>More people are currently being paid to study biological topics to which dung beetles are relevant than to study human extinction.</li><li>It&#x27;s easier to publish papers about said topics than about human extinction.</li><li>Not enough people are <a href="https://vkrakovna.wordpress.com/2015/05/17/hamming-questions-and-bottlenecks/">Hamming themselves</a> before choosing research topics. (There may be more people in the world who could become solid beetle scholars than those who could become solid X-risk scholars, but still.)</li></ul><p>Some things this comparison can&#x27;t really tell us:</p><ul><li>How much work/investment goes toward each topic, in total. People working outside the scientific community, including governments, probably spend a lot more money and time on human extinction than they do on dung beetles.</li><li>How successful the average human extinction researcher is, compared to the average dung beetle researcher. 
(I actually don&#x27;t have a good guess as to which topic is more likely to get you a grant if you have a good idea.)</li></ul><p>The actual problem this post made me think about:</p><ul><li>How easy it is to get involved in human extinction research compared to dung beetle research. </li></ul><p>The former could be a vast field with innumerable open questions, but it still seems difficult for most people to contribute in any reliable way; few of those open questions are <em>listed </em>anywhere, few classes teach these subjects, few reliable methods exist for making progress, etc. </p><p>By contrast, if you are a college student and want to start studying dung beetles, you can <a href="https://www.researchgate.net/profile/Fabien_Muhirwa/publication/324594158_Dung_beetle_distribution_abundance_and_diversity_along_an_elevation_gradient_in_Nyungwe_National_Park_Rwanda_A_preliminary_survey/links/5ad75e19aca272fdaf7ed574/Dung-beetle-distribution-abundance-and-diversity-along-an-elevation-gradient-in-Nyungwe-National-Park-Rwanda-A-preliminary-survey.pdf">grab some bug traps, follow your professors to a nearby national park, and start taking samples</a>. </p><p>...I suppose it&#x27;s time for me to start soliciting lists of open questions. That post will be linked <a href="https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism">here</a> in about 15 minutes. </p><hr class="dividerBlock"/><p>Please let me know if I missed anything, of course, from a more apt search term to a philosophical consideration. 
It would be nice to take the Dung Beetle Question from &quot;open&quot; to &quot;closed&quot;.</p><p></p> aarongertler dvCuqKS825AqSm7fN 2019-02-05T02:09:58.568Z You Should Write a Forum Bio https://forum.effectivealtruism.org/posts/2j8ERGPu68L5Bd95y/you-should-write-a-forum-bio <p><em>I work for CEA and help to run the Forum, but this is my personal opinion as a person who likes having information about the EA community.</em></p><hr class="dividerBlock"/><h2>Why write a bio?</h2><p>If you click someone&#x27;s username, you can see a personal bio in their profile.</p><p>The EA Forum will be a slightly better place if more of its users write a bio. If you don&#x27;t mind sharing information about yourself, I recommend writing one.</p><p>Here are a few reasons why:</p><p><strong>1. It makes it easy to see your affiliations.</strong> If I&#x27;m talking to someone about a charity, it helps to know whether they work, or have worked, at said charity. Transparency!</p><p><strong>2. It lets you link to your other content</strong>. If you link to a blog/personal site in your bio, someone who likes your post or comment can read more of your work. They can also learn more about your favorite causes, or anything else you want to share.</p><p><strong>3. It can be especially helpful to newer community members.</strong> When you&#x27;re new (or otherwise aren&#x27;t very connected to the EA “social scene”), effective altruism can feel like a small club of people who already know each other. Helping people learn about you, if you don’t mind sharing that information, makes EA more welcoming.</p><p>This list is non-exhaustive, because bios are flexible and can serve many purposes.</p><hr class="dividerBlock"/><p><strong>If you prefer not to share personal information on the Forum, please don&#x27;t feel pressured to do so by this post. 
</strong>My intention is to remind people who want to share, or wouldn&#x27;t mind sharing, that bios exist and are helpful.</p><h2>How to write a bio</h2><p>1. Click your username, then click &quot;Edit Account&quot;.</p><p>2. Type your bio in the &quot;Bio&quot; box below your email address. For now, we only support plain text. You can use line breaks while editing, but the published bio will show on your profile without line breaks.</p><p>3. Click &quot;Submit&quot; at the bottom of the page, so that your bio will be saved.</p><p>Not sure what to write? Try some of these:</p><ul><li>Your name (if you don&#x27;t mind sharing).</li><li>Your EA affiliations (e.g. employment, group membership), or whatever else you&#x27;re working on. </li><li>Your favorite causes/organizations.</li><li>A link to any other writing you&#x27;d like to share.</li><li>Fun facts?</li></ul><p>For example, here&#x27;s my bio:</p><p><em>Aaron is a full-time content writer at CEA. He started Yale&#x27;s student EA group, and has volunteered for CFAR and MIRI. He also works for a small, un-Googleable private foundation that makes EA-adjacent donations. Before joining CEA, he was a tutor, a freelance writer, a tech support agent, and a music journalist. He blogs, and keeps a public list of his donations, at aarongertler.net.</em></p><p>No need to make yours this long, of course.</p><hr class="dividerBlock"/><p><strong>Suggestion: </strong>If you decide to write a bio after reading this post, leave a comment so that other people know to read it! 
(At the very least, I will read it, since I’m always curious about the Forum’s users.)</p> aarongertler 2j8ERGPu68L5Bd95y 2019-02-01T03:32:29.453Z EA Forum Prize: Winners for December 2018 https://forum.effectivealtruism.org/posts/gsNDoqpB2pWq5yYLv/ea-forum-prize-winners-for-december-2018 <p> CEA is pleased to announce the winners of the December 2018 EA Forum Prize!</p><p>In first place (for a prize of $999): &quot;<u><a href="https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison">2018 AI Alignment Literature Review and Charity Comparison</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/larks">Larks</a></u>.</p><p>In second place (for a prize of $500): &quot;<u><a href="https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health">Cause Profile: Mental Health</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/michaelplant">Michael Plant</a></u>.</p><p>In third place (for a prize of $250): &quot;<u><a href="https://forum.effectivealtruism.org/posts/PbnvjtTFnPiaT5ZJQ/lessons-learned-from-a-prospective-alternative-meat-startup">Lessons Learned from a Prospective Alternative Meat Startup Team</a></u>&quot;, by <u><a href="https://forum.effectivealtruism.org/users/scottweathers">Scott Weathers</a></u>, <u><a href="https://forum.effectivealtruism.org/users/joangass">Joan Gass</a></u>, and an anonymous co-author.</p><p>You can see November’s winning posts <u><a href="https://forum.effectivealtruism.org/posts/k4SLFn74Nsbn4sbMA/ea-forum-prize-winners-for-november-2018">here</a></u>.</p><h2>What is the EA Forum Prize?</h2><p>Certain posts exemplify the kind of content we <u><a href="https://forum.effectivealtruism.org/about">most want to see</a></u> on the EA Forum. 
They are well-researched and well-organized; they care about <u><a href="https://ideas.ted.com/why-you-think-youre-right-even-when-youre-wrong/">informing readers, not just persuading them</a></u>.</p><p>The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum&#x27;s users.</p><h2>About the December winners</h2><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison">2018 AI Alignment Literature Review and Charity Comparison</a></u>&quot; is an elegant summary of a complicated cause area. It should serve as a useful resource for people who want to learn about the field of AI alignment; we hope it also sets an example for other authors who want to summarize research.</p><p>The post is not only well-written but also well-organized, with several features that make it easier to read and understand.
The author: </p><ul><li>Offers suggestions on how to effectively read the post.</li><li>Hides their conclusions, encouraging readers to draw their own first.</li><li>Discloses relevant information about their background, including the standards by which they evaluate research and their connections with AI organizations.</li></ul><p>These features all fit with the Forum’s goal of “information before persuasion”, letting readers gain value from the post even if they disagree with some of the author’s beliefs.</p><hr class="dividerBlock"/><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health">Cause Profile: Mental Health</a></u>&quot; is a strong investigation of a cause which hasn’t gotten very much attention from the EA movement.</p><p>Especially good features of the post:</p><ul><li>An introduction which serves as a useful guide to a long analysis.</li><li>Summaries of each section placed under the section headers, making navigation and comprehension even easier.</li><li>Endnotes which help readers verify information for themselves.</li><li>The use of a classic framework for impact analysis (<u><a href="https://80000hours.org/articles/problem-framework/">scale, neglectedness, and tractability</a></u>), which helps readers compare mental health to other cause areas that have been evaluated using the same framework.</li></ul><p>We hope to see more such investigations in the future for other promising causes. </p><hr class="dividerBlock"/><p>&quot;<u><a href="https://forum.effectivealtruism.org/posts/PbnvjtTFnPiaT5ZJQ/lessons-learned-from-a-prospective-alternative-meat-startup">Lessons Learned from a Prospective Alternative Meat Startup Team</a></u>&quot; is a well-organized and highly informative discussion from a team that tried to start a high-impact company. The authors provide useful advice about entrepreneurship and summarize the state of alternative-meat research, a key topic within animal welfare. 
While they decided not to move forward with a startup, the team learned from the experience and also produced value for the EA community by sharing their story on the Forum.</p><p>We’ve been impressed by similar “postmortem” articles published on the Forum in the past. Going forward, we hope to see other people share lessons from the projects they pursue, whether or not they “complete” those projects.</p><h2>The voting process</h2><p>All posts made in the month of December qualified for voting, save for those written by CEA staff and Prize judges.</p><p>Prizes were chosen by six people. Three of them are the Forum&#x27;s moderators (<u><a href="https://forum.effectivealtruism.org/users/maxdalton">Max Dalton</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>).</p><p>The other three are the EA Forum users who had the most karma at the time the new Forum was launched (<u><a href="https://forum.effectivealtruism.org/users/peter_hurford">Peter Hurford</a></u>, <u><a href="https://forum.effectivealtruism.org/users/joey">Joey Savoie</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/robert_wiblin">Rob Wiblin</a></u>).</p><p>Voters recused themselves from voting for content written by their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agree with the goals outlined above.</p><p>Winners were chosen by an initial round of <u><a href="https://en.wikipedia.org/wiki/Approval_voting">approval voting</a></u>, followed by a runoff vote to resolve ties.</p><h2>Next month</h2><p>The Prize will continue with a round for January’s posts! After that, we’ll evaluate whether we plan to keep running it (or perhaps change it in some fashion). We hope that the Forum’s many excellent December posts will provide inspiration for more great work in the coming months. 
</p><h2>Feedback on the Prize</h2><p>We&#x27;d love to hear any feedback you have about the Prize. Leave a comment or contact <a href="https://forum.effectivealtruism.org/users/aarongertler">Aaron Gertler</a> with questions or suggestions. </p> aarongertler gsNDoqpB2pWq5yYLv 2019-01-30T21:05:05.254Z The Meetup Cookbook (Fantastic Group Resource) https://forum.effectivealtruism.org/posts/cAnYmiNDzCoDWsGtJ/the-meetup-cookbook-fantastic-group-resource <p>(This was also posted on <a href="https://www.lesswrong.com/posts/ousockk82npcPLqDc/meetup-cookbook">LessWrong</a> a few months ago, and has comments there.)</p><p>I love single-page websites. A fire still burns in my heart for <a href="https://web.archive.org/web/20181204175011/http://whatiseffectivealtruism.com/">What Is Effective Altruism</a>?, even if it&#x27;s a bit old-fashioned.</p><p>Today, The Meetup Cookbook lit another one of those fires. It&#x27;s almost everything you need to run a meetup, in a box. (The authors run rationality rather than EA meetups, but those are pretty similar on the level of &quot;planning and logistics&quot;.) </p><p>Here are some of my favorite excerpts:</p><blockquote>I make a schedule of the planned topics about six months in advance in a spreadsheet [...] This makes it extremely easy to post the meetups every week. Reducing friction for ourselves means that the meetup happens more reliably. </blockquote><p>As a former organizer for two different EA groups, just looking at that spreadsheet (photo on website) makes me feel calmer than I ever did when I was planning events week by week.</p><blockquote><strong>Should I ask for RSVPs, so I know how many people are coming?</strong> No. Probably don&#x27;t bother, it never works [...] most people seem to like to be able to decide day-of whether they&#x27;re going to come [...] 
RSVPs are usually poorly correlated with attendance.</blockquote><blockquote>Another strategy is to say &quot;I&#x27;m going to be at the location from X-Y PM, guaranteed,&quot; and hang out the entire time to see if anyone shows up. This way you catch people even if they show up very, very late - which does happen, in our experience. This is more useful if you have very low attendance, or you&#x27;re starting a new meetup and are not sure what to expect.</blockquote><p>The &quot;guaranteed location&quot; strategy is also the best one I&#x27;ve found. Schedules are hard; people miss trains, lose their keys, get out of work late, get caught up in a conversation on the way over... and in all those cases, they sometimes turn around and go home rather than show up late. &quot;Stop by whenever&quot; won&#x27;t work for all meetups (sometimes you need to prep in advance based on attendance, etc.), but it&#x27;s a great way to get started.</p><blockquote>You might feel awkward about taking charge of a group. That&#x27;s okay, and if you feel really uncomfortable, you can lampshade it by saying something like &quot;Hey, so I guess I&#x27;m running this thing.&quot; But you don&#x27;t really <em>need</em> to say things like that. Meetups are low-stakes. It&#x27;s not a dominance move to set up and run one; it&#x27;s a gift you give to other people. You may not be the best person possible to lead this group of people, but you&#x27;re the one who showed up and is doing your best, and that&#x27;s what matters. </blockquote><p>Yes! As it turns out, people actually tend to like other people who set up cool things for them, and give them a chance to sit back and relax and listen. Even if you make a mistake somewhere, there&#x27;s a good chance no one but you will notice. If someone notices, there&#x27;s a good chance they won&#x27;t mind. If they mind, there&#x27;s a good chance they&#x27;ll ask to help instead of getting mad. 
If they get mad, the most likely result is that they just don&#x27;t show up next time. Which really isn&#x27;t so bad.</p><h2>Other notes</h2><ul><li>When I reflect on my organizing experience, I remember one major problem not covered by the guide: I&#x27;m not very good at talking to strangers. I get anxious at the thought of a room filling with people I have to quickly befriend. Some ways to get around this:</li><ul><li><a href="https://www.benkuhn.net/twopeople">Have two people</a>. That is, even if you&#x27;re the one doing most or all of the planning, having someone you know come along and share the social duties relieves a lot of pressure. When I was struggling to start the Yale University group, my co-founder was really helpful in this way.</li><li>Message people ahead of time. This doesn&#x27;t have to mean taking RSVPs (as noted above, those are of limited value). It can also mean asking people to join a Facebook group if they want to <em>hear </em>about events (less pressure than promising to <em>attend</em> an event) and then sending a friendly message to every new member, introducing yourself and asking an icebreaker question. 
(The Cookbook offers some good questions for this.)</li></ul><li>You&#x27;re missing out if you don&#x27;t look at one of the Cookbook&#x27;s other links: Spencer Greenberg&#x27;s <em><a href="http://www.spencergreenberg.com/2017/03/better-formats-for-group-interaction-going-beyond-lectures-group-discussions-panels-and-mixers/">Better Formats for Group Interaction.</a> </em>If some of the Cookbook&#x27;s activities don&#x27;t feel like they&#x27;d apply to your EA group, maybe you&#x27;ll find inspiration here!</li></ul><p></p> aarongertler cAnYmiNDzCoDWsGtJ 2019-01-24T01:28:00.600Z The Global Priorities of the Copenhagen Consensus https://forum.effectivealtruism.org/posts/YReJJ8MZdASANojrT/the-global-priorities-of-the-copenhagen-consensus <p> </p><p>The Copenhagen Consensus is one of the few organizations outside the EA community which conducts cause prioritization research on a global scale.</p><p>Nearly everything on their &quot;<u><a href="https://www.copenhagenconsensus.com/post-2015-consensus">Post-2015 Consensus</a></u>&quot; list, which covers every cause they&#x27;ve looked at, fits into &quot;global development&quot;; they don&#x27;t examine animal causes or global catastrophic risks aside from climate change (though they do discuss population ethics in the case of <a href="https://www.copenhagenconsensus.com/post-2015-consensus/populationanddemography">demographic interventions</a>).</p><p>Still, given the depth of the research, and the sheer number of experts who worked on this project, it seems like their list ought to be worth reading. 
On the page I linked, you can find links to all of the different cause areas they examined; <a href="https://www.copenhagenconsensus.com/sites/default/files/post2015brochure_m.pdf">here&#x27;s a PDF</a> with just cost-effectiveness estimates for every goal across all of their causes.</p><p>I didn&#x27;t have the time to examine a full report for any of the cause areas, but I wanted to open a thread by noting numbers and priorities which I found interesting or surprising:</p><ul><li>The most valuable types of intervention, according to CC: </li><ul><li>Reduce restrictions on trade (10-20 times as valuable per-dollar as anything else on the list)</li><li>Increase access to contraception (CC says &quot;universal&quot; access, but I don&#x27;t see why we wouldn&#x27;t get roughly the same value-per-dollar, if not more, by getting half the distance from where we are to the goal of universal access)</li><li>Aspirin therapy for people at the onset of a heart attack</li><li>Increase immunization rates (their estimates on the value of this don&#x27;t seem too far off from GiveWell&#x27;s if I compare to their numbers on malaria)</li><li>&quot;Make beneficial ownership info public&quot; (making it clear who actually owns companies, trusts, and foundations, making it harder to transfer money illegally between jurisdictions). 
Notably, CC argues justifiably for reducing hidden information to zero, since &quot;a partial solution to the transparency issue would simply allow alternative jurisdictions to continue to be used&quot;.</li><li>Allow more migration</li><li>Two interventions within food security: Working to reduce child malnutrition (a common EA cause) and research into increasing crop yields (something EA has barely touched on, though The Life You Can Save does <a href="https://www.thelifeyoucansave.org/where-to-donate/one-acre-fund">recommend</a> One Acre Fund)</li></ul><li>Areas that CC found surprisingly weak, compared to what I&#x27;d expected:</li><ul><li>Cut outdoor air pollution (about 3% as valuable as cutting indoor air pollution)</li><li>Data collection on how well UN Millennium Development Goals are being met (<a href="https://www.copenhagenconsensus.com/post-2015-consensus/datafordevelopment">measurement is very expensive</a>, and could cost more than actual development assistance)</li><li>Social protection system coverage (helping more people access government benefits); CC estimates that this is less than one-fifth as valuable as cash transfers</li></ul></ul><p>Reading the full position papers for some interventions could be a really valuable exercise for anyone who cares a lot about global development (particularly if you think EA may be neglecting certain opportunities in that space). 
If you spot anything interesting (and/or anything that seems wrong), leave a comment!</p><p></p> aarongertler YReJJ8MZdASANojrT 2019-01-07T19:53:01.080Z Forum Update: New Features, Seeking New Moderators https://forum.effectivealtruism.org/posts/7etEYiorToG9KXGEw/forum-update-new-features-seeking-new-moderators <p> </p><p><em><strong>In this update: </strong>New features, moderation updates, and a call for new moderators.</em></p><h1>New features</h1><p>Since the new version of the EA Forum is a fork of <u><a href="https://www.lesswrong.com/">LessWrong</a></u>, it&#x27;s easy for us to pull new updates as they arrive there. If we think that a new LessWrong feature is also a good fit for the Forum, we will likely merge it into our site. </p><h2>Floating table of contents </h2><p>This week, we are merging a major update which introduces a floating table of contents to the left of posts. This is a much-requested feature on the Forum which should help readers track the structure of longer posts, and we’re really pleased to have it. The table of contents tracks three levels of headers, and picks up on the header formats from the WYSIWYG (“what you see is what you get”) and Markdown editors. It interprets stand-alone bold text as a header. </p><h2>Comment Retraction</h2><p>Users can now retract their comments, which does not delete them, but will strike through the words while leaving them visible. (You can also un-retract a comment.)</p><p>This allows users to designate that they no longer endorse a past comment without deleting it entirely. We may implement more features related to retracted posts in the future (e.g. suppressing notifications for them, alerting users who replied to a comment that was later retracted).</p><h2>Question posts</h2><p>We have also released a new type of post: a Question post. </p><p>This allows users to pose questions, which can then be answered.
Answers are shown below the question; there&#x27;s also a comment section for clarifying and interpreting the question, and for other thoughts which aren’t quite answers. You can see questions on <u><a href="https://forum.effectivealtruism.org/questions">this page</a></u> of the Forum, and they will also be posted to the Frontpage/Community sections as appropriate.</p><p>We think that this is an important feature for three reasons: First, this will give newer community members a place to receive high-quality answers to their questions. Second, those questions can encourage more knowledgeable members to write up content which is likely to be useful to the community. Finally, this can allow the community to make intellectual progress more reliably. The person who knows that a question is important is unlikely to be the best person to answer that question, and we hope this feature can match up people-with-questions to people-with-the-skill-to-answer-questions more reliably.</p><h2>Post page redesign</h2><p>In order to accommodate the question format, the Post page has been redesigned.</p><p>For more details on question posts and the table of contents feature, please see the <u><a href="https://www.lesswrong.com/posts/mrGeJ4Wt66PxN9RQh/lw-update-2018-12-06-table-of-contents-and-q-and-a">announcement post</a></u> on LessWrong. </p><h1>Moderation Updates</h1><h2>Cross-posting</h2><p>The Forum team has been in touch with lots of organizations and writers in the community, to ask for permission to cross-post their content. We encourage you to cross-post content from your blog or website, and to <u><a href="mailto:forum@effectivealtruism.org">get in touch</a></u> if you’d like us to do that for you. (We’ll post your content under your account, with formatting that works for the Forum — for example, by removing anchor links that only work in HTML.) We also encourage you to cross-post good EA-related content that you stumble across on the web. 
Cross-posting helps more people to find the best content and creates a space for moderated discussion.</p><h2>Personal blogs</h2><p>Until now, we have been moving all blog posts to the Frontpage or Community sections (see more on this distinction <u><a href="https://forum.effectivealtruism.org/posts/5TAwep4tohN7SGp3P/the-frontpage-community-distinction">here</a></u>). We are tentatively planning to relax this, and leave some types of post on people’s personal blogs. However, we are still likely to move the majority of posts to either Frontpage or Community.</p><p>Personal blogs are hosted on your user page (<u><a href="https://forum.effectivealtruism.org/user/[your">https://forum.effectivealtruism.org/user/[your username]</a></u>, for instance, <u><a href="https://forum.effectivealtruism.org/users/maxdalton">see here</a></u>). Other users can follow your blog if they wish, and they’ll see notifications when you post. Personal blog posts are also included in the “all posts” view of the Frontpage section.</p><p>We are more likely to move posts to Frontpage/Community if:</p><ul><li>They receive lots of upvotes, and a high ratio of upvotes to downvotes</li><li>They are of broader relevance to the community, pursuing more interesting and important questions</li><li>They are clearly written and engaging</li><li>The analysis is high-quality (though it might still be brief and/or incomplete)</li></ul><p>The reasoning for this change is:</p><ul><li><u><a href="https://forum.effectivealtruism.org/posts/vhKfHHnCYbgSNL3Ci/should-you-have-your-own-blog#febP2NdTT8PtBZbDq">Some users</a></u> have expressed that they would feel more comfortable posting to the Forum if some of their lower-quality/less broadly relevant content was kept on their personal blogs.
In general, we’d like to remove barriers to users posting content.</li><li>Some posts have met our Frontpage guidelines, but nevertheless not been posts that we would want to promote.</li><ul><li>For instance, there was a post (now deleted by the user) asking, hypothetically, why EA-type thinking couldn’t be applied to other areas of ethics, suggesting that violence might be justified for utilitarian reasons (without advocating for that violence). While this post didn’t violate our guidelines for the Forum, and we think it was good that the poster had an opportunity to hear counterarguments, we don’t feel that this is the kind of content we want to promote more broadly, or to endorse by actively moving it to Frontpage.</li></ul></ul><p>Our main worry about doing this is that it complicates the job of moderators, and introduces more judgement calls. We want to maintain the cause-neutral, community-driven atmosphere of the Forum, and we will try to do that in all of our decisions. We’re happy to talk about this more in the comments section.</p><h2>Call for new moderators</h2><p>Howie Lempel is stepping down from moderation duties in order to focus on his new full-time job. We’re very grateful for the work he’s been doing to moderate posts, and to shape the policies that we’re following.</p><p>This means that we’re looking for new moderators. This volunteer role takes 1-3 hours per week. Ideally, we’re looking for someone with a good knowledge of effective altruism, sound judgement, previous activity posting/commenting on the Forum, and some experience with building online or in-person communities. If you’re interested, please fill in this <u><a href="https://drive.google.com/forms/d/1qyaHCX924jRSDAU63ci6t5cQkfKSyF61TvVblaQ6TXc/edit">application form</a></u>. </p> aarongertler 7etEYiorToG9KXGEw 2018-12-20T22:02:46.459Z What's going on with the new Question feature?
https://forum.effectivealtruism.org/posts/K3y8zNzMmkg8t5dbm/what-s-going-on-with-the-new-question-feature <p>I know it&#x27;s a new feature, but how does it work?</p> aarongertler K3y8zNzMmkg8t5dbm 2018-12-20T21:01:21.607Z EA Forum Prize: Winners for November 2018 https://forum.effectivealtruism.org/posts/k4SLFn74Nsbn4sbMA/ea-forum-prize-winners-for-november-2018 <p> </p><p>CEA is pleased to announce the winners of the November 2018 EA Forum Prize!</p><p>In first place (for a prize of $999*): stefan.torges, &quot;<u><a href="https://forum.effectivealtruism.org/posts/d3cupMrngEArCygNk/takeaways-from-eaf-s-hiring-round">Takeaways from EAF&#x27;s Hiring Round</a></u>&quot;.</p><p>In second place (for a prize of $500): Sanjay, &quot;<u><a href="https://forum.effectivealtruism.org/posts/RnmZ62kuuC8XzeTBq/why-we-have-over-rated-cool-earth">Why we have over-rated Cool Earth</a></u>&quot;.</p><p>In third place (for a prize of $250): AdamGleave, &quot;<u><a href="https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report">2017 Donor Lottery Report</a></u>&quot;.</p><p>*As it turns out, a prize of $1000 makes the accounting more difficult. Who knew?</p><p> </p><h2>What is the EA Forum Prize?</h2><p>Certain posts exemplify the kind of content we <u><a href="https://forum.effectivealtruism.org/about">most want to see</a></u> on the EA Forum. They are well-researched and well-organized; they care about <u><a href="https://ideas.ted.com/why-you-think-youre-right-even-when-youre-wrong/">informing readers, not just persuading them</a></u>.</p><p>The Prize is an incentive to create posts like this, but more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum&#x27;s users.</p><p>That said, the winning posts weren&#x27;t the only great ones. Our users published dozens of excellent posts in the month of November, and we had a hard time narrowing down to three winners.
(There was even a three-way tie for third place this month, so we had to have a runoff vote!)</p><p> </p><h2>About the November winners</h2><p>While this wasn&#x27;t our express intent, November&#x27;s winners wound up representing an interesting cross-section of the ways the EA community creates content.</p><p><strong>&quot;<u><a href="https://forum.effectivealtruism.org/posts/d3cupMrngEArCygNk/takeaways-from-eaf-s-hiring-round">Takeaways from EAF&#x27;s Hiring Round</a></u>&quot;</strong> uses the experience of an established EA organization to draw lessons that could be useful to many other organizations and projects. The hiring process is documented so thoroughly that another person could follow it almost to the letter, from initial recruitment to a final decision. The author shares abundant data, and explains how EAF’s findings changed their own views on an important topic.</p><p><strong>&quot;<u><a href="https://forum.effectivealtruism.org/posts/RnmZ62kuuC8XzeTBq/why-we-have-over-rated-cool-earth">Why we have over-rated Cool Earth</a></u>&quot;</strong> is a classic example of independent EA research. The author consults public data, runs his own statistical analyses, and reaches out to a charity with direct questions, bringing light to a subject on which the EA community doesn&#x27;t have much knowledge or experience. 
He also offers alternative suggestions to fight climate change, all while providing enough numbers that any reader could double-check his work with their own assumptions.</p><p>To quote one comment on the post:</p><blockquote><em>This sort of evaluation, which has the potential to radically change the consensus view on a charity, seems significantly under-supplied in our community, even though individual instances are tractable for a lone individual to produce.</em></blockquote><p><strong>&quot;<u><a href="https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report">2017 Donor Lottery Report</a></u>&quot; </strong>is a different kind of research post, from an individual who briefly had resources comparable to an entire organization -- and used his fortunate position to collect information and share it with the community. He explains his philosophical background and search process to clarify the limits of his analysis, and shares the metrics he plans to use to evaluate his grants (which adds to the potential value of the post, since it opens the door for a follow-up post examining his results).</p><p> </p><p><strong>Qualities shared by all three winners:</strong></p><ul><li>Each post had a clear hierarchy of information, helping readers navigate the content and making discussion easier. Each author seems to have kept readers in mind as they wrote. This is crucial when posting on the Forum, since much of a post&#x27;s value relies on its being read, understood, and commented upon.</li><li>The authors didn&#x27;t overstate the strength of their data or analyses, but also weren&#x27;t afraid to make claims when they seemed to be warranted. 
We encourage Forum posts that prioritize information over opinion, but that doesn&#x27;t mean that informative posts need to <em>avoid</em> opinion: sometimes, findings point in the direction of an interesting conclusion.</li></ul><p> </p><h2>The voting process</h2><p>All posts made in the month of November, save for those made by CEA staff, qualified for voting.</p><p>Prizes were chosen by seven people. Four of them are the Forum&#x27;s moderators (<u><a href="https://forum.effectivealtruism.org/users/maxdalton">Max Dalton</a></u>, <u><a href="https://forum.effectivealtruism.org/users/HowieL">Howie Lempel</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>). The other three are the EA Forum users who had the most karma at the time the new Forum was launched (<u><a href="https://forum.effectivealtruism.org/users/peter_hurford">Peter Hurford</a></u>, <u><a href="https://forum.effectivealtruism.org/users/joey">Joey Savoie</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/robert_wiblin">Rob Wiblin</a></u>).</p><p>All voters abstained from voting for content written by themselves or by organizations they worked with. Otherwise, they used their own individual criteria for choosing posts, though they broadly agree with the goals outlined above.</p><p>Winners were chosen by an initial round of <u><a href="https://en.wikipedia.org/wiki/Approval_voting">approval voting</a></u>, followed by a runoff vote to resolve ties.</p><p> </p><h2>Next month</h2><p>The Prize will continue with rounds for December and January! After that, we’ll evaluate whether we plan to keep running it (or perhaps change it in some way).</p><p>We hope that the Forum’s many excellent November posts will provide inspiration for more great material in the coming months. 
</p><p> </p><h2>Feedback on the Prize</h2><p>We&#x27;d love to hear any feedback you have about the EA Forum Prize. Leave a comment or contact <a href="mailto:aaron@effectivealtruism.org">Aaron Gertler</a> with questions or suggestions.</p><p> </p> aarongertler k4SLFn74Nsbn4sbMA 2018-12-14T21:33:10.236Z Literature Review: Why Do People Give Money To Charity? https://forum.effectivealtruism.org/posts/gABGNBoSfvrkkqs9h/literature-review-why-do-people-give-money-to-charity <p><em>Notes: </em></p><ul><li><em>Cross-posting from <a href="https://aarongertler.net/thesis/">my blog </a>without much refinement. If you spot any non-typo errors, I will upvote you and correct the post on my website. </em></li><li><em>If you aren&#x27;t sure whether to read it, I&#x27;ll try to tip you over the edge by mentioning that someone at Charity Science did, and decided to add it to <a href="http://www.charityscience.com/outreach-research.html">their website</a>. </em></li><li><em>If you might be writing a thesis at some point in the future, consider <a href="http://effectivethesis.com/project/">picking a topic that could be helpful to the EA community</a>! And if you wrote an EA-ish thesis recently, consider writing a summary for the Forum! I&#x27;m really glad I wrote this summary; it helped ~300 hours of work not go to waste.</em></li></ul><p></p><p>In 2015, I wrote a senior thesis:</p><p><strong><a href="http://aarongertler.net/wp-content/uploads/2018/01/Aaron-Gertler-Senior-Thesis-full-bibliography-1.pdf">Charitable Fundraising and Smart Giving: How can charities use behavioral science to drive donations?</a></strong></p><p>It’s a very long paper, and you probably shouldn’t read the whole thing. I conducted my final round of editing over the course of 38 hours, during which I did not sleep. 
It’s kind of a slog.</p><p>Here’s a PDF of the five pages where I summarize everything I learned and make recommendations to charities:</p><p><strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Thesis-Conclusion-and-Advice.pdf">The Part of the Thesis You Should Actually Read</a></strong></p><p>In the rest of this post, I’ve explained my motivation for actually writing this thing, and squeezed my key findings into a pair of summaries: One that’s a hundred words long, one that’s quite a bit longer.</p><h1>Super Short Summary</h1><p>Americans only give about 2% of their income to charity, and most of that money goes to charities that don’t do an especially good job of helping people. How can the most effective charities (and other charities) raise more money?</p><p>There are many different techniques that have been shown to work well in multiple studies, but evidence on most techniques is still very mixed, and some popular techniques in the real world have no experimental evidence behind them. Charities really ought to run more experiments to figure out which techniques will work for them.</p><p>In the meantime, some general advice for all charities:</p><ul><li>Tell donors what their donation will accomplish (be specific!).</li><li>Tell stories about individual people you’ve helped.</li><li>Make donors feel like they’re on a winning team with lots of other cool donors, making real progress on an important problem.</li><li>Also, run experiments. 
I can’t emphasize that enough.</li></ul><h1>Regular Summary</h1><p>I began to study the nonprofit sector because I’m convinced that giving money to the <strong><a href="http://givewell.org/">right causes</a></strong> is one of the best ways for an average person to <strong><a href="http://effectivealtruism.org/">improve the world</a></strong>.</p><p>I’d seen a lot of studies on <strong><a href="http://amzn.to/1eTumAo">fundraising techniques</a></strong>, and on <strong><a href="http://www.amazon.com/gp/product/006124189X/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=006124189X&linkCode=as2&tag=aarongertlerw-20&linkId=PKZQKMTCA6OMR2FL">techniques for persuading people in general</a></strong>, but it wasn’t easy to find a lot of studies in one place, and it was especially tough to figure out whether any techniques <em>at all</em> had super-strong evidence behind them. It seemed like some were overvalued thanks to the results of a single study that <strong><a href="http://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/">wouldn’t necessarily generalize</a></strong> to most nonprofits.</p><p>So I did something foolish. I decided that <strong>my senior thesis would attempt to review every experimental study ever conducted on a charitable fundraising technique.</strong></p><p>To ensure that I was saying something original, I added a special section on techniques that would apply especially to “effective” charities: Those which could present strong evidence to donors that they were actually making the world a better place (and doing so more efficiently than most other charities).</p><h2>The Result</h2><p>This isn’t the best-written literature review of fundraising techniques, nor the most comprehensive. But it is probably the most comprehensive review of studies conducted specifically using participants who <em>actually gave money. 
</em></p><p>This is actually a major problem in the fundraising literature: <strong>About half the studies I found didn’t measure the impact of a technique on real donations.</strong> Instead, researchers measured how much money the participants claimed they would give if someone asked them, or whether they gave tokens to people playing an “economic game” with them, or whether they helped a research assistant clean up spilled coffee.</p><p>(To make an uncharitable comparison, it’s as though Stanley Milgram had conducted his famous <strong><a href="https://en.wikipedia.org/wiki/Milgram_experiment">obedience experiment</a></strong> by asking participants whether they would be willing to shock the person on the other side of the curtain if he asked nicely.)</p><p>I excluded any study that didn’t measure real monetary donations, unless it dealt in some way with evidence-based giving — very little has been written in that domain, so I had to be a bit less selective.</p><h2>Limitations</h2><p>Take everything I say with a grain of salt: I was an undergraduate when I wrote this, and I probably missed important points in some of these papers.</p><p>Almost every study here involves a single request for money, even though donor retention is more important for most charities than getting new donors who only give once. Including donor retention would have made this thesis almost impossible to write, but it’s still an important topic. (<strong><a href="https://web.archive.org/web/20160605153131/http://www.studyfundraising.com:80/about-us/professor-adrian-sargeant/">Adrian Sargeant</a></strong> has some great papers on building long-term relationships with donors.)</p><p>There’s not a lot of research on most of the techniques I covered, considering how popular they are. I found about five studies per technique, and many of those were methodologically flawed. 
Sample sizes and effect sizes varied drastically, and the sheer number of techniques meant that a meta-analysis wouldn’t have made sense.</p><p>For that matter, nearly everything about these studies varied drastically: The context in which a request was made, the relationship of the participants to the charity, the size of the charity, and so on. <strong>What I wound up with, in the end, were a few solid general rules and a lot of results hinting that certain approaches <em>might </em>be effective. </strong>Still, it’s better to have hints than to have nothing.</p><h1>The Actual Literature Review!</h1><p><em>Reminder: This is a very abridged summary of the paper. Citations available from the <a href="https://aarongertler.net/wp-content/uploads/2018/01/Aaron-Gertler-Senior-Thesis-full-bibliography-1.pdf">actual paper</a>.</em></p><h3>Introduction</h3><p>Charitable giving is probably a net positive, as far as social phenomena go. And even if it isn’t, the most efficient, data-driven forms of giving are certainly good. (This is my first thesis, so I’m defending even the most basic assertions.)</p><p>The latter form of giving, or “effective altruism”, clearly helps the recipients of donations, but it’s not entirely clear whether giving actually makes people happier. There’s a good chance that happy people give more, or that people claim to be happy after giving so experimenters will like them. But it’s also quite possible that giving money makes us happier than spending it, especially once we’ve spent a certain amount on ourselves.</p><p>But even though charitable giving is a very good thing, <strong>we don’t give very much, and the rate at which we give</strong> <strong><a href="https://philanthropy.com/article/The-Stubborn-2-Giving-Rate/154691">hasn’t really changed since 1970.</a></strong> For some reason, charities are struggling to convince people to give money away.</p><p>But science can help! 
This literature review aims to summarize research on the efficacy of various fundraising techniques — particularly those which could be useful to the most effective charities.</p><p>By the way, <strong><a href="http://opinionator.blogs.nytimes.com/2012/12/05/putting-charities-to-the-test/">some charities are more effective than others!</a></strong> (I&#x27;ll skip this bit for the EA Forum post, you&#x27;ve all heard it before.)</p><h3>Method</h3><p>I read hundreds of pages of Google Scholar results, even more pages in a few specialized databases, lots of books, and the reference sections of some truly epic literature reviews (which are linked at the end of this post).</p><p>Some of the techniques I reviewed could be used by just about any charity. Others should be especially useful for charities that have something in common with the most effective charities — that is, they help people in other countries, help lots of people, measure their results, etc.</p><p>With a few exceptions, I only reviewed studies where participants actually gave real money to charity, because most other ways of predicting giving behavior in the real world don’t seem very effective, and we really want to predict giving behavior! <strong>Prediction is the name of the game.</strong></p><p>I’m also not measuring religious gifts or gifts to colleges, because <a href="https://en.wikipedia.org/wiki/Reciprocity_(social_psychology)">“giving back”</a> to an institution that helped you isn’t quite the same as the giving I’d like to measure.</p><h3>Who Gives? Why?</h3><p>What motivates people to give money away? 
I’m not going to say “System One” and “System Two”, because that’s cliché, so instead I’ll say “warm giving” and “cool giving” to reflect the fact that giving is driven by a mix of “cool” motivations (an abstract desire to do good, careful calculation of your impact, strategic giving that will make people like you) and “warm” motivations (empathy toward the recipient, personal connections to the charity, a habit of giving a dollar to anyone who asks).</p><p>Yes, this is really just System One and System Two. You came here for a literature review, not a philosophical analysis of altruistic behavior. This section is lazy.</p><p>Anyway, who gives the most money away? Sometimes men give more than women, and sometimes the reverse is true. Older people give more until around the time they retire. Richer people give more in absolute terms, but perhaps a smaller percentage of their income. Religious people might give more, but it’s really difficult to tell because 1/3 of all U.S. donations go toward churches, which only spend a fraction of their income on traditional “charitable” activities.</p><h3>Fundraising Techniques that Probably Help</h3><p><strong>“Legitimizing paltry contributions”:</strong> Tell donors that “even a penny will help” (or something like that), and they’ll usually be more likely to give without giving much less. I have a bunch of theories about why this happens, but we have many more techniques to cover, so let’s move on!</p><p><strong><a href="https://en.wikipedia.org/wiki/Anchoring">Anchoring:</a></strong> Suggesting that donors give $20 tends to bring in more than suggesting they give $10, but a very high anchor scares donors away. Use experiments to figure out the optimal suggestion!</p><p><strong>Dialogue:</strong> Ask someone how they’re doing, wait for them to answer, and <em>then</em> ask for money. This is a much better idea than asking right away, but so far we’ve only seen it work in person, not over the phone. 
Bonus points if you mention having something in common with the donor!</p><p>(In my favorite “similarity” study, the experimenter lied about having the same birthday as the participant. I can’t believe an IRB <a href="http://slatestarcodex.com/2017/08/29/my-irb-nightmare/">let them get away with that</a>.)</p><p><strong>Publicity:</strong> When someone’s donations will be made public (or even seen by just a single experimenter), they tend to give more. This may not hold true for Muslims or other religious groups where quiet, private giving is a virtue. But that’s a minor exception hypothesized from a single study: Mostly, publicity is a good strategy in the context of these experiments.</p><p><strong>Photographs:</strong> Adding pictures to donation materials tends to make them more effective, though it’s unclear whether sad children are better than happy children. Especially sad or upsetting photos could backfire.</p><p><strong>Individuals:</strong> We really like helping individuals, possibly because it’s easier to empathize with one person than with a whole group of people. 
Rather than talking about the sheer scope of the problem your charity deals with, it’s generally best to talk about how a donation has helped, or could help, a single sympathetic person.</p><p>In fact, people will literally give more money to save the life of one child than to save the lives of eight, <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/KogutRitovIdentified.pdf">even when eight lives can be saved for the price of one!</a></strong></p><p>This is a troubling result, but one team of researchers may have discovered how to reverse it with something called <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Hsee_2013_unit_asking_fundraising_technique.pdf">“the unit asking effect”.</a></strong> (That paper might be my favorite in the entire thesis — check it out if you can.)</p><p><strong>Follow the Leader:</strong> Potential donors give more after they learn about the gifts of past donors, especially those who were very generous or who resembled the potential donor in some way. This also works if the potential donor sees another donation happen, or is told that the amount they donate will be known by another person (so that they have the chance to become “leaders” themselves).</p><p><strong>Matching donations:</strong> <strong><a href="http://www.benkuhn.net/matching">Ben Kuhn</a></strong> is better at statistics than I am, and his summary of the literature on matching is very rigorous. If you really care about donation-matching, you should read it.</p><p>My shorter summary: If you have some money lying around, you might be able to use it to increase donations by “matching” the gifts of future donors, so that people feel like they can do more good for the same “price” (as though your nonprofit were having a buy-one-get-one-free sale). Ben Kuhn points out that most of the research on matching is sketchy, but it’s no sketchier than the rest of the research on fundraising. 
Also, matching is “free”, since your charity gets the matching dollars either way, so you might as well experiment.</p><p><strong>Seed donations: </strong>Announce that you’d like to raise a set amount, then “seed” part of that amount so that “success” seems more likely. Donors like giving to specific campaigns that seem like they will meet their goals, and seed donations work about as well as matching in head-to-head experiments. On the other hand, if you have money you could use to seed a campaign or match donations, you could also try…</p><p><strong>Overhead coverage: </strong>When a charity announces that donors’ gifts will only cover “programs” (like giving mosquito nets to families) rather than “overhead” (like paying the salaries of <strong><a href="https://www.salsalabs.com/get-know-us/blog/so-you-want-to-hire-a-professional-fundraising-consultant">professional fundraisers</a></strong>), donors give quite a bit more. This phenomenon can be hacked if a charity uses leftover funds to “cover” its own overhead, or convinces one particular donor to cover <em>all </em>of the overhead so that most donors never have to think about it. </p><p>Donors seem to prefer charities with lower overhead even when the overhead is “covered”, but it’s unclear whether that’s true independent of donors’ fear that their own money will pay for overhead rather than programs.</p><p><strong><a href="http://acumen.org/blog/our-world/why-overhead-ratios-are-meaningless-for-kiva-and-acumen-fund/">Many nonprofits</a></strong> claim that <strong><a href="http://overheadmyth.com/faqs/">“overhead doesn’t matter”</a></strong>, because forcing charities not to spend on overhead keeps them from growing or innovating. This is partly true, though especially high overhead can be a warning sign that something weird is going on. Anyway, what really matters is how much good each dollar does, however the charity spends it. 
(Still, donors speak the language of overhead, so charities may have to do the same.)</p><h3>Other Fundraising Techniques</h3><p>This summary is long enough already, so I’ll skip talking about techniques that only work sporadically, or don’t seem to work at all.</p><p>With one exception: <strong>Offering gifts or prizes in exchange for donations works <em>very badly </em>in every study that has tested it.</strong> This may not be the case for gifts “related” to the nonprofit (like a PBS tote bag), but telling people you’ll give them random chocolate if they donate is a terrible idea.</p><p>On the other hand, telling donors they’ll feel great after they give works pretty well, despite playing on the same selfish motivation. And giving people gifts <em>before </em>you ask them to give leads to amazing results (at least in the fourth study mentioned on page 20 of <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/BIT_Charitable_Giving_Paper.pdf">this paper</a></strong>).</p><h3>Really Obvious Helpful Techniques</h3><p>Simple, evidence-based things that all nonprofits should probably be doing:</p><p><strong>Talk about your beneficiaries a lot.</strong> Make them sound like nice, hardworking people who have a lot in common with the donor.</p><p><strong>Talk about the progress your organization has been making,</strong> not the enormous scope of the problem (or, at the very least, talk about both). People want to be on the winning team.</p><p><strong>Look good.</strong> Dress nicely. Be attractive and high-status (donors can be shallow, especially male donors talking to female fundraisers). It might even help to play catchy music and smell good, though that study has yet to be funded.</p><p><strong>If someone signs up for your mailing list, send them an email <em>right away,</em></strong> and ask them to donate soon after. As of 2014, some of the largest charities in the U.S. 
<strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Online-Fundraising-Scorecard.pdf">didn’t send a single email to new subscribers within 30 days</a></strong> — enough time for a potential donor to completely forget about them.</p><p><strong>Use simple, visual language.</strong> One <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Evangelidis_2013_fatalities_drive_aid-not-survivors.pdf">clever study</a></strong> took issue with the fact that newspapers tend to use the word “affected” to describe the people who survive natural disasters. Referring to these people as “homeless” (which is what “affected” really means in this context) substantially increases the amount donors are willing to give to them.</p><p>This isn’t surprising: <strong>I don’t know what an “affected” person looks like, but I can picture a “homeless” person without difficulty, and being able to imagine someone is an important step toward caring about them. Visual language is important.</strong></p><h3>Conclusion</h3><p>When we consider the size and sociological importance of the nonprofit sector, it becomes clear that we need more research on fundraising techniques!</p><p>Yes, like any person who researches a topic in great depth, I conclude that <strong><a href="https://instsci.org/supercut.html">more research is needed</a></strong>. On the other hand, I’m not going to grad school, so I’m not biased by the need to churn out more papers on things I already know about. You can trust me on this one.</p><p>There are a few topics I think would be especially neat to research in more depth, but I talk about those within the thesis. For the rest of this summary, all I’d like to say is that <strong>charities should be running more experiments, and publicizing their results.</strong></p><p><strong>Here’s why:</strong></p><p>One cool thing about the nonprofit sector is that it isn’t a zero-sum game. It’s true that charity money is limited. 
But if we somehow raise charitable spending from 2% to 3% of the U.S. GDP, the gains from that will dwarf the pain of charitable competition. And one of the ways charities <em>can </em>raise the national giving rate is to work together to figure out better fundraising techniques.</p><p>What would happen if charities with excellent websites — like Kiva or Acumen or charity: water — shared the results of their <strong><a href="https://vwo.com/ab-testing/">A/B testing</a></strong> with the rest of the nonprofit world?</p><p>What if the five largest charities in America pooled funds to hire a couple of full-time researchers who could run a dozen experimental replications of important studies over the next year, and begin to figure out which techniques <em>consistently</em> had <em>large </em>effects on charitable giving?</p><p>What if the hundred largest charities in America hired a couple of extra lobbyists to push for a U.S. version of <a href="https://www.gov.uk/donating-to-charity/gift-aid">Gift Aid</a>, which could push the giving rate from 2% to 2.5% within a couple of years?</p><p>I don’t know if any of this would help, but it seems like it would be worthwhile to try. Fundraising experiments are easy to run, and can even be profitable. Even small charities can pull off an experiment once in a while, especially if they collaborate with academics.</p><p>Many of the studies I examined found that some techniques can boost donations by 50% or more. Either these results don’t carry over to the real world, or charities can profit enormously from experimentation; I’d really like to know which one is true.</p><h3>Last Words</h3><p>It may be that no technique or set of techniques, however clever, is going to push the 2% giving rate to 3% or higher. If so, we’ll need to figure out other ways to do more good with our giving.</p><p>This is why I’m so excited about effective altruism: <em>And... skipping this, since y&#x27;all on the Forum have your own reasons to be excited. 
In retrospect, though, it&#x27;s funny that I went from writing this paper and speculating that I&#x27;d work for an EA charity someday to not even looking at open EA jobs, GiveWell excepted, for almost three years.</em></p><h2>Interesting Papers and Other Links</h2><p>As I discovered while writing this thesis, the science of fundraising isn’t very rigorous yet. The team at <strong><a href="http://www.charityscience.com/operations-details/there-is-no-good-science-on-fundraising">Charity Science</a></strong> explains why.</p><p>My favorite literature reviews on charitable giving, besides mine: <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Zagefka_2015_disaster_donation_insights.pdf">Zagefka &amp; James (2015)</a></strong> and <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Bekkers_2011_perfect_review.pdf">Bekkers &amp; Wiepking (2011)</a></strong> </p><p>The <strong><a href="http://www.behaviouralinsights.co.uk/publications">Behavioural Insights team</a></strong> designs very cool experiments, many of which use subconscious “nudges” to boost charitable giving.</p><p>One of the deepest threats to effective giving is “psychic numbing”: The more suffering we know about, the less likely we are to take an objective approach to dealing with it. When one person is in danger, we’ll make an enormous effort to save them; when hundreds of thousands are in danger, we often fall into despair and stop trying to help. <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Slovic_2007_psychic_numbing_genocide.pdf">Paul Slovic explains.</a></strong></p><p>In 2010, a group of nonprofit foundations published the <strong><a href="http://aarongertler.net/wp-content/uploads/2015/06/Money20for20Good_Final.pdf">Money for Good</a></strong> study, which surveyed thousands of high-income donors in an attempt to figure out how people might be convinced to give more money, and give more to the “highest-performing nonprofits”. 
The results are fascinating, and a little sad: Only 35% of participants ever did research on their favorite charities, and only 10% of <em>those </em>people used neutral sources rather than the charities’ websites.</p><p>* * * * *</p><p>Also: I’d never have finished this thesis without the help of my wonderful advisor, <strong><a href="http://psychology.yale.edu/people/hedy-kober">Hedy Kober</a></strong>. I’d also like to thank <strong><a href="https://en.wikipedia.org/wiki/Dean_Karlan">Dean Karlan</a></strong>, my second reader, whose work helped inspire me to pursue this topic in the first place.</p> aarongertler gABGNBoSfvrkkqs9h 2018-11-21T04:09:30.271Z W-Risk and the Technological Wavefront (Nell Watson) https://forum.effectivealtruism.org/posts/HLiT2YbHBaLxqhNbY/w-risk-and-the-technological-wavefront-nell-watson <p>This is a linkpost for Nell Watson&#x27;s <a href="https://www.nellwatson.com/blog/technological-wavefront">&quot;The Technological Wavefront&quot;</a>.</p><p>Brief summary:</p><ul><li>Many ancient peoples made impressive discoveries (in some cases, better than what we have now) long before they discovered modern science.</li><li>Society generally becomes more advanced and complex over time as long as resources allow for this growth; this is the &quot;technological wavefront&quot;.</li><li>However, if we hit a resource bottleneck, the wave will break, and we will be forced to step back down the complexity ladder, losing access to some of our present technology.</li><li><strong>&quot;It is our momentum as a species that keeps the light of enlightenment burning steadily.&quot;</strong> If we lose momentum and &quot;step down&quot;, we may never recover the technology we lose, since much of our present knowledge exists either in memory or on media we won&#x27;t be able to access.<strong> This risk of permanent loss is W-risk (&quot;wavefront risk&quot;).</strong></li><li>&quot;The greatest <a 
href="https://en.wikipedia.org/wiki/Global_catastrophic_risk">existential risk</a> to the <em>meaningfulness and excellence </em>of the future of humanity may be something surprisingly benign, not to be experienced as a bang, but rather as a long drawn-out whimper.&quot;</li><li>W-risk seems more likely to the author than X-risk, so she recommends guarding against it by stockpiling documentation from multiple generations of tech and finding ways to rebuild our energy supply without much fossil fuel.</li></ul><p></p> aarongertler HLiT2YbHBaLxqhNbY 2018-11-11T23:22:24.712Z Welcome to the New Forum! https://forum.effectivealtruism.org/posts/h26Kx7uGfQfNewi7d/welcome-to-the-new-forum <p>Thanks for joining us as we launch the new EA Forum!</p><p>We&#x27;re thrilled to be sharing the Forum with you. We hope that it will become <a href="https://forum.effectivealtruism.org/posts/wrPacgwp3DsSJYbce/why-the-ea-forum">the online hub for figuring out how to do good</a>. </p><p><strong>We strongly recommend that you start by reading two posts, which <a href="https://forum.effectivealtruism.org/posts/wrPacgwp3DsSJYbce/why-the-ea-forum">set out the goals for the EA Forum</a>, and <a href="https://forum.effectivealtruism.org/posts/dMJ475rYzEaSvGDgP/what-s-changing-with-the-new-forum">explain what&#x27;s new</a>.</strong></p><p>The original EA Forum was built to foster intellectual progress and help the community coordinate. We&#x27;ve taken those ideas even further with the new version, adding <a href="https://forum.effectivealtruism.org/posts/dMJ475rYzEaSvGDgP/what-s-changing-with-the-new-forum">new features and moderation policies</a> to promote healthy discussion.</p><p>We hope you&#x27;ll explore everything the Forum has to offer! 
Let us know if there&#x27;s anything we can do to improve your experience; you can <a href="mailto:forum@effectivealtruism.org">email us</a> or use the blue speech box in the lower-right-hand corner.</p><p></p> aarongertler h26Kx7uGfQfNewi7d 2018-11-08T00:06:06.209Z What's Changing With the New Forum? https://forum.effectivealtruism.org/posts/dMJ475rYzEaSvGDgP/what-s-changing-with-the-new-forum <p> </p><p>This post is a guide to how the new Forum differs from the original. </p><p>Some of these topics -- for example, the new karma system and moderation standards -- are discussed in more detail in the Forum’s <u><a href="https://forum.effectivealtruism.org/posts/PoYi6fynNfHScf7qB/ea-forum-2-0-initial-announcement">initial announcement post.</a></u> The announcement is a few months old, so this guide adds up-to-date information on certain topics. </p><p>We also have a more general <u><a href="https://forum.effectivealtruism.org/about">guide to the new Forum</a></u>, which covers discussion norms and post creation. Some of the guide’s material is repeated in this post.</p><p>If you have questions or feedback about the forum, use the blue speech bubble in the lower-right-hand corner of the screen to get in touch!</p><p></p><h2>Categories</h2><p>Posts on the new Forum are split into two categories:</p><p><strong>Frontpage </strong>posts are timeless content covering the ideas of effective altruism. They&#x27;ll usually be posts that are useful or interesting to a wide range of readers, but they can also discuss more advanced ideas.</p><p><strong>Community </strong>posts include discussion of particular issues within the community, or updates from organizations. This content may not have ongoing relevance, but is useful for increasing coordination in the community in the short term, and discussing important community matters. 
</p><p>We’ve made this a separate category so that new users can learn about the ideas before they engage with the community, and so that people can select which types of content they want to engage with.</p><p></p><p>If your post is about applying EA methodology and perspectives to the world, it will be moved to Frontpage. It will go to Community if it is focused on the EA community. Keep in mind which section you’re writing for with each post. </p><p>If a post seems to fit both sections, it will be moved to Community by default, so that users around the world can discuss ideas on Frontpage without having to keep up-to-date on community issues.</p><p>You can view either category on its own page, or use the “All Posts” view to see everything. We may add more categories later, but these are the only active ones.</p><p></p><h2>Norms</h2><p>We’ve been talking with users about their experience engaging with the Forum, and have some suggestions for altered norms that will resolve some of the issues they raised.</p><p></p><p><strong>What sort of posts do we encourage?</strong></p><ul><li>We encourage <strong>summaries</strong> and <strong>explanations</strong>, and see them as the foundation of intellectual progress. Debate is important, but high-quality debate is difficult unless each side’s point of view has been clearly explained, with sources that support their claims. </li><li>We encourage <strong>original research</strong>. We hope that students, academics, and independent researchers will post their work on the Forum, even if it’s incomplete or unpublished. </li><li>We encourage<strong> unpolished and shorter-form posts</strong>. We’d rather hear an idea that’s presented imperfectly than not hear it at all. If you&#x27;re struggling to polish an idea, or a piece of writing, others in the community may be able to help -- but only if you share it! 
Our karma system pushes popular posts to the top of the page, so you don&#x27;t have to worry that your post will “crowd out” other content.</li><li>We encourage <strong>linkposts</strong>. You can contribute a lot to the Forum by sharing interesting material, whether or not you wrote it yourself. By sharing to the Forum, you make it easier for others to find the idea, and create a space to discuss it.</li></ul><p></p><p><strong>Discussion norms</strong></p><p>In the past, we’ve received feedback from some users who found posting on the Forum to be intimidating. Posts sometimes got a lot of criticism without many positive suggestions, which led to brief and unproductive discussion.</p><p>We want users to feel comfortable and secure about posting new content. To this end, we encourage the use of <u><a href="http://effective-altruism.com/ea/dy/supportive_scepticism_in_practice/">supportive skepticism</a></u>. It’s fine to criticize an idea, but it’s even better to support its strongest parts, do your best to patch the holes in it, and be kind when handing it back to its owner. The goal of the Forum isn’t to defeat bad ideas; it’s to find good ideas, even when they appear in the context of a flawed argument.</p><p>We also accept anonymity. Many users publish under their real names, but we’d rather you publish under a pseudonym than not publish at all.</p><p></p><p><strong>Moderation</strong></p><p>On the old EA Forum, moderators mostly focused on removing spam and offensive posts. We don’t want to have much stronger moderation, but we do want to be a little more active, mostly aiming to encourage the best users and maintain the norms we’ve set out above. </p><p>We will do this according to our <u><a href="https://forum.effectivealtruism.org/about">moderation guidelines</a></u>. Mostly, this will simply involve giving positive feedback to contributors, and we expect to use moderation powers (e.g. 
deleting comments) very rarely.</p><p></p><h2>Features</h2><p><strong>Reading and Commenting</strong></p><ul><li>When you view a list of posts, those you haven’t read (or that have new comments) are highlighted in blue.</li><li>When you view a list of posts, you’ll only see titles by default, but you can preview the content by mousing over a title and clicking “show highlight”.</li><li>You can click a user’s name to be taken to their profile; from there, you can see their past posts and comments, message them, and (new feature!) subscribe to their content (you’ll get a notification whenever they post).</li><li>You can turn your vote into a “strong vote”, which adds or subtracts more karma, by holding the vote button for an extra moment. The karma value of your upvotes and downvotes grows as you accumulate karma of your own; see <u><a href="https://forum.effectivealtruism.org/posts/PoYi6fynNfHScf7qB/ea-forum-2-0-initial-announcement">this post</a></u> for detailed numbers, and <u><a href="https://www.lesswrong.com/posts/7Sx3CJXA7JHxY2yDG/strong-votes-update-deployed">this post</a></u> for suggestions on when to use strong votes.</li></ul><p></p><p><strong>Writing</strong></p><ul><li>The default post editor is a WYSIWYG (What You See Is What You Get), so posts will look the same on the Forum as they do in your editor. </li><li>You can use Ctrl-4 (Cmd-4 for Macs) to add LaTeX to a post; this is especially useful for formatting equations. We like <u><a href="https://en.wikibooks.org/wiki/LaTeX/Mathematics#Symbols">this guide to writing math in LaTeX</a></u>.</li><li>You can request automatic cross-posting from your personal blog to the EA Forum, as long as you write about EA-relevant topics. Please <u><a href="https://docs.google.com/forms/d/e/1FAIpQLSf0M-pbfwqKsRGWoojZ6i2KuCDTDtmlBQ5mF07W1Vj404yzew/viewform?usp=sf_link">fill in this form</a></u> if you would like us to do this for you.</li></ul><p></p><h2>Prizes</h2><p>CEA will fund monthly prizes for the best posts published on the EA Forum. 
(We will do this for 3 months, and will consider further funding based on results.)</p><p>The prize amounts are as follows:</p><ul><li><strong>First: </strong>$999</li><li><strong>Second: </strong>$500</li><li><strong>Third: </strong>$200</li></ul><p>The first contest covers any posts made in November.</p><p>The winning posts will be determined by a vote of the moderators (<u><a href="https://forum.effectivealtruism.org/users/maxdalton">Max Dalton</a></u>, <u><a href="https://forum.effectivealtruism.org/users/HowieL">Howie Lempel</a></u>, <u><a href="https://forum.effectivealtruism.org/users/denise_melchin">Denise Melchin</a></u>, and <u><a href="https://forum.effectivealtruism.org/users/julia_wise">Julia Wise</a></u>) and the current top three Forum users (Peter Hurford, Joey Savoie, and Rob Wiblin). </p><p>The moderation team uses the email address <u><a href="mailto:forum@effectivealtruism.org">forum@effectivealtruism.org</a></u>; feel free to contact them with questions or feedback. </p><p></p> aarongertler dMJ475rYzEaSvGDgP 2018-11-07T23:09:57.464Z Book Review: Enlightenment Now, by Steven Pinker https://forum.effectivealtruism.org/posts/gQvaA9EbvmzQATHt3/book-review-enlightenment-now-by-steven-pinker <p>For most of history, it didn’t matter what century you lived in. With few exceptions, you would have suffered what we today consider “extreme poverty”:</p><ul><li>You’d spend your time hunting, gathering, or farming, using almost all your energy just to stay alive. Despite this effort, you’d eat the same food almost every day, and that food would barely be edible by modern standards. 
</li><li>Your only defenses against illness would be herbs, bed rest, or surgery performed with primitive tools and zero anesthesia.</li><li>You’d sleep when the sun went down -- <a href="https://lucept.com/2014/11/04/william-nordhaus-the-historic-cost-of-light/">light was expensive</a>.</li><li>And you’d probably die before the age of 60.</li></ul><p>But a few hundred years ago, things began to change. The world’s wealth exploded...</p><hr class="dividerBlock"/><span><figure><img src="http://aarongertler.net/wp-content/uploads/2018/10/Enlightenment-Now-GDP.png" class="draft-image center" style="" /></figure></span><p></p><p><strong>Source: </strong><em>Our World in Data, </em>Roser 2016, based on data from the World Bank and from Maddison Project 2014. </p><hr class="dividerBlock"/><p>...which gave us access to medicine, supermarkets, lightbulbs, and all sorts of other good things. Steven Pinker attributes this to the Enlightenment, an intellectual movement he breaks into four “themes”:</p><p><strong>Reason: </strong>Reason is our attempt to understand the world using evidence and logic, and to test our beliefs so that they evolve towards truth. During the Enlightenment, the spread of literacy and scholarship helped reason compete with its predecessors: “Faith, dogma, revelation, authority, [and] charisma.”</p><p><strong>Science: </strong>Science is the process of applying reason to understand the natural world. We’ve recently transitioned from near-universal superstition to an era when many people have a basic understanding of science. Millions of people work as <em>professional </em>scientists who expose new truths, or engineers who apply those truths to create wonders. 
Pinker sums up one of the greatest triumphs of science in two words: <u><a href="https://en.wikipedia.org/wiki/Smallpox#Eradication">“Smallpox was.”</a></u></p><p><strong>Humanism: </strong>The Enlightenment created a new system of morality: one which “privileges the well-being of individual men, women, and children over the glory of the tribe, race, nation, or religion.” This humanism has taught us to tolerate and care for each other to an <u><a href="https://press.princeton.edu/titles/9434.html">ever-greater degree</a></u>. In the process, war, slavery, and capital punishment have withered to husks of their former selves.</p><p><strong>Progress: </strong>In Pinker’s view, the Romantics of the 19th century (and the despots of the 20th) believed in twisting people to fit their ideals. But Enlightenment thinkers preferred twisting their ideals to fit people -- they tried to build a world more suitable for humans. In universities, governments, and markets, they created norms, laws, and machines that made our lives better in a thousand different ways. The Romantics sought “utopia”, but Pinker sees the goal of Enlightenment as “protopia”: we may not perfect the world, but we can always improve it.</p><hr class="dividerBlock"/><p>Though he discusses and defends the first three themes, Pinker’s main focus is progress, which he implies is driven by a virtuous cycle of increasing wealth, knowledge, and tolerance:</p><ul><li>New discoveries produce wealth, which can be used to fund more discoveries.</li><li>Some discoveries help us communicate globally, increasing our tolerance of “strangers” who no longer seem strange.</li><li>Wealth also makes us more tolerant. Nations with ample resources can afford social welfare programs and even the provision of aid to strangers in other nations.</li><li>Tolerance helps us produce wealth by trading, and gives us access to the ideas and discoveries of other people. 
(You get the idea.)</li></ul><hr class="dividerBlock"/><p>In a steady progression of strikingly similar graphs -- lines moving up for good things, down for bad -- Pinker shows that in the last few centuries, we finally escaped from stagnation. Human life has gotten better in almost every way, from a twenty-fold rise in average income since 1800 to a 50% reduction in young children killed by disease <em>since 2000</em>. </p><p>There are too many statistics to summarize, but some are especially surprising:</p><ul><li>Lethal lightning strikes in the U.S. are down 97% since 1900. In fact, there’s been a sharp decline in deaths from falls, fires, workplace injuries, and most other “accidents”. Our longer lives are due partly to medicine, but also to laws, regulations, and norms which promote safe behavior. </li><ul><li>Note: Pinker often focuses on the U.S., though historical trends are broadly similar for other developed countries (and many that are still developing).</li></ul><li>Deaths from natural disasters have also fallen drastically. Our wealth and knowledge give us innumerable small ways to defend ourselves (better hospitals, tougher structures, better early-warning systems, etc.).</li><ul><li>For an example of this, see Patrick McKenzie’s essay in “Further Reading”, at the end of this review.</li></ul><li>The average American has hundreds of hours of extra leisure time each year, compared to the early 20th century. This increase was driven both by shorter workweeks and by refrigerators, washing machines, and other appliances. Since 1900, we’ve cut weekly housework time in <em>half</em>.</li><li>Thanks to this extra time, American parents -- both mothers and fathers -- spend more time with their children than they did a century ago.</li></ul><p>Pinker holds that these improvements, while often grudgingly acknowledged, aren’t taken seriously enough by the modern counter-Enlightenment. Populist politicians attack every pillar of our present-day prosperity. 
Thinkers on the left and right criticize the “complacency” of modern society. And the media skips boring good news to promote negative stories.</p><p>Proposing a solution to these issues would require an additional book. Pinker mostly lets the numbers make his arguments for him, though he also addresses a few common counterarguments and pokes holes in his opponents’ logic. (When they even use logic, that is: one reviewer refers to Pinker’s numbers on violence reduction as <u><a href="https://www.theguardian.com/books/2015/mar/13/john-gray-steven-pinker-wrong-violence-war-declining">“amulets” and “sorcery”</a></u>).</p><hr class="dividerBlock"/><h2>Commentary</h2><p>Pinker is a stylish, entertaining writer whose book tells a number of important truths. His main claim -- that the world is getting better -- generally seems to be correct, and he backs up his best points with blistering prose. </p><p>But the claim isn’t universally true. And when the facts aren’t fully on his side, Pinker can descend into strawmanning and dodgy figures to justify his grand thesis.</p><p>One of the weakest chapters in the Progress section deals with existential risk -- which seems highly relevant, since even centuries of progress could be undone by a disaster of sufficient magnitude. As he tries to persuade us that we live in the best of times, Pinker undersells two problems that could endanger civilization: nuclear war and the development of artificial general intelligence. </p><p>On the nuclear side:</p><ul><li>He makes irrelevant points about the number of books using the words “nuclear war” and the political establishment’s current lack of interest in nuclear issues. (I don’t trust the political establishment to prioritize important problems, and I suspect that Pinker doesn’t, either.) 
</li><li>He also notes that “if we can reduce the annual chance of nuclear war to a tenth of a percent, the world’s odds of a catastrophe-free century are 90 percent”, but never acknowledges that a 10-percent chance of nuclear war is still uncomfortably high. </li><li>Finally, he points out the decline in nuclear danger since the end of the Cold War, but declines to mention new conflicts that could arise in the future; this is understandable, since he isn’t a military expert, but I’d have liked to see more evidence that our current low-risk state is stable. </li></ul><p>Still, he offers sensible proposals for reducing nuclear risk, and at least admits that the issue is worthy of attention. I left the chapter worrying slightly less about nuclear annihilation than I had before.</p><p>His discussion of artificial intelligence, on the other hand, felt perfunctory, as though he didn’t think the issue worthy of his full attention:</p><ul><li>As he did <u><a href="https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/">years ago</a></u>, he continues to state that the AI safety community fears intelligent systems that are malevolent or omniscient. But the expert consensus is more subtle and realistic. Safety researchers generally believe that a powerful AI doesn’t have to be evil or all-knowing to be <u><a href="https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity#Importance">dangerous</a></u>. It just has to be capable of pursuing goals that endanger humans with enough intelligence to accomplish those goals. </li><li>He declines to engage with intellectual arguments about central topics like <u><a href="https://wiki.lesswrong.com/wiki/Orthogonality_thesis">orthogonality</a></u> or the <u><a href="https://en.wikipedia.org/wiki/AI_control_problem">control problem</a></u>. 
Instead, he cites <em>2001: A Space Odyssey</em> (as well as <em>Get Smart</em>, a schlocky comedy from the Sixties) as a counterpoint to Nick Bostrom’s <u><em><a href="https://www.amazon.com/dp/B00LOOCGB2/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1">Superintelligence</a></em></u>. One of his few expert quotes is an out-of-context line from Stuart Russell, whose views on the topic are nearly opposite Pinker’s. </li><li>In general, throughout the section, he selects weak points (some of which I’ve never seen argued by anyone in the community) and attacks their obvious flaws. In many other chapters, he takes time to find strong opposing arguments and make data-driven counterpoints; by comparison, the pages on AI feel rushed.</li></ul><p>Writers with relevant expertise (<u><a href="https://www.scottaaronson.com/blog/?p=3654">Scott Aaronson</a></u>, <u><a href="https://www.lesswrong.com/posts/C3wqYsAtgZDzCug8b/a-detailed-critique-of-one-section-of-steven-pinker-s">Phil Torres</a></u>) have contested Pinker’s points at length. I will add only that, given Pinker’s belief that humans have achieved incredible power and wealth through the use of reason and cooperation, it seems odd that he thinks AI will never be similarly capable. (Especially when so many people stand to make money by building smart, flexible systems that work well together.)</p><hr class="dividerBlock"/><p>Even when Pinker writes about present progress instead of future problems, some of the same issues emerge. George Monbiot’s <u><a href="https://www.theguardian.com/commentisfree/2018/mar/07/environmental-calamity-facts-steven-pinker">deep dive on the environmental chapter</a></u> found sketchy data and further out-of-context quotes. And while the numbers I spot-checked myself were accurate, some of them still had an odd spin. For example, Pinker argues that the true U.S. poverty rate has dropped sharply because today’s poor Americans can afford to buy more than poor Americans in past eras. 
This is true and important, but skirts other aspects of poverty -- feelings of inferiority, harassment by police, a lack of self-determination -- that haven’t necessarily changed for the better.</p><p>That said, most of his statistics are solid and well-selected, and the data-heavy sections are by far the strongest. The book begins to flag when Pinker turns away from numbers and toward his critics; he’s not particularly charitable in the book’s more argumentative sections, rarely yielding to a single opposing point.</p><p>The arguments also suffer from a simple lack of space. His critique of religion is shallow by necessity, since he can spare it only a few pages; the same goes for his critique of Romanticism, his critique of leftist academics, and so on. These sections read like newspaper op-eds; they’re fine, but they don’t give Pinker time to exert his full strength as an academic.</p><p>I almost wish he’d turned the social criticism into a separate book. I’d prefer a version of <em>Enlightenment Now</em> that focused entirely on material and social progress, with complaints about Donald Trump replaced by deeper explanations of counterintuitive statistics.</p><hr class="dividerBlock"/><p>If I had to summarize all my complaints, I’d say that Pinker tends to over-argue his conclusion. Is <em>everything </em>really getting better? Are<em> all</em> risks truly decreasing? Is there really <em>nothing </em>of value in the Romantics and Postmoderns who followed the Enlightenment? </p><p>A few other points of note:</p><ul><li>Pinker makes a solid attempt to answer the troubling question at the heart of Yuval Harari’s <em><u><a href="https://www.amazon.com/dp/B00ICN066A/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1">Sapiens</a></u></em>: “For all of our progress, are we actually <em>happier</em>?” He finds some evidence that rising wealth has made most of us more satisfied with our lives. 
And while he avoids eras of the deeper past (knowing how the ancient Romans really felt is beyond us), he points out that our ancestors also suffered from boredom and ennui and a lack of time spent with family, all of which I’ve heard cited as issues specific to us moderns.</li><li>Animal welfare goes untouched, as it did in Pinker’s <em>The Better Angels of Our Nature, </em>which correctly noted a decline in human violence... against humans. It’s understandable that Pinker wants to focus on a single species, but graphs about the number of farm animals, many of whom live terrible lives, don’t look nearly as rosy:</li></ul><hr class="dividerBlock"/><span><figure><img src="http://aarongertler.net/wp-content/uploads/2018/10/Enlightenment-Now-meat-production.png" class="draft-image center" style="" /></figure></span><p></p><p><strong>Source: </strong>The Food and Agricultural Organization of the United Nations.</p><hr class="dividerBlock"/><ul><li>Pinker never tries to prove that economic growth will continue in the face of (theorized) <u><a href="https://en.wikipedia.org/wiki/The_Great_Stagnation">technological stagnation</a></u>, the aging of the developed world’s population, and the <u><a href="https://aarongertler.net/wp-content/uploads/2018/08/Are-Ideas-Getting-Harder-To-Find.pdf">ever-increasing cost of research</a></u>. This is harder to explain than the lack of animal welfare; economic decline could be just as dangerous as a nuclear exchange to Pinker’s “protopia”.</li><li>Along the same lines, while Pinker praises the modern regulatory system, he barely mentions the costs. Regulation certainly saves a lot of lives, but it can also <u><a href="https://ij.org/issues/economic-liberty/braiding/">become excessive</a></u> and <u><a href="https://www.mercatus.org/publication/cumulative-cost-regulations">slow down economic growth</a></u>. 
Like health and wealth, regulation tends to increase over time; unlike health and wealth, there is such a thing as <em>too much </em>regulation. </li></ul><hr class="dividerBlock"/><span><figure><img src="http://aarongertler.net/wp-content/uploads/2018/10/Enlightenment-Now-regulations.png" class="draft-image center" style="" /></figure></span><p></p><p><strong>Source: </strong>George Washington University Regulatory Studies Center.</p><hr class="dividerBlock"/><ul><li>Meanwhile, Pinker attributes the rise of <em>scientific</em> regulation, in the form of overly cautious ethics boards and bioethicists who slow medical progress, to the “stigmatization of science”. But another factor seems to be at work: have we not simply gotten carried away with our enlightened love of safety? Again, some of what Pinker defines as “anti-Enlightenment” just looks like an overabundance of progress.</li></ul><p>One last observation: <em>Enlightenment Now </em>has a lot more “now” than “Enlightenment”. As <u><a href="https://www.goodreads.com/review/show/2297689799?book_show_action=true&from_review_page=1">other reviewers</a></u> have noted, the book is light on intellectual history. Pinker gives a brief tour of names and ideas, but barely mentions how those ideas developed over the centuries, or how the Enlightenment’s philosophy influenced the Scientific and Industrial Revolutions. (Did we need Voltaire and Mill to get steam engines and assembly lines?) His most important points still hold without this material, but I wish he’d done more to connect his four “themes”.</p><p>In the end, I strongly endorse half of <em>Enlightenment Now</em>, tread with caution around a quarter, and would prefer the last quarter to have been published somewhere else. But the good material is often great, and Pinker’s occasional missteps shouldn’t obscure the beauty and joy of the facts he presents, which remain underrated. 
I’m glad we have him as a counterpoint to most of the media.</p><hr class="dividerBlock"/><h2>Who should read this book?</h2><ul><li>Pessimists who don’t think the world is getting better and want lots of counterarguments.</li><li>Optimists who like happy little graphs.</li><li>People of any outlook who want a brief tour of the last two centuries from a materialist perspective, with lots of citations for following up.</li></ul><hr class="dividerBlock"/><h2>Who shouldn’t read this book?</h2><ul><li>People who fundamentally distrust materialist perspectives.</li><li>People who prefer a few deep arguments to many surface-level arguments.</li><li>People who are familiar with this genre and don’t feel the need to remind themselves of all the ways things have gotten better.</li></ul><hr class="dividerBlock"/><h2>What questions does this book raise for the EA reader?</h2><p>Here are a few that were on my mind after I finished. Your questions might be entirely different; Pinker offers a lot to think about.</p><ul><li>Given the massive historical gains driven by economic growth, might it be worth putting more EA effort into research on growth and development?</li><li>If people really are more satisfied with their lives today than they were prior to the Industrial Revolution, how much of that satisfaction was dependent on material progress? Are there ways to capture similar life-satisfaction gains without an attendant order-of-magnitude increase in GDP?</li><li>How many of the lesser-known improvements cited by Pinker might help us think about new cause areas? </li><ul><li>For example, is there some form of technological progress that could, like the washing machine, save people multiple hours of tedium each week, and that EA could help bring into being? 
(Off-the-cuff example: Pushing forward full legal acceptance of self-driving cars by a few years might save billions of hours and many traffic deaths.)</li><li>Or are there particular safety regulations that could massively cut down on some obscure cause of death like industrial accidents, and might be relatively easy to push forward? What could we learn from an Open Phil <u><a href="https://www.openphilanthropy.org/research/history-of-philanthropy">“history of regulation”</a></u> case study?</li></ul></ul><hr class="dividerBlock"/><h2>Favorite Quotes</h2><ul><li>“Our greatest enemies are ultimately not our political adversaries but entropy, evolution (in the form of pestilence and the flaws in human nature), and most of all ignorance—a shortfall of knowledge of how best to solve our problems.”</li><li>“Bad things can happen quickly, but good things aren’t built in a day [...] if a newspaper came out once every fifty years, it would not report half a century of celebrity gossip and political scandals. It would report momentous global changes such as the increase in life expectancy.”</li><li>“Time spent on laundry alone fell from 11.5 hours a week in 1920 to 1.5 in 2014. For returning “washday” to our lives, Hans Rosling suggests, the washing machine deserves to be called the greatest invention of the Industrial Revolution.”</li><li>“In 1919, an average American wage earner had to work 1,800 hours to pay for a refrigerator; in 2014, he or she had to work fewer than 24 hours (and the new fridge was frost-free and came with an icemaker). Mindless consumerism? 
Not when you remember that food, clothing, and shelter are the three necessities of life, that entropy degrades all three, and that the time it takes to keep them usable is time that could be devoted to other pursuits.”</li><li>“On April 12, 1955, a team of scientists announced that Jonas Salk’s vaccine against polio—the disease that had killed thousands a year, paralyzed Franklin Roosevelt, and sent many children into iron lungs—was proven safe. According to Richard Carter’s history of the discovery, on that day, ‘people observed moments of silence, rang bells, honked horns, blew factory whistles, fired salutes, . . . took the rest of the day off, closed their schools or convoked fervid assemblies therein, drank toasts, hugged children, attended church, smiled at strangers, and forgave enemies.’”</li></ul><hr class="dividerBlock"/><h2>Further Reading</h2><ul><li><em>Civilization and Capitalism</em>, by Fernand Braudel, explores human material progress in meticulous detail. (Braudel spends as much time discussing improvements in bread quality as Pinker does improvements in GDP.) 
The full book is <u><a href="https://archive.org/stream/BraudelFernandCivilizationAndCapitalism/Braudel%2C%20Fernand%20-%20Civilization%20and%20Capitalism%2C%20Vol.%201#page/n27/mode/2up">free online</a></u>, but you should start with <u><a href="https://www.reddit.com/r/slatestarcodex/comments/8bypq0/reading_notes_civilization_capitalism_15th18th/">this excellent summary</a></u>.</li><li>MIT professor Scott Aaronson’s <u><a href="https://www.scottaaronson.com/blog/?p=3654">positive and pessimistic review</a></u> of <em>Enlightenment Now </em>(also linked above) includes a detailed critique of Pinker’s views on artificial intelligence.</li><li>Tyler Cowen, economist and champion book-reader, wrote a <u><a href="https://marginalrevolution.com/marginalrevolution/2018/02/enlightenment-now-new-steven-pinker-book.html">brief and thoughtful review </a></u>of <em>Enlightenment Now</em>.</li><li>Nathan J. Robinson offers a <u><a href="https://www.currentaffairs.org/2018/02/why-equality-is-indispensable">detailed rebuttal</a></u> of Pinker’s defense of inequality. 
(The rebuttal has its own flaws, of course, because <u><a href="https://quoteinvestigator.com/2015/06/15/complicated/">everything is more complicated than it seems</a></u>).</li><li>Patrick McKenzie produced a <u><a href="https://www.kalzumeus.com/2011/03/13/some-perspective-on-the-japan-earthquake/">stirring, detailed essay</a></u> about the effectiveness of modern disaster response (in the specific context of the 2011 Japanese earthquake).</li><li>One form of progress Pinker didn’t mention: The proportion of Wikipedia articles that meet a set of exacting quality standards has been <u><a href="https://en.wikipedia.org/wiki/Wikipedia:Good_article_statistics">steadily increasing</a></u> for years.</li><li>Many more forms of progress Pinker didn’t mention: <u><a href="https://www.gwern.net/Notes#ordinary-life-improvements">Gwern</a></u> lists the ways life has improved in the last three decades (the coffee has gotten better, for example).</li><li><u><a href="https://ourworldindata.org/wrong-about-the-world">Our World In Data</a></u> displays a set of surveys which show that most people are pessimistic about global development -- save for those in countries where the most development is happening, like China and Kenya. </li></ul> aarongertler gQvaA9EbvmzQATHt3 2018-10-21T23:12:43.485Z On Becoming World-Class https://forum.effectivealtruism.org/posts/4WwcNSGd3XcpBC72Y/on-becoming-world-class <p> In September, Bob Mueller posed an <a href="https://www.facebook.com/groups/437177563005273?view=permalink&id=1917381734984841">interesting question</a> on Facebook:</p><p><em>How does &quot;becoming (one of) the best in the world&quot; in some not-in-any-way effective field or niche compare to traditional EA careers?</em></p><p>Bob, who practices a niche, unspecified form of digital art, described himself as &quot;quite motivated about a lot of things&quot;. 
He didn&#x27;t seem perturbed by the prospect of switching to some more &quot;traditional&quot; career.</p><p>This made me wonder: If a person really does have equal access to both of these options -- possibly world-class at an unusual profession, or moderately talented at an EA-aligned profession -- and would be equally happy and satisfied in both places, how should they make the choice?</p><p>There are many skilled people in the community, so I suspect that Bob isn&#x27;t the only person who will end up thinking about this. I hope that my thoughts -- and those of the commenters! -- might come in handy.</p><hr class="dividerBlock"/><p><strong>Existing thoughts on this topic:</strong></p><p>80,000 Hours&#x27; latest career article notes that one high-impact career path involves <a href="https://80000hours.org/articles/high-impact-careers/#4-apply-an-unusual-strength-to-a-needed-niche">&quot;applying an unusual strength to a needed niche&quot;</a>.</p><p>Anthropologists, for example, helped the global health community contain Ebola by delivering critical information on local burial practices. And there are plenty of more common examples along these lines: Every organization needs someone who understands accounting, and someone who knows how to design a website.</p><p>That said, the EA community may be able to hire professionals in these fields as contractors, without needing to find accountants or designers who know anything about our particular beliefs. And while we need solid and competent people for these tasks, we can probably get by without anyone “world-class”. </p><p>Anyway, in Bob’s case, his work seems even less likely than anthropology to hold direct relevance for EA organizations. (As far as I can tell -- Bob, if you’re reading this, you’re welcome to provide more details in the comments!) 
So the 80,000 Hours advice isn’t too applicable for his situation, and the situation of other EAs with “not-in-any-way effective” talents.</p><hr class="dividerBlock"/><p>Given all of this, what value could someone like Bob bring to EA if he pursued a career in digital art, and rose to the top of his field?</p><p>Some Facebook commentators seemed a bit skeptical of the idea, with good reason:</p><ul><li>The top people in most fields, if those fields aren&#x27;t naturally practical or lucrative, probably won’t become wealthy or famous.</li><li>Even in fields where being at the top <em>does </em>mean fame or fortune, becoming a &quot;top person&quot; is a risky proposition. If Bob overestimates his skill, he may flounder somewhere below the peak of a <a href="http://createquity.com/2013/10/artists-not-alone-in-steep-climb-to-the-top/">winner-take-all market</a>.</li><li>Artistic skills in particular may not be very transferable. Someone who tries to become a top programmer in an obscure language could have more success falling back to a &quot;normal&quot; career in software than someone whose art goes out of style would have becoming a &quot;normal&quot; graphic designer. (Designers make a lot less money than programmers, and there aren&#x27;t nearly as many of them.)</li></ul><p>Every case is different, but these arguments do form a reasonable case that people like Bob should stick with a more standard EA-aligned career path.</p><hr class="dividerBlock"/><p>However, I suspect that there are serious potential upsides to becoming world-class even in an “irrelevant” field -- but that they&#x27;re harder to see or imagine than the downsides. 
Making art is risky, but when it&#x27;s done by someone with strong EA values, a lot of good possibilities open up.</p><p>For the rest of this piece, I&#x27;ll play Devil&#x27;s Advocate against the standard view, and consider what an EA who becomes a world-class artist -- or world-class in some other profession -- could do for the community.</p><hr class="dividerBlock"/><p><strong>1. Being world-class creates connections. </strong>Bob&#x27;s art has already been used by a &quot;classic&quot; music festival. That didn&#x27;t create any direct opportunity to advocate for EA, but let&#x27;s say his animations catch on -- and Daft Punk asks him to help out with their next tour. Suddenly, every EA who knows Bob is two degrees of separation from Daft Punk, and three degrees from much of the music industry.</p><p>&quot;But it&#x27;s not like Daft Punk is going to have an extended philosophical discussion with Bob, right? Why does this help?&quot;</p><p>This is true. But a personal connection, however brief, is a powerful thing, and can create opportunities well into the future.</p><p>If Thomas Bangalter decides to donate his electro-funk fortune to charity in a few years, and hits up his address book or reaches out on Twitter to ask for advice -- can we call this <a href="https://twitter.com/jeffbezos/status/875418348598603776?lang=en">&quot;pulling a Bezos&quot;</a>? -- he&#x27;ll be likely to notice a suggestion from his old friend Bob, who helped him out on his last tour.</p><p>And wow! Who knew that Bob was so into charity, or that he was personally acquainted with so many Oxford professors? Looks like Thomas was in luck; time to send a few emails...</p><p>In a more general sense, I&#x27;ve seen a lot of Facebook posts over the years from EAs whose wealthy friends wanted their advice on charitable donations. 
Any profession (choreographer, personal chef, stunt double) which exposes you to people with a lot of wealth and status -- even if you aren&#x27;t rich or famous yourself -- seems like it could have the same effect.</p><p>(Worth noting: Many professions with no obvious connection to wealth or status could <em>become </em>connected if you get good enough. <u><a href="https://hackernoon.com/im-32-and-spent-200k-on-biohacking-became-calmer-thinner-extroverted-healthier-happier-2a2e846ae113">Dr. Peter Attia</a></u> specializes in nutrition science, a rather neglected corner of medicine, but his work has led to his appearing on <u><a href="https://tim.blog/2014/12/18/peter-attia/">one of the world’s most popular podcasts</a></u> and serving as a personal health consultant for <u><a href="https://hackernoon.com/im-32-and-spent-200k-on-biohacking-became-calmer-thinner-extroverted-healthier-happier-2a2e846ae113">successful tech entrepreneurs</a></u>.)</p><hr class="dividerBlock"/><p><strong>2. Being world-class creates validity. </strong>I once worked for a recruiting agency that helped tech startups find programmers. But good programmers are inundated with recruiting messages, and even our most elite, experienced agents had low response rates when they reached out.</p><p>Once, we were in a desperate spot, and asked the CEO of a company we worked with to write the message himself. His message, though it was dashed-off and totally un-optimized, performed at least twice as well as ours.</p><p>Being a CEO clearly helps you get attention, but I think it helped that this particular CEO, who was a programmer with a stacked LinkedIn page, had <em>validity</em>. He used technical jargon that we recruiters couldn&#x27;t use without sounding fake. 
He clearly had something in common with the people he wrote to, and could, by implication, understand their concerns in a way we couldn&#x27;t match.</p><p>Most professional communities contain a lot of people who would become interested in EA if they heard the right message -- which could mean hearing from the right <em>person</em>. If we want to convince digital artists to think about EA (say, if we want to see more art with EA themes), it will help to have digital artists in our community, even if they&#x27;re only &quot;well-known&quot; to other artists.</p><p>Same goes for advertisers and authors; lawyers and landlords; podcasters and pro gamers. My impression from working in a few different fields is that professional networks between &quot;elite&quot; members are very tight; even one or two people with EA leanings could have a surprising amount of direct influence.</p><p>(On &quot;connections&quot; vs. &quot;validity&quot;: The first relates to a world-class person meeting people in other fields, the second to meeting people in the same field.)</p><hr class="dividerBlock"/><p><strong>3. Being world-class creates diversity. </strong>Alice, a banker who read some articles about effective altruism and wanted to learn more, signs up for EA Global 2020.</p><p>The timeline splits, and two versions of Alice attend two different conferences:</p><p>a. At EA Global A, Alice talks on the first night to ten different people. Four of them are software developers. Two are philosophers. One is an economics PhD student. One studies some sort of esoteric computer science field that she doesn&#x27;t really understand. Two are undergraduates, who are respectively studying software development and biology, because something called &quot;pandemic risk&quot; is apparently a major concern. It&#x27;s all a bit overwhelming, and very abstract, and it sets the tone for the rest of her conference.</p><p>b. At EA Global B, Alice talks on the first night to ten different people. 
Most of them work on computer science or philosophy, but one is a professional poker player, which is very cool and not at all what she was expecting. And then there&#x27;s Bob, the digital artist, who helped out with Daft Punk&#x27;s last tour and shows her some <em>epic</em> animations on his phone. Alice goes back to her hotel feeling like the community is vibrant and diverse, has a few more interesting encounters over the course of the conference, and winds up reaching out to her local group when she gets home.</p><p>A community where almost everyone does the same few things might still thrive; there are many more ways to be diverse than just &quot;career&quot;. But having a wider range of professions has some advantages:</p><ul><li>We have a wider presence throughout our communities, such that more people are likely to run into an EA at some point in their lives.</li><li>We present a more balanced picture to a world that likes to stereotype us as a haven for tech nerds, weird philosophers, and no one else.</li><li>Within EA, we learn more from each other, and develop an internal understanding of more fields. An accountant hired from a random firm can keep the books for an EA organization, but they won’t write a post on the EA Forum about basic accounting principles that even small altruistic projects can use to save money and time.</li><ul><li>This post doesn&#x27;t actually exist, as far as I know, but if we had a few more EAs with backgrounds in accounting, perhaps it would!</li></ul></ul><p>(The &quot;career diversity&quot; consideration doesn&#x27;t just apply to people at the top of their fields. If you do something that isn&#x27;t common within the community, at even a reasonably skilled level, we may have a lot to learn from you!)</p><hr class="dividerBlock"/><p>None of these considerations apply universally. 
It may be the case that, between the risk of overestimating one&#x27;s skill and the difficulty of getting to the top of <em>any </em>profession, almost all EAs would be better off pursuing career paths with clear direct impact.</p><p>But it still seems important for our community to recognize and support someone who has a realistic chance of becoming, say, a <a href="https://www.npr.org/sections/deceptivecadence/2018/10/04/654327199/macarthur-fellow-matthew-aucoin-talks-composing-and-donating-his-genius-money">famous EA composer</a>. Or an EA bronze medalist in curling. Or the first EA in the U.S. House of Representatives...</p><p>...or even something <em>really </em>weird like &quot;the EA-aligned person who wrote the most popular piece of Harry Potter fanfiction ever&quot;. We can&#x27;t ignore the risk, but we also shouldn&#x27;t ignore the opportunity. </p><hr class="dividerBlock"/><p><strong>Questions for the comments: </strong>Have you ever tried to become world-class at something? Or had the chance to try, but opted to take another path? What were your results? Are you happy with the choice you made?</p> aarongertler 4WwcNSGd3XcpBC72Y 2018-10-19T01:35:18.898Z EA Concepts: Share Impressions Before Credences https://forum.effectivealtruism.org/posts/jhexFncC9KN76Z5ki/ea-concepts-share-impressions-before-credences <p>Hello, readers! I&#x27;m trialing for the Content position at CEA; as such, I&#x27;ve been asked to draft a couple of posts for the <a href="https://concepts.effectivealtruism.org/concepts/">concept map</a>. These are meant to be close to the current style (no links in body text, fairly concise). </p><p>I&#x27;d love to hear your feedback on this post. Specific questions:</p><p><strong>1. </strong>What are your favorite words for &quot;beliefs before updating on outside information&quot; and &quot;beliefs after updating on outside information&quot;? 
We&#x27;re trying to draw that distinction with &quot;impression&quot; and &quot;credence&quot;, but those may not be the best options.</p><p><strong>2.</strong> When you imagine this from the view of a reader who is newish to EA, and clicked on a link to read about the importance of &quot;sharing your impressions&quot;, does it make sense? Is it clear why this concept is useful?</p><p><strong>3.</strong> Are there any other links we should add to &quot;further reading&quot;? (In particular, I think that a link to the Soviet example of &quot;everyone hates the government but is afraid to say so&quot; might be relevant, but I couldn&#x27;t find a good article summarizing the example.)</p><p>Thanks for your help! The other concept draft is <a href="https://forum.effectivealtruism.org/posts/hNJsTFLWLFbHh8uke/ea-concepts-inside-view-outside-view">here</a>.</p><hr class="dividerBlock"/><p><strong>Share Impressions Before Credences</strong></p><p>When we think through a question by ourselves, we form an “impression” of the answer, based on the way we interpret our experiences. (Even if you experience something that others have also experienced, what you take away from that is unique to you.)</p><p>When we discuss a question with other people, we may revise our “impression” into a “credence” by updating on their views. But this can introduce bias into a discussion. If we update before speaking, then share our updated credences rather than our impressions, our conversation partners partly hear their own views reflected back to them, making them update less than they should.</p><p>Consider two friends, Aaron and Max, who are equally good weather forecasters. Aaron has the impression that there is a 60% chance of rain tomorrow. He tells Max about this. Max had formerly had the impression that there was an 80% chance of rain tomorrow, but he updates on Aaron’s words to reach a credence of 70%. </p><p>Aaron then asks Max for his view. 
Max tells him he thinks there’s a 70% chance of rain, so Aaron updates to reach a credence of 65%. Both friends used the same decision algorithm (average both probabilities), but because Aaron shared his impression first, and Max shared a view that “reflected” that impression, Aaron failed to update in the same way as Max: he ends up at 65%, rather than the 70% both friends would have reached if Max had shared his original impression of 80%. </p><p>This dynamic explains why it can be important to share your initial impressions in group discussions, even if they no longer reflect your up-to-date credences. Doing so helps all participants obtain as much information as possible from each participant’s private experience.</p><hr class="dividerBlock"/><p><strong>Further Reading:</strong></p><p>Kawamura, Kohei, and Vasileios Vlaseros. 31 July 2014. <a href="http://homepages.econ.ed.ac.uk/~kawamura/Expert_and_Majority_4.5.pdf">“Expert Information and Majority Decisions”</a>. </p> aarongertler jhexFncC9KN76Z5ki 2018-09-18T22:47:13.721Z EA Concepts: Inside View, Outside View https://forum.effectivealtruism.org/posts/hNJsTFLWLFbHh8uke/ea-concepts-inside-view-outside-view <p>Hello, readers! I&#x27;m trialing for the Content position at CEA; as such, I&#x27;ve been asked to draft a couple of posts for the <a href="https://concepts.effectivealtruism.org/concepts/">concept map</a>.</p><p>These are meant to be close to the current style (no links in body text, fairly concise). I&#x27;d love to hear your feedback on this post. Specific questions:</p><ol><li>Do you prefer &quot;we&quot; or &quot;you&quot; as the pronoun of choice?</li><li>Are there any other links we should add to &quot;further reading&quot;?</li><li>When you imagine this from the view of a reader who is newish to EA, and clicked on a link to read about something called &quot;outside view&quot;, does it make sense? Is it clear why this dichotomy could be useful?</li></ol><p>Thanks for your help! 
The other concept draft is here.</p><hr class="dividerBlock"/><p><strong>Forecasting with the Inside and Outside View</strong></p><p>When we make predictions, we often imagine the most subjectively “likely” way that something could happen, and then judge whether this scenario seems reasonable. This is called “inside view” thinking.</p><p>For example, if we want to predict whether we’ll get to work on time tomorrow, we might plan out a morning schedule, then judge whether we think we’ll be able to follow it.</p><p>However, another method -- &quot;outside view thinking&quot; -- tends to be more reliable. In the case of getting to work, we&#x27;re more likely to make an accurate prediction if we simply think about all the other mornings we’ve gone to work, and estimate how often we were late.</p><p>(This is most helpful if tomorrow is in the same “reference class” as those other days, without unusual factors -- like an important meeting or a blizzard -- that increase or decrease the odds of lateness.)</p><p>The outside view typically works better because it “automatically” includes data about unpredictable circumstances. If we’re late to work, it could be for many different reasons: oversleeping, missing the bus, forgetting our keys, etc. Calculating the odds for each <em>individual</em> reason would be very difficult, but looking at past workdays means that we can avoid doing so, by collapsing them into a single number. If we are late <em>for</em> <em>any reason</em> half the time, and today is a typical day, we probably have about half a chance of being late, <em>for any reason</em>, tomorrow.</p><p>If you ever find yourself trying to forecast something in a common “reference class” (that is, you have information about a group of similar things), try using the outside view. The history of past events is often more reliable than your own judgment.</p><hr class="dividerBlock"/><p><strong>Further Reading:</strong></p><p>Hanson, Robin. 2007. 
<a href="http://www.overcomingbias.com/2007/07/beware-the-insi.html">Beware the Inside View.</a></p><p>Wikipedia. 2018. <a href="https://en.wikipedia.org/wiki/Reference_class_forecasting">Reference Class Forecasting.</a></p> aarongertler hNJsTFLWLFbHh8uke 2018-09-18T22:33:08.618Z Talking About Effective Altruism At Parties https://forum.effectivealtruism.org/posts/vwDjfnJ9656fAQvsC/talking-about-effective-altruism-at-parties <p>(Cross-posted from my <a href="http://aarongertler.net/party-talk-ea/"><strong>blog</strong></a>, with a few edits.)</p> <p>Many of the effective altruists I&apos;ve known were first introduced to EA through some kind of interpersonal connection -- a friend got interested first, or they heard about it at a college event, or something along those lines.</p> <p>I&apos;ve introduced a few people to EA this way myself. But it&apos;s a tricky thing to get right -- as we all know, many EA ideas can sound very strange at first, especially if the explanation is just a few words off.</p> <p>So I&apos;m listing some of the best ways I&apos;ve found to explain EA quickly, plus a few additional ideas. My goal is to explore many different frames for EA, so that we can find the best ways to explain it to many different types of people. The goal of any one of these frames is to open a short conversation, or create enough interest that someone will click on a link you send them later.</p> <p>(Note: No frame is perfect for all conversations, and some may be terrible if used in the wrong circumstances.&#xA0;Be careful.)</p> <p>&#xA0;</p> <p><strong>After you read this, please add your favorite frame(s) in the comments! 
(If you have any.)&#xA0;</strong>I&apos;d love to use this page to start collecting lots of examples, so we can collectively figure out the best-sounding frames, even if we&apos;re a long way from an RCT of EA frames.</p> <p>&#xA0;</p> <h1>List of Frames</h1> <h3>Excited Altruism</h3> <p>Some people see charity largely as a way to avoid moral guilt. I think that&apos;s a fair interpretation, but when I give, most of what I feel is excitement! I may never get the chance to save a child from a burning building [<strong><a href="http://amahighlights.com/william-macaskill/">source for the example</a></strong>], but I can still make a child&apos;s life much better, and maybe even help to save a child who would otherwise have died a preventable death. Why not be excited about that?&#xA0;I&apos;m also <strong><a href="http://blog.givewell.org/2013/08/20/excited-altruism/">excited to live in a time</a></strong> when we&apos;ve started to have really good evidence&#xA0;around how to help people on the other side of the world, so that I can be really efficient in the way that I give. When I give, I feel much the same as when I volunteer -- glad that I&apos;ve done something positive, and hopeful about the results. Hence, &quot;excited altruism&quot;. &#xA0;</p> <p>&#xA0;</p> <h3>The Feeling of Relief</h3> <p>&quot;Has there ever been a time when you&#xA0;started to get sick, and you knew it was going to be bad? And you had a moment of &apos;oh,&#xA0;no, please, anything but...&apos;?</p> <p>&quot;When I think about the people who are helped by groups that fight malaria and parasitic worms, I think about those moments.</p> <p>&quot;I&apos;ve had those &apos;oh no&apos; moments a few times, but that usually meant a really bad fever or a case of strep throat -- something that would go away in a few days when I took the right pills. 
And meanwhile, I&apos;d be more or less okay -- I could call in sick to work, get homework from my professor, and&#xA0;catch up on life when I got better.</p> <p>&quot;But if I lived&#xA0;in a village where malaria is very common, I&#xA0;might have a higher-than-50-percent chance of getting it during the rainy season. So when I woke up and felt sick, my &apos;oh no&apos; moment might&#xA0;mean several weeks spent in bed. During this time, I wouldn&apos;t&#xA0;be fit to farm,&#xA0;and&#xA0;I&apos;d lose quite a bit of money as a result -- meaning&#xA0;I might be skipping meals later. If it were a parasitic worm infection, the symptoms wouldn&apos;t be the same, but the general principle holds true; I&apos;d be in bad shape.</p> <p>&quot;When I give money to buy deworming pills or a malaria bednet, I imagine someone who would be having one of those &apos;oh no&apos; moments instead being perfectly healthy and not having their life disrupted. I imagine how good I&apos;d feel if someone stopped one of my strep infections before it happened. It feels awesome,&#xA0;and that makes me excited to give someone a chance at one fewer &apos;oh no&apos; moment.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Global Inequality (Money)</h3> <p>(For use in conversations where people claim to favor local giving.)</p> <p>&quot;I&apos;ll grant that inequality causes problems in the United States. But U.S. inequality is minor compared to inequality on the planet Earth.</p> <p>&quot;Did you know that Earth has a higher Gini coefficient than any single country? A lot of us are part of the global &apos;99 percent&apos;, so to speak.</p> <p>&quot;The average American CEO makes about 200 times as much as me. And I make about 200 times as much as some of the poorest people in the world.</p> <p>&quot;One big project of effective altruism is to reduce global inequality. By letting more migrants into the U.S. 
(where they&apos;ll send money back to their families), by cutting down on illicit cash flows (when rich people in poor countries don&apos;t pay taxes and hide money abroad), and also by literally asking rich people to give cash to poor people. That seems to work pretty well.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Global Inequality (Attention)</h3> <p>(For use in conversations where people claim to favor local giving.)</p> <p>&quot;There are a lot of Americans we tend to ignore. The homeless, American Indians, ex-felons... plus&#xA0;a lot of other groups. And that&apos;s terrible.</p> <p>&quot;But I think that we ignore people in some other countries to an equal&#xA0;extent. That happens with charitable giving, too. For every dollar an American donates, we send about <strong><a href="https://www.charitynavigator.org/index.cfm?bay=content.view&amp;cpid=42">six cents</a></strong> to other countries.</p> <p>&quot;Intellectually, I understand the philosophical position behind giving locally. But on a personal level, I find it really hard to see nationality as a thing that should guide me, apart from any other factors. Maybe direct friendship, or shared membership in a small group, but not nationality.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Doubling Income</h3> <p>&quot;Major social problems in the U.S. are generally harder to solve with money than major social problems in the developing world.&#xA0;Still, there are obviously ways you could spend money to change the life of a fellow American. And that would often be a kind, world-improving thing to do.</p> <p>&quot;But money goes so much further abroad that it took me a long time to understand exactly how big the difference is.&#xA0;</p> <p>&quot;GiveDirectly lets me send money straight to someone in Kenya. If I give them about $700, they can use that money to double a family&apos;s income for the entire year. Can you imagine what the impact would be if you doubled someone&apos;s income in the U.S. 
for a year?</p> <p>&quot;But that would be about 20 times as expensive in the U.S. And the impact would be similar either way.&#xA0;$700 transforms a poor Kenyan&apos;s life in about the same way you&apos;d expect $14,000 to transform a poor American&apos;s life.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Shared Humanity</h3> <p>&quot;Does the cold, calculating part of EA mean that I lose some of my empathy?</p> <p>&quot;Well, maybe. It&apos;s definitely hard for me to feel empathy for someone who survives&#xA0;on a couple of dollars a day. It would feel arrogant to claim that I &apos;understand&apos; that person. Our lives are different in almost every way.</p> <p>&quot;But even if I&apos;ll never know what it&apos;s like to eat the same thing for almost every meal, I think there are basic human pleasures that I share with pretty much every other human who ever lived.</p> <p>&quot;I know what it&apos;s like to learn something new. I know what it&apos;s like to see an old friend after a long separation. I know what it&apos;s like&#xA0;to sit inside when the weather is bad and hear the raindrops on the roof and think &apos;yep, I&apos;m glad I&apos;m not outside right now&apos;. I know what it&apos;s like to fall in love with someone and wake up smiling just because that person&#xA0;<em>exists.</em></p> <p>&quot;So when I need an emotional boost, I imagine the person I&apos;m helping. And the way that, even though we are just about as different as two humans can be, we still share those awesome things. 
And I&apos;m hopefully freeing that person up to not feel as much stress, and to have more time to feel the kinds of happiness I think we&#xA0;<em>do&#xA0;</em>share.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Ridiculously Lucky</h3> <p>I keep my version of this frame on <strong><a href="http://aarongertler.net/donations/">the page where I track my giving.</a></strong> I might also whip out my phone and quote my idol,&#xA0;<strong><a href="http://theunitofcaring.tumblr.com">The Unit of Caring:</a></strong></p> <blockquote>I firmly refuse to feel guilty about my outrageous cosmic luck. I find it far more satisfying to pay it forward. See, luck, like pretty much everything else, can be bought with money [...] I was born by sheer chance into a country that has eradicated malaria already but I can buy a couple bednets towards the project of stamping it off this earth entirely [...] Almost every advantage I have, everyone ought to have, and giving them money is the closest I can come to putting a finger on the cosmic scales.</blockquote> <p>&#xA0;</p> <h3>Revenge</h3> <p>&quot;I have a hard time getting angry at people. I usually feel like people&apos;s reasons for doing bad things made sense to them at the time, and whenever I get mad at them I remember times people were mad at me for doing bad things, and then I feel kind of sick.</p> <p>&quot;But getting angry is really satisfying. So instead I get angry at problems. I&apos;m angry at meteors that have the sheer <em>nerve</em>&#xA0;to get within a billion miles of Earth. I&apos;m angry at mosquitoes, because they&apos;re always biting&#xA0;people. I&apos;m angry at the social systems, built on purpose or by accident, that ruin lives with no remorse, because they are abstract concepts that can&apos;t even <em>feel </em>remorse.</p> <p>&quot;EA is my way of saying &apos;screw you, problems!&apos; You want to keep people in jail? I&apos;ll bail them out. You want to make people sick? I&apos;ll&#xA0;<em>murder&#xA0;</em>you. 
You want to threaten my planet? I&apos;ll wipe the very&#xA0;<em>possibility&#xA0;</em>of you from existence.&#xA0;And I&apos;ll do it with the cold, brutal efficiency of an executioner.&quot;</p> <p>&#xA0;</p> <h3>Social Justice</h3> <p>&quot;I&apos;m generally a fan of the modern social justice movement. I think they&apos;ve done more good than harm, and will end up doing much more good than harm in the long run.</p> <p>&quot;But like any movement, they wind up focusing on some people more than others, to avoid being stretched really thin. I think EA does a good job of catching some of the groups&#xA0;SJ sometimes doesn&apos;t catch.</p> <p>&quot;A focus on local movements and protests means we don&apos;t always catch people who aren&apos;t from our country, and that&apos;s one of EA&apos;s major focuses. And while SJ has a lot of vegetarians and vegans, I haven&apos;t seen a lot of animal-rights rhetoric; that&apos;s another major EA focus.</p> <p>&quot;When I think &apos;all lives matter&apos;, I&apos;m not going for a counterpoint to &apos;black lives matter&apos;. I&apos;m going for &apos;don&apos;t forget the lives of people who live outside the classic U.S. race/gender/class spectrum!&apos; Even if &apos;racism&apos; manifests very differently in India or Nigeria or Myanmar, poverty and the lack of education still cause very familiar problems there.&quot; &#xA0;</p> <p>&#xA0;</p> <h3>Being Embarrassed in the Future</h3> <p>&quot;<strong><a href="http://paulgraham.com/say.html">Paul Graham</a></strong> wrote a great essay on &apos;what we can&apos;t say now&apos;. It&apos;s about how the world might change in the future.</p> <p>&quot;We can look back at every other era and find problems with how they lived. Segregation, slavery, wars of conquest... there&apos;s always something.</p> <p>&quot;What will our great-grandchildren be embarrassed about when they look at the year 2016? Probably the &apos;rights issues&apos; we&apos;re still struggling with. 
But I think they&apos;ll also be confused and angry about how many people we hung out to dry because they weren&apos;t in the same country, or because of generic arguments about how &apos;aid doesn&apos;t help&apos; or &apos;start helping in your own backyard&apos;.</p> <p>&quot;This extends to some uncomfortable opinions, too. Like the idea that some causes, or specific charities, are simply a waste of time and money. I wonder if our descendants will look&#xA0;at the amount we give to museums or symphony orchestras, and just be completely confused as to why so many people were dying for lack of really cheap medicine in other parts of the world.</p> <p>&quot;Basically, I&apos;m assuming my descendants will be smarter than I am, and I try&#xA0;to donate&#xA0;in part so that my giving will make sense to them.&quot;</p> aarongertler vwDjfnJ9656fAQvsC 2017-11-16T20:22:46.114Z Meetup : Yale Effective Altruists https://forum.effectivealtruism.org/posts/D32ujb3MbKS6DTCCZ/meetup-yale-effective-altruists <h2>Discussion article for the meetup : <a href="/meetups/s">Yale Effective Altruists</a></h2> <div> <p> <strong>WHEN:</strong> <span>11 October 2014 10:59:03PM (-0400)</span><br> </p> <p> <strong>WHERE:</strong> <span>New Haven, CT</span> </p> </div> <div> <div><p>Come hang out with Yale&apos;s student EA group! Contact 302-824-2026 or aaron.gertler@yale.edu for more information on location and details.</p></div> </div> aarongertler D32ujb3MbKS6DTCCZ 2014-10-07T02:59:35.605Z