Comment by aarongertler on Should EAs participate in the Double Up Drive? · 2019-01-14T21:00:15.006Z · score: 1 (1 votes) · EA · GW

Now that the 2018 Drive is over, what EAs should do in 2019 will depend on the terms of the match (if it even gets offered again). As soon as the Drive starts, I plan to get a clear answer about how it works, though the more people who ask, the better!

Some possibilities for how it might be run:

  • The Drive is truly counterfactual (every dollar you give = an extra dollar from the sponsors)
  • The Drive only affects distribution, not amount (your money just influences where and not whether sponsors give their funds)
  • Somewhere in between (e.g. only funds beyond $2 million lead to additional matching from sponsors, because they plan to give $2 million no matter what)

This year, it seems like the Drive turned out to be counterfactual for all money raised after $2.4 million, but not necessarily before (we don't actually know).
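
To make the differences concrete, here's a minimal sketch (my own illustration, not any official calculator; the function name, the use of the $2.4 million figure as a hard threshold, and the example amounts are all assumptions) of how many counterfactual sponsor dollars a donation unlocks under each of the three structures above:

```python
# Toy model of the three match structures; all numbers are illustrative.

def counterfactual_dollars(donation, model, raised_so_far=0.0,
                           sponsor_budget=2_400_000):
    """Extra sponsor money that exists only because of this donation."""
    if model == "truly_counterfactual":
        # Every dollar given is matched by a dollar that otherwise
        # wouldn't have been donated at all.
        return donation
    if model == "distribution_only":
        # Sponsors give the same total regardless; donors only steer it.
        return 0.0
    if model == "threshold":
        # Sponsors planned to give up to their budget no matter what;
        # only donations past that point unlock genuinely new money.
        past_threshold = max(0.0, raised_so_far + donation - sponsor_budget)
        return min(donation, past_threshold)
    raise ValueError(f"unknown model: {model}")

# A $1,000 donation when $2,399,500 has already been raised:
for m in ("truly_counterfactual", "distribution_only", "threshold"):
    print(m, counterfactual_dollars(1_000, m, raised_so_far=2_399_500))
# -> 1000, 0.0, and 500.0 respectively
```

The point of the sketch is just that the same $1,000 can be worth anywhere from $0 to $1,000 in extra sponsor money, depending on which structure the Drive actually uses.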

If the Drive is "truly counterfactual", or is likely to reach the amount above which extra funds will be counterfactual, it is a good opportunity. This would mean that EAs should strongly consider saving up to donate through the Drive, especially if they may not easily be able to do Facebook donation matching (e.g. because they are European and there's a higher risk that their banks will reject a donation through Facebook).

However, we do want to be sure we don't flood the Drive with so many donations that the sponsors feel reluctant to run it in future years. If that becomes a concern, it's not something individuals need to worry about (unless you're donating mid-five-digit sums or more), but CEA and other orgs may share it around less widely. We'll see what the terms are this year, though.

The Meetup Cookbook (Fantastic Group Resource)

2019-01-12T03:56:29.655Z · score: 11 (6 votes)
Comment by aarongertler on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-12T00:26:47.111Z · score: 5 (5 votes) · EA · GW

Notes on this, from someone who was fairly involved in the GT process:

  • Even if competition for the Facebook match increases, the amount of data we gathered this year should help us be better-prepared next year, so the "base" percentage of a match should be above 65%, as long as you trust yourself to follow best practices around donating quickly.
  • Non-Americans had a much harder time getting matched by Facebook for some reason (probably banking/credit card authorization issues). They should take this into account when planning donations.
  • Other large matching campaigns sometimes pop up, mostly but not only during Giving Season. It's good to keep an eye out for those (as the community does now) and be ready to move on an opportunity if it happens mid-year.
    • This also implies that finding out whether a match is actually counterfactual can be a really big deal for the community; I wish I'd worked harder to confirm with the Double Up Drive team whether their match was counterfactual (I think the answer turned out to be "yes", in which case I should have done more promotion, but I'm not actually sure).
  • There are other good reasons to donate both throughout the year (e.g. it gives charities better info and smoother cashflow) and at year's end (e.g. many non-EA people are thinking about giving, and you might influence them by discussing your donations in public).

It seems valuable for someone to write up a more detailed document on timing considerations: "give now or give later" is a popular question, but often implies giving many years later; "when to give in the next 12 months" is very different.

One more thing which seems important: There are other ways to optimize a donation besides timing! Once you know how much you'll give, and where, you have many options for how to share that information; you can write about it, post on social media, or set up your own "match" for friends (make it truly counterfactual, and try to discourage EA people from using up matching funds that might instead attract non-EA people).

Comment by aarongertler on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-01-12T00:10:49.555Z · score: 4 (4 votes) · EA · GW

I second the suggestion to add a summary at the top of the post.

The Forum has a feature that it took me a while to notice: On pages that show lists of posts, each post has an estimated reading time. The time for this post, for example, was "20m". If someone is thinking of investing 20 minutes in a post (and that number is likely conservative if they need to pause, think, go back, etc.), giving them a summary can be really valuable in helping them make that decision.

Comment by aarongertler on What movements does EA have the strongest synergies with? · 2019-01-12T00:04:53.175Z · score: 2 (2 votes) · EA · GW

Thanks for specifying how you want answers delivered! This isn't about any one movement (it's a meta-answer), but I'll post it here because I think it points to action that you or someone else could take to resolve the question.

Rather than trying to think of all the movements/groups I can and filtering by "seems synergistic", I'll try to break this question down.

Any group with which we share some common trait might have synergy; the number of common traits should correlate with the level of synergy.

Some traits of EA:

  • Cares about charity
  • Cares about career choice
  • Cares about using evidence
  • Cares about maximizing output/being efficient
  • Cares about certain "neglected" groups: The global poor, farm/wild animals, people in the future
  • Cosmopolitan, with a focus on the entire world/"grand strategy"
  • Political lean toward economic conservatism and social liberalism

Charities themselves may not be huge fans of us if they see us as critical/rivals, but people trying to donate are synergistic with us. Which groups of people spend a lot of time trying to donate? People with a lot of money, people planning their legacies, people who work as charitable consultants, etc.

You can do the same thing for each list item, and try to notice which groups fall into multiple categories or have "anti-categories" they expressly don't fit:

  • Many Buddhists care about the global poor, cosmopolitanism, and charity.
  • Libertarians like efficiency, economic conservatism, and social liberalism.
  • College students like social liberalism, career choice, and cosmopolitanism (but aren't big on economic conservatism). And so on.

If you do wind up building a list in this way, you should share it on the Forum more generally! I've wanted a resource like this for a while but haven't had time to build it carefully.

Comment by aarongertler on What are ways to get more biologists into EA? · 2019-01-11T23:45:24.961Z · score: 7 (7 votes) · EA · GW

Some options that come to mind:

1. Increase the amount of funding available for biologists working on projects within EA-aligned areas (neglected tropical diseases, pandemic prevention, longevity, etc.)

2. Create a professional network for biologists working on said projects and hold events

3. Invite biologists who receive Open Phil or other EA grants to attend EA Global (with free tickets/travel)

4. Something anyone reading this might be able to do: Find biologists working on cool things and make them feel appreciated. Try to understand their work, share their work enthusiastically (to the extent that you understand it), tell them they're making a difference, and recommend they look into any EA funding options which might be relevant for them. (Be selective in this last case; you don't want anyone to waste their time applying for grants that aren't actually a good fit for their projects.)

In general, people either find EA because they like the general mission or because EA contains a lot of work/people relevant to something they liked already. If you're thinking about a particular interest group (like biologists), think about what biologists value, and ways to let them know the EA community has those things.

Comment by aarongertler on What movements does EA have the strongest synergies with? · 2019-01-11T23:33:09.031Z · score: 1 (1 votes) · EA · GW

Thanks for responding, kbog!

For future reference, we recommend posting answers in the "New Answer" section, rather than as comments. The comment section is meant for asking clarifying questions, or for thoughts that aren't actually answers. (This is a new feature, so we know it takes some getting used to!)

Comment by aarongertler on What movements does EA have the strongest synergies with? · 2019-01-11T23:32:47.877Z · score: 1 (1 votes) · EA · GW

Thanks for responding, Jemma!

For future reference, we recommend posting answers in the "New Answer" section, rather than as comments. The comment section is meant for asking clarifying questions, or for thoughts that aren't actually answers. (This is a new feature, so we know it takes some getting used to!)

Comment by aarongertler on List of possible EA meta-charities and projects · 2019-01-09T23:24:37.678Z · score: 3 (3 votes) · EA · GW

I looked through this list to see which ideas might already exist, or be immediately feasible without building anything new. This caught my eye:

A vetting system for project ideas

What features would this system have that "posting a Google Doc on the EA Forum" doesn't have? Doing so allows you to choose who can or can't see it, present your idea in as much detail as you'd like, see how much the EA community likes it in general, get feedback from experts, etc. Would it be helpful to have a centralized space only for project ideas?

(There are, of course, project-management apps that are much better than Google Docs for actually implementing projects, but I'm not aware of any specialized software just for getting feedback on an initial idea.)

CEA is trying to make the Forum the best place to post EA content, in the sense that this is generally where you'll find the most readers and get the best feedback. We'd hope that "EA projects" are exactly the kind of thing that gets posted here, so if there's a way in which we could add features to the Forum which would make that easier, we'd be interested in hearing about it!

Comment by aarongertler on The Global Priorities of the Copenhagen Consensus · 2019-01-08T20:57:23.686Z · score: 4 (3 votes) · EA · GW

I don't hold an especially high opinion of Lomborg's epistemics, since I've seen some pretty sharp critiques of The Skeptical Environmentalist (not sure about his newer work). But since the CC reports were mostly produced by non-Lomborg people, that doesn't influence my view of them very much.

However, I agree with other responses that collaborating with CC comes with a degree of risk given Lomborg's status as a controversial figure. I think it's worth trying to learn from their work, but I don't have any particular view on working with them directly.

The Global Priorities of the Copenhagen Consensus

2019-01-07T19:53:01.080Z · score: 41 (24 votes)
Comment by aarongertler on Altruistic Motivations · 2019-01-07T06:17:54.433Z · score: 2 (2 votes) · EA · GW

Yes. That's currently how our cross-posting program works. Nate's blog isn't active at the moment, but he let us know that we could cross-post old material.

Comment by aarongertler on The Importance of Time Capping · 2019-01-03T21:16:58.153Z · score: 2 (2 votes) · EA · GW

I call this "timeboxing", and it's been really useful to me when I can bring myself to do it. I'll also note that Giving What We Can has acknowledged that they should have spent less time on certain research:

Giving What We Can research spent too many resources evaluating the same interventions and organizations that GiveWell was evaluating.

Comment by aarongertler on are values a potential obstacle in scaling the EA movement? · 2019-01-03T21:09:48.556Z · score: 2 (2 votes) · EA · GW

You're bringing up a lot of questions that are core to the EA movement, and which have been debated in many different places. The links from CEA's strategy page might interest you; they go into CEA's models of how to build communities, and where "impact" comes from.

In general, there's no simple answer to how much a person's personal values matter for their potential impact. To give a simplistic example, value alignment with EA seems more important for a moral philosopher (whose work is all about their values) than for a biologist (if someone decides to work on anti-aging research because they want to win a Nobel Prize and think Aubrey de Grey has a cool beard, they may still do excellent, world-shaping work despite non-EA motives).

You may want to check your intuition that older generations are more value-driven against data; older people tend to be more religious, but younger people tend to give "better" answers on many important moral questions (look up "the expanding moral circle" for more on this idea). Meanwhile, the extent to which people make sacrifices to act on their values seems to fluctuate from generation to generation; political protests go from popular to unpopular to popular again, people worry less about pollution but more about eating meat, etc.

Thanks to modern communication systems and growing moral cosmopolitanism throughout the world, this is probably the best time in history to promote something like EA, and conditions are getting better every year.

Comment by aarongertler on Finding it hard to retain my belief in altruism · 2019-01-03T20:50:43.162Z · score: 1 (1 votes) · EA · GW

Even though my logical belief towards altruism (stemming from no longer valuing intrinsically the happiness of a stranger) is gone, my heart will always want to help those who really need help through effective altruism. I don't think that's good enough though and really hope somebody can reconvince me to believe logically in altruism instead of just emotionally.

Maybe doing what your heart wants to do is "good enough", if a lot of people who seem very logical and reasonable to you have come to the same conclusion through more "logical" routes?

I've been involved with EA for four years and work full-time at an EA organization, but I still wouldn't call my commitment to EA an especially "logical" one. I'm one of those unusual people (though they're much more common within EA) who grew up with a strong feeling that others' happiness mattered as much as mine; I cried about bad news from the other side of the world because I felt like children starving somewhere else could just as easily have been me.

I reached that conclusion emotionally -- but when I went to college and began studying philosophy, I realized that my emotional conclusion was actually also supported by many philosophers, plus thousands of other people from all walks of life who seemed to be unusually thoughtful in their other pursuits. Seeing this was what convinced me I'd probably found the right path, and I haven't seen strong evidence against EA being broadly "correct" since I joined up.

So even if you don't "logically" value the happiness of strangers, I think it's safe to trust your heart, if doing so is leading you to a path that seems better for the world, and you're still using logic to make decisions along that path. Even if you get lost in a strange city and stumble upon your destination by accident, that doesn't mean you need to leave and find your way back using a map.

Comment by aarongertler on Women's Empowerment: Founders Pledge report and recommendations · 2018-12-21T03:09:40.680Z · score: 7 (4 votes) · EA · GW

Habryka: Did you see this line in the introduction of this post?

We also recommend charities that are highly cost-effective in improving women’s lives but do not focus exclusively on women’s empowerment. We discuss these organisations, including those recommended by our research partner GiveWell, in other research reports on our website.

On the other hand, it does seem like a specific GiveWell charity or two should have shown up on this list, or that FP should have explicitly noted GiveWell's higher overall impact (if the impact actually was higher; it seems like GiveDirectly isn't clearly better than Village Enterprise or Bandhan at boosting consumption, at least based on my reading of p. 50 of the 2018 GD study, which showed a boost of roughly 0.3 standard deviations in monthly consumption vs. 0.2-0.4 SDs for Bandhan's major RCT, though there are lots of other factors in play).

I think I've come halfway around to your view, and would need to read GiveWell and FP studies much more carefully to figure out how I feel about the other half (that is, whether GiveWell charities really do dominate FP's selections).

I'd also have to think more about whether second-order effects of the FP recommendations might be important enough to offset differences in the benefits GiveWell measures (e.g. systemic change in norms around sexual assault in some areas -- I don't think I'd end up being convinced without more data, though).

Finally, I'll point out that this post had some good features worth learning from, even if the language around recommending organizations wasn't great:

  • The "why is our recommendation provisional" section around NMNW, which helped me better understand the purpose and audience of FP's evaluation, and also seems like a useful idea in general ("if your values are X, this seems really good; if Y, maybe not good enough").
  • The discussion of how organizations were chosen, and the ways in which they were whittled down (found in the full report).

On the other hand, I didn't like the introduction, which used a set of unrelated facts to make a general point about "challenges" without making an argument for focusing on "women's empowerment" over "human empowerment". I can imagine such an argument being possible (e.g. women are an easy group to target within a population to find people who are especially badly-off, and for whom marginal resources are especially useful), but I can't tell what FP thinks of it.

Comment by aarongertler on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-21T00:32:57.469Z · score: 4 (5 votes) · EA · GW

You make many good points here! One note: I'd suggest changing the title of the piece, which is quite ambiguous at the moment. Maybe something which refers to the topic of bipartisanship, or to journalism that isn't careful enough with logic or statistics?

Comment by aarongertler on Survey of 2018 EA Survey · 2018-12-20T23:18:23.395Z · score: 1 (1 votes) · EA · GW

Do your "correct answer" numbers correct for the people who put something like "no answer" or "prefer not to answer"?

I'd guess that most survey respondents were actually guessing something like "percentage of people who give an answer, and for whom the answer is X", even if they were supposed to be guessing "percentage of all people who answer X".
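
If it helps, here's a toy illustration of the two denominators (the numbers are hypothetical, purely to show the mechanics):

```python
# Hypothetical survey counts, showing how the denominator changes the result.
respondents = 1000   # everyone who took the survey
no_answer = 200      # "no answer" / "prefer not to answer"
answered_x = 400     # respondents who answered X

share_of_all = answered_x / respondents                      # 40.0%
share_of_answerers = answered_x / (respondents - no_answer)  # 50.0%

print(f"of all respondents:    {share_of_all:.1%}")
print(f"of those who answered: {share_of_answerers:.1%}")
```

A guess of 50% would be "wrong" under the first reading but exactly right under the second, so correcting for non-answers could change how accurate respondents look.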

Comment by aarongertler on What's going on with the new Question feature? · 2018-12-20T23:12:16.156Z · score: 3 (3 votes) · EA · GW

Thanks, JP! I've always had more questions than I knew what to do with, and now I know what to do with them.

Comment by aarongertler on Women's Empowerment: Founders Pledge report and recommendations · 2018-12-20T23:11:16.172Z · score: 4 (3 votes) · EA · GW

Thanks for writing this out, Habryka!

These are all important considerations, and while I disagree about the strength of the methodology (it seems stronger than that of many posts I've seen be popular on the Forum), I agree that having a more comparison-friendly impact measure would have been good, as well as a justification for why we should care about this subfield within global development.

----

I'm not sure how the Forum should generally regard "research into the best X charity" for values of "X" that don't return organizations with metrics comparable to the best charities we know of.

On the one hand, it can be genuinely useful for the community to be able to reach people who care about X by saying "with our tools, here's what we might tell you, but if you trust this work, maybe also look at Y".

On the other hand, it may drain time and energy from research into causes that are more promising, or dilute the overall message of EA.

I guess I'll keep taking posts like this on a case-by-case basis for now, and I thought this particular case was worth a (non-strong) upvote. But I have a better understanding of why one might come to the opposite conclusion.

Forum Update: New Features, Seeking New Moderators

2018-12-20T22:02:46.459Z · score: 22 (12 votes)

What's going on with the new Question feature?

2018-12-20T21:01:21.607Z · score: 10 (4 votes)
Comment by aarongertler on [deleted post] 2018-12-20T04:09:38.902Z

Nice project! I think that ideas of the form "give people with a lot of time and curiosity some small incentive to learn about EA instead of something else" offer a lot of room for exploration. I wonder whether this sort of thing is better than an 80,000 Hours-style career workshop (with career planning followup from your group, the 80K newsletter, etc.)? Maybe it depends on the value of the translations?

Comment by aarongertler on Women's Empowerment: Founders Pledge report and recommendations · 2018-12-20T02:42:41.101Z · score: 5 (5 votes) · EA · GW

Thanks for sharing this research! Women's empowerment may not be a standard EA cause area, but I'm almost always interested to see good evaluations of charities working in the global development space. I especially liked Founders Pledge's evaluation of J-PAL's GPI program for scaling proven interventions.

I see that several people downvoted the post: If you did this, and see this comment, would you mind explaining why?

Even if you disagree with the importance of an area that Founders Pledge chooses to evaluate, it would be helpful to share why you think the content doesn't meet the standards or goals of the Forum. I've personally found their evaluations to be pretty strong; not as thorough as GiveWell, but certainly adding solid information to discussions around EA topics.

Comment by aarongertler on Critique of Superintelligence Part 1 · 2018-12-18T17:14:51.400Z · score: 1 (1 votes) · EA · GW

Connected sequences of posts are definitely encouraged, as they are sometimes the best way to present an extensive argument. However, I'd generally recommend that someone make one post over two short posts if they could reasonably fit their content into one post, because that makes discussion easier.

In this case, I think the content could have been fit into fewer posts (not just one, but fewer than five) had the organization system been a bit different, but this isn't meant to be a strong criticism -- you may well have chosen the best way to sort your arguments. The critique I'm most sure about is that your section on "the nature of intelligence" could have benefited from being broken down a bit more, with more subheadings and/or other language meant to guide readers through the argument (similarly to the way you presented Bostrom's argument in the form of a set of premises, which was helpful).

Comment by aarongertler on EA Forum Prize: Winners for November 2018 · 2018-12-17T07:27:53.508Z · score: 1 (1 votes) · EA · GW

I can't speak for any of the voters, but they can use any criteria they want (taking our goals for the Forum as a set of suggestions that, in practice, they broadly agree with). I'd guess that karma is something that voters consider, because it's a reasonable measure of how helpful people actually found a post.

Comment by aarongertler on EA Forum Prize: Winners for November 2018 · 2018-12-17T07:23:02.086Z · score: 1 (1 votes) · EA · GW

Thanks for splitting your questions into different comments! Good policy for threads that aren't too crowded. The runoff vote was plurality-wins, because we didn't want a tie to further delay the announcement (our voters have a lot of other things on their plates). We'll keep iterating on the process as we move forward.

EA Forum Prize: Winners for November 2018

2018-12-14T21:33:10.236Z · score: 48 (23 votes)
Comment by aarongertler on [blog cross-post] So-called communal narcissists · 2018-12-14T00:23:04.615Z · score: 7 (5 votes) · EA · GW

Arguments of the type made by that article always sound to me like this:

"It's foolish to invest in Apple, because Steve Jobs is a corporate narcissist: He only cares about his own vision and self-aggrandizement. Maximizing shareholder value is just his way of getting attention."

It's not a perfect parallel, given the real concern that someone who does good for their own ego may stop doing good if it stops feeling rewarding (while Steve Jobs is guaranteed to keep being famous while Apple prospers). But since nearly everyone has at least some level of egocentric motivation, the solution for a group that really cares about doing good is closer to "show more appreciation and reward good work" than "watch out for communal narcissists".

Comment by aarongertler on Critique of Superintelligence Part 1 · 2018-12-13T23:54:12.200Z · score: 9 (4 votes) · EA · GW

Weak upvote for engaging seriously with content and linking to the other parts of the argument.

On the other hand, while it's good to see complex arguments on the Forum, it's difficult to discuss pieces that are written without very many headings or paragraph breaks. It's generally helpful to break down your piece into labelled sections so that people can respond unambiguously to various points. I also think this would help you make this argument across fewer than five posts, which would also make discussion easier.

I'm not the best-positioned person to comment on this topic (hopefully someone with more expertise will step in and correct both of our misconceptions), but these sections stood out:

To see how these two arguments rest on different conceptions of intelligence, note that considering Intelligence(1), it is not at all clear that there is any general, single way to increase this form of intelligence, as Intelligence(1) incorporates a wide range of disparate skills and abilities that may be quite independent of each other. As such, even a superintelligence that was better than humans at improving AIs would not necessarily be able to engage in rapidly recursive self-improvement of Intelligence(1), because there may well be no such thing as a single variable or quantity called ‘intelligence’ that is directly associated with AI-improving ability.

Indeed, there may be no variable or quantity like this. But I'm not sure there isn't, and it seems really, really important to be sure before we write off the possibility. We don't understand human reasoning very well; it seems plausible to me that there really are a few features of the human mind that account for nearly all of our reasoning ability. (I think the "single quantity" thing is a red herring; an AI could make self-recursive progress on several variables at once.)

To give a silly human example, I'll name Tim Ferriss, who has used the skills of "learning to learn", "ignoring 'unwritten rules' that other people tend to follow", and "closely observing the experience of other skilled humans" to learn many languages, become an extremely successful investor, write a book that sold millions of copies before he was well-known, and so on. His IQ may not be higher now than when he began, but his end results look like the end results of someone who became much more "intelligent".

Tim has done his best to break down "human-improving ability" into a small number of rules. I'd be unsurprised to see someone use those rules to improve their own performance in almost any field, from technical research to professional networking.

Might the same thing be true of AI -- that a few factors really do allow for drastic improvements in problem-solving across many domains? It's not at all clear that it isn't.

If, however, we adopt the much more expansive conception of Intelligence(1), the argument becomes much less defensible. This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tiling the universe with paperclips.

Some of the world's most famous intellectuals have made what most people in the EA community would see as bizarre or dangerous errors in moral reasoning. It's possible for someone to have a deep grasp of literature, a talent for moral philosophy, and great social skills -- and still have desires that are antithetical to sentient well-being (there are too many historical examples to count).

Motivation is a strange thing. Much of the world, including some of those famous intellectuals I mentioned, believes in religious and patriotic ideals that don't seem "rational" to me. I'm sure there are people far more intelligent than I who would like to tile the world with China, America, Christianity, or Islam, and who are unlikely to break from this conviction. The ability to reflect on life, like the ability to solve problems, often seems to have little impact on how easily you can change your motivations.

It's also important not to take the "paperclip" example too seriously. It's meant to be absurd in a fun, catchy way, but also to stand in for the class of "generally alien goals", which are often much less ridiculous.

If an AI were to escape the bonds of human civilization and begin harvesting all of the sun's energy for some eldritch purpose, it's plausible to me that the AI would have a very good reason (e.g. "learn about the mysteries of the universe"). However, this doesn't mean that its good reason has to be palatable to any actual humans. If an AI were to decide that existence is inherently net-negative and begin working to end life in the universe, it would be engaging in deep, reflective philosophy (and might even be right in some hard-to-fathom way), but that would be little comfort to us.

Comment by aarongertler on [blog cross-post] We are in triage every second of every day · 2018-12-13T23:13:05.949Z · score: 7 (6 votes) · EA · GW

Thanks for sharing this, Holly!

To anyone who liked the essay, I also recommend Julia Wise's "No One is a Statistic", which makes a similar argument.

Comment by aarongertler on Lessons Learned from a Prospective Alternative Meat Startup Team · 2018-12-13T23:09:45.429Z · score: 4 (4 votes) · EA · GW

Wonderful writeup! Strong upvote for the structure, use of tables, and analysis of future possibilities.

You may want to define, or link to definitions of, terms like "scaffolding" and "extrusion" that may not be familiar to most readers in this context.

Also, despite not having a technical co-founder on the team, did you still feel like you were able to conduct effective research on alternative-meat engineering? It seems like you have a pretty thorough understanding of technology in the space, and what products might be useful to develop, but I can imagine that taking a long time or being really difficult without a technical background. Do you have any advice to share on learning to understand a field's engineering, product development, etc., without a strong background in those subjects?

Comment by aarongertler on Giving more won't make you happier · 2018-12-11T20:11:32.420Z · score: 2 (2 votes) · EA · GW

This is certainly true! Money can buy almost anything, including security against future disasters. I'm only making a personal claim about myself and my own use of money. I personally often feel like giving is the form of spending that will make me "happiest", because it feels like a direct path to me getting a sense of personal satisfaction in a way that saving often doesn't.

Comment by aarongertler on Requesting community input on the upcoming EA Projects Platform · 2018-12-11T01:04:37.382Z · score: 4 (3 votes) · EA · GW

Yes, I definitely endorse editing summaries into long posts, both to help future readers and to establish good norms for other posters! :-)

For the EA Survey note, I was responding to this part of your post:

It may be the case that an even broader platform that applies to the entire EA community, encompassing all community members and EA organizations, would be even better for increasing the impact of the effective altruism movement.

I don't know whether the Survey would welcome questions about projects, but you could ask. I'm thinking of something like: "Have you ever worked on an independent/volunteer EA project that wasn't run by a larger EA organization, whether or not the project is still going?", and then, if they say "yes", a link to a spreadsheet/form where they can add some details. The exact wording of the question will be determined by the way you decide to define "project".

The purpose of a survey question would be to catch lesser-known projects, including some that failed -- there are still lessons to learn in those cases. Gathering data on something like the early history of currently successful orgs would look very different, I imagine.

Comment by aarongertler on Giving more won't make you happier · 2018-12-11T00:58:47.852Z · score: 2 (2 votes) · EA · GW

People are more likely to give when certain markers of "effectiveness" are satisfied (e.g. you tell them exactly how the money will be spent, you tell them the charity is relatively low-overhead, you tell them how much progress you've made toward solving a problem).

"More likely to give" =/= "more happy after giving", but it does seem to represent something like "anticipates being happier after giving" (that's a reasonable interpretation for why people do almost anything with money).

These claims come from what I remember about writing a thesis on giving behavior. The relevant material starts on p. 59, items (1), (4), and (6), though I'm synthesizing a broader base of evidence here (plus a bit of intuition from my experiences talking about EA with people outside the community).

Comment by aarongertler on Giving more won't make you happier · 2018-12-11T00:52:22.916Z · score: 6 (2 votes) · EA · GW

I agree that the link is probably heavily sublinear. But I wonder if it becomes less sublinear if one is more conscious of impact-per-dollar.

I've had this experience myself, sort of, in that I began to enjoy giving more after I found EA and my previous "well, I hope this works" feeling resolved into "yes, I found the best deal on helping!". And since I know that I've found a good deal with high-EV returns, giving more does feel better, just as it would if I were depositing more money into a high-yield investment. Meanwhile, because I have enough money to be materially comfortable, the idea of "$1000 in savings lets me skip working for another two weeks in 40 years, assuming I even want to stop working" doesn't hold much appeal, compared to "spending $1000 on one of the world's best products".

Comment by aarongertler on Requesting community input on the upcoming EA Projects Platform · 2018-12-10T22:50:48.716Z · score: 2 (5 votes) · EA · GW

Upvoted. As with any post of this length, I'd recommend a bullet-point summary at the top, alongside any action items you'd like readers to take.

I'm enthusiastic about the idea of a project platform. The EA survey might be a good place to gather data on what's out there (plus whatever info people upload directly to the site).

My impression is that most projects don't have lasting impact outside of skill-building, but a few volunteer-founded initiatives (the EA Newsletter, the Giving Tuesday Project) have really impressed me, and I'm sure there are good projects I haven't discovered yet.

Comment by aarongertler on EA Survey 2018 Series: Donation Data · 2018-12-10T22:45:20.913Z · score: 10 (5 votes) · EA · GW

Thanks for the writeup!

Questions:

1. Were income numbers pre- or post-tax?

2. Do you have a number for average earnings of non-students who are earning to give? $52,000 is a pretty low number for that category.

3. How did the survey define the difference between "earning to give" and "other", if at all?

I'm really looking forward to the dedicated post that will give us numbers on non-students in GWWC; hitting 10% at the median would be nice.

Comment by aarongertler on Giving more won't make you happier · 2018-12-10T21:16:15.381Z · score: 10 (5 votes) · EA · GW

Upvoted for bringing in a lot of cool research! Didn't strong-upvote because I felt the conclusion was a little too strong ("actively discourage", especially), and I wish you'd linked to some examples of EA promoting egoistic giving.

----

1. I sense a really fantastic opportunity here to trick Buzzfeed into donating a lot of money: "$10 charitable donation vs. $1000 charitable donation".

2. Is there a particular case of EA communication that you think actively goes against the science you cite? I vaguely remember seeing writing in a few places along the lines of "giving can make you happier" or "giving effectively can make you more confident" (and by implication, happier), but not "giving more can make you happier".

3. It would be good to see one of these studies specifically take on EA-style giving, where people often have an unusually strong sense of what their money is buying and can feel unusually confident that it will actually help. Most charities don't have anything nearly as immersive as GiveDirectly Live, a website which (to me) makes every additional dollar I give pretty darn salient.

4. "Actively discouraging" people who want to use effective giving to become happier seems far too strong. For one, the studies you cite generally look at large numbers of people; even if giving doesn't make the average person happier, it still seems like it could make any given individual happier.

If we want to give maximally accurate information, we could say "there are a lot of different things that might work, giving is one, saving your money might be better", but our ability to advise individuals seems really context-dependent. I've known people who I thought would actually be a lot happier upon donating more; I've known other people for whom "get financially secure so you can have FYM ASAP" was better egocentric advice.

5. Finally, if someone comes up to us and says "I just want to make myself happy, should I give more?"...

...maybe we should, instead of saying "no", say "why not consider trying to want other people to be happy, too?"

Becoming more altruistic in spirit/personality seems to be pretty helpful for a lot of people. I don't know how much the science backs that up, so I wouldn't recommend it as an official response, but "caring about other people makes you happy" does seem like one of the strongest cross-cultural "common sense" lessons in all of human experience.

Comment by aarongertler on Notes on “The Art of Gathering” · 2018-12-07T01:08:51.832Z · score: 3 (3 votes) · EA · GW

To the author/readers: Have you made use of any of these techniques in your own events? If so, what did you do, and what effect did you perceive?

The advice here that most resonated with me:

  • Authority as an "ongoing commitment" (being chill is easy, but doesn't work very well).
  • The use of rules (I've seen good things result from phone-free gatherings).
  • My local group's work to anti-normalize "saying you'll show up but then not showing up" (working to make sure everyone had transportation, messaging anyone who didn't show up to individually ask what happened).
  • Making use of "casual time" (trying to slightly steer or suggest topics for conversations that happen before the main event, especially by making an effort to work new people into the conversation: "Hey! Is this your first time? What brings you here?")

Comment by aarongertler on Existential risk as common cause · 2018-12-06T00:58:57.955Z · score: 17 (10 votes) · EA · GW

Strong upvote. This is a fantastic post, and I wish that people who downvoted it had explained their reasoning, because I don't see any big flaws.

I don't necessarily agree with everything written here, and I don't think the argument would suffice to convince people outside of EA, but we need more content like this, which:

  • Cites a lot of philosophers who aren't commonly cited in EA (good for two reasons: non-EA philosophers are the vast majority of philosophers and presumably have many good ideas, including on areas we care about; citing a wider range of philosophers makes EA work look a lot more credible)
  • Carefully points out a lot of uncertainties and points that could be made against the argument. I hadn't put a name before on the difference between "honoring" and "promoting", but I suspect that many if not most people's objection to focusing on X-risk probably takes this form if you dig deep enough.
  • Includes a summary and a confidence level.

A couple of things I wish had been different:

  • I don't know what "confidence level" means, given the wide range of ways a person could "agree" with the argument. Is this your estimate of the chance that a given person's best bet is to give to whatever X-risk organization they think is best, as long as they aren't one of your groups? Your estimate of how solid your own argument is, where 100% is "logically perfect" and 0% is "no evidence whatsoever"? Something else?
  • The formatting is off in some places, which doesn't impact readability too much but can be tricky in a post that uses so many different ways of organizing info (quotes, bullets, headings, etc.) One specific improvement would be to replace your asterisk footnotes with numbers [1] so that it's easier to find them and not mix them up with bullet points.

Aside from the honor/promote distinction, I think the most common objection to this from someone outside of EA might be something like "extinction is less than 1% likely, not because the world isn't dangerous but because I implicitly trust other people to handle that sort of thing, and prefer to focus on local issues that are especially important to myself, my family, and my community".

[1] Like this.

Comment by aarongertler on allocating donations to years · 2018-12-05T06:48:51.674Z · score: 1 (1 votes) · EA · GW

Thanks for writing this up!

This may not be new information, but everyone has to learn about donation bunching/optimizing for tax deductions somewhere, and there's no "standard" way to do so yet.

This is bad, because there's a regular flow of people who go from "not having this problem" to "having this problem": Each year, some people with high salaries join the community, and some community members who donate regularly begin to earn high salaries. I hope that members of both groups see this post and remember to donate efficiently.
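
For anyone new to the idea, here's a simplified sketch of the basic arithmetic of bunching (all figures are hypothetical and US-centric, and the model ignores other itemized deductions, AGI limits, state taxes, etc.):

```python
# Simplified toy model: you only benefit from donations to the extent that
# itemizing beats the standard deduction. All numbers are illustrative.
STANDARD_DEDUCTION = 12_000  # hypothetical round figure
MARGINAL_RATE = 0.25         # hypothetical marginal tax rate

def two_year_tax_savings(gift_year1, gift_year2):
    """Tax saved over two years, versus just taking the standard deduction."""
    savings = 0.0
    for gift in (gift_year1, gift_year2):
        deduction = max(gift, STANDARD_DEDUCTION)  # itemize only if it helps
        savings += (deduction - STANDARD_DEDUCTION) * MARGINAL_RATE
    return savings

print(two_year_tax_savings(10_000, 10_000))  # $10k each year -> 0.0
print(two_year_tax_savings(20_000, 0))       # $20k every other year -> 2000.0
```

Same total giving, but bunching turns $2,000 of it back into money you can donate (or keep), which is why the timing question matters even for people who have already decided how much to give.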

Comment by aarongertler on Effective Altruism Foundation: Plans for 2019 · 2018-12-05T06:24:29.000Z · score: 2 (2 votes) · EA · GW

Thanks for a great writeup, Jonas! I really liked the clear layout of the post and the link to provide anonymous feedback.

Questions I had after reading the post:

1. It's clear that EAF does some unique, hard-to-replace work (REG, the Zurich initiative). However, when it comes to EAF's work around research (the planned agenda, the support for researchers), what sets it apart from other research organizations with a focus on the long-term future? What does EAF do in this area that no one else does? (I'd guess it's a combination of geographic location and philosophical focus, but I have a hard time clearly distinguishing the differing priorities and practices of large research orgs.)

2. Regarding your "fundraising" mistakes: Did you learn any lessons in the course of speaking with philanthropists that you'd be willing to share? Was there any systematic difference between conversations that were more vs. less successful?

3. It was good to see EAF research performing well in the Alignment Forum competition. Do you have any other evidence you can share showing how EAF's work has been useful in making progress on core problems, or integrating into the overall X-risk research ecosystem?

(For someone looking to fund research, it can be really hard to tell which organizations are most reliably producing useful work, since one paper might be much more helpful/influential than another in ways that won't be clear for a long time. I don't know if there's any way to demonstrate research quality to non-technical people, and I wouldn't be surprised if that problem was essentially impossible.)

Comment by aarongertler on Centre for the Study of Existential Risk: Six Month Report May-October 2018 · 2018-12-03T22:53:05.299Z · score: 8 (4 votes) · EA · GW

Thanks for writing up such a thorough report!

I was interested to see Yasmine Rix on your list of research affiliates; this is the first time I've heard of an X-risk organization working with an artist on public outreach, and it's a neat idea.

Has this kind of work been on CSER's agenda for a while (i.e. did you reach out to Yasmine), or did she approach you with the suggestion before you'd considered art? I'm curious about how the collaboration came to exist, and which benefits CSER thinks might arise from public outreach through art. (Is there a particular type of audience/media outlet you'd like to reach in this way that wouldn't be reachable through publications?)

Comment by aarongertler on So you want to do operations [Part one] - which skills do you need? · 2018-12-03T22:34:33.785Z · score: 3 (3 votes) · EA · GW

Is there a reason you came to have this opinion in the first place? The reasons you gave could work as explanations if "talented people are abundant" is true, but what actually makes you believe that in the first place?

It's hard for me to figure out whether I believe the same thing or not; when I look at the totality of my non-EA work experience, in many different fields where "ops"-type skills were required, I think I'd lean toward "ops talent is not as abundant as I once thought", but all I can back that up with is a series of anecdotes. (Many freelance tutors are not well-organized despite being in a job that strongly rewards ops talent, many businesspeople in high-profile positions use clunky filing systems and zero productivity tools, many hospital IT people are poor communicators... all of these are examples of ops skill being useful, but not present.)

Comment by aarongertler on Why we have over-rated Cool Earth · 2018-11-30T23:28:17.695Z · score: 1 (1 votes) · EA · GW

Good points all. Yours was a reasonable estimate, but the topic made it a good way to discuss certain general problems with this type of estimate, which are often more prominent in estimates made by other people in other situations.

(Also, I didn't spot the "last mile" comment; sorry for missing that, and thanks for calling it to my attention.)

Comment by aarongertler on Latest Research and Updates for November · 2018-11-30T20:19:16.026Z · score: 2 (2 votes) · EA · GW

Strong upvote for making it easier to find good things.

Shameless plug: If you liked this post as much as I did, you might also like the EA Newsletter.

We send out a smaller/more curated set of links each month; there's a lot of overlap with David's excellent work, but we also include "timeless classic" essays from past years and updates on various EA orgs (sorted by organization). Also, it comes straight to your inbox!

Comment by aarongertler on Would killing one be in line with EA if it can save 10? · 2018-11-30T01:44:54.682Z · score: 7 (6 votes) · EA · GW

People's beliefs differ widely on questions like that, even within EA. But it's helpful to keep in mind that things like "eugenics programs" in the Nazism sense (or various other forms of crime) are highly unlikely to be the best way to increase humanity's chances of survival, because they have many flow-through effects that are bad in a variety of ways.

To quote Holden Karnofsky:

I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it. I wouldn’t steal money to give it to our top charities.

Stealing money to save lives may seem moral in the short run, but there are so many ways theft can backfire that it's probably a terrible strategy even if you're focused on the total utility of your actions and ignoring commonsense prohibitions against stealing. You could be caught and jailed, reducing your ability to do good for a long time; your actions could hurt the reputation of EA as a movement and/or the reputation of the charity you supported; your victim could become an outspoken advocate against EA; and so on.

The general strategy that seems likely to most improve the future involves building a large, thriving community of people who care a lot about doing good and also care about using reason and evidence. Advocating crime, or behavior that a vast majority of people would view as actively immoral, makes it very hard to build such a community.

Comment by aarongertler on Outreach to Farmers · 2018-11-29T00:39:13.116Z · score: 1 (1 votes) · EA · GW

If you remember it, what was the name of this anti-tobacco group? In a quick search, I found a few articles about tobacco farmers who decided to switch to new crops for various reasons, but nothing about a nonprofit trying to make switches happen.

Comment by aarongertler on Outreach to Farmers · 2018-11-29T00:37:04.377Z · score: 1 (1 votes) · EA · GW

To what extent do you think the average farmer is likely to be replaced if they choose not to enter the industry?

Given the smallish number of large-scale American agribusinesses, I wouldn't be surprised if convincing someone not to farm actually does reduce the number of farmers in the long term, but I'd expect it to have a smaller effect on reducing the number of farmed chickens. Though I know nothing about agriculture as an industry, I'd naively expect a drop in farmers to lead to higher chicken prices, which would then lead existing businesses to expand their operations.

Does anyone know more about the economics at work here? Maybe land-use laws make it difficult to expand existing businesses and easier for a new farmer to get started in a new location?

Comment by aarongertler on Guide to Successful Community 1-1s · 2018-11-29T00:30:29.024Z · score: 4 (4 votes) · EA · GW

Fantastic post, especially the structure. Strong upvote.

Related anecdote: When I co-founded Yale EA, I tried everything, from group projects to speaker events, with wildly varying success. But at the end of the first year, it seemed clear which two things had given us ~90% of our value: Giving Games, and social time (dinner, movie nights, just hanging out on campus).

The second of those surprised me, especially since this was 2014-15 and we cared more about getting people interested in donating than we did about structured career change or helping people explore into EA philosophy. But if you're going to convince someone to make any kind of major change in their life, or at least to do their own research, you need them to trust you, to like you, and to know that you actually care about their interests.

Comment by aarongertler on Guide to Successful Community 1-1s · 2018-11-29T00:25:25.962Z · score: 1 (1 votes) · EA · GW

I second the recommendation of "The Charisma Myth". It's the best book I've ever read on social skills, and on a page-for-page basis is up there with the best blog posts I've read on that topic (which is remarkable, considering its length).

Comment by aarongertler on Is The Hunger Site worth it? · 2018-11-29T00:08:27.925Z · score: 3 (2 votes) · EA · GW

I see things similar to the Hunger Site pop up pretty frequently in large EA Facebook groups, and will share this comment whenever it happens (the numbers may be a bit different, e.g. for "donate while you shop" sites, but the general thought pattern of "can I donate to not think about this?" seems very useful).

Comment by aarongertler on How democracy ends: a review and reevaluation · 2018-11-29T00:05:01.979Z · score: 6 (2 votes) · EA · GW

Does anything you learned in the talk make you think that a particular cause area/problem is more important (or, if already an EA focus, less important) than you did before?

I enjoyed reading this, but I'm not really sure what these ideas mean for individuals or EA organizations; at most, I can imagine there being some relevance to the work of the Center for Election Science, but that's very broad speculation on my part.

Comment by aarongertler on Announcing the EA donation swap system · 2018-11-28T23:10:27.265Z · score: 2 (2 votes) · EA · GW

While this is a different sort of issue, and has nothing to do with tax policy, it seems relevant to mention the example of vote swapping, which seems to be legal:

“The basic reason is that there is no exchange of anything of value,” Douglas told TheWrap. “The second is that there is no way to prove that both people went through with it.”

In the case of donation swapping, both of these ideas are shaky: It depends on the definition of "value" (do I technically get value out of giving money away?) and "prove" (receipts are easier to find for donations than votes). And of course, every country has its own law. But this example updates me slightly in the direction of viewing this favorably (though EA Hub should certainly keep looking for a definitive answer).

Literature Review: Why Do People Give Money To Charity?

2018-11-21T04:09:30.271Z · score: 23 (10 votes)

W-Risk and the Technological Wavefront (Nell Watson)

2018-11-11T23:22:24.712Z · score: 8 (8 votes)

Welcome to the New Forum!

2018-11-08T00:06:06.209Z · score: 13 (8 votes)

What's Changing With the New Forum?

2018-11-07T23:09:57.464Z · score: 17 (11 votes)

Book Review: Enlightenment Now, by Steven Pinker

2018-10-21T23:12:43.485Z · score: 16 (10 votes)

On Becoming World-Class

2018-10-19T01:35:18.898Z · score: 16 (10 votes)

EA Concepts: Share Impressions Before Credences

2018-09-18T22:47:13.721Z · score: 7 (5 votes)

EA Concepts: Inside View, Outside View

2018-09-18T22:33:08.618Z · score: 2 (1 votes)