Comment by aarongertler on EA Still Needs an Updated and Representative Introductory Guidebook · 2019-05-23T12:38:01.395Z · score: 6 (3 votes) · EA · GW

I'm the current Content Specialist for CEA (the same position held by Max Dalton when he put together the second version of the EA Handbook).

We're aware that the Handbook isn't an ideal resource and have been thinking for a while about how we might want to update it. In the process, we've consulted a few stakeholders with expertise in different cause areas. So far, we haven't officially released any edits, but we might do so in the future; we're still uncertain about timelines and how we want to prioritize this project.

Comment by aarongertler on EA Forum: Footnotes are live, and other updates · 2019-05-21T22:13:31.115Z · score: 3 (2 votes) · EA · GW

See saulius's comment for an example!

LessWrong does have footnotes (at least on the test server I just checked), which use the same format. (Nearly every feature on the EA Forum is also available on LessWrong.)

Comment by aarongertler on How the Giving Games Project Tracks Its Impact · 2019-05-21T21:41:06.858Z · score: 4 (3 votes) · EA · GW

I'm glad to hear about all of this tracking!

The paper probably won't contain much useful information for you; I only meant to use it in support of the point that the aforementioned correlations are weak (the rest of the paper is a long, dry literature review).

Comment by aarongertler on EA Forum: Footnotes are live, and other updates · 2019-05-21T21:33:55.904Z · score: 2 (1 votes) · EA · GW

I agree, and I'll pass this feedback to the tech team (though Markdown having built-in footnote support was key for this feature's existence).

Comment by aarongertler on EA Forum: Footnotes are live, and other updates · 2019-05-21T21:32:49.745Z · score: 3 (2 votes) · EA · GW

That setting is left over from an April Fool's Day joke on LessWrong where GPT-2 was trained to generate comments. As far as I know, there have never been any GPT-2 comments on the Forum. (If you see anything that looks to have been written by a nonhuman intelligence, please mark it as spam so I can see and remove it!)

Comment by aarongertler on Software: Private sector to non-profits · 2019-05-21T21:31:22.840Z · score: 3 (2 votes) · EA · GW

Thanks for the word of warning -- I'm not sure what anyone's schedule is like, and it's good to know who wouldn't be a good target for an email. But I still think that sending an email to a few different people, and noting that you don't expect a response if they're too busy, is valuable in this scenario.

Comment by aarongertler on Stories and altruism · 2019-05-21T05:39:22.719Z · score: 3 (2 votes) · EA · GW

Seeing Life in a Day changed the way I see the world, and was a key step in preparing me to be interested in effective altruism when I first heard about it (roughly a year later).

The film shows hundreds of snippets of ordinary life around the world, creating an overwhelming effect of "everyone is basically the same and matters equally" (at least for me). The notion of caring more about people from, say, my own city no longer made any sense to me after the film ended.

Comment by aarongertler on Stories and altruism · 2019-05-21T05:37:11.816Z · score: 2 (1 votes) · EA · GW

Have you considered creating a spreadsheet? Having two columns (media title and a number) and asking people to increment the number by one if they found something helpful could be better than a survey -- it's faster to go through and easier to update.

Comment by aarongertler on Software: Private sector to non-profits · 2019-05-21T05:33:40.473Z · score: 2 (1 votes) · EA · GW

Have you considered reaching out to someone like Andrew Critch, or other experts in the X-risk space, to ask?

There are quite a few people in your position (early-career people thinking of diving into X-risk/AI safety), but I don't think there are so many that X-risk professionals are deluged with more questions than they can answer. If you have even a slight track record of demonstrated interest/understanding of these issues, I imagine you could get a phone call set up with one of the people you might eventually want to work for.

Rather than thinking about the choice in the sense of "outworking/outcompeting", it seems better to consider comparative advantage; if you add a skillset that's in short supply, competition won't be so important. I don't know whether management/recruiting is in shorter supply than coding/academic work, or which of those you're more naturally inclined toward, but answering those questions should be a good start.

Comment by aarongertler on How the Giving Games Project Tracks Its Impact · 2019-05-21T05:29:04.079Z · score: 2 (1 votes) · EA · GW

Thanks for posting this example of how you're tracking Giving Games! The data from just one group is very thin, of course, but I'll be curious to see aggregate changes.

I may have missed this in the post, but do you plan on trying to track giving behavior as well as beliefs about giving with your post-GG surveys? I wrote a literature review of interventions tested for their effect on charitable giving, and studies on that topic tend to find only weak correlation between intentions/statements about giving and actual behavior.

Comment by aarongertler on How to improve your productivity: a systematic approach to sustainably increasing work output · 2019-05-21T05:21:40.337Z · score: 2 (1 votes) · EA · GW

Thanks for posting your methodology ahead of time! I'm glad to see this "open science" practice on the Forum (knowing someone's research plans ahead of time lets you see whether they might be changing their methodology to bolster apparent results, so this post serves as an extra dose of integrity for the research).

The productivity-hacking movement is large and prolific, but they don't perform all that many systematic experiments. I look forward to seeing your results.

EA Forum: Footnotes are live, and other updates

2019-05-21T00:26:54.713Z · score: 29 (14 votes)
Comment by aarongertler on The Frontpage/Community distinction · 2019-05-21T00:17:29.534Z · score: 2 (1 votes) · EA · GW

Answer: No. We've noticed that many Forum users assume that this is the case, but we categorize each post based on its topic, rather than any measure of perceived "quality".

Comment by aarongertler on The Frontpage/Community distinction · 2019-05-21T00:16:11.557Z · score: 2 (1 votes) · EA · GW

Question: Is putting a post in Frontpage meant to signify that it is higher-quality than posts in the Community section?

Comment by aarongertler on Why do EA events attract more men than women? Focus group data · 2019-05-20T05:40:40.812Z · score: 10 (4 votes) · EA · GW

Thanks for this writeup! The actual spoken statements from attendees were especially interesting to me.

Comment by aarongertler on Effective Altruism London Landscape in 2019 · 2019-05-20T05:30:42.699Z · score: 5 (3 votes) · EA · GW

I really like the recent trend of "local area" writeups, including both this post and Evan Gaensbauer's post on Vancouver. If I keep seeing them, I may write a meta-post to contain them all (and replace the links when updates happen).

Comment by aarongertler on The Narrowing Circle (Gwern) · 2019-05-19T23:21:17.399Z · score: 2 (1 votes) · EA · GW

I don't think we disagree here, but I can see how that section was ambiguous. I think many people would think of "expanding abortion rights" as part of "the expanding circle" (people having more freedom and fewer restrictions, as long as you take it for granted that fetuses don't "count"). Of course, there are multiple ways to argue that fetuses might "count" (as ensouled entities, as potential future people, as living creatures, etc.), so one could also look at expanded abortion rights as a case of "the narrowing circle".

As you outlined, those on the side of the "narrowing circle" have a better case if you consider the literal meaning of "expanding circle" (more beings are in the moral domain, full stop), as well as the parallels between abortion rights and, say, animal rights.

But I think there's a difference in that certain rights which feel "fundamental" are in play on either side (I think there are important differences between "the right to eat meat" and "the right not to bring human life into the world for which you will be held responsible"). In the less literal sense of "expanding circle", which turns into something more like "the moral arc of the universe bends towards justice", there are perspectives from which expanded abortion rights bend the universe either toward or away from justice.

Anyway, to clarify, I don't think it's obvious whether abortion rights expand or narrow the circle in the way that I normally hear "expanding circle" used, though they do narrow it by the literal "who gets considered" definition.


More crudely: Some people think of early-term fetuses as being morally akin to a plant or an amoeba, and if Peter Singer is among them (I don't know whether he is), I'm not sure that plants/amoebas entering the moral domain would qualify as "expanding the circle" from his point of view.

Here's the only time he uses the expression in his essay "The Drowning Child and the Expanding Circle":

At the end of the nineteenth century WH Lecky wrote of human concern as an expanding circle which begins with the individual, then embraces the family and ‘soon the circle... includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man [sic] with the animal world’.

I'm not sure whether abortion, or at least early-term abortion, qualifies as "the dealings of man with the animal world" in the same way as factory farming.

That said, I haven't read Singer's full book on the expanding circle concept, so there are probably nuances and details in his complete definition that I'm not aware of.

Comment by aarongertler on How does one live/do community as an Effective Altruist? · 2019-05-19T23:07:08.698Z · score: 2 (1 votes) · EA · GW

These answers aren't necessarily correct, or complete, but they represent time and energy and experience being put into attempting to figure out community, which is definitely valuable. I hope to hear more of your thoughts in the future -- the more people contributing to our collective pool of knowledge, the better!

Comment by aarongertler on How does one live/do community as an Effective Altruist? · 2019-05-17T09:52:52.832Z · score: 7 (3 votes) · EA · GW

Every question you asked in this post has been discussed... somewhere. Tracking down all the conversations is difficult, and there are few true "experts" on this material, but existing EA communities have been slowly shaped by it into their current forms (though of course, there are always new people and new groups and old lessons that must be learned anew).

Some examples:

  • Many of Eliezer Yudkowsky's posts in his sequence "The Craft and the Community" apply just as well to EA as to rationality. He's also written a lot of essays about argumentation and persuasion which have seeped into the EA community through osmosis. Compared to other communities I've known, EA has much less "chasing arguments that should be dropped" and much more "making and discussing models when that would be helpful". We aren't perfect, but I'm not sure whether remaining issues come from any deficit of theory.
  • There have been many discussions of EA and parenthood. Some of those that are findable on the Forum include this (on the question of whether to become a parent) and this (on building parent-inclusive communities).
  • Nearly every large EA event includes specific written standards for appropriate behavior, influenced by years of experience, discussion, and research on best practices drawn from other communities.
  • For one more example of a social norm that EA has adopted, see "Ask, Guess, and Tell Culture" (a series of discussions dealing with, among other things, how people can communicate desires/opinions across oft-invisible cultural divides).

I don't know that a very broad discussion of "community" will be very helpful, but suggestions for specific improvements and solutions to specific problems often lead to concrete progress and mass adoption of new ideas. Are there any specific issues you think are especially important to address?

Also, in discussions like this, it helps to have a fair amount of experience living, working, or at least regularly interacting with one or more EA communities (physically or online). Are there any local EA communities you've spent time in, Nathan?

Comment by aarongertler on How does one live/do community as an Effective Altruist? · 2019-05-17T09:38:17.561Z · score: 3 (2 votes) · EA · GW

There are Facebook groups for EA houses in specific cities, though I'm not aware of any that cover wider areas. Is there a specific place you'd want to know about?

Comment by aarongertler on Are there groups of medical symptoms that could be impactful to be turned into a specific diagnosis? I learned that PTSD was created as a result of lobbying, previously it was just disparate symptoms. · 2019-05-17T09:35:59.254Z · score: 2 (3 votes) · EA · GW

My impression is that the "medicalization" of "ordinary" human feelings gets a lot of criticism from writers on every part of the political spectrum, but I don't know whether an expert perspective might show these definitions to actually be beneficial.

PTSD seems like a reasonably good case of a definition being helpful, while counterpoints may include ADHD and Oppositional Defiant Disorder (based on complaints I've read; I don't personally have an opinion on whether those diagnostic categories are net-positive). This isn't to say that people diagnosed with ADHD shouldn't have medication available to them; instead, some writers argue that children are often diagnosed and overmedicated as a result of behavior that comes standard in humans of that age.

As for your question: What are conditions that can be helped by legal medicine or therapy that aren't currently covered by existing diagnoses? Nothing springs immediately to mind for me, though I wouldn't be surprised if there are major gaps I'm not aware of.

Your example, "ordinary human unhappiness", seems like it wouldn't respond very well to medication unless it was already classifiable as minor depression. Is there evidence that antidepressants improve the well-being of people without depression?

Comment by aarongertler on Is this a valid argument against clean meat? · 2019-05-17T09:20:28.059Z · score: 12 (7 votes) · EA · GW

I think this would be an unusual way to think about the issue, and is likely to make very little overall difference to animal welfare.

Consider the population of Americans (I'm using my country as one example, with made-up statistics) who don't care how much meat they consume. Let's say that's 90% of the country.

Of the remaining 10%, what fraction eat less meat for religious reasons that won't be affected by this argument? What fraction eat less meat for health reasons? What fraction eat less meat because they don't like the idea of hurting animals, even if the overall "problem of animals being hurt" may be "solved" by cultured meat (according to their view of the world)?

None of those people should care about this argument. The only people who seem likely to eat more meat as a result of cell-based meat development are those who currently eat very little meat because they don't like the idea of contributing to the abstract "problem of factory farming", rather than caring about the suffering of particular animals, religious prohibitions, their own health, etc.

These people probably exist, but I'd think that they are very, very rare. (Probably overrepresented in EA, but still rare.)

Comment by aarongertler on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2019-05-16T23:55:49.691Z · score: 2 (1 votes) · EA · GW

Yes, this is what happened. There are cases where it's good to be able to have the date adjust (e.g. if you accidentally publish a post before it's finished and want to edit and repost), but in this case, it was unintentional. I'll change the date.

Comment by aarongertler on Non-Profit Insurance Agency · 2019-05-14T02:27:41.726Z · score: 5 (3 votes) · EA · GW

I think that trust is contextual. People may distrust some insurance/financial companies, but they trust others. People trust some nonprofits, but distrust others. An insurance company that is set up as a nonprofit for some reason might be thought of as uniquely trustworthy, but might also be thought of as strange. Atypical nonprofits have a bad history of being used as vehicles for tax evasion and money laundering.

Comment by aarongertler on The Turing Test podcast #8: Spencer Greenberg · 2019-05-14T00:32:21.384Z · score: 9 (3 votes) · EA · GW

Spencer's work (both his blog and his various apps, quizzes, etc.) is some of the most consistently high-quality material I've experienced from anyone producing intellectual content. I've been impressed for years at the way he juggles so many balls, and will be listening to this podcast soon. Thanks for sharing it!

Comment by aarongertler on Non-Profit Insurance Agency · 2019-05-13T23:57:06.084Z · score: 10 (4 votes) · EA · GW

This is a very positive thing you're doing, and it sounds like you're going to help a lot of people!

I've heard a few stories before about people who advertise the fact that they donate in the course of their work, and I've even tried it myself (as a freelance tutor). Results have been mixed:

  • It's off-putting to some people, either because it's unusual and they don't know how to respond or because they want to pay money for services, not "charity" (even if the price is the same either way).
  • On the other hand, this car salesman managed to make his giving into a positive way to interact with customers. Notably, he only talks about giving mid-deal, and only once people have expressed curiosity, rather than advertising his donations ahead of time.

Do you have a rough estimate of how much more you'd be able to donate through this unusual business structure, as opposed to if you just went into business and then donated money from your earnings? I haven't seen this particular arrangement before, and I'm curious about the benefits.

Comment by aarongertler on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2019-05-13T22:28:13.394Z · score: 6 (4 votes) · EA · GW

Thank you for this detailed description of your experience!

I would guess that many other people in the EA community have a similar story to tell about the challenge of self-presentation/conspicuous consumption, as well as the ease with which you can drift when you find a new partner/friend group. I'm trying to understand value drift better, and this comment added value for me.

Comment by aarongertler on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-13T22:22:58.792Z · score: 4 (2 votes) · EA · GW

I don't think that sarcastic comments like this, especially when they don't include evidence or serious discussion of the question, are helpful to the post's author or to other readers.

Comment by aarongertler on Does the status of 'co-founder of effective altruism' actually matter? · 2019-05-13T07:51:25.711Z · score: 14 (7 votes) · EA · GW

It makes sense to me that an author who writes a book about a social movement should mention that they were a co-founder of that movement just for the sake of clarity, even if it doesn't actually sell more books.

Will MacAskill isn't a philosophy professor writing in the abstract about ideas he finds interesting; he helped to develop those ideas, and the community around them. Knowing that is useful to me as a reader.

I agree with you that this debate is of little consequence. An exception might be that we should try to make sure that people who don't have a strong claim to be "co-founders" don't try to make such claims, as it makes EA history confusing to follow and helps people be seen as speaking "for effective altruism" when this isn't the case.

(One could argue that no one can speak for effective altruism at all, but Will MacAskill has a better claim to do so than, say, me.)

Comment by aarongertler on What caused EA movement growth to slow down? · 2019-05-13T07:45:52.688Z · score: 8 (4 votes) · EA · GW

This seems likely to me. I can think of several instances of cases where an EA organization had the chance to market certain content widely, but chose not to take it, because they preferred to focus on other work.

As one example, the EA Newsletter grew its subscriber list dramatically by advertising on Facebook, but no longer does so. I've surveyed subscribers to find out which actions the newsletter may have prompted (donating, career change, etc.), and almost all subscribers who took significant action found the Newsletter "organically" through prior involvement in EA (rather than having it advertised to them). This indicates that not trying to optimize for "number of subscribers" could make sense for the newsletter.

Comment by aarongertler on Diversifying money on different charities · 2019-05-13T07:18:38.028Z · score: 2 (1 votes) · EA · GW

Why do you think this? Are you trying to account for uncertainty about what the most efficient charity is? Otherwise, I don't understand this particular argument for "multiple charities" over "one charity".

Comment by aarongertler on Structure EA organizations as WSDNs? · 2019-05-10T23:56:37.269Z · score: 4 (2 votes) · EA · GW

This lines up with what I've seen at EA orgs. People don't always agree on how things should be run, but they almost always share a common goal. I also expect that most EA orgs are much more flat/democratic than the average private corporation. (For example, CEA has managers and people who are managed, but in most team meetings and on Slack, seniority matters much less than your direct experience with an issue and the strength of your ideas.)

Comment by aarongertler on Is EA ignoring significant possibilities for impact? · 2019-05-10T23:43:22.411Z · score: 24 (9 votes) · EA · GW

(Warning: Long comment ahead.)

First: Thank you for posting these thoughts. I have a lot of disagreements, as I explain below, but I appreciate the time you spent to express your concerns and publish them where people could read and respond. That demonstrates courage, as well as genuine care for the EA movement and the people it wants to help. I hope my responses are helpful.

Second: I recommend this below, but I'll also say it here: If you have questions or uncertainties about something in EA (for example, how EA funders model the potential impact of donations), try asking questions!

On the Forum is good, but you can also write directly to the people who work on projects. They'll often respond to you, especially if your question is specific and indicates that you've done your own research beforehand. And even if they don't respond, your question will indicate the community's interest in a topic, and may be one factor that eventually leads them to write a blog/Forum post on the topic.

(For example, it may not be worth several hours of time for an EA Funds manager to write up their models for one person, but they may decide to do so after the tenth person asks -- and for all you know, you could be person #10.)

Anyway, here are some thoughts on your individual points:

Samasource has lifted tens of thousands of people out of poverty with a self-sustaining model that, unlike GiveDirectly, is completely unreliant on continual donor funding, providing a tremendous multiplier on top of the funds that were initially used to establish Samasource.

It's easy to cherry-pick from among the world's tens of thousands of charities and find a few that seem to have better models than GiveWell's recommendations. The relevant questions are:

  • Could we have predicted Samasource's success ahead of time and helped it scale faster? If so, how? Overall, job/skills-training programs haven't had much success, and since only GiveWell was doing much charity research when Samasource was young (2008), it's understandable that they'd focus on areas that were more promising overall.
  • Could someone in EA found a program as successful as Samasource? If so, how? A strategy of "take the best thing you can find and copy it" doesn't obviously seem stronger than "take an area that seems promising and try to found an unusually good charity within that area", which people in EA are already doing.

Also, have you heard of Wave? It's a for-profit startup co-founded by a member of the EA community, and it has at least a few EA-aligned staffers. They provide cheap remittances to help poor people lift their families out of poverty faster, and as far as I know, they haven't had to take any donations to do so. That's the closest thing to an EA Samasource I can think of.

(If you have ideas for other self-sustaining projects you think could be very impactful, please post about them on the Forum!)

The EA movement originally threw around the idea of earning to give, a concept which was later retracted as a key talking point in favor of theoretically more impactful options. But the fact that a movement oriented around maximizing impact started out with earning to give is worrying. Even if earning to give became popular with hundreds to thousands of people, which in fact ended up happening, the impact on the world would be fairly minimal compared to the impact other actors have.

My model of early EA is that it focused on the following question:

"How can I, as an individual, help the world as much as possible?"

But that question also had some subtext:

"...and also, I probably want to do this somewhat reliably, without taking on too much risk."

The first people in EA were more or less alone. There weren't any grants for EA projects. There wasn't a community of thousands of people working in dozens of EA-aligned organizations. There were a few lonely individuals (and one or two groups large enough to meet up at someone's house and chat).

Under these circumstances, projects like "founding the next Samasource" seem a lot less safe, and it's hard to fault early adopters for choosing "save a couple of lives every year, reliably, while holding down a steady job and building career capital for future moves".

(Consider that a good trader at an investment bank could become a C-level executive with tens of millions of dollars at their disposal. The odds of this don't seem much worse than the odds that a random EA-aligned nonprofit founder creates something as good as Samasource -- and they might be better.)

In general, this is a really good thing to remember when you think about the early history of the EA community: for the first few years, there really wasn't much of a "community". Even after a few hundred people had joined up, it would have taken a lot of gumption to predict that the movement was going to be capable of changing the world in a grand-strategic sense.

As an example issue, in terms of financial resources, the entire EA community and all of its associated organizations are being outspent and outcompeted by St. Jude's alone. Earning to give might not resolve the imbalance, but getting a single additional large donor on board might.

There are quite a few people in EA who work full-time on donor relations and donor advisory. As a result of this work, I know of at least three billionaires who have made substantial contributions to EA projects, and there are probably more that I don't know of (not to mention many more donors at lower but still-stratospheric levels of wealth).

Also, earning to give has outcomes beyond "money goes to EA charities". People working at high-paid jobs in prestigious companies can get promoted to executive-level positions, influence corporate giving, influence colleagues, etc.

For example, employees of Google Boston organize a GiveWell fundraiser that brings in hundreds of thousands of dollars each year on top of their normal jobs (I'd guess this requires a few hundred hours of work at most).

Another example: in his first week on the job, the person who co-founded EA Epic with me walked up to the CEO after her standard speech to new employees and handed her a copy of a Peter Singer book. The next Monday, he got a friendly email from the head of Epic's corporate giving team, who told him the CEO had enjoyed the book and asked her to get in touch. While his meeting with the corporate giving head didn't lead to any concrete results, the CEO was beginning to work on her foundation this year, and it's possible that some of her donations may eventually be EA-aligned. Things like that won't happen unless people in EA put themselves in a position to talk to rich/powerful people, and not all of those people use philanthropic advisory firms.

(A lot of good can still be done through philanthropic advisory, of course; my point is that safe earning-to-give jobs still offer opportunities for high-reward risks.)

Perhaps EAs would be fanning out at high net worth advisory offices to do philanthropic advisory instead of working at Jane Street. Perhaps EAs would be working as chiefs of staff for major CEOs to have a chance at changing minds.

Some specific examples of high-net-worth advisory projects from people in EA:

This isn't to say that we couldn't have had a greater focus on reaching high-net-worth advisory offices earlier on in the movement, but it didn't take EA very long to move in that direction.

(I would be curious to hear how various people involved in early EA viewed the idea of "trying to advise rich people in a more formal way".)

It's also worth mentioning that 80K does list philanthropic advising as one of their priority paths. My guess is that there aren't many jobs in that area, and that existing jobs may require luck/connections to get, but I'd love to be proven wrong, because I've thought for a long time that this is a promising area. (I myself advise a small family foundation on their giving, and it's been a rewarding experience.)

Perhaps the movement would conduct research on how Warren Buffet decided on the Bill and Melinda Gates Foundation instead of less optimal choices, and whether outreach, networking, or persuasion methods would be effective.

There is some EA research on the psychology of giving (the researchers I know of here are Stefan Schubert and Lucius Caviola), but this is an area I think we could scale if anyone were interested in the subject -- maybe this is a genuine gap in EA?

I'd be interested to see you follow up on this specific topic.

There are multitudes of high impact activities that may not require small ultra-curated teams and can involve currently underutilized community members.

Which activities? If you point out an opportunity and make a compelling case for it, there's a good chance that you'll attract funding and interested people; this has happened many times already in the brief history of EA. But so far, EA projects that tried to scale quickly with help from people who weren't closely aligned generally haven't done well (as far as I know; I may be forgetting or not know of more successful projects).

As a final example, EA is very weak compared to all of the other forces in the world in all relevant senses of the term: weak in financial resources, weak in number of people, weak in political power.

This is true, but considering that the movement literally started from scratch ten years ago, and is built around some of the least marketable ideas in the world (don't yield to emotion! Give away your money! Read long articles!), it has gained strength at an incredible pace.

Some achievements:

  • Multiple billionaires are heavily involved.
  • One of the top X-risk organizations is run by a British lord who has held some of the most influential positions in his country.
  • GiveDirectly is working on multiple projects with the world's largest international aid organizations, which have the potential to sharply increase the impact of billions of dollars in spending.
  • There are active student effective altruism groups at more than half of the world's top 20 universities. Most of these groups are growing and becoming more active over time.
  • One of the most popular media sources for the Western liberal elite has an entire section devoted to effective altruism, whose top journalist is someone who didn't have much (any?) prior journalistic experience but did run the most popular EA Tumblr.
  • The former head of IARPA runs an AI risk think tank in Washington.

Ten years ago, a nascent GiveWell was finding its footing after an online scandal nearly ended the project, and Giving What We Can was about to launch with 23 members. We've come a long way.

Is this rate of growth sufficient? Maybe not. We may not acquire enough influence to stop the next world-rending disaster before it happens. But we've done remarkably well despite some setbacks, and critique-in-hindsight of EA goals has a high bar to clear in order to show that things could have gone much better.

(As I noted above, though, I think you're right that we could have paid more attention to certain ideas early on.)

Substantial strategic research and analysis is required to assess the current course of action and evaluate better courses of action. It's not clear to me why there has been such limited discussion of this and progress so far unless everyone thinks being financially outmatched by St. Jude's for the next 5+ years is an optimal course of action that does not require community strategizing to address.

The end of the last sentence has a condescending tone that slightly sours my feelings toward this piece, even though I can appreciate the point you're trying to make.

I'm in favor of more strategic discussion, but many of the strategy suggestions I've seen on the Forum suffer from at least one of the following:

  • A lack of specificity (a problem is noted, but no solution is proposed, or a solution is proposed with very little detail / no modeling of any kind)
  • A lack of knowledge of the full scope of the present-day movement (it's easy to reduce EA to consisting of GiveWell, Open Phil, 80K, and CEA, but there's a lot more going on than that; I often see people propose ideas that are already being implemented)
  • "Someone should do X" syndrome (an idea is proposed which could go very well, but then no one ever follows up with a more detailed proposal or a grant application). In theory, EA orgs could pick up these ideas and fund people to work on them, but if your idea doesn't fit the focus of any particular organization, some individual will have to pick it up and run with it.

These suggestions are still frequently useful, and I've heard many of them discussed within EA organizations, but I wish that writers would, on average, move away from abstract worries and criticism and toward concrete suggestions and proposals.

(By the way, I'm always happy to read anyone's Forum posts ahead of time and make suggestions for ways to make them more concrete, people the author might want to talk to before publishing, etc.)

Samasource, for example, may very well be orders of magnitude more effective per dollar of total lifetime donations than GiveDirectly. The longer Samasource runs a financially self-sustaining model, the better the impact per donor dollar will be. But Samasource was not started based on rigorous research. If we pretend it was never started and it sought funding from the EA community today to launch, Samasource may very well have gone unfunded and never have existed, which is a problem if it is actually comparably effective or more effective than GiveDirectly.

Two notes:

1. GiveDirectly isn't just giving money directly to people; it is also changing the aid sector by establishing the idea that aid should clear the "cash benchmark". This has already begun to influence WHO and USAID, as well as many NGOs and private foundations, and the eventual impact of that influence is really hard to calculate (not to mention the value of experimental data on basic income programs, etc.)

2. The apt comparison is not "funding Samasource vs. funding GiveDirectly". The apt comparison is "funding the average early-stage Samasource-like thing vs. funding GiveDirectly". Most of the money put into Samasource-like things probably won't have nearly as much impact as money given directly to poor people. We might hit on some kind of fantastically successful program and get great returns, but that isn't guaranteed or even necessarily likely.

It is also possible that we can work out with reasoning based on Fermi estimates whether organizations have been more effective than top EA charities with reasonable confidence. We can certainly use Fermi estimates to assess the potential impact of ideas, startups, and proposed projects. I expect that a relevant number of these estimates will have a higher expected impact per dollar than top charities.

We will definitely find that some organizations have been more effective than top EA charities, but as I've said already, this cherry-picking won't help us unless we learn general lessons that help us make future funding decisions. Open Phil does some of this already with their History of Philanthropy work.

There's value in using Fermi estimates for potential projects, yes, but why do you think those would help us make better predictions about the world than the models used by GiveWell, Open Phil, EA Funds, etc.? Is there some factor you think these organizations routinely undervalue? Some valuable type of idea they never look at?

(Also, EA funding goes well beyond "top charities" at this point: GiveWell's research is expanding to cover a lot more ground, and the latest grant recommendations from the Long-Term Future Fund included a lot of experimental research and ideas.)
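As an aside on method: one way to make a Fermi estimate of impact-per-dollar more honest is to propagate uncertainty through the multiplication rather than multiplying point estimates. A minimal Monte Carlo sketch of that idea (every input range below is hypothetical, chosen only for illustration):

```python
import math
import random

random.seed(0)

def lognormal_from_interval(low: float, high: float) -> float:
    """Draw from a lognormal whose central ~90% interval spans [low, high]."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return random.lognormvariate(mu, sigma)

def fermi_impact_per_dollar(n: int = 10_000) -> float:
    """Median simulated impact per dollar (all inputs hypothetical)."""
    samples = []
    for _ in range(n):
        people_reached = lognormal_from_interval(0.001, 0.1)  # people helped per $
        benefit_each = lognormal_from_interval(0.1, 10.0)     # benefit per person
        p_works = random.uniform(0.3, 0.9)                    # chance the program works
        samples.append(people_reached * benefit_each * p_works)
    samples.sort()
    return samples[n // 2]
```

The point of the exercise isn't the output number; it's that wide input intervals make the resulting estimate wide too, which is exactly the information a point-estimate Fermi calculation hides.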

I am not aware if funding entities like EA Grants apply explicit quantitative models to estimate EVs and use model outputs for decision making.

Did you write to any funding entities before writing this post to ask about their models?

Generally, these organizations are happy to share at least the basics of their approach, and I think this post would have benefited from having concrete models to comment on (rather than guesses about how Fermi estimates and decision analysis might compare to whatever funders are doing).

It is possible that strategically thinking about career impact is a superior option compared to common courses of action like directly working at an EA organization in operations or earning to give. Careers can have unintuitive but wonderful opportunities for impact.

No EA organization in the world will try to stop you from "strategically thinking about career impact". 80K's process explicitly calls on individuals to consider their options carefully, with a lot of self-reflection, before making big decisions. I'm not sure what you think is missing from the "standard" EA career decision process (if such a thing even exists).

Kevin Briggs' career approach saved many more lives than a typical police officer, and amounted to the same general range of the number of statistical lives that can be saved with global health donations.

Let's say I'm choosing between two careers. In Career A, I can save 200 lives before I retire if I manage to perform unusually well, to the point where my career is newsworthy and I'm hailed as a moral exemplar. In Career B, I can save 200 lives before I retire if I do my job reasonably well, collect paychecks, and donate what I don't need.

The higher-EV option in this scenario is Career B, and it isn't close.
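To make that comparison concrete, here's a toy expected-value calculation; the success probabilities are illustrative assumptions, not estimates of any real career:

```python
# Toy expected-value comparison of two careers (illustrative numbers only).
# Career A: save 200 lives, but only given an unusual, newsworthy performance.
# Career B: save 200 lives via steady donations, achievable with ordinary diligence.

def expected_lives_saved(lives_if_success: float, p_success: float) -> float:
    """Expected lives saved given a probability of the success condition."""
    return lives_if_success * p_success

career_a = expected_lives_saved(200, 0.01)  # unusually good outcomes are rare
career_b = expected_lives_saved(200, 0.90)  # doing the job reasonably well is common

print(career_a)  # 2.0
print(career_b)  # 180.0
```

The headline numbers are identical, but once you weight by how likely each success condition is, the steadier path dominates.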

On the other hand, this next example gets closer to proving your point, which is that some careers have much higher potential impact than most ETG opportunities:

The Introduction to Effective Altruism mentions the fantastic actions of Stanislav Petrov, Norman Borlaug, and others that saved a tremendous number of lives, each with a different career.

The point of that section of the introduction isn't to comment on the career choices of Petrov and Borlaug, but to emphasize that even "ordinary" people can have a tremendous impact; it's meant to be inspirational, not advisory. (Source: I recently rewrote that section of the introduction.)

Petrov's heroic actions came about as a result of a very unlikely accident and have little bearing on whether one should become a soldier. Maybe soldiering is worthwhile if you can specifically become an officer at a nuclear facility, but that seems difficult.

Borlaug's work is a bit more typical of what an impact-focused scientist can achieve, in that at least a few other scientists have also saved millions of lives.

Open Phil agrees with both of us on the potential of science; they've given tens of millions of dollars to hundreds of scientists over the last few years. Meanwhile, 80K considers certain branches of science to be priority paths, and the 2017 EA Donor Lottery winner gave most of his winnings to an organization trying to follow in Borlaug's footsteps.

It may be possible to have a tremendous social impact in a large number of specialties from accounting, to dentistry, to product testing, simply by identifying scalable, sufficiently positive interventions within the field.

I agree! This is one of the reasons I'm enthusiastic about earning-to-give: if people in EA enter a variety of influential/wealthy fields and keep their wits about them, they may notice opportunities to create change. On the other hand, studying these professions and trying to change them from the outside seems less promising.

Remember also that problems must be tractable as well as large-scale. Taking your example of "accounting", one could save Americans tens of millions of hours per year by fighting for tax simplification. But in the process, you'd need to:

  • Develop a strong understanding of tax law and the legislative process.
  • Raise millions of dollars in lobbying funds and use them effectively to grab attention from congresspeople.
  • Go head-to-head with Intuit and Grover Norquist, who will be spending their own millions to fight you.

I love tax simplification. It's one of my pet causes, something I'll gripe about or retweet at the slightest opportunity. But I don't think I'd be likely to have much of an impact throwing my hat into that particular ring, alongside hundreds of other people who have been arguing about it for decades. I'd rather focus on pulling the rope sideways (fighting for causes and ideas that have high potential and no major enemies).

Comment by aarongertler on Is EA unscalable central planning? · 2019-05-10T07:06:09.653Z · score: 2 (1 votes) · EA · GW

Also, we're very far from a world where even most people in EA choose careers based on 80K's advice.

I'd guess that among EA community members with "direct work" jobs, many or even most of them mostly used their own judgment to evaluate which career path would optimize their impact. (If "optimizing impact" was even their goal, that is; many of us chose jobs partly or mostly based on things like "personal interest" and "who we'd get to work with" rather than 100% "what will help most".)

And of course, most members don't have "direct work" jobs; they just donate and/or discuss EA while working in positions that 80K doesn't recommend anymore (or never did), because they found the jobs before they found EA or because they don't take 80K recommendations seriously enough to want to switch jobs (or any of a dozen other reasons).

Comment by aarongertler on Is EA unscalable central planning? · 2019-05-10T07:00:46.672Z · score: 3 (2 votes) · EA · GW

Is there a particular article or statement from an organization that made you think influencing legislation isn't one of the movement's aims?

In the last year or two, there's been a lot more focus within EA on influencing policy, at least in areas thought to be especially impactful. It's helped that some organizations within the movement have gradually become more experienced and credible, with more connections in the political sphere. I don't see any reason that this focus wouldn't continue to increase as we build our ability to succeed in this area.

As far as "grow the movement more", that's a tough question, and it's been the subject of debate for many years. Growth has many obvious upsides, but also some downsides. For example, if many new people join, it can be hard to transmit ideas in a high-fidelity way, and EA's focus/philosophy may drift as a result. Also, some people have argued that EA organizations currently struggle to provide enough resources/opportunities to current community members; adding a lot of new people without being selective might not let us actually give these people very much to do.

(I work for CEA, but these views are my own.)

Comment by aarongertler on What are good reasons for or against working on protecting biodiversity and ecosystem services? · 2019-05-10T06:54:08.539Z · score: 3 (2 votes) · EA · GW

This is a good question. I'm not aware of any EA-related investigations of this area.

Generally, when I've read articles and papers on this topic (which have ranged from "very alarmed" to "mildly concerned, but pushing back on alarm"), I've had a hard time figuring out where humans come in.

I worry about climate change and crop disease and soil degradation and other things that could damage our food supply, but the extinction of not-very-populous species, or the deterioration of a forest where no food is grown and few people visit, seems... bad, but not nearly as directly threatening as the issues EA usually thinks about. And it's hard to imagine ecosystem shifts making the lives of wild animals too much worse than they already are.

Did the report you've linked have any particular theories about the ways in which these changes will affect humans? Which parts of our civilization are at risk?

Yes, we may be losing things that are precious and beautiful, and robbing our children of their natural heritage, but are people going to starve or get sick or otherwise suffer harms deeper than disappointment and wistful regret? Those are bad, but many other things seem to be equally disappointing and regrettable to people; the environment (save for climate change X-risk) seems not to have been especially "sticky" as a thing people care about.

(I'm not trying to argue against the importance of biodiversity/ecosystem health; I'm just genuinely uncertain about the main risk/source of negative impact.)

Comment by aarongertler on High School EA Outreach · 2019-05-10T06:44:27.985Z · score: 13 (5 votes) · EA · GW

For anyone who skimmed instead of reading the entire thing, here are two must-read lines (that is, I found them hilarious):

On teaching X-risk:

Intriguingly, we’ve found that a large proportion of students believe human extinction to be much more likely than existential risk experts do, so we think it more likely that students found our discussion of this topic reassuring, rather than disturbing.

On teaching animal welfare:

Apart from the food and nutrition teacher, who had concerns about reducing students meat intake, these activities were well supported by the staff.

Comment by aarongertler on High School EA Outreach · 2019-05-10T06:43:13.202Z · score: 11 (5 votes) · EA · GW

This article's format (asking lots of different people who did conceptually related things to report their own results) was brilliant. Thanks for pulling it all together!

One running theme I noticed is that young people have personality traits and underlying beliefs that seem very suited to EA:

  • They're open to the idea that extinction could happen
  • They're happy to engage with controversial ideas
  • They're naturally liberal and cosmopolitan
  • They're probably more aware of their privilege than previous generations of high schoolers would have been, and I'd guess more likely to feel moral obligation towards less-privileged people as a result

The difficulty is in actually giving these kids something to do with their beliefs. High schoolers are busy, accustomed to being ordered around, and inexperienced with the world outside the school system. They can't travel easily. They don't have much money. They're used to thinking in terms of what they'll do "later", when they are adults, but they're in a stage of life where things will soon change dramatically for them (when they enter college), frequently shoving those plans aside.

But given some easy way to apply leverage that already exists in their lives, they can make an impact:

Around $10,000 NZD was donated to “EA” charities due to the students influencing existing school fundraisers.

I wonder how much each of these classes/projects focused on "what you, a 16-year-old, can do right now" vs. a focus on standard EA activities (donating, choosing a career) that weren't very applicable to the target audience?

Comment by aarongertler on [Link] 80,000 Hours 2018 annual review · 2019-05-10T06:21:57.444Z · score: 6 (3 votes) · EA · GW

Just found a relevant line from the review:

We think the question of how fast to hire is very difficult and we’re constantly debating it. Currently we think that five per year is manageable and will help us grow. Much above five per year, and most of our time would be spent hiring and there would be risks to the culture. Much below that seems like a failure to adequately grow our team capacity.

Still not sure about the importance of this vs. other factors, though.

Comment by aarongertler on [Link] 5-HTTLPR · 2019-05-10T00:05:24.834Z · score: 2 (3 votes) · EA · GW

Could you say a little more about how you think this paper might be relevant to EA?

There's a point about scientific bias to be made, perhaps, but the methods of genetics research don't seem especially similar to the kinds of research I associate with EA (development RCTs, theorizing about future technology, trying to understand the nature of animal consciousness, etc.).

Comment by aarongertler on Effective Thesis project: second-year update · 2019-05-09T23:57:37.762Z · score: 6 (5 votes) · EA · GW

Thanks for writing up the project! I liked the idea of Effective Thesis when I first heard about it, but I wasn't sure how well a small team would be able to advise a large number of students with varying backgrounds and majors. It sounds as though the results have exceeded my expectations. (Also, the "Agendas" page of the website has been a really handy resource for me.)

1/3 of students appreciated general advice on research direction, 1/3 appreciated academic career-related advice and 1/3 appreciated guidance in the topic they came up with themselves.

This is an interesting statistic. I'd have thought that nearly all the students would mostly benefit from "general advice on research direction", since specialized EA knowledge is something Effective Thesis has that professors and career offices don't.

1. Can you give an example of what "guidance in the topic they came up with themselves" might look like? Particularly in a case where the coach isn't an expert on the topic?

2. Do you have any general observations of where your applications came from? I'd be interested in both the country/regional breakdown and a breakdown of applicant school rankings (e.g. "1/3 from schools in or around the top 100 of this list, most of the rest from other private American/European schools, a few from other continents").
Finally, thanks for linking to your interview questions; it helps put the answers you got in context, and I like being able to see the back-end infrastructure of EA projects.

Comment by aarongertler on Why we should be less productive. · 2019-05-09T23:39:54.991Z · score: 7 (6 votes) · EA · GW

I second Lukas's thoughts on how this post could have been more useful. In addition, I haven't seen much evidence of the phenomena discussed by the author (those people in EA who I've met tend to be fairly productive, but basically all of them also have hobbies and enjoy doing silly/non-productive things on a regular basis). More numbers, or even a couple of specific anecdotes, would have been helpful.

I also agree with the meta-meta-note. Unless someone explains that they downvoted because they disagreed, it seems healthier to assume that a downvote indicates displeasure with an argument's presentation, rather than the associated opinion/subject matter. Few communities are more likely to say "great post, even though I disagree" than this one.

That said, if anyone reading this has a habit of downvoting arguments they disagree with, even if those arguments are presented clearly and with solid data/logic, I'd weakly recommend against doing that; I think the Forum will flourish in the long run if well-crafted writing and thinking is reliably rewarded -- or at least not punished.

(I work for CEA, but these views are my own.)

Comment by aarongertler on How do we check for flaws in Effective Altruism? · 2019-05-09T10:02:45.674Z · score: 4 (2 votes) · EA · GW

CEA also has a Mistakes page, though it seems to be less well-known than those other examples.

Comment by aarongertler on How do we check for flaws in Effective Altruism? · 2019-05-09T10:01:24.472Z · score: 2 (1 votes) · EA · GW

Yes, I agree that infrastructure would be better than no infrastructure. I'm not sure who I'd trust to do this job well among people who aren't already working on something impactful, but perhaps there are people in EA inclined towards a watchdog/auditor mentality who would be interested, assuming that the orgs being "audited" could work out an arrangement that everyone felt good about.

Comment by aarongertler on [Link] 80,000 Hours 2018 annual review · 2019-05-09T09:44:40.686Z · score: 6 (3 votes) · EA · GW

The rule of thumb I've heard in various CEO interviews is that an organization's nature fundamentally changes when it triples in size. 80K has ten full-time staff at the moment (counting their team page plus one person who hasn't been added yet), but if only a couple of those are advisors, tripling the size of the advisory team might change a lot.

Not that I'm saying "concern with difficulty around scaling up" is 80K's only motivation; I'd just guess that it's roughly as important as difficulty finding good advisors.

Comment by aarongertler on Is there a job board with measures of impact where users can add jobs? · 2019-05-09T02:19:19.093Z · score: 2 (1 votes) · EA · GW

I definitely think there's room for improvement! In particular, I'd be interested to see people who have or want to obtain Job X write about their assessment of the job's impact -- ideally on this very website, though a site cataloging all such reviews and presenting them with nice formatting could also be useful.

(The Forum is a good place to try "content-only" versions of this kind of idea while features like dynamic sorting and web scraping get figured out.)

I'm not sure crowdsourcing from people without direct interest would work very well, but that's largely based on my experience with other EA crowdsourcing projects; it's easy to get initial enthusiasm, but few such projects keep going over the long run.

More measures of job impact are on my long list of "things I wish existed, but which seem not to be getting generated by EA's collective consciousness". If people can be convinced to make more of them exist, that would be wonderful, but I expect that getting solid writeups will be difficult.

That said, I hope you convince people to try! I'll definitely read any job-impact posts published on the Forum, and I will appreciate the authors' efforts, even if they only make loose/"law of large numbers" estimates.

Comment by aarongertler on [Link] 80,000 Hours 2018 annual review · 2019-05-09T02:11:12.691Z · score: 4 (2 votes) · EA · GW

The hypothesis I jumped to was "scaling up well takes time".

Even if you hire people who have the potential to be good advisors, who are advising people with the potential to be "tier-1", trying to, say, quadruple the number of advisors within a year would strain 80K's attention and could bring about many difficulties (interpersonal clashes, message drift, dropping non-advisory projects because advisors require managerial time for supervision, increased pressure on the operations team...).

Comment by aarongertler on Aligning Recommender Systems as Cause Area · 2019-05-09T02:07:26.067Z · score: 12 (7 votes) · EA · GW

This is fantastic. I don't have high confidence in the numbers you've put forth (for example, it's hard to compare QALYs from "more entertainment"/"better articles" to QALYs from "no malaria"), but I love the way this post was put together:

  • Lots of citations (to a stunning variety of sources; it feels like you've been thinking about these questions for a long time)
  • Careful analysis of what could go wrong
  • Willingness to use numbers, even if they are made up

Even putting aside flow-through effects on alignment, I think that "microtime" is important. Even saving people a few minutes of wasted time each day can be hugely beneficial at scale (especially if that time is replaced with something that fits a user's extrapolated volition). Our lives are made up of the way we spend each hour, and we could certainly be having better hours.

In a world where this is not a promising cause area, even if the risks turn out not to be a concern, I think the most likely cause of "failure" would be something like regulatory capture, where people enter large tech companies hoping to better their algorithms but get swept up by existing incentives. I'd guess that many people who already work at FANG companies entered with the goal of improving users' lives and slowly drifted away -- or came to believe that metrics companies now use are in fact improving users' lives to a "sufficient" extent.

(If you spend all day at Netflix, and come to think of TV as a golden wonderland of possibility, why not work to get people spending as much time as possible watching TV?)

It's possible that these employees still generally feel bad about optimizing for bad metrics, but however they feel, it hasn't yet added up to deliberate anti-addictive changes at any of the biggest tech companies (as far as I'm aware). It would be nice to see evidence that people have successfully advocated for these changes from the inside (Mark Zuckerberg has recently made some noises about trying to improve the situation on Facebook, but I'm not sure how much of that is due to pressure from inside Facebook vs. external pressure or his own feelings).

...including harm to relationships, reduced cognitive capacity, and political radicalization.

The first two links are identical; was that your intention?

Recommender systems often have facilities for deep customization (for instance, it's possible to tell the Facebook News Feed to rank specific friends’ posts higher than others) but the cognitive overhead of creating and managing those preferences is high enough that almost nobody uses them.

In addition to work on improved automated recommendation systems, it seems like there should be valuable projects out there that focus on getting more people to exercise their existing control over present-day systems (e.g. an app that gamifies changing your newsfeed settings, apps that let you more easily set limits for how you'll spend time online).
  • FB Purity claims to have over 450,000 users; even if only 100,000 are currently blocking their own newsfeeds, that probably represents ~10,000,000 hours each year spent somewhere other than Facebook.
  • StayFocusd has saved me, personally, thousands of hours on things my extrapolated volition would have regretted.

Comment by aarongertler on What are some neglected practices that EA community builders can use to give feedback on each other's events, projects, and efforts? · 2019-05-09T01:40:36.331Z · score: 4 (3 votes) · EA · GW

This is an interesting idea!

It sounds like this kind of feedback might be harder to give than feedback on a visual/physical activity like martial arts, but some new groups would probably still benefit.

Even if a group's members aren't all open to having a session recorded, it can be valuable to put together "postmortem" writeups on events. Some of my favorite writing about EA groups is on-the-ground reporting from leaders who wanted to improve (e.g. EA Berkeley's retrospectives, EA Yale's fellowship writeup).

Postmortem writeups don't need to be anywhere this detailed, of course; even comments on the level of "someone talked a lot and kept going off-topic, and we couldn't figure out how to handle it gracefully" give groups the opportunity to receive a lot of advice.

Comment by aarongertler on Charity opportunity going on over at /r/veganactivism · 2019-05-09T01:31:18.187Z · score: 3 (2 votes) · EA · GW

Because this is a time and funds-limited opportunity, I'm moving it to the Frontpage for today. I'll check back daily to see whether it still belongs; once the money runs out or the deadline hits, I'll shift it back to the Personal Blog category, since Frontpage content generally shouldn't "expire".

Comment by aarongertler on Diversifying money on different charities · 2019-05-09T01:27:57.459Z · score: 4 (2 votes) · EA · GW

This answer holds for almost all situations a small donor might encounter.

The most common exception I can think of: A charity is running a special campaign where dollars are more valuable until a certain point (for example, they are eligible to get their next $5000 in donations matched dollar-for-dollar, but no further matching past that, so your 5001st dollar suddenly has half the impact of the first dollar, and you might start giving elsewhere at that point).
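The drop-off in that example is easy to compute; a quick sketch (the cap and match rate are just the hypothetical numbers from the example above):

```python
def marginal_match(donated_so_far: float, match_cap: float = 5000.0,
                   match_rate: float = 1.0) -> float:
    """Extra matched dollars generated by one more donated dollar."""
    return match_rate if donated_so_far < match_cap else 0.0

def marginal_impact(donated_so_far: float) -> float:
    """Total dollars moved to the charity per marginal dollar donated."""
    return 1.0 + marginal_match(donated_so_far)

print(marginal_impact(4999))  # 2.0 -- dollar-for-dollar match still applies
print(marginal_impact(5001))  # 1.0 -- past the cap, half the impact per dollar
```

Once your marginal impact steps down like this, a different charity with its own unexhausted match may beat continuing to give to the first one.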

There are other situations I've encountered with the same kind of artificial limit. Facebook, for example, only allowed each donor to give $20,000 for their last Giving Tuesday match. This means that someone planning to give more might find that their best marginal use of money is to transfer some to other people so that they can donate to the Giving Tuesday match themselves (though I'd consider this to be an ethical gray area, as it clearly goes against Facebook's intentions for that restriction).

EA Forum Prize: Winners for March 2019

2019-05-07T01:36:59.748Z · score: 42 (16 votes)

Open Thread #45

2019-05-03T21:20:43.340Z · score: 10 (4 votes)

EA Forum Prize: Winners for February 2019

2019-03-29T01:53:02.491Z · score: 46 (18 votes)

Open Thread #44

2019-03-06T09:27:58.701Z · score: 10 (4 votes)

EA Forum Prize: Winners for January 2019

2019-02-22T22:27:50.161Z · score: 30 (16 votes)

The Narrowing Circle (Gwern)

2019-02-11T23:50:45.093Z · score: 34 (15 votes)

What are some lists of open questions in effective altruism?

2019-02-05T02:23:03.345Z · score: 22 (12 votes)

Are there more papers on dung beetles than human extinction?

2019-02-05T02:09:58.568Z · score: 14 (9 votes)

You Should Write a Forum Bio

2019-02-01T03:32:29.453Z · score: 21 (15 votes)

EA Forum Prize: Winners for December 2018

2019-01-30T21:05:05.254Z · score: 46 (27 votes)

The Meetup Cookbook (Fantastic Group Resource)

2019-01-24T01:28:00.600Z · score: 15 (10 votes)

The Global Priorities of the Copenhagen Consensus

2019-01-07T19:53:01.080Z · score: 43 (26 votes)

Forum Update: New Features, Seeking New Moderators

2018-12-20T22:02:46.459Z · score: 23 (13 votes)

What's going on with the new Question feature?

2018-12-20T21:01:21.607Z · score: 10 (4 votes)

EA Forum Prize: Winners for November 2018

2018-12-14T21:33:10.236Z · score: 49 (24 votes)

Literature Review: Why Do People Give Money To Charity?

2018-11-21T04:09:30.271Z · score: 24 (11 votes)

W-Risk and the Technological Wavefront (Nell Watson)

2018-11-11T23:22:24.712Z · score: 8 (8 votes)

Welcome to the New Forum!

2018-11-08T00:06:06.209Z · score: 13 (8 votes)

What's Changing With the New Forum?

2018-11-07T23:09:57.464Z · score: 17 (11 votes)

Book Review: Enlightenment Now, by Steven Pinker

2018-10-21T23:12:43.485Z · score: 18 (11 votes)

On Becoming World-Class

2018-10-19T01:35:18.898Z · score: 20 (12 votes)

EA Concepts: Share Impressions Before Credences

2018-09-18T22:47:13.721Z · score: 9 (6 votes)

EA Concepts: Inside View, Outside View

2018-09-18T22:33:08.618Z · score: 2 (1 votes)

Talking About Effective Altruism At Parties

2017-11-16T20:22:46.114Z · score: 8 (8 votes)

Meetup : Yale Effective Altruists

2014-10-07T02:59:35.605Z · score: 0 (0 votes)