Comment by aarongertler on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-07-19T12:14:21.053Z · score: 4 (2 votes) · EA · GW

I can imagine a couple of scenarios:

a) GV asked Open Phil if they had the capacity to look into psychedelics/Alzheimer's, and Open Phil said "no"

b) GV asked Open Phil for shallow investigations of those areas, and the results weren't promising enough for Open Phil to want to continue, but weren't so un-promising that GV gave up

c) GV has some research capacity independent of Open Phil, and decided to use it on these causes (maybe because Dustin/Cari see them as personally motivating/"warm fuzzies", even if they are potentially high-impact)

...there are plenty of other possibilities I haven't had time to think of, but some combination of (a) and (c) feels pretty likely to me. (This is entirely speculative; I have no special insight into the relationship between GV and Open Phil.)

Comment by aarongertler on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-19T12:09:50.098Z · score: 3 (2 votes) · EA · GW

"After watching the voting unfold, I think the identity-politics concern is real. Many comments were heavily upvoted soon after being posted."

I'm not sure what timescale "soon after being posted" represents here. Is your concern more along the lines of:

(a) People seem to have been upvoting comments without having had time to read/think about them,

(b) People seem to have been upvoting comments without having had time to read/think about your post and how it interacted with those comments, or

(c) People seem to have been upvoting comments without having had time to read/think about all the other comments up to that point?

(Or some mix of those, of course.)

I remember taking 5-10 minutes to read some of the shorter arguments, then upvoting them because they made an interesting point or linked me to an article I found useful.

It feels like people aren't likely to spend more than 10 minutes reading/thinking about a Forum comment except in exceptional cases, but perhaps there are ways you can encourage the right kind of slow thinking in contest posts?

Comment by aarongertler on Needed EA-related Articles on the English Wikipedia · 2019-07-18T05:40:02.827Z · score: 5 (3 votes) · EA · GW

From the edit logs: "almost no unique, well-sourced content here. Merged what was unique to GiveWell"

This is the final note from an editor who deleted the page. This was in early 2017; I'd expect an independent Open Phil page to make a lot more sense now (if they want one to exist).

Comment by aarongertler on There's Lots More To Do · 2019-07-18T05:37:44.403Z · score: 2 (1 votes) · EA · GW

Dropping in late to note that I really like the meta-point here: It's easy to get caught up in arguing with the "implications" section of a post or article before you've even checked the "results" section. Many counterintuitive arguments fall apart when you carefully check the author's data or basic logic.

(None of the points I make here are meant to apply to Ben's points -- these are just my general thoughts on evaluating ideas.)

Put another way, arguments often take the form:

  • If A, then B
  • A
  • Therefore, B

It's tempting to attack "Therefore, B" with anti-B arguments C, D, and E, but I find that it's usually more productive to start by checking the first two points. Sometimes, you'll find issues that render "Therefore, B" moot; other times, you'll see that the author's facts check out and find yourself moving closer to agreement with "Therefore, B". Both results are valuable.
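(For readers who like notation: this schema is classic modus ponens, which a proof assistant can state in one line. A purely illustrative Lean sketch:)

```lean
-- Modus ponens: from "If A, then B" and "A", we may conclude "B".
example (A B : Prop) (h : A → B) (a : A) : B := h a
```

Checking "If A, then B" and "A" corresponds to checking `h` and `a`; once both hold, "Therefore, B" follows mechanically.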

Comment by aarongertler on Corporate Global Catastrophic Risks (C-GCRs) · 2019-07-18T05:07:41.503Z · score: 2 (1 votes) · EA · GW

For those who haven't read the Meditation, it's a discussion of ways in which competitive pressures push civilizations into situations where almost all of our energy and happiness are eaten up by the scramble for scarce resources.

(This is a very brief summary that leaves out a lot of important ideas, and I recommend reading the entire thing, despite its formidable length.)

Comment by aarongertler on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-18T04:51:04.363Z · score: 4 (2 votes) · EA · GW

I especially appreciate the "causal story" section of the post! I'm not sure I fully believe the explanation*, but it's always good to propose one, rather than handwaving away the reasons that a good cause would be so neglected (an error I frequently see outside of EA, and occasionally in EA-aligned work on other new cause areas).

*The part that rings truest to me is "no ready channels for donation". Ignorance seems more likely than deliberate neglect; I can picture many large environmental donors being asked about coal seam fires and reacting with "huh, never thought about it" or "is that actually a problem?"

Comment by aarongertler on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-18T04:49:34.482Z · score: 5 (2 votes) · EA · GW

My impression is that few people are researching new interventions in general, whether in climate change or other areas (I could name many promising ideas in global development that haven't been written up by anyone with a strong connection to EA).

I can't speak for people who individually choose to work on topics like AI, animal welfare, or nuclear policy, and what their impressions of marginal impact may be, but it seems like EA is just... small, without enough research-hours available to devote to everything worth exploring.

(Especially considering the specialization that often occurs before research topics are chosen; someone who discovers EA in the first year of their machine-learning PhD, after they've earned an undergrad CS degree, has a strong reason to research AI risk rather than other topics.)

Perhaps we should be doing more to reach out to talented researchers in fields more closely related to climate change, or students who might someday become those researchers? (As is often the case, "EAs should do more X" means something like "these specific people and organizations should do more X and less Y", unless we grow the pool of available people/organizations.)

Comment by aarongertler on Please vote for AMF in Tab for a Cause. · 2019-07-18T04:09:54.182Z · score: 3 (2 votes) · EA · GW

Thanks for sharing this resource!

In future posts like this, I'd recommend including a little more information about the voting system. Does AMF need to win a majority of the votes to receive any money? If so, how close is it to doing so? If not, how much money is a vote likely to be worth?

(There's a difference between taking time to install a new extension in order to give $50 in expectation vs. giving $0.50 or $0.00 in expectation -- the "zero" number is possible if AMF needs a thousand votes to win, given the small size of the EA community.)

Comment by aarongertler on EA Forum Prize: Winners for May 2019 · 2019-07-17T01:11:34.419Z · score: 13 (6 votes) · EA · GW

After Julia decided to step down, I proposed a list of six Forum users who I thought might be good candidates. She and I discussed the options and decided to begin by reaching out to Larks and Khorton, who both accepted; if they hadn't, I'd have approached other candidates who I believe would also be solid judges.

(There are many more than six contributors whom I'd be open to considering; the original shortlist was just six people who quickly came to mind, among whom I expected we'd get at least two "yes" responses.)

I wanted to start with a relatively small addition, but there's a good chance that the roster will expand later on. I can imagine getting up to a group of 8-10 people without the Prize becoming too difficult to coordinate, and I also wouldn't be surprised if people sometimes joined up for a couple of months and then stepped down, based on their available time.

Comment by aarongertler on In what ways and in what areas might it make sense for EA to adopt more a more bottoms-up approach? · 2019-07-16T22:59:36.559Z · score: 3 (2 votes) · EA · GW

What gives you the sense that there's a lot of causal/top-down planning in EA? It may make more sense to ask: "a lot of causal/top-down planning compared to what?"

On the one hand, the movement's largest organizations sometimes recommend specific courses of action; on the other hand, they also sometimes recommend "keeping your options open" and "staying flexible".

Also, EA encompasses a huge range of charities that work on a lot of different things, and new organizations spring up all the time. Overall, even the largest/oldest EA organizations are still practically startups compared with most major American corporations; they frequently make large changes to their mission/strategy on a year-to-year basis, as the result of new data or changes in the resources available to them. (CEA has done many different things over the last five years, GiveWell is undergoing massive change, etc.)

Comment by aarongertler on Some solutions to utilitarian problems · 2019-07-16T22:50:38.447Z · score: 4 (2 votes) · EA · GW

I enjoy posts like these, but it seems difficult to apply their conclusions when I'm actually making a charitable donation (or taking other substantive action).

An idea along those lines: Examine the work of an EA organization that has public analysis of the benefits of various interventions (e.g. GiveWell) from the perspective of variable critical-level utilitarianism, and comment on how you'd personally change the way they calculate benefits if you had the chance.

(This may not actually be applicable to GiveWell; if no orgs fit the bill, you could also examine how donations to a particular charity might look through various utilitarian lenses. In general, I'd love to see more concrete applications of this kind of analysis.)

Comment by aarongertler on Charity Vouchers [public policy idea] · 2019-07-16T22:41:16.648Z · score: 4 (3 votes) · EA · GW

Information that makes me lean toward "most giving is local":

  • In 2017, roughly 31% of all American donations went to religious institutions, and I'd guess that almost all of that money was for local churches and missions. Only 6% of giving was international.
  • More than half of all animal-related giving goes to animal shelters (again, I assume these are mostly local shelters).
  • Many popular giving categories are almost exclusively local: Community centers, food banks, museums, charity hospitals...

Comment by aarongertler on Charity Vouchers [public policy idea] · 2019-07-16T22:33:45.280Z · score: 2 (1 votes) · EA · GW

I'd also predict similar effects, though with a smaller magnitude, since some of the funding will be chewed up by marketing and processing costs for charities.

Comment by aarongertler on Philanthropy Vouchers as Systemic Change · 2019-07-16T22:32:50.549Z · score: 3 (2 votes) · EA · GW

Within EA, I've seen a couple of examples of limited voucher systems (or systems with some similar properties):

  • Giving Games give people the chance to allocate funding to one charity out of a small group (usually 2-5)
  • I've personally run a limited "donation voucher" system on Facebook before, offering to donate a small amount ($50, IIRC) to any GiveWell charity on behalf of the first X people who took the offer (I think X = 5, but only three people actually asked)

I'd worry about what the existence of a voucher system might mean for charities that take more risks; if it were sufficiently well-funded, it could increase the attention paid to the philanthropic sector and lead to many more fights over controversial organizations like Planned Parenthood (or even someplace like MIRI).

I also suspect that much of the best philanthropy happens through large donations to lesser-known organizations from people who have the time and money to conduct research (e.g. the kind of work Open Phil does), and that people with less knowledge making smaller donations might not make high-impact choices. (I think it's very likely that a voucher system would increase the correlation between charities' spending on marketing and their donation revenue.)

That said, if someone were to propose a bill in the House that redistributed $25 billion in the form of $100 vouchers to every American adult for charitable giving, I might support it, assuming that the costs of the redistribution/vouchering process weren't too high. The average charity people chose to support (e.g. "buying food for hungry people") might still have more impact on total welfare than the average use of government funds.

Comment by aarongertler on EA Forum 2.0 Initial Announcement · 2019-07-12T08:50:05.695Z · score: 11 (3 votes) · EA · GW

Are there particular instances of complaints related to voting behavior that you can recall?

I remember seeing a couple of cases over the last ~8 months where users were concerned about low-information downvotes (people downvoting without explaining what they didn't like). I don't remember seeing any instances of concern around other aspects of the current system (for example, complaints about high-karma users dominating the perception of posts by strong-voting too frequently). However, I could easily be forgetting or missing comments along those lines.

Currently, you can see the number of votes a post or comment has received by hovering over its karma count. This does let you distinguish between "many upvotes and many downvotes" and "no votes". Adding a count of upvotes and downvotes would provide more information about the distribution of strong votes (e.g. one strong upvote vs. several weak downvotes, or vice-versa). I can see how that could be useful, and I'll bring it up with the Forum's tech team to hear their thoughts. Thank you for the suggestion!

Comment by aarongertler on Rationality, EA and being a movement · 2019-07-12T08:39:24.955Z · score: 4 (2 votes) · EA · GW

I didn't intend it as a dodge, though I understand why this information is difficult to provide. I just think that talking about problems in a case where one party is anonymous may be inherently difficult when examples can't easily come into play.

I could try harder to come up with my own examples for the claims, but that seems like an odd way to handle discussion; it allows almost any criticism to be levied in hopes that the interlocutor will find some fitting anecdote. (Again, this isn't the fault of the critics; it's just a difficult feature of the situation.)

What are some EA projects you consider "status quo", and how is following the status quo relevant to the worthiness of the projects? (Maybe your concern comes from the idea that projects which could be handled by non-contrarians are instead taking up time/energy that could be spent on something more creative/novel?)

Comment by aarongertler on Seeking EAs to Interview on Career Change Resources · 2019-07-12T08:34:44.315Z · score: 3 (2 votes) · EA · GW

Could you say more about your and Vaidehi's history in the EA movement? What experience are you both bringing to this project that will help you make these connections?

(If you answer here, you may also want to update your Forum bio!)

Comment by aarongertler on Rationality, EA and being a movement · 2019-07-12T01:58:04.612Z · score: 2 (1 votes) · EA · GW

Not a problem -- I posted the reply long after the post went up, so I wouldn't expect you to recall too many details. No need to send a PM, though I would love to read the article for point four (your link is currently broken). Thanks for coming back to reply!

EA Forum Prize: Winners for May 2019

2019-07-12T01:48:57.209Z · score: 25 (9 votes)
Comment by aarongertler on Rationality, EA and being a movement · 2019-07-11T02:43:15.041Z · score: 23 (7 votes) · EA · GW

I work for CEA, but these views are my own -- though they are, naturally, informed by my work experience.


First, and most important: Thank you for taking the time to write this up. It's not easy to summarize conversations like this, especially when they touch on controversial topics, but it's good to have this kind of thing out in public (even anonymized).


I found the concrete point about Open Phil research hires to be interesting, though the claimed numbers for CFAR seem higher than I'd expect, and I strongly expect that some of the most recent research hires came to Open Phil through the EA movement:

  • Open Phil recruited for these roles by directly contacting many people (I'd estimate well over a hundred, perhaps 300-400) using a variety of EA networks. For example, I received an email with the following statement: "I don't know you personally, but from your technical experience and your experience as an EA student group founder and leader, I wonder if you might be a fit for an RA position at Open Philanthropy."
  • Luke Muehlhauser’s writeup of the hiring round noted that there were a lot of very strong applicants, including multiple candidates who weren’t hired but might excel in a research role in the future. I can’t guarantee that many of the strong applicants applied because of their EA involvement, but it seems likely.
  • While I wasn't hired as an RA, I was a finalist for the role. Bastian Stern, one of the new researchers mentioned in this post, founded a chapter of Giving What We Can in college, and new researcher Jacob Trefethen was also a member of that chapter. If there hadn't been an EA movement for them to join, would they have heard about the role? Several other Open Phil researchers (whose work includes the long-term future) also have backgrounds in EA community-building.

I'll be curious to see whether, if Open Phil makes another grant to CFAR, they will note CFAR's usefulness as a recruiting pipeline (they didn't in January 2018, but this was before their major 2018 hiring round happened).

Also, regarding claims about 80,000 Hours specifically:

  • Getting good ops hires is still very important, and I don’t think it makes sense to downplay that.
  • Even assuming that none of the research hires were coached by 80K (I assume it’s true, but I don’t have independent knowledge of that):
    • We don’t know how many of the very close candidates came through 80,000 Hours…
    • ...or how many actual hires were helped by 80K’s other resources…
    • ...or how many researchers at other organizations received career coaching.
  • Open Phil’s enormous follow-on grant to 80K in early 2019 seems to indicate their continued belief that 80K’s work is valuable in at least some of the ways Open Phil cares about.


As for the statements about "compromising on a commitment to truth"... there aren't enough examples or detailed arguments to say much.

I've attended a CFAR workshop, a mini-workshop, and a reunion, and I've also run operations for two separate CFAR workshops (over a span of four years, alongside people from multiple "eras" of CFAR/rationality). I've also spent nearly a year working at CEA, before which I founded two EA groups and worked loosely with various direct and meta organizations in the movement.

Some beliefs I've come to have, as a result of this experience (corresponding to each point):

1. "Protecting reputation" and "gaining social status" are not limited to EA or rationality. Both movements care about this to varying degrees -- sometimes too much (in my view), and sometimes not enough. Sometimes, it is good to have a good reputation and high status, because these things both make your work easier and signify actual virtues of your movement/organization.

2. I've met some of the most rigorous thinkers I've ever known in the rationality movement -- and in the EA movement, including EA-aligned people who aren't involved with the rationality side very much or at all. On the other hand, I've seen bad arguments and intellectual confusion pop up in both movements from time to time (usually quashed after a while). On the whole, I've been impressed by the rigor of the people who run various major EA orgs, and I don't think that the less-rigorous people who speak at conferences have much of an influence over what the major orgs do. (I'd be really interested to hear counterarguments to this, of course!)

3. There are certainly people from whom various EA orgs have wanted to dissociate (sometimes successfully, sometimes not). My impression is that high-profile dissociation generally happens for good reasons (the highest-profile case I can think of is Gleb Tsipursky, who had some interesting ideas but on the whole exemplified what the rationalists quoted in your post were afraid of -- and was publicly criticized in exacting detail).

I'd love to hear specific examples of "low-status" people whose ideas have been ignored to the detriment of EA, but no one comes to mind; Forum posts attacking mainstream EA orgs are some of the most popular on the entire site, and typically produce lots of discussion/heat (though perhaps less light).

I've heard from many people who are reluctant to voice their views in public around EA topics -- but as often as not, these are high-profile members of the community, or at least people whose ideas aren't very controversial.

They aren't reluctant to speak because they don’t have status — it’s often the opposite, because having status gives you something to lose, and being popular and widely-read often means getting more criticism over even minor points than an unknown person would. I’ve heard similar complaints about LessWrong from both well-known and “unknown” writers; many responses in EA/rationalist spaces take a lot of time to address and aren’t especially helpful. (This isn’t unique to us, of course — it’s a symptom of the internet — but it’s not something that necessarily indicates the suppression of unpopular ideas.)

That said, I am an employee of CEA, so people with controversial views may not want to speak to me at all -- but I can't comment on what I haven't heard.

4. Again, I'd be happy to hear specific cases, but otherwise it's hard to figure out which people are "interested in EA's resources, instead of the mission", or which "truth-finding processes" have been corrupted. I don't agree with every grant EA orgs have ever made, but on the whole, I don't see evidence of systemic epistemic damage.


The same difficulties apply to much of the rest of the conversation -- there's not enough content to allow for a thorough counterargument. Part of the difficulty is that the question "who is doing the best AI safety research?" is controversial, not especially objective, and tinged by one's perspective on the best "direction" for safety research (some directions are more associated with the rationality community than others). I can point to people in the EA community whose longtermist work has been impressive to me, but I'm not an AI expert, so my opinion means very little here.

As a final thought: I wonder what the most prominent thinkers/public faces of the rationality movement would think about the claims here? My impression from working in both movements is that there’s a lot of mutual respect between the people most involved in each one, but it’s possible that respect for EA’s leaders wouldn’t extend to respect for its growth strategy/overall epistemics.

Comment by aarongertler on Get-Out-Of-Hell-Free Necklace · 2019-07-09T23:47:45.118Z · score: 15 (10 votes) · EA · GW

I appreciate the use of a specific scenario to outline one aspect of suffering-focused thinking -- the range of examples (mental, physical) was especially helpful.

However, my intuition is that the things people consider to be their own least pleasant experiences are often not the moments of greatest pain/suffering in their lives. I've been punched in the face, I've run wind sprints until I dry-heaved, and I've been dragged along a rocky ocean floor by a strong wave...

...but those barely even register on my list of "things I wish hadn't happened", compared both to major negative life events and to something like "the time I wrote someone an embarrassing love letter", which still bothers me to this day.

How many "hell-seconds" might someone experience from a sense that they never lived up to their potential, or from their parents' acrimonious divorce, or from a friend's instantaneous but untimely death?

Of course, your metric isn't meant to be universal, but as far as metrics of well-being go, I see extreme pain as relatively narrow -- I'm curious whether there are any systems that go beyond DALYs and "hell-seconds" in capturing lingering low-valence negative emotions, even if they don't amount to "depression".

Comment by aarongertler on Advice for an Undergrad · 2019-07-09T00:47:18.858Z · score: 4 (3 votes) · EA · GW

A couple of thoughts, which may or may not be redundant with 80K/other material you've seen:

  • 80,000 Hours has a very long wait list, but even if you never get coaching, it's clear from this post that you're thinking carefully about the future and have taken initiative to become more skilled -- those are both rare traits, and indicate a lot of promise for your future work!
  • Learning statistics/"how to read and evaluate papers that make quantitative claims" seems really valuable, much more so than learning facts about a particular subject area. This doesn't mean you shouldn't be researching actual subject areas, but the most important thing is getting good at the methodology of good thinking/research. Open Phil has a lot of research areas and will probably add more in the future, and their current research staff often move between different topics -- making specific subject-level expertise seem less useful, comparatively.
  • That said, there are certainly subjects that will predictably be useful to Open Phil for a long time to come (see this post for a promising example). But those subjects may not fit neatly into a single academic discipline, so general "thinking skills" still seem more promising to focus on.
  • Spend time reading good research; reading through the most recent collection of GiveWell Top Charity writeups and Open Phil's cause overviews will help you get a sense for common patterns of thought/analysis. Past that, try to learn about particular work these organizations have found valuable/reliable, and read that material too. Good research seems to involve a lot of small touches/habits that would be hard to communicate explicitly, but might be acquirable if you view enough examples.
  • If you're interested in movement-building, check out this page from CEA's groups team. It's hard to calculate the expected value of organizing; it's highly dependent on the nature of the people you'd be organizing (that is, the number of potentially EA-aligned people you can reach at your campus) and your own talent as an organizer. But the main purpose should be to get more people involved in work/research; donations are valuable and important, but it's hard to get students to set up donation habits that persist, and donations from students are likely to be quite small in the grand scheme of EA funding.
    • Given that you're an older student and Amherst is a small campus, organizing seems like it might be tough to pull off. It's also possible that someone has tried to start an Amherst group before (if so, CEA's groups team would probably know about it -- consider contacting them). It can also be really difficult to organize a group on your own. Given your apparent passion for research, I'm tempted to recommend focusing on your own skill-building, but you should probably talk to someone from CEA to see what they think.


I work for CEA (not on the Groups team), but these views are my own.

Comment by aarongertler on Critique of Superintelligence Part 1 · 2019-07-09T00:28:15.170Z · score: 3 (2 votes) · EA · GW

I think the truth is a mix of both hypotheses. I don't have time to make a full response, but some additional thoughts:

  • It's very likely that there exist reliable predictors of success that extend across many fields.
  • Some of these are innate traits (intelligence, conscientiousness, etc.)
  • But if you look at a group of people in a field who have very similar traits, some will still be more successful than others. Some of this inequality will be luck, but some of it seems like it would also be related to actions/habits/etc.
  • Some of these actions will be trait-related (e.g. "excitement-seeking" might predict "not following unwritten rules"). But it should also be possible to take the right actions even if you aren't strong in the corresponding traits; there are ways you can become less bound by unwritten rules even if you don't have excitement-seeking tendencies. (A concrete example: Ferriss sometimes recommends practicing requests in public to get past worries about social faux pas -- e.g. by asking for a discount on your coffee. CFAR does something similar with "comfort zone expansion".)

No intellectual practice/"rule" is universal -- if many people tried the sorts of things Tim Ferriss tried, most would fail or at least have a lot less success. But some actions are more likely than others to generate self-improvement/success, and some actions seem like they would make a large difference (for example, "trying new things" or "asking for things").

One (perhaps pessimistic) picture of the world could look like this:

  • Most people are going to be roughly as capable/successful as they are now forever, even if they try to change, unless good or bad luck intervenes
  • Some people who try to change will succeed, because they expose themselves to the possibility of good luck (e.g. by starting a risky project, asking for help with something, or giving themselves the chance to stumble upon a habit/routine that suits them very well)
  • A few people will succeed whether or not they try to change, because they won the trait lottery, but within this group, trying to change in certain ways will still be associated with greater success.

One of Ferriss's stated goals is to look at groups of people who succeed at X, then find people within those groups who have been unexpectedly successful. A common interview question: "Who's better at [THING] than they should be?" (For example, an athlete with an unusual body type, or a startup founder from an unusual background.) You can never take luck out of the equation completely, especially in the complex world of intellectual/business pursuits, but I think there's some validity to the common actions Ferriss claims to have identified.

Comment by aarongertler on Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa · 2019-07-08T23:08:49.102Z · score: 4 (2 votes) · EA · GW

Thanks for posting the article!

It may be a relatively non-controversial piece of simple "good news", but it's still really valuable to have on the Forum; it can be really time-consuming to aggregate EA news (I compile the EA Newsletter, so this is often annoying to me), and the more the Forum becomes a good "default place to see what's new", the more useful it will be.

Comment by aarongertler on Running Effective Altruism Groups: A Literature Review · 2019-07-08T23:06:10.996Z · score: 8 (2 votes) · EA · GW

Thank you for compiling this! It's now the best resource of its kind of which I am aware.

One note related to my own cited post (footnote #6), with another five years of hindsight:

While our initial Giving Game didn't bring in any new members, another Giving Game later in the year found someone who went on to lead/co-lead the organization for the next three years. It's anecdotal evidence either way, but that plus a few other recruits from later Giving Games updated me toward them being a good activity to run for purposes beyond group bonding.

Comment by aarongertler on Here Be Epistemic Dragons · 2019-07-05T18:37:25.773Z · score: 8 (3 votes) · EA · GW

I'm not sure what I'm meant to take away from this post, and I think some of the following would help it be more readable for other Forum users:

  • Header text that describes the content of each section (here's an example)
  • A summary before the full post that briefly describes your key ideas and/or actions that a reader might take if they agree with your full post. Your concluding sentence seems like it summarizes a few of the other things you talk about, but I'm not sure whether that was the intention.
  • Links to articles you cite (like the developmental milestones essay)
  • Examples of how the content relates to current areas of EA research, past discussions on the Forum, or anything else in the EA intellectual landscape. I can try to make guesses about which topics might be considered "dragons", but I don't know if those guesses are "correct", or whether your list of topics differs from mine.

Comment by aarongertler on I find this forum increasingly difficult to navigate · 2019-07-05T18:28:28.942Z · score: 6 (3 votes) · EA · GW

If you want to create a poll, consider using Effective Altruism Polls (a Facebook group I made), then linking to the poll here.

Not all Forum users have Facebook accounts, so that's a downside, but you also get additional data from Forum users who are in the polling group but might not have otherwise seen your post (e.g. if they only visit every so often).

Comment by aarongertler on I find this forum increasingly difficult to navigate · 2019-07-05T18:25:55.942Z · score: 4 (2 votes) · EA · GW

I'm curious about your vision for what better search results would look like. I'm not sure how the current search works, but I expect that it prioritizes title-matching, so that the phrase "Effective Altruism" returns posts whose titles begin with that phrase (which isn't very common, as most people here abbreviate it "EA").

Would your preferred results look like any of the following?

  • The highest-karma posts using the phrase "effective altruism" anywhere (title or text)
  • The newest posts using the phrase "effective altruism" anywhere (title or text)
  • The posts using the phrase "effective altruism" most often (between title and text)

Keyword search seems like a pretty good way to find "that one post you're looking for", but I could imagine that karma/newness should be factors if people are using Forum search to hunt down "interesting posts about topic X". I don't know how common each use case is.


Regarding screenshots: Images can't be added to comments (correction: they can, see here); is that what you were trying to find a workaround for? (If so, it's useful feedback for us that this is something people want.)

You can add an image to a post by leaving a blank line where you want the image to go. Then, highlight a space in the blank line. You'll see a bar pop up with options like "bold" and "italic".

Click the image of a photo on that bar, and you'll be able to add the URL where your image is hosted. The Forum doesn't support attachments, but there are a lot of sites where you can upload an image for free. My favorite is imgbb.

Comment by aarongertler on I find this forum increasingly difficult to navigate · 2019-07-05T18:13:23.109Z · score: 4 (2 votes) · EA · GW

Thanks for pointing out the existence of the "All Posts" page, Moses. That page has acquired a lot of new filtering options since the Forum was launched in November (at that point, it was literally just a list of all posts).

Arepo: Aside from time-range filters, are there particular features you miss from older versions, or wish existed in the current version? What "useful data" was on the front page of V1.0 that isn't accessible now?

Comment by aarongertler on A Guide to Early Stage EA Group-Building at Liberal Arts Colleges · 2019-07-04T09:58:44.923Z · score: 2 (1 votes) · EA · GW

Sure! Anything I write on the Forum can be used elsewhere.

Comment by aarongertler on A Guide to Early Stage EA Group-Building at Liberal Arts Colleges · 2019-07-04T02:34:04.968Z · score: 9 (5 votes) · EA · GW

Thanks for the detailed writeup!

You mentioned some difficulty in getting people from the "periphery" to the "middle" with discussion groups and other activities. This is a common feature of EA groups (certainly the two that I've run).

Some things I've found to be helpful:

  • More meetups with students outside your college. Even if there isn't an EAGx in your area, there might be another college EA group a reasonable distance away, such that you could take a day trip to their school (or vice versa), or meet at a location in the middle. Examples:
    • The Yale group visited the Harvard group the night before the 2014 Yale/Harvard football game (a time when lots of students travel from one school to the other).
    • The Madison (WI) and Chicago groups had a meetup at a vegan restaurant between the two cities.
  • Encouraging group members to follow EA resources outside the local group -- for example, inviting them to the main EA Facebook group, sending a list of EA Twitter accounts (e.g. Rob Wiblin, Kelsey Piper), recommending the 80,000 Hours podcast, or signing them up for the EA Newsletter (with their permission).

What both of these have in common: I suspect that identifying as a member of the EA community is much easier when you see it as a community, rather than a personal philosophy. It's tough to adapt an unusual, self-sacrificing set of moral principles held by only a few other people you know; it's easier to do so in a context where you see something EA-related pop up in your life every few days, even during your summer break, and EA becomes just "a normal thing in the world" for you.

(This seems likely to be true for almost any other activity; someone who only ever plays the violin for ~4 hours/week in a casual college string quartet probably won't play as much after college as someone who also follows lots of fun violin YouTube channels and listens to violin-centric playlists while they study. I don't have empirical evidence for these assumptions, though.)

Comment by aarongertler on Announcing plans for a German Effective Altruism Network focused on Community Building · 2019-07-04T01:29:23.121Z · score: 15 (6 votes) · EA · GW

Thanks for writing this up! It's nice to see a detailed plan and cost estimates ahead of time for the kind of community-building project that is often treated more informally.

A couple of notes/questions:

  • I may have missed this, but are you modeling the proposed organization of your network after any of the other national-level EA groups? It seems like there should be many lessons to learn, but I didn't see any explicit mention of groups you're emulating or groups you've spoken to.
    • I'm especially curious about this in regards to the IT infrastructure section of the proposal; it can be difficult to set up and maintain websites that require participation/input from many different groups, and I wonder how much of the event information could be handled with off-the-shelf tools (as you plan to use for your CRM software).
  • You have a lot of different projects and priorities. If this project were to be funded, what would you do first?
  • To what extent do you believe that EA movement growth in Germany has been hampered by a lack of coordination?
    • You mention that the population density of EAs is lower there than in the UK or Norway; I'm not sure how much of this is based on difficulty retaining members vs. difficulty reaching people in the first place. A coordination system seems especially helpful for retention (as interacting with a badly-run group is a really easy way for someone to lose involvement with EA), but I'm less clear on how much it would help with outreach.


I work for CEA, but these views are my own.

Comment by aarongertler on Open for comment: EA career changer worksheet · 2019-07-04T01:12:02.022Z · score: 3 (2 votes) · EA · GW

I left some comments on the document. Thanks for producing and sharing it!

It seems as though, by "questionnaire", you mean something more like "individual worksheet someone can use to think through a career change". Based on your first post, I had been thinking of it as a survey/"invitation to share your story". My initial suggestions sometimes treated the document as a potential survey.

As a general piece of advice, think carefully about every question to make sure it pulls its weight; I think you're currently on the side of "trying to do too much", and that you could help people get almost as much insight into themselves with many fewer questions. Especially watch out for questions that will have somewhat redundant answers, like "what standard of living is acceptable?" and "can you downsize to save money?".

Comment by aarongertler on Aid Scepticism and Effective Altruism · 2019-07-04T00:54:42.732Z · score: 8 (6 votes) · EA · GW

Will: Thanks for posting this! I look forward to more posts in the series. To expand on a question from another commenter:

  • What has it been like to engage the broader philosophical community with arguments based on effective altruism? Do you feel as though EA is generally taken seriously as a philosophical perspective, even when people don't agree with it?
  • I'd guess that the people you're trying to persuade are mostly bystanders rather than direct opponents; have you had good results in...
    • ...moving either type of philosopher closer to your position?
    • ...convincing philosophers to start donating or to examine EA-relevant topics? (Recently, that is -- since it seems clear that you were influential in getting a lot of philosophers on board with EA in the early days.)
  • It seems to me like EA has changed and adapted new ideas reasonably often over the last ten years, but I'm not sure how much of this change came out of conversations with philosophers and other intellectuals who were generally opposed to the movement or the ideas. Have you gotten any especially useful feedback from people who disagreed with EA's core arguments? (Say, people who were as critical or more critical than Temkin?)

Comment by aarongertler on Should we talk about altruism or talk about justice? · 2019-07-04T00:19:48.754Z · score: 7 (4 votes) · EA · GW

Your discussion of the ways that these frames appeal to different audiences seems broadly accurate to me. However, I feel as though your Singer/Pogge comparison leaves out a couple of important details:

1. Singer had a massive head start. "Famine, Affluence, and Morality" was published in 1972, and "Animal Liberation" in 1975, while the "Global Justice" page of Pogge's website only includes one pre-2000 publication. This seems important when considering how each view spread among intellectuals and philosophers.

2. As far as the relative prominence of each thinker within early EA, I don't know much about the dynamics of how Giving What We Can formed, but I wouldn't be surprised if Singer and Pogge (and others) wound up using Singer's framing for reasons that weren't rigorously examined.

(I personally prefer Singer's framing, and I'm glad that it's the dominant frame within EA, but I could imagine the movement developing from a justice/fairness perspective in some alternative timeline.)

3. Most importantly, neither view is very popular in the grand scheme of things.

Singer's main-stage TED talk has been viewed nearly two million times on TED's website alone, but the number of people who have taken even moderate action as a result of his views seems like it's at least an order of magnitude lower. Even this very popular altruistic argument, which has been taught in universities since it was first proposed, is quite niche by the standards of "popular ideas". (Compare the number of people who protested Trump's "Muslim ban" in airports or advocated against Nixon's bombings of Cambodia, neither of which had any direct effect on most of those who took action.)

Historically, justice-oriented arguments seem to have had a much greater chance of going "viral" than altruistic arguments; if some version/framing of EA eventually catches on with the mainstream, my prior is that it will be more justice-oriented than the way the average EA thinks about the movement today. This isn't necessarily good, but it's something to think about -- there may be more value than we'd think in finding the best justice-oriented frame to promote "officially" (there are many options).

Also, beyond justice and altruism, there are plenty of other ways to talk about EA. Two examples:

  • Efficiency/"hack the system". Appeal to Lifehacker and Wired readers by positioning EA as the best way to do charity "right" while avoiding bloated megacharities and mainstream causes. Tim Ferriss, one of this mentality's exemplars, had Will MacAskill on his podcast (which has something like a million subscribers) because Will fit into his bucket of "system hackers" and "top performers".
  • Humanity/"we are one". Blends the friendly neutrality of altruism with the value-driven approach of "justice". Frame EA as the best way to live as though you are a member of the human race, rather than a particular country/race/creed/etc. Talk about how there are fundamental things that everyone wants, and that EA goes after some of the deepest, most important fundamentals (health, security, self-determination through GiveDirectly...). My favorite EA-aligned film, Life in a Day, uses this perspective to (inadvertently) make a strong case for moral cosmopolitanism and location-neutral donations.

No single frame needs to dominate discussions of EA, and I suspect that an ideal introductory resource for a broad audience would make use of several different frames.

Comment by aarongertler on X-risks of SETI and METI? · 2019-07-03T23:49:46.441Z · score: 9 (4 votes) · EA · GW

When I hear about articles like this, I worry about journalists conflating "could be an X-risk" with "is an X-risk as substantial as any other"; journalism tends to wash out differences in scale between problems.

If you're still in communication with the author, I'd recommend emphasizing that this risk has undergone much less study than AI alignment or biorisk and that there is no strong EA consensus against projects like SETI. It may be that more people in EA would prefer SETI to cease broadcasting than to maintain the status quo, but I haven't heard about any particular person actively trying to make them stop/reconsider their methods. (That said, this isn't my area of expertise and there may be persuasion underway of which I'm unaware.)

I'm mostly concerned about future articles that say something like "EAs are afraid of germs, AIs, and aliens", with no distinction of the third item from the first two.

Comment by aarongertler on For older EA-oriented career changers: discussion and community formation · 2019-07-02T18:22:11.467Z · score: 3 (2 votes) · EA · GW

I was thinking of a survey from a third-party website.

Comment by aarongertler on EA Forum Prize: Winners for April 2019 · 2019-07-02T05:46:25.366Z · score: 2 (1 votes) · EA · GW

Thanks for the suggestion! We haven't tried this before, but it's something that we've been thinking about for the last month or so. Many Forum comments are more detailed and labor-intensive than the average post, and I'm a fan of rewarding that kind of work.

I'll discuss this further with the judge team and see if we can figure out a good reward system, though any changes would be implemented for the June prize at the earliest.

Comment by aarongertler on For older EA-oriented career changers: discussion and community formation · 2019-07-02T05:29:39.297Z · score: 5 (4 votes) · EA · GW

Thanks for writing this post! I especially liked your acknowledgment of pressure from family; that can be a major factor in career choice even for people who are trying to focus on impact, and it's often difficult to talk about.

A note: You should consider adding your questions here to a survey and linking to it. Right now, the only way anyone can respond is through a private message or by writing out long-form answers in public, which will reduce the number of people who reply to you.

Comment by aarongertler on Optimizing Activities Fairs · 2019-07-02T05:23:29.643Z · score: 5 (4 votes) · EA · GW

This is a good introduction! Certainly a few things in here that I wish I'd known back in 2014 when I was handling my first activity fairs.

I strongly second the idea that you shouldn't be looking to persuade anyone at these booths. If someone's interested enough to want to discuss this with you in detail, that's really good, and you should prioritize them for follow-up, but you will be tempted to debate them on the spot, especially if they make an argument you "know how to refute". You need to resist that temptation.


Another note: You should be ready to tell people not just what the group does, but what the first event is. The post discussed the importance of holding an event early; I'd add that you should mention that event to the people you meet.

During the first 1-2 weeks of college, dozens of groups will be fighting for recruits, and the people you talk to will already be cataloging what they'll do when they aren't in class. You want to add yourself to that catalog -- certainly with a first email, but ideally even before that.

"We do a lot of X, Y, and Z. We're having our first X on Thursday night -- sign up here, and we'll send you the details!"

(Hopefully, you've chosen an X that sounds like fun.)

This is generally a good idea even if your group doesn't run many big events, and even if you prioritize 1-on-1s; I'd guess that you'll come across as slightly odd if the first thing people know about your activities is that "someone will get lunch with you to talk". Parties, speaker events, and Giving Games are all "safer" and more normal.

Comment by aarongertler on Upcoming interviews on the 80,000 Hours Podcast · 2019-07-02T04:17:19.872Z · score: 3 (2 votes) · EA · GW

Rob Wiblin: Are Forum comments fine for submitting questions, or is there another way you'd prefer they be submitted?

Laura Deming: Some question derived from the following cluster:

  • What are some ways in which your youth has been a hindrance in setting up your projects?
  • Do you think that youth is a current bottleneck for many people who might be much more productive if they could get past artificial age-based barriers to doing good work?
  • What kinds of policy/legal/culture changes would give us a chance to unlock a lot of young talent?

Liv and Natalie: If you're working with/talking to a potential donor who already has a history of donating heavily to non-EA causes/organizations, how does that change the way you approach your pitch? Do you ever attempt to nudge donors away from organizations they've supported, or is that always a bad idea? (It seems difficult to do this kind of nudge if the donor doesn't initiate it themselves, but nudging successfully could free up millions of dollars in additional funding.)

Comment by aarongertler on Effective Altruism is an Ideology, not (just) a Question · 2019-06-30T08:55:07.963Z · score: 6 (4 votes) · EA · GW

I don't think the post was wrong not to address any of these questions (they would all require serious effort to answer). I only meant to point out that these are questions which occurred to me as I read the post and thought about it afterward. I'd be happy if anything in my response inspired someone to create a follow-up post.

Comment by aarongertler on Effective Altruism is an Ideology, not (just) a Question · 2019-06-30T05:42:20.475Z · score: 8 (7 votes) · EA · GW

I liked this post, and agree with many of these comments regarding types of analysis that are less common within EA.

However, I'll make the same comment here that I do on many other posts: Given that EA doesn't have much of (thing X), how should we get more of it?

For your post, my questions are:

  • Which of the types of analysis you mentioned, if any, do you think would be most useful to pursue? You could make an argument for this based on the causes you think are most important to gain information about, the types of analysis you think EA researchers are best-suited to pursue, the missions of the organizations best-positioned to make new research grants, etc.
  • Are there types of EA research you think should be less popular? Do the research agendas of current EA orgs overlap in ways that could be "fixed" by one org agreeing to move in a different direction? Does any existing research stand out as redundant, or as "should have been done using/incorporating other methodologies"?
  • Are there fields you think have historically been better at getting "correct answers" than EA about certain questions that are or ought to be interesting to EAs -- or, if not "better", at least "getting some correct answers EA missed or would have missed"? What are those answers?
    • This question might run headlong into problems around methodologies EA doesn't use or give credit to, but I'd hope that certain answers derived by methods unpopular within EA might still be "verifiable" by EA, e.g. by generating results the movement can appreciate/understand.

Comment by aarongertler on Effective Altruism is an Ideology, not (just) a Question · 2019-06-30T05:34:44.947Z · score: 5 (4 votes) · EA · GW

I'd like to promote a norm of suggesting typo corrections via private message, rather than in comments. This helps to keep comments free of clutter, especially on long posts that might have many typos. The only person interested in seeing a typo comment is likely to be the author.

You could argue that typo comments help the author avoid getting lots of PMs from different people that point out the same typos. For now, I'd guess that very few people will send such PMs regularly, and that we should favor PMs > comments for this. This may change as the Forum's readership increases, or if authors begin to complain about getting too many typo PMs.

Comment by aarongertler on What is the effect of relationship status on EA impact? · 2019-06-30T04:46:38.046Z · score: 8 (5 votes) · EA · GW

Alex Foster's answer covers most of what I wanted to say, but I'll also note that thinking in this way...

On the flip side, being in a relationship might help cut costs enabling more donating and could increase happiness to have an indirect effect on productivity?

...is unlikely to be conducive to a successful romantic relationship, unless this thinking is secondary to thoughts like "I really like this person, they make me happy, and I also want them to be happy".

(There's a word for a relationship you form to cut costs, and the word is "roommate". There's a word for a relationship you form to increase happiness, and the word is "friend". As a bonus, it's easier to have multiple roommates or friends than it is to have multiple romantic partners.)

Comment by aarongertler on Who are the people that most publicly predicted we'd have AGI by now? Have they published any kind of retrospective, and updated their views? · 2019-06-30T04:40:14.864Z · score: 4 (3 votes) · EA · GW

I'm not aware of any major thinkers who, at any point during the 21st century, predicted AGI by this date. Would you still be interested in seeing earlier predictions -- say, from people who guessed in 1980 that we'd have AGI by now?

Comment by aarongertler on Babbling on Singleton Governance · 2019-06-23T21:30:10.091Z · score: 2 (1 votes) · EA · GW

When you use the term "babbling", do you mean it in the sense of this post?

An excerpt:

Babble with a weak heuristic to generate many more possibilities than necessary, Prune with a strong heuristic to find a best, or the satisfactory one.

Comment by aarongertler on [Link] Seed stage philanthropy · 2019-06-23T21:26:51.028Z · score: 3 (2 votes) · EA · GW

I really like Nadia's writing -- thanks for sharing!

Related: someone linked one of her other posts here a few months ago (on a similar topic).

Comment by aarongertler on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-21T22:23:07.940Z · score: 3 (2 votes) · EA · GW

But how closely are IPOs and cryptocurrency activity linked to broader macroeconomic trends? It felt like the Bitcoin bubble inflated and popped with little impact on, say, the conventional stock market (but I could be wrong about that).

Comment by aarongertler on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-21T06:37:55.599Z · score: 9 (7 votes) · EA · GW

Intuitively, I'd expect this to matter somewhat, all else being equal -- but all else is rarely equal.

In the nonprofit world, a promising organization might succeed or fail based on the amount it can raise within a few months. These org-specific timing considerations will, I suspect, usually overwhelm macroeconomic considerations. If you decide to wait until an economic slowdown, and it takes two years, you might see a dozen promising organizations emerge or fade away in the process. Heck, if you had decided in 2009 to wait until the next recession to give, you'd have missed... essentially the entirety of the EA movement.

At this point, with a recession being more likely to happen within a few years than was the case in 2009 (maybe?), it could make sense to save money. But I don't know how much EA giving lines up with economic cycles, for a few reasons:

  • A lot of donated EA money comes from very wealthy donors who have already set aside tens or hundreds of millions of dollars in safe places that a recession won't impact much
  • Another good chunk of EA money comes from people who made a pledge to donate a fixed percentage of their income, rather than seeing charity as one of the first things to cut back on when times are tough (as I'd guess is the case for many donors outside of EA)
  • A lot of people in the EA community have jobs that are fairly "recession-proof". They tend to be highly educated and work in growing sectors (as opposed to, say, construction)

All in all, I'm moderately confident that EA giving during a recession would drop much less than the general public's giving, though we aren't wholly immune to the larger economy.

Comment by aarongertler on Announcing the launch of the Happier Lives Institute · 2019-06-21T06:25:00.980Z · score: 4 (2 votes) · EA · GW

You've written more here than I can easily respond to, especially the day before EA Global begins! ;-)

...but I'll focus on your last point:

I worry one default outcome of 'Frontpage' posts, well, being on the frontpage on the EA Forum, and their receiving more attention, meaning they will be assumed to be of higher quality.

Some Forum posts seem like they will be more accessible than others to people who have little previous experience with the EA community. Because these posts have a larger potential audience (in theory), we currently expose them to a larger audience using the Frontpage category.

This doesn't mean that Frontpage posts are necessarily "better", or even more useful to the average Forum visitor. But they could theoretically appeal to new audiences who aren't as familiar with EA.

For example, while a lot more Forum users might be interested in a post on the historical growth of the movement than on a post about nuclear war (because most Forum users are experienced with/invested in the EA community), a post about nuclear war could be interesting to people from many communities totally separate from EA (think-tank researchers, scientists, journalists, etc.)

Historically, a lot more posts get the "Frontpage" category than the "Community" category. But as you can see by going to the "Community" page on the Forum, posts in that category often get a lot of votes and comments -- probably because they appeal broadly to the people who use the Forum most, whatever cause area they might care about.

I doubt that someone looking at posts in both categories would conclude that "Frontpage" posts were "better" or "more important", at least if they took the time to read a couple of posts in each category.

That said, we did inherit the "Frontpage" name from LessWrong, and we may consider changing it in the future. (I'd welcome any suggestions for new names -- "Research" doesn't quite fit, I think, but good names are probably something along those lines.)


Historically, the Forum's homepage gets roughly ten times as much traffic as the Community page. But of the dozen posts with the most views in June, seven are Frontpage and five are Community. This is partly because many visitors to the homepage don't read anything, or read one article and bounce off (as for basically any website) and partly because much of the Forum's traffic comes from link-sharing through social media, the EA Newsletter, etc. (places where categorization doesn't matter at all).

Do you have any further questions about this point?

EA Forum Prize: Winners for April 2019

2019-06-04T00:09:45.687Z · score: 29 (14 votes)

EA Forum: Footnotes are live, and other updates

2019-05-21T00:26:54.713Z · score: 24 (16 votes)

EA Forum Prize: Winners for March 2019

2019-05-07T01:36:59.748Z · score: 45 (18 votes)

Open Thread #45

2019-05-03T21:20:43.340Z · score: 10 (4 votes)

EA Forum Prize: Winners for February 2019

2019-03-29T01:53:02.491Z · score: 46 (18 votes)

Open Thread #44

2019-03-06T09:27:58.701Z · score: 10 (4 votes)

EA Forum Prize: Winners for January 2019

2019-02-22T22:27:50.161Z · score: 30 (16 votes)

The Narrowing Circle (Gwern)

2019-02-11T23:50:45.093Z · score: 34 (15 votes)

What are some lists of open questions in effective altruism?

2019-02-05T02:23:03.345Z · score: 22 (12 votes)

Are there more papers on dung beetles than human extinction?

2019-02-05T02:09:58.568Z · score: 14 (9 votes)

You Should Write a Forum Bio

2019-02-01T03:32:29.453Z · score: 21 (15 votes)

EA Forum Prize: Winners for December 2018

2019-01-30T21:05:05.254Z · score: 46 (27 votes)

The Meetup Cookbook (Fantastic Group Resource)

2019-01-24T01:28:00.600Z · score: 15 (10 votes)

The Global Priorities of the Copenhagen Consensus

2019-01-07T19:53:01.080Z · score: 43 (26 votes)

Forum Update: New Features, Seeking New Moderators

2018-12-20T22:02:46.459Z · score: 23 (13 votes)

What's going on with the new Question feature?

2018-12-20T21:01:21.607Z · score: 10 (4 votes)

EA Forum Prize: Winners for November 2018

2018-12-14T21:33:10.236Z · score: 49 (24 votes)

Literature Review: Why Do People Give Money To Charity?

2018-11-21T04:09:30.271Z · score: 24 (11 votes)

W-Risk and the Technological Wavefront (Nell Watson)

2018-11-11T23:22:24.712Z · score: 8 (8 votes)

Welcome to the New Forum!

2018-11-08T00:06:06.209Z · score: 13 (8 votes)

What's Changing With the New Forum?

2018-11-07T23:09:57.464Z · score: 17 (11 votes)

Book Review: Enlightenment Now, by Steven Pinker

2018-10-21T23:12:43.485Z · score: 18 (11 votes)

On Becoming World-Class

2018-10-19T01:35:18.898Z · score: 20 (12 votes)

EA Concepts: Share Impressions Before Credences

2018-09-18T22:47:13.721Z · score: 9 (6 votes)

EA Concepts: Inside View, Outside View

2018-09-18T22:33:08.618Z · score: 2 (1 votes)

Talking About Effective Altruism At Parties

2017-11-16T20:22:46.114Z · score: 8 (8 votes)

Meetup : Yale Effective Altruists

2014-10-07T02:59:35.605Z · score: 0 (0 votes)