Some extremely rough research on giving and happiness 2020-09-09T08:33:06.084Z · score: 27 (15 votes)
CEA Mid-year update (2020) 2020-08-11T10:06:41.512Z · score: 81 (34 votes)
CEA's Plans for 2020 2020-04-23T07:50:44.921Z · score: 58 (30 votes)
CEA's 2019 Annual Review 2020-04-23T07:39:59.289Z · score: 36 (12 votes)
The Frontpage/Community distinction 2018-11-16T17:54:15.072Z · score: 24 (16 votes)
Why the EA Forum? 2018-11-07T23:24:49.981Z · score: 28 (18 votes)
Which piece got you more involved in EA? 2018-09-06T07:25:01.218Z · score: 15 (8 votes)
Announcing the Effective Altruism Handbook, 2nd edition 2018-05-02T07:58:24.124Z · score: 22 (28 votes)
Announcing Effective Altruism Grants 2017-06-09T10:28:15.441Z · score: 20 (22 votes)
Returns Functions and Funding Gaps 2017-05-23T08:22:44.935Z · score: 15 (15 votes)
Should we give to SCI or fund research into schistosomiasis? 2015-09-23T15:00:36.615Z · score: 10 (10 votes)


Comment by maxdalton on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T10:06:33.522Z · score: 13 (5 votes) · EA · GW

Thanks! Those are both good points. I think you're right that they're open to changing their minds about some important aspects of their worldview (though I do think that "Please, if you disagree with me, carry your precious opinion elsewhere." is some evidence that there are aspects that they're not very open to changing their mind about).

I also think that I reacted too strongly to the emotionally laden language - I agree this can be justified and appropriate, though I think it can also make collaborative truth-seeking harder. This makes me think that it's good to acknowledge, feel, and empathize with anger/sadness, whilst still being careful about the potential impact it might have when we're trying to work together to figure out what to do to help others. I do still feel worried about some sort of oversimplification/overconfidence wrt "all other problems are just derivatives".

To be clear, I always thought it was good to engage in discussion here rather than downvote, but I'm now a bit more optimistic about the dialogue going well.

Comment by maxdalton on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T05:57:15.186Z · score: 13 (13 votes) · EA · GW

I didn't downvote, but I imagine people are reacting to a couple of phrases in the OP:

Please, if you disagree with me, carry your precious opinion elsewhere. I am only interested in opinions on how to most effectively create a more equal society.

I think that being open to changing your mind is an important norm. I think you could read this sentence as a very reasonable request to keep this discussion on topic, but I worry that it is a more general stance. (I also find the phrasing a bit rude.)

Some of the other phrases (e.g. "conviction", "deeply sick", "all other problems are just derivatives") make me worry that this person won't change their mind, that they're overconfident, and that they'll use heated discourse in arguments rather than collaborative truth-seeking. All of these (if true) would make me a bit less excited about welcoming them to the community.

I also think that I could be reading too much into such phrases - I hope this person will go on to engage open-mindedly in discussion.

I really liked your answer - I think it's absolutely worth sharing resources, gently challenging, and reinforcing norms around open-minded cause prio. I personally think that that's a better solution than downvoting, if people have the time to do so.

Comment by maxdalton on CEA Mid-year update (2020) · 2020-10-08T09:35:52.415Z · score: 2 (1 votes) · EA · GW
Did you do any benchmarking for retention?

We did some light googling and also got some data from other EA orgs. Retention varies a lot by industry, but broadly I think our retention rates are now comparable to or a bit better than the relevant average.

Do you know what the convention is for counting part-time versus full-time, interns, student employees, etc.?

Not sure what the convention is, but we looked at full-time employees.

Voluntary versus involuntary turnover?

Again, not sure what the convention is, but we're including both. I think that voluntary turnover is also a mildly negative sign (of bad hiring or training).

Comment by maxdalton on EA Survey Series 2019: How many EAs live in the main EA hubs? · 2020-09-03T10:04:05.266Z · score: 4 (2 votes) · EA · GW

Thanks for the extra analysis, that's interesting. Good point that it depends on your purpose.

Also, just to be clear, I didn't intend this as a criticism of the OP at all - this point just came up in conversation yesterday and I thought it was worth sharing! I find these posts really helpful and keep coming back to them as I think through all sorts of problems.

Comment by maxdalton on EA Survey Series 2019: How many EAs live in the main EA hubs? · 2020-09-03T08:43:07.116Z · score: 6 (3 votes) · EA · GW

I think some people might look at this to choose which EA hub to live in or where to found an organization (of course, not everyone can/should live in a hub).

I think it is easy to overlook the density of EAs when making such a decision: e.g. Oxford's population is ~60x smaller than London's and its land area is maybe 100-300x smaller. So the travel time to visit another random EA tends to be much lower, and it's a lot more likely that you bump into people on the street. (My impression is that Berkeley is somewhere between Oxford and London, but I don't know the details.)

Comment by maxdalton on CEA Mid-year update (2020) · 2020-08-13T19:02:39.234Z · score: 4 (2 votes) · EA · GW

Me too!

Comment by maxdalton on CEA Mid-year update (2020) · 2020-08-13T19:01:56.753Z · score: 6 (3 votes) · EA · GW

Thanks for sharing this! I found it somewhat surprising that the scale of effect looks like it's bigger for comments vs. posts. (I imagine that the difference in significance is also partly that the sample size for posters is much smaller, so it's harder to reach a significance threshold.)

Comment by maxdalton on EA Forum feature suggestion thread · 2020-06-30T07:43:07.640Z · score: 2 (1 votes) · EA · GW

I don't know if you've seen [link] - that has a dark mode (in the left-hand menu).

Comment by maxdalton on What are some good charities to donate to regarding systemic racial injustice? · 2020-06-04T14:50:09.018Z · score: 13 (10 votes) · EA · GW

I think that applying EA principles and concepts to different areas is really valuable, even if they’re areas that EA hasn’t focused on a lot up to this point. I’m glad you asked this question!

Comment by maxdalton on CEA's Plans for 2020 · 2020-05-04T09:52:23.213Z · score: 10 (6 votes) · EA · GW

I think a very common failure mode for CEA over the past ~5 years has been: CEA declares they are doing X, now no one else wants to or can get funding to do X, but CEA doesn't actually ever do X, so X never gets done.

I agree with this.  I think we've been making progress both in following through on what we say we'll do and in welcoming others to fill neglected roles, and I'd like to see us continue to make progress, particularly on the latter.

Comment by maxdalton on CEA's Plans for 2020 · 2020-04-27T15:34:41.149Z · score: 7 (5 votes) · EA · GW

I agree that it’s important that CEA reliably and verifiably listens to the community.

I think that we have been listening, and we published some of that consultation - for instance in this post and in the appendix to our 2019 review (see for instance the EA Global section).

Over the next few months we plan to send out more surveys to community members about what they like/dislike about the EA community, and as mentioned above, we're thinking about using community member satisfaction as a major metric for CEA. If it did become a key metric, it's likely that we would share some of that feedback publicly.

We don’t currently have plans for a democratic structure, but we’ve talked about introducing some democratic elements (though we probably won’t do that this year). 

Whilst I agree that consultation is vital, I think the benefits of democracy over consultation are unclear. For instance, voters are likely to have spent less time engaging with arguments for different positions, and there is a risk of factionalism. Also, the increased number of stakeholders reduces the space of feasible options: there are few options that a wide spread of the community could agree on, which makes it harder to pursue more ambitious plans.

I think you’re right that this would increase community support for CEA’s work and make CEA more accountable. I haven’t thought a lot about the options here, and it may be that there are some mechanisms which avoid the downsides. I’d be interested in suggestions.

Anyway, I definitely think it’s important for CEA to listen to the community and be transparent about our work, and I hope to do more of that in the future.

Comment by maxdalton on CEA's Plans for 2020 · 2020-04-27T15:16:25.443Z · score: 7 (4 votes) · EA · GW

Yes, we’ve thought about this. We currently think that it’s probably best for them to spin off separately, so that’s the main option under consideration, but we might change our minds (for instance as we learn more about which candidates are available, and what their strategic vision for the projects would be). 

This is a bit of a busy week for me, so if you’d like me to share more about our considerations, upvote this comment, and I’ll check back next week to see if there’s been sufficient interest.

Comment by maxdalton on CEA's Plans for 2020 · 2020-04-24T15:42:58.739Z · score: 14 (9 votes) · EA · GW

I think this is a really important point, and one I’ve been thinking a lot about over the past month. As you say, I do think that having a strategy is an important starting point, but I don’t want us to get stuck too meta. We’re still developing our strategy, but this quarter we’re planning to focus more on object-level work.  Hopefully we can share more about strategy and object-level work in the future. 

That said, I also think that we’ve made a lot of object-level progress in the last year, and we plan to make more this year, so we might have underemphasized that. You can read more in the (lengthy, sorry!) appendix to our 2019 post, but some highlights are:

  • Responding to 63 community concerns (ranging from minor situations (request for help preparing a workshop about self-care at an event) to major ones (request for help working out what to do about serious sexual harassment in a local group)).
  • Mentoring 50 group organizers, funding 80 projects run by groups, running group organizer retreats and making 30 grants for full-time organizers, with 25 case studies of promising people influenced by the groups we funded.
  • Helping around 1000 EA Global attendees make eight new connections on average, with 350 self-reported minor plan changes and 50 self-reported major plan changes (note these were self-reports, so are nowhere near as vetted as e.g. 80k plan changes).
  • 85% growth over 10 months in our key Forum metric and 34% growth in views of EA Global talks.
  • ~50% growth in donations to EA Funds, and millions in reported donations from GWWC members

Of course, there are lots of improvements we still need to make, but I still feel happy with this progress, and with the progress we made towards more reliably following through on commitments (e.g. addressing some of the problems with EA Grants). 

Comment by maxdalton on CEA's Plans for 2020 · 2020-04-24T15:40:36.432Z · score: 4 (3 votes) · EA · GW

Sorry, that paragraph wasn't clear. Previously, we had offices in both Oxford and Berkeley. The change is to close the Berkeley office (for reasons discussed above) and keep the Oxford office open. We think it's useful to be in Oxford because that's where a lot of our staff are currently based, and because it allows us to keep in touch with other EA orgs (e.g. the Global Priorities Institute) who share our office in Oxford.

Comment by maxdalton on CEA's Plans for 2020 · 2020-04-23T14:00:56.986Z · score: 7 (6 votes) · EA · GW

Thanks for your comments! 

>Wasn't GWWC previously independent, before it was incorporated into CEA in 2016?

Essentially, yes. Giving What We Can was founded in 2009. CEA was set up as an umbrella legal entity for GWWC and 80,000 Hours in 2011, but the projects had separate strategies, autonomous leadership etc. In 2016, there was a restructure of CEA such that GWWC and some of the other activities under CEA’s umbrella came together under one CEO (Will MacAskill at that time), whilst 80,000 Hours continued to operate independently. 

>What's changed over the last 5 years to warrant a reversal?

To be honest, I think it’s less that the strategic landscape has changed, and more that the decision 5 years ago hasn’t worked out as well as we hoped. 

(I wasn’t around at the time the decision was made, and I’m not sure if it was the right call in expectation. Michelle (ex GWWC Executive Director) previously shared some thoughts on this on the Forum.)

As discussed here, from 2017 to 2019 CEA did not invest heavily in Giving What We Can. Communications became less frequent and the website lost some features. 

We’ve now addressed the largest of those issues, but the trustees and I think that Giving What We Can is an important project that hasn’t lived up to its (high) potential under the current arrangement (although pledges continue to grow).

Giving What We Can is one of the most successful parts of CEA. Over 4500 members have logged over $125M in donations. Members have pledged to donate $1.5B.  Beyond the money raised, it has helped to introduce lots of people (myself included) to the EA community. This means that we are all keen to invest more in GWWC.

I also think it’s important to narrow CEA’s focus. That focus looks like it’s going to be nurturing spaces for people to discuss and apply EA principles. GWWC is more focused on encouraging a particular activity (pledging to donate to charities). Since it was successfully run as an independent project in the past, trying to spin it out seemed like the right call. I’m leading on this process and trustees are investing a lot of time in it too, and we’ll work very closely with new leadership to test things out and make sure the new arrangement works well.

Comment by maxdalton on Why I'm Not Vegan · 2020-04-10T06:35:52.738Z · score: 19 (14 votes) · EA · GW

"I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" - this strikes me (for birds and mammals at least) as a statement in direct conflict with a large body of scientific evidence, and to some extent, consensus views among neuroscientists (e.g. the Cambridge Declaration on Consciousness).

I think that the Cambridge Declaration on Consciousness is weak evidence for the claim that this is a "consensus view among neuroscientists".

From Luke Muehlhauser's 2017 Report on Consciousness and Moral Patienthood:

1. The document reads more like a political document than a scientific document. (See e.g. this commentary.)

2. As far as I can tell, the declaration was signed by a small number of people, perhaps about 15 people, and thus hardly demonstrates a “scientific consensus.”

3. Several of the signers of the declaration have since written scientific papers that seem to treat cortex-required views as a live possibility, e.g. Koch et al. (2016) and Laureys et al. (2015), p. 427.

Comment by maxdalton on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-08T13:24:18.292Z · score: 21 (9 votes) · EA · GW

(I was the interim director of CEA during Leaders Forum, and I’m now the executive director.) 

I think that CEA has a history of pushing longtermism in somewhat underhand ways (e.g. I think that I made a mistake when I published an “EA handbook” without sufficiently consulting non-longtermist researchers, and in a way that probably over-represented AI safety and under-represented material outside of traditional EA cause areas, resulting in a product that appeared to represent EA, without accurately doing so). Given this background, I think it’s reasonable to be suspicious of CEA’s cause prioritisation. 

(I’ll be writing more about this in the future, and it feels a bit odd to get into this in a comment when it’s a major-ish update to CEA’s strategy, but I think it’s better to share more rather than less.) In the future, I’d like CEA to take a more agnostic approach to cause prioritisation, trying to construct non-gameable mechanisms for making decisions about how much we talk about different causes. For example, we might pay an independent contractor to figure out who has spent more than two years full-time thinking about cause prioritization, and then survey those people. Obviously that project would be complicated - it’s hard to figure out exactly what “cause prio” means, and it would be important to reach out through diverse networks to make sure there aren’t network biases, etc.

Anyway, given this background of pushing longtermism, I think it’s reasonable to be skeptical of CEA’s approach on this sort of thing.

When I look at the list of organizations that were surveyed, it doesn’t look like the list of organizations most involved in movement building and coordination. It looks much more like a specific subset of that type of org: those focused on longtermism or x-risk (especially AI) and based in one of the main hubs (London accounts for ~50% of respondents, and the Bay accounts for ~30%).* Those that prioritize global poverty, and to a lesser extent animal welfare, seem notably missing. It’s possible the list of organizations that didn’t respond or weren’t named looks a lot different, but if that’s the case it seems worth calling attention to and possibly trying to rectify (e.g. did you email the survey to anyone or was it all done in person at the Leaders Forum?)

I think you’re probably right that there are some biases here. How the invite process worked this year was that Amy Labenz, who runs the event, draws up a longlist of potential attendees (asking some external advisors for suggestions about who should be invited). Then Amy, Julia Wise, and I voted yes/no/maybe on all of the individuals on the longlist (often adding comments). Amy made a final call about who to invite, based on those votes. I expect that all of this means that the final invite list is somewhat biased by our networks, and some background assumptions we have about individuals and orgs. 

Given this, I think that it would be fair to view the attendees of the event as “some people who CEA staff think it would be useful to get together for a few days” rather than “the definitive list of EA leaders”. I think that we were also somewhat loose about what the criteria for inviting people should be, and I’d like us to be a bit clearer on that in the future (see a couple of paragraphs below). Given this, I think that calling the event “EA Leaders Forum” is probably a mistake, but others on the team think that changing the name could be confusing and have transition costs - we’re still talking about this, and haven’t reached resolution about whether we’ll keep the name for next year.

I also think CEA made some mistakes in the way we framed this post (not just the author, since it went through other readers before publication). I think the post kind of frames this as “EA leaders think X”, which I expect would be the sort of thing that lots of EAs should update on. Even though I think it does try to explicitly disavow this interpretation (see the section on “What this data does and does not represent”), I think the title suggests something that’s more like “EA leaders think these are the priorities - probably you should update towards these being the priorities”. I think that the reality is more like “some people that CEA staff think it’s useful to get together for an event think X”, which is something that people should update on less.

We’re currently at a team retreat where we’re talking more about what the goals of the event should be in the future. I think that it’s possible that the event looks pretty different in future years, and we’re not yet sure how. But I think that whatever we decide, we should think more carefully about the criteria for attendees, and that will include thinking carefully about the approach to cause prioritization.

Comment by maxdalton on Movement Collapse Scenarios · 2019-09-05T12:22:09.987Z · score: 23 (11 votes) · EA · GW

Thanks for raising these points, John! I hadn't considered the "cash prize for criticism" idea before, but it does seem like it's worth more consideration.

I agree that CEA could do better on the front of generating criticisms from outside the organization, as well as making it easier for staff to criticize leadership. This is one of the key things that we have been working to improve since I took up the Interim Executive Director role in early 2019. Back in January/February, we did a big push on this, logging around 100 hours of user interviews in a few weeks, and sending out surveys to dozens of community members for feedback. Since then, we've continued to invest in getting feedback, e.g. staff regularly talk to community members to get feedback on our projects (though I think we could do more); similarly, we reach out to donors and advisors to get feedback on how we could improve our projects; we also have various (including anonymous) mechanisms for staff to raise concerns about management decisions. Together, I think these represent more than 0.1% of CEA's staff time. None of this is to say that this is going as well as we'd like - maybe I'd say one of CEA's "known weaknesses" is that I think we could stand to do more of this.

I agree that more of this could be public and transparent also - e.g. I'm aware that our mistakes page is incomplete. We're currently nearing the end of our search for a new CEO, and one of the things that I think they're likely to want to do is to communicate more with the community, and solicit the community's thoughts on future plans.

Comment by maxdalton on New protein alternative produced from CO2 · 2019-08-13T16:50:22.189Z · score: 12 (6 votes) · EA · GW

I wonder if this is also a thing that ALLFED might be interested in - I haven't looked into this much, but the article claims that the process only requires water, CO2, and electricity, which we might have in lots of disaster scenarios. So if production of this were scaled up in the short term, that might be helpful for ALLFED's mission.

Comment by maxdalton on Optimizing Activities Fairs · 2019-07-11T09:20:52.316Z · score: 3 (2 votes) · EA · GW

Thanks for the writeup! I really appreciate people taking the time to share what they've learned. I agree that activities fairs are a really high leverage time for student groups.

My summary of this approach is "Try to get as many email addresses as possible, and anticipate that many people will unsubscribe/never engage". I'd be interested to hear more about why this approach is recommended over others.

I think that this could well be the right approach, but it's not totally clear to me. It could be that having slightly longer conversations with people would build more rapport, give them a better sense of the ideas, and make them a lot more likely to continue to engage, so you get more/higher-quality people lower down your funnel. My memory of going to freshers' fairs was that if I had a proper conversation with someone it did make some difference to the likelihood that I engaged later on.

I also worry a bit about the maximizing for email addresses approach coming across as unfriendly.

It does seem right to me that arguing with people isn't worth the time.

I'd be interested in why Eli and Aaron think that the "maximize for email addresses" approach is correct long-term. I could well imagine that they've tried both approaches, and seen more engagement lower down the funnel with the "max for email addresses" approach.

[Speaking from my experience as a groups organizer, not on behalf of CEA]

Comment by maxdalton on Impact investing is only a good idea in specific circumstances · 2018-12-06T12:22:03.949Z · score: 11 (9 votes) · EA · GW

I strong upvoted this. I think it's great to have a reference piece on this, and particularly one which has such a good summary.

Comment by maxdalton on What's Changing With the New Forum? · 2018-11-12T10:23:55.076Z · score: 4 (4 votes) · EA · GW

That's right, this is intended as a feature. All comments and posts start with a weak upvote (we assume you think the thing is good, or you wouldn't have posted it). You can strong upvote your content, which is designed as a way for you to signal-boost contributions that you think are unusually valuable. Obviously, we don't want people to be strong-upvoting all their content, and we'll keep an eye on that happening.

Comment by maxdalton on Even non-theists should act as if theism is true · 2018-11-09T08:58:20.106Z · score: 4 (3 votes) · EA · GW

To link this to JP's other point, you might be right that subjectivism is implausible, but it's hard to tell how low a credence to give it.

If your credence in subjectivism + model uncertainty (+ I think also constructivism + quasi-realism + maybe others?) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie).

I'm pretty uncertain about my credence in each of those views though.

Comment by maxdalton on Even non-theists should act as if theism is true · 2018-11-09T08:41:04.334Z · score: 0 (6 votes) · EA · GW

Upvote for starting with praise, and splitting out separate threads.

Comment by maxdalton on Burnout: What is it and how to Treat it. · 2018-11-08T18:14:52.298Z · score: 4 (4 votes) · EA · GW

I found the Manager Tools basics podcasts and The Effective Manager (the book) a great way to cover the basics. (But I know others have found them less helpful.)

A great piece on this from the Forum is: Ben West's post on Deliberate Performance in People Management.

Comment by maxdalton on How to use the Forum · 2018-11-08T14:41:52.402Z · score: 2 (2 votes) · EA · GW

As long as you make clear how it's relevant to figuring out how to do as much good as possible, that sort of content is welcome.

Comment by maxdalton on Why the EA Forum? · 2018-11-08T11:48:40.597Z · score: 10 (5 votes) · EA · GW

That's right - one of the main goals of having posts sorted by karma (as well as having two sections) is to allow people to feel more comfortable posting, knowing that the best posts will rise to the top.

Comment by maxdalton on Which piece got you more involved in EA? · 2018-11-08T11:37:21.945Z · score: 2 (2 votes) · EA · GW

If you highlight the text, a hover menu appears above it, and the link icon is one of the options - click on it, paste the URL, and press enter.

Comment by maxdalton on Burnout: What is it and how to Treat it. · 2018-11-08T11:14:31.018Z · score: 5 (5 votes) · EA · GW

I sleep a lot better when I'm cooler, and I've found this helpful: [link]. Others recommend: [link].

Comment by maxdalton on Burnout: What is it and how to Treat it. · 2018-11-08T11:12:03.534Z · score: 5 (4 votes) · EA · GW

Link to Zvi's sequence on LessWrong, which includes the posts you mentioned: [link]

Comment by maxdalton on What's Changing With the New Forum? · 2018-11-08T11:03:30.716Z · score: 5 (4 votes) · EA · GW

Hi Richard, I think you're right that "basic concepts" is incorrect: I agree that it's important to discuss advanced ideas which build off each other. We'd want both of the posts you mention to be frontpage posts. I'll suggest an edit to Aaron.

By default, we're moving all content to either Frontpage or Community, since we're trying to have a slightly less active moderation policy than LessWrong. We might revisit this at some point. You can still click on a user's name to see their personal feed of posts.

Comment by maxdalton on Why the EA Forum? · 2018-11-08T10:31:04.352Z · score: 1 (1 votes) · EA · GW

Moderation notice: stickied on community.

Comment by maxdalton on What's Changing With the New Forum? · 2018-11-08T10:30:21.025Z · score: 1 (1 votes) · EA · GW

Moderation notice: Stickied in Community to give context for people familiar with the old Forum.

Comment by maxdalton on Keeping Absolutes in Mind · 2018-11-07T10:36:35.775Z · score: 1 (1 votes) · EA · GW

I agree with your point about subjective expected value (although realized value is evidence for subjective expected value). I'm not sure I understand the point in your last paragraph?

Comment by maxdalton on Keeping Absolutes in Mind · 2018-11-06T12:28:32.289Z · score: 15 (10 votes) · EA · GW

Strong upvote. I think this is an important point, nicely put.

A slightly different version of this, which I think is particularly insidious, is feeling bad about doing a job which is your comparative advantage. If I think Cause A is the most important, it's tempting to feel that I should work on Cause A even if I'm much better at working on Cause B, and that's my comparative advantage within the community. This also applies to how one should think about other people - I think one should praise people who work on Cause B if that's the thing that's best for their skills/motivations.

Comment by Maxdalton on [deleted post] 2018-09-18T08:40:37.201Z

Hi Peter, thanks for the feedback! To respond to the things that others haven't already responded to:

  • We hope to keep working on the typography to improve things.
  • We are definitely aiming for a lighter touch to moderation than on LW: we've deleted the "curated" section, and we want karma to sort out what ends up on the Frontpage. The main moderation decisions we'll be taking are policing the Community/Frontpage distinction, commenting to encourage good discourse norms, and sending messages to people where they could improve their discourse. The norms we're trying to promote are set out in the moderation policy, and are focused on tone/style etc. We'd love people to help us out by enforcing good norms, and by checking that we're following the policy. The main reason that we want to be a little more active is to provide users with more positive feedback, and fewer difficult responses, so that they're encouraged to engage more. I don't think the current Forum is particularly bad at this, but I would like to see another nudge in that direction. I'd be interested in how that sounds to you.
  • I would like to see a sidebar eventually. Currently we want to focus on rebuilding in some elements from LW (like sequences, and their map of local groups), but this is on our long-list.
  • Although we've removed curated, we are aiming to reintroduce the sequences feature (NB these are significantly less obtrusive once you're logged in). The reasoning behind this is that we expect some new people to come to the Forum, and we think it's good that they are initially sent to more introductory material. We also think that it's valuable to have some set of common knowledge for the community. This is a way to cement intellectual progress: rather than rebuilding the same wall, there can be an (expanding) set of core ideas which we can build on. Users will be able to create their own sequences, and we are consulting about what sort of things should be in the core sequences, which are most visible. We want the core sequences to be representative of the community.
  • Bug report has been filed for Ctrl + K
  • That isn't intended to be a link (we've called this subforum "Community" rather than "meta"). I'm guessing you're referring to the sidebar link, which I'll ask that we remove.

Comment by maxdalton on Additional plans for the new EA Forum · 2018-09-07T18:16:28.879Z · score: 4 (4 votes) · EA · GW

Sorry, that should be fixed now.

Comment by maxdalton on Which piece got you more involved in EA? · 2018-09-07T07:11:58.657Z · score: 3 (3 votes) · EA · GW

Chapter 2 in particular is slightly broader, and motivates some general EA/consequentialist questions. There are technical bits throughout, but I enjoyed reading it.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-08-13T08:39:30.106Z · score: 0 (0 votes) · EA · GW

If you would like to translate the Handbook, please email for permission.

Comment by maxdalton on Problems with EA representativeness and how to solve it · 2018-08-04T15:17:20.548Z · score: 17 (17 votes) · EA · GW

Hi Joey, thanks for raising this with such specific suggestions for how this should be done differently.

I won't respond to the specific Handbook concerns again, since people can easily find my previous responses in the comment threads that you link to.

I think that part of the problem was caused by the general trend that you're discussing, but also that I made mistakes in editing the Handbook, which I'm sorry for. In particular, I should have:

  • Consulted more widely before releasing the Handbook

  • Made clearer that the handbook was curated and produced by CEA

  • Included more engaging content related to global poverty and animal welfare.

I've tried to fix all of these mistakes, and we are currently working on producing a new edition which includes additional content (80,000 Hours' Problem Framework, Sentience Institute's Foundational Questions summaries (slightly shortened), and David Roodman's research for GiveWell on the worm wars).

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:48:19.147Z · score: 0 (0 votes) · EA · GW

My suggestion here would be to remove the default criterion for which posts are visible, so that by default all posts are visible (irrespective of downvotes), but people can set in their settings a threshold of votes a post should have in order to be visible.

Our proposal for how this would work is that all posts would be visible on personal blogs, but that posts with a negative karma score wouldn't show up on the "frontpage" (the default view). People would still be able to see it on the "All posts" view until the post reached -5 karma, and would be able to upvote it back onto the frontpage. Sometimes this might lead to us losing quality posts, but it also helps prevent users seeing very low quality posts (e.g. spam).
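The visibility rule described above can be sketched as a small function. This is purely an illustrative sketch of the proposal, not the Forum's actual code; the names and thresholds are hypothetical, chosen to match the numbers in the paragraph (negative karma drops a post off the frontpage, and it leaves the "All posts" view at -5 karma).

```python
# Illustrative sketch of the proposed visibility rule; names and
# thresholds are hypothetical, based on the description above.

FRONTPAGE_MIN_KARMA = 0    # posts with negative karma drop off the frontpage
ALL_POSTS_MIN_KARMA = -5   # at -5 karma, a post leaves "All posts" too

def visible_views(karma):
    """Return the views in which a post appears, given its karma score."""
    views = ["personal_blog"]          # always visible on the author's blog
    if karma > ALL_POSTS_MIN_KARMA:
        views.append("all_posts")      # still recoverable via upvotes here
    if karma >= FRONTPAGE_MIN_KARMA:
        views.append("frontpage")      # the default view
    return views
```

For example, a post at -2 karma would still appear under "All posts" (where readers could upvote it back onto the frontpage), but not on the frontpage itself.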

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:33:16.283Z · score: 2 (2 votes) · EA · GW

Hi Ryan, thanks again for setting up the Forum, and for looking after it!

On some of the points you raise:

  • I agree that moderators should be able to produce content and vote: we weren't proposing to prevent CEA staff or moderators from doing that.

  • I like the idea of integrating with Facebook events, I'll add it to our list.

  • I also agree that the community is not currently large enough for many additional fora: if we implement this, it will be slowly and carefully.

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:22:58.896Z · score: 1 (1 votes) · EA · GW

I agree that it's sometimes useful for people to be able to post anonymously. Currently this is done by people creating separate anonymous accounts, which seems like a reasonable work-around. (And +1 to Greg's comment about your second use case.)

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:11:10.862Z · score: 0 (0 votes) · EA · GW

I think that this looks like a promising feature, I'll add it to our list of things we might do once the beta is stable.

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:10:07.607Z · score: 0 (0 votes) · EA · GW

Hey Dunja, it's true that a downvote provides less information than a comment, but I think it does provide some information, and people can update based on it, particularly if they get similar feedback on multiple comments. For example, I might notice: "Oh, when I write extremely short comments, they're more likely to be downvoted and less likely to be upvoted. I'll elaborate more in the future."

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-23T13:20:28.851Z · score: 2 (2 votes) · EA · GW

Thanks so much for all of the feedback, everyone; it was very helpful for working out the problems with the old version. I've been working on a version which everyone can agree is an improvement.

All of the more minor changes to the Handbook have now been completed, and are available at the links in the OP.

In addition to the minor changes, I plan to add the following articles:

I may also make some edits to the three "cause profile" pieces: some for length, and some to add details to the long-term future piece. The more major edits might take a couple of months (waiting for the 80,000 Hours piece to be ready, and for redesign).

I've reached out to some of the original commenters, and some of the main research/community building orgs in the space, asking for further comments. Thanks again to everyone who took the time to try to make this a better product. I for one am more excited about the version-to-come than the version-as-was.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-08T11:22:43.465Z · score: 0 (0 votes) · EA · GW

Thanks, we'll look into that.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-07T13:39:06.330Z · score: 7 (7 votes) · EA · GW

Thanks for the comments Evan. First, I want to apologize for not seeking broader consultation earlier. This was clearly a mistake.

My plan now is to do as you suggest: talk to other actors in EA and get their feedback on what to include, etc. Obviously any compromise is going to leave some people unhappy: different groups simply favour different presentations of EA, so it seems unlikely to me that we will arrive at a fully independent presentation that pleases everyone. I also worry that democracy is not well suited to editorial decisions, and that the "electorate" of EA is ill-defined. If the full compromise approach fails, I think it would be best to release a CEA-branded resource which incorporates most of the feedback above. This option also seems to me to be cooperative, and to avoid harm to the fidelity of EA's message, but I might be missing something.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-07T13:27:56.682Z · score: 5 (5 votes) · EA · GW

Thanks Tom. I've discussed the reasoning for including three articles on AI a bit on Facebook. To quote from that:

"I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare [or, I think e.g. biosecurity and other long-term focused causes]. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less."

Thanks for the suggestions for alternatives to the global poverty and animal welfare articles. I think you may well be right that we should change those. This is another mistake that I made. The content for the EA Handbook grew out of an earlier sequence of content. As a consequence, it included only content that we had produced (or that had been produced by others at our events). At the point when we shifted to a pdf/ebook format, I should have reconsidered the selection of articles, which would have given us the possibility of including the excellent content that you mention. I hope that changing those articles will also reduce the impression that AI follows obviously from a long-term future focus. I'm sorry for making this mistake.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-07T08:35:06.215Z · score: 1 (1 votes) · EA · GW

Good idea. I'll do this when and if there is more consensus that people want to promote this content over the old.