Comment by maxdalton on Impact investing is only a good idea in specific circumstances · 2018-12-06T12:22:03.949Z · score: 11 (9 votes) · EA · GW

I strong-upvoted this. I think it's great to have a reference piece on this, and particularly one which has such a good summary.

The Frontpage/Community distinction

2018-11-16T17:54:15.072Z · score: 10 (10 votes)
Comment by maxdalton on What's Changing With the New Forum? · 2018-11-12T10:23:55.076Z · score: 4 (4 votes) · EA · GW

That's right, this is intended as a feature. All comments and posts start with a weak upvote (we assume you think the thing is good, or you wouldn't have posted it). You can strong-upvote your own content, which is designed as a way for you to signal-boost contributions that you think are unusually valuable. Obviously, we don't want people strong-upvoting all of their content, and we'll keep an eye out for that.
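A minimal sketch of this mechanic, with illustrative names and vote powers (on the real Forum, vote power depends on the voter's karma; this is not the Forum's actual code):

```typescript
interface Vote {
  userId: string;
  power: number;
}

const WEAK_POWER = 1;   // placeholder value
const STRONG_POWER = 4; // placeholder value

// Every new post or comment starts with the author's own weak upvote.
function createContent(authorId: string): Vote[] {
  return [{ userId: authorId, power: WEAK_POWER }];
}

// The author can upgrade that default weak upvote to a strong upvote.
function strongUpvoteOwn(votes: Vote[], authorId: string): Vote[] {
  return votes.map((v) =>
    v.userId === authorId ? { ...v, power: STRONG_POWER } : v
  );
}

// A new comment starts at score 1; after a self strong-upvote it scores 4.
const votes = strongUpvoteOwn(createContent("author-1"), "author-1");
const score = votes.reduce((sum, v) => sum + v.power, 0);
console.log(score); // 4
```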

Comment by maxdalton on Even non-theists should act as if theism is true · 2018-11-09T08:58:20.106Z · score: 4 (3 votes) · EA · GW

To link this to JP's other point, you might be right that subjectivism is implausible, but it's hard to tell how low a credence to give it.

If your credence in subjectivism + model uncertainty (+ I think also constructivism + quasi-realism + maybe others?) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie).

I'm pretty uncertain about my credence in each of those views though.

Comment by maxdalton on Even non-theists should act as if theism is true · 2018-11-09T08:41:04.334Z · score: 0 (6 votes) · EA · GW

Upvoted for starting with praise and for splitting out separate threads.

Comment by maxdalton on Burnout: What is it and how to Treat it. · 2018-11-08T18:14:52.298Z · score: 4 (4 votes) · EA · GW

I found the Manager Tools basics podcasts and The Effective Manager a great way to cover the basics. (But I know others have found them less helpful.)

A great piece on this from the Forum is Ben West's post on Deliberate Performance in People Management.

Comment by maxdalton on How to use the Forum · 2018-11-08T14:41:52.402Z · score: 2 (2 votes) · EA · GW

As long as you make clear how it's relevant to figuring out how to do as much good as possible, that sort of content is welcome.

Comment by maxdalton on Why the EA Forum? · 2018-11-08T11:48:40.597Z · score: 10 (5 votes) · EA · GW

That's right - one of the main goals of having posts sorted by karma (as well as having two sections) is to allow people to feel more comfortable posting, knowing that the best posts will rise to the top.

Comment by maxdalton on Which piece got you more involved in EA? · 2018-11-08T11:37:21.945Z · score: 2 (2 votes) · EA · GW

If you highlight the text, a hover menu appears above it, and the link icon is one of the options - click on it, paste the URL, and press enter.

Comment by maxdalton on Burnout: What is it and how to Treat it. · 2018-11-08T11:14:31.018Z · score: 4 (4 votes) · EA · GW

I sleep a lot better when I'm cooler, and I've found this helpful: https://www.chilitechnology.com/. Others recommend https://bedjet.com/.

Comment by maxdalton on Burnout: What is it and how to Treat it. · 2018-11-08T11:12:03.534Z · score: 5 (4 votes) · EA · GW

Link to Zvi's sequence on LessWrong, which includes the posts you mentioned: https://www.lesswrong.com/s/HXkpm9b8o964jbQ89

Comment by maxdalton on What's Changing With the New Forum? · 2018-11-08T11:03:30.716Z · score: 5 (4 votes) · EA · GW

Hi Richard, I think you're right that "basic concepts" is incorrect: I agree that it's important to discuss advanced ideas which build off each other. We'd want both of the posts you mention to be frontpage posts. I'll suggest an edit to Aaron.

By default, we're moving all content to either Frontpage or Community, since we're trying to have a slightly less active moderation policy than LessWrong. We might revisit this at some point. You can still click on a user's name to see their personal feed of posts.

Comment by maxdalton on Why the EA Forum? · 2018-11-08T10:31:04.352Z · score: 1 (1 votes) · EA · GW

Moderation notice: Stickied in Community.

Comment by maxdalton on What's Changing With the New Forum? · 2018-11-08T10:30:21.025Z · score: 1 (1 votes) · EA · GW

Moderation notice: Stickied in Community to give context for people familiar with the old Forum.

Why the EA Forum?

2018-11-07T23:24:49.981Z · score: 28 (18 votes)
Comment by maxdalton on Keeping Absolutes in Mind · 2018-11-07T10:36:35.775Z · score: 1 (1 votes) · EA · GW

I agree with your point about subjective expected value (although realized value is evidence for subjective expected value). I'm not sure I understand the point in your last paragraph?

Comment by maxdalton on Keeping Absolutes in Mind · 2018-11-06T12:28:32.289Z · score: 14 (9 votes) · EA · GW

Strong upvote. I think this is an important point, nicely put.

A slightly different version of this, which I think is particularly insidious, is feeling bad about doing a job which is your comparative advantage. If I think Cause A is the most important, it's tempting to feel that I should work on Cause A even if I'm much better at working on Cause B, and Cause B is therefore my comparative advantage within the community. This also applies to how one should think about other people: I think one should praise people who work on Cause B if that's the thing that's best for their skills/motivations.

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-09-18T08:40:37.201Z · score: 3 (2 votes) · EA · GW

Hi Peter, thanks for the feedback! To respond to the things that others haven't already responded to:

  • We hope to keep working on the typography to improve things.
  • We are definitely aiming for a lighter touch on moderation than LW: we've deleted the "curated" section, and we want karma to sort out what ends up on the Frontpage. The main moderation decisions we'll be taking are policing the Community/Frontpage distinction, commenting to encourage good discourse norms, and sending messages to people suggesting how they could improve their discourse. The norms we're trying to promote are set out in the moderation policy, and are focused on tone/style etc. We'd love people to help us out by enforcing good norms, and by checking that we're following the policy. The main reason that we want to be a little more active is to provide users with more positive feedback, and fewer difficult responses, so that they're encouraged to engage more. I don't think the current Forum is particularly bad at this, but I would like to see another nudge in that direction. I'd be interested to hear how that sounds to you.
  • I would like to see a sidebar eventually. Currently we want to focus on rebuilding some elements from LW (like sequences, and their map of local groups), but this is on our long-list.
  • Although we've removed curated, we are aiming to reintroduce the sequences feature (NB these are significantly less obtrusive once you're logged in). The reasoning behind this is that we expect some new people to come to the Forum, and we think it's good that they are initially sent to more introductory material. We also think that it's valuable to have some set of common knowledge for the community. This is a way to cement intellectual progress: rather than rebuilding the same wall, there can be an (expanding) set of core ideas which we can build on. Users will be able to create their own sequences, and we are consulting about what sort of things should be in the core sequences, which are most visible. We want the core sequences to be representative of the community.
  • A bug report has been filed for Ctrl + K.
  • https://forum.effectivealtruism.org/meta isn't intended to be a link: we've called this subforum "Community" rather than "meta" (https://forum.effectivealtruism.org/community). I'm guessing you're referring to the sidebar link, which I'll ask that we remove.

Comment by maxdalton on Additional plans for the new EA Forum · 2018-09-07T18:16:28.879Z · score: 4 (4 votes) · EA · GW

Sorry, that should be fixed now.

Comment by maxdalton on Which piece got you more involved in EA? · 2018-09-07T07:11:58.657Z · score: 3 (3 votes) · EA · GW

Chapter 2 in particular is slightly broader, and motivates some general EA/consequentialist questions. There are technical bits throughout, but I enjoyed reading it. https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxuYmVja3N0ZWFkfGd4OjExNDBjZTcwNjMxMzRmZGE

Which piece got you more involved in EA?

2018-09-06T07:25:01.218Z · score: 14 (7 votes)
Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-08-13T08:39:30.106Z · score: 0 (0 votes) · EA · GW

If you would like to translate the Handbook, please email content@effectivealtruism.org for permission.

Comment by maxdalton on Problems with EA representativeness and how to solve it · 2018-08-04T15:17:20.548Z · score: 17 (17 votes) · EA · GW

Hi Joey, thanks for raising this with such specific suggestions for how this should be done differently.

I won't respond to the specific Handbook concerns again, since people can easily find my previous responses in the comment threads that you link to.

I think that part of the problem was caused by the general trend that you're discussing, but also that I made mistakes in editing the Handbook, which I'm sorry for. In particular, I should have:

  • Consulted more widely before releasing the Handbook.

  • Made clearer that the Handbook was curated and produced by CEA.

  • Included more engaging content related to global poverty and animal welfare.

I've tried to fix all of these mistakes, and we are currently working on producing a new edition which includes additional content (80,000 Hours' Problem Framework, Sentience Institute's Foundational Questions summaries (slightly shortened), and David Roodman's research for GiveWell on the worm wars).

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:48:19.147Z · score: 0 (0 votes) · EA · GW

> My suggestion here would be to remove the default criterion for which posts are visible, so that per default all posts are visible (irrespective of the downvotes), but that people can select in their settings a threshold of votes a post should have in order to be visible.

Our proposal for how this would work is that all posts would be visible on personal blogs, but that posts with a negative karma score wouldn't show up on the "frontpage" (the default view). People would still be able to see such posts on the "All posts" view until they reached -5 karma, and would be able to upvote them back onto the frontpage. Sometimes this might lead to us losing quality posts, but it also helps prevent users from seeing very low quality posts (e.g. spam).
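A minimal sketch of the visibility rule described above, assuming the thresholds in this comment (illustrative only, not the Forum's actual implementation):

```typescript
type View = "frontpage" | "allPosts" | "personalBlog";

// Visibility thresholds as proposed: negative-karma posts leave the
// frontpage, and posts at -5 karma or below also leave "All posts".
function isVisible(karma: number, view: View): boolean {
  switch (view) {
    case "personalBlog":
      return true; // posts are always visible on the author's personal blog
    case "allPosts":
      return karma > -5;
    case "frontpage":
      return karma >= 0;
  }
}

// A post at -3 karma is off the frontpage but still on "All posts",
// where upvotes can bring it back above zero.
console.log(isVisible(-3, "frontpage")); // false
console.log(isVisible(-3, "allPosts"));  // true
```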

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:33:16.283Z · score: 2 (2 votes) · EA · GW

Hi Ryan, thanks again for setting up the Forum, and for looking after it!

On some of the points you raise:

  • I agree that moderators should be able to produce content and vote: we weren't proposing that CEA staff or moderators would stop doing that.

  • I like the idea of integrating with Facebook events, I'll add it to our list.

  • I also agree that the community is not currently large enough for many additional fora: if we implement this, we will do so slowly and carefully.

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:22:58.896Z · score: 1 (1 votes) · EA · GW

I agree that it's sometimes useful for people to be able to post anonymously. Currently this is done by people creating separate anonymous accounts, which seems like a reasonable work-around. (And +1 to Greg's comment about your second use case.)

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:11:10.862Z · score: 0 (0 votes) · EA · GW

I think that this looks like a promising feature; I'll add it to our list of things we might do once the beta is stable.

Comment by maxdalton on EA Forum 2.0 Initial Announcement · 2018-08-01T16:10:07.607Z · score: 0 (0 votes) · EA · GW

Hey Dunja, it's true that a downvote provides less information than a comment, but I think it does provide some information, and that people can update based on that information, particularly if they get similar feedback on multiple comments: e.g. I might notice "Oh, when I write extremely short comments, they're more likely to be downvoted, and less likely to be upvoted. I'll elaborate more in the future" or similar.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-23T13:20:28.851Z · score: 2 (2 votes) · EA · GW

Thanks so much for all of the feedback everyone - this was very helpful for me in working out the problems with the old version. I've been working on producing a version which everyone can agree will be an improvement.

All of the more minor changes to the Handbook have now been completed, and are available at the links in the OP.

In addition to the minor changes, I plan to add the following articles:

  • 80,000 Hours' Problem Framework

  • Sentience Institute's Foundational Questions summaries (slightly shortened)

  • David Roodman's research for GiveWell on the worm wars

I may also make some edits to the three "cause profile" pieces, some for length, and some to add details to the long-term future piece. The more major edits might take a couple of months (waiting for the 80,000 Hours piece to be ready, and for redesign).

I've reached out to some of the original commenters, and some of the main research/community building orgs in the space, asking for further comments. Thanks again to everyone who took the time to try to make this a better product. I for one am more excited about the version-to-come than the version-as-was.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-08T11:22:43.465Z · score: 0 (0 votes) · EA · GW

Thanks, we'll look into that.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-07T13:39:06.330Z · score: 7 (7 votes) · EA · GW

Thanks for the comments Evan. First, I want to apologize for not seeking broader consultation earlier. This was clearly a mistake.

My plan now is to do as you suggest: talk to other actors in EA and get their feedback on what to include, etc. Obviously any compromise is going to leave some people unhappy - different groups just favour different presentations of EA, so it seems unlikely to me that we will get a fully independent presentation that pleases everyone. I also worry that democracy is not well suited to editorial decisions, and that the "electorate" of EA is ill-defined. If the full compromise approach fails, I think it would be best to release a CEA-branded resource which incorporates most of the feedback above. That option also seems to me to be cooperative, and to avoid harm to the fidelity of EA's message, but I might be missing something.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-07T13:27:56.682Z · score: 5 (5 votes) · EA · GW

Thanks Tom. I've discussed the reasoning for including three articles on AI a bit on Facebook. To quote from that:

"I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare [or, I think e.g. biosecurity and other long-term focused causes]. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less."

Thanks for suggesting alternatives to the global poverty and animal welfare articles. I think you may well be right that we should change those. This is another mistake that I made. The content for the EA Handbook grew out of a sequence of content on effectivealtruism.org. As a consequence, it included only content that we had produced (or that had been produced by others at our events). At the point when we shifted to a pdf/ebook format, I should have reconsidered the selection of articles, which would have given us the possibility of including the excellent content that you mention. I hope that changing those articles will also reduce the impression that AI follows obviously from a long-term future focus. I'm sorry for making this mistake.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-07T08:35:06.215Z · score: 1 (1 votes) · EA · GW

Good idea. I'll do this when and if there is more consensus that people want to promote this content over the old.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-04T17:02:53.222Z · score: 6 (8 votes) · EA · GW

(Copying across some comments I made on Facebook which are relevant to this.)

Thanks for the passionate feedback everyone. Whilst I don’t agree with all of the comments, I’m sorry for the mistakes I made. Since several of the comments above raise similar points, I’ll try to give general replies in some main-thread comments. I’ll also be reaching out to some of the people in the thread above to try to work out the best way forward.

My understanding is that the main worry people have is about calling it the Effective Altruism Handbook vs. CEA’s Guide to Effective Altruism or similar. For the reasons given in my reply to Scott above, I think that calling it the EA Handbook is not a significant change from before: unless we ask Ryan to take down the old handbook, there will, whatever happens, be a CEA-selected resource called the EA Handbook. For reasons given above and below, I think that the new version of the Handbook is better than the old. I think that there is some value in explicitly replacing the old version for this reason, and since “EA Handbook” is a cleaner name. However, I do also get people’s worries about this being taken to represent the EA community as a whole. For that reason, I will make sure that the title page and introduction make clear that this is a project of CEA, and I will make clear in the introduction that others in the community would have selected different essays.

My preferred approach would then be to engage with people who have expressed concern, and see if there are changes we can make that alleviate their concerns (such as those we already plan to make based on Scott’s comment). If it appears that we can alleviate most of those concerns whilst retaining the value of the Handbook from CEA’s perspective, it might be best to call it the Centre for Effective Altruism’s EA Handbook. Otherwise, we would rebrand. I’d be interested to hear in the comments whether there are specific changes (articles to add/take away/design things) that would reassure you about this being called the EA Handbook.

In this comment I’ll reply to some of the more object-level criticisms. I want to apologize for how this seemed to others, but also give a clearer sense of our intentions. I think that it might seem that CEA has tried merely to push AI safety as the only thing to work on. We don’t think that, and that wasn’t our intention. Obviously, poorly realized intentions are still a problem, but I want to reassure people about CEA’s approach to these issues.

First, re there not being enough discussion of portfolios/comparative advantage, this is mentioned in two of the articles (“Prospecting for Gold” and “What Does (and Doesn't) AI Mean for Effective Altruism?”). However, I think that we could have emphasised this more, and I will see if it’s possible to include a full article on coordination and comparative advantage.

Second, I’d like to apologise for the way the animal and global health articles came across. Those articles were commissioned at the same time as the long-term future article, and they share a common structure: What’s the case for this cause? What are some common concerns about that cause? Why might you choose not to support this cause? The intention was to show how many assumptions underlie a decision to focus on any cause, and to map out some of the debate between the different cause areas, rather than to illicitly push the long-term future. It looks like this didn’t come across, sorry. We didn’t initially commission sub-cause profiles on government, AI and biosecurity, which explains why those more specific articles follow a different structure (mostly talks given at EA Global).

Third, I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-04T16:52:25.499Z · score: 0 (0 votes) · EA · GW

Thanks, that's a mistake which we'll fix.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T14:17:17.646Z · score: 1 (1 votes) · EA · GW

Doing Good Better is still a useful introduction to EA, and it's possible to distribute physical copies of the book, so that will sometimes be more useful. The EA Handbook might work better as a more advanced introduction, or in online circumstances (see also some of the other comment threads).

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T12:41:30.866Z · score: 1 (1 votes) · EA · GW

Thanks so much for this!

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T08:55:24.733Z · score: 2 (2 votes) · EA · GW

Thank you for pointing this out! I'll remove that reference.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T08:48:54.579Z · score: 0 (0 votes) · EA · GW

Thanks for your feedback.

Thanks for catching that mistake, we'll fix that floating period, and the other errors that others have spotted. When you say "as mentioned by others", are you only referring to the comments above, or is there some discussion of this that I'm missing? It would be good to catch all of the mistakes!

Thanks for the suggestions on content. I'll have a think about whether it would be useful to include profiles somewhere.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T08:44:21.083Z · score: 6 (6 votes) · EA · GW

As Josh says, they're slightly different resources, and I think it will depend on the person.

The EA Handbook was designed for people who have already shown some interest in, and inclination towards, EA's core principles - maybe they've been pitched by a friend, or listened to a podcast. I think Doing Good Better is likely to be better as an introduction to those core principles, whilst the Handbook is an exploration of where the principles might lead. So in terms of level, Doing Good Better feels more introductory.

In my view, the content of the EA Handbook better reflects our best current understanding of which causes to prioritize, and so I would prefer it in terms of content.

Overall, my guess is that if you've had a chance to briefly explain some of EA's core principles and intuitions, it would be best to recommend the EA Handbook.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T08:26:12.515Z · score: 1 (1 votes) · EA · GW

I encourage you to share copies online. For reasons similar to those discussed above, we didn't get consent from all of the authors to make physical copies. I have not yet asked for permission on translation, but I will do so and reply in this thread.

(This post may be interesting for you if you haven't already read it: http://effective-altruism.com/ea/1lh/why_not_to_rush_to_translate_effective_altruism/).

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T08:22:05.888Z · score: 2 (6 votes) · EA · GW
  1. Thanks for pointing that out, we'll fix that.
    • Cause areas: Unfortunately we couldn't include everything. One of the core tenets of effective altruism is making difficult calls about cause prioritization, and these will always be contentious. We had to make those calls as we decided what to include. Our current best guess is that we should be focusing our efforts on a variety of different attempts to improve the long-term future, and this explains the calls that we made in the handbook.
    • Career paths: You're right, I'll make some changes to make clear that this is cause focused, and point people to 80,000 Hours for career focused advice.
    • Sorry that the article is not so helpful for non-Americans. Unfortunately this varies quite a bit between countries, and we couldn't cover them all.
  2. That's a good point, I'll consider changing/adding those books.
  3. That's another good point. I might include another section at the end on criticisms.

Comment by maxdalton on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-02T18:21:05.838Z · score: 3 (3 votes) · EA · GW

Edit: this should be fixed now - let me know if there are still problems.

(Sorry, I don't have time to reply to all of the comments here today.) Sorry about this! Not sure what's going on here, but does this version work better for you?

[There was a link here]

I'll try to get a full fix tomorrow.

Announcing the Effective Altruism Handbook, 2nd edition

2018-05-02T07:58:24.124Z · score: 20 (26 votes)
Comment by maxdalton on How scale is often misused as a metric and how to fix it · 2018-01-30T12:14:47.925Z · score: 5 (9 votes) · EA · GW

[Writing personally] This post seems to argue (at least implicitly) that scale is a bad metric if it is the only one used to assess a cause area or intervention. I think this is clearly correct.

But I don't think anyone has used scale as the only metric: 80,000 Hours very explicitly modify it with other factors (neglectedness, solvability), to account for things like the "bottlenecking" you discuss.

There's a separate thing you could mean, which is "Scale, neglectedness, solvability is only one model for prioritisation. It's useful to have multiple different models for prioritising. One alternate model is to assess what the biggest bottleneck is for solving a problem." (Note that this does not really support the claim that scale is misused: it's just that other lenses are also useful.)

I respect the inclination to use multiple models, and I think that thinking in terms of bottlenecks is useful for e.g. organizational prioritization. I think it's harder to apply to cause prioritization because we face so many problems and potential solutions that it's hard to see what the bottlenecks are. It may be useful for prioritizing how to use resources to pursue an intervention, which seems to be how you are mostly using it in this case.

Overall, I worry that your title doesn't really reflect what you show in the text.

Comment by maxdalton on Mental Health Shallow Review · 2017-11-22T08:23:21.188Z · score: 2 (2 votes) · EA · GW

Hi cubup, just in case you're a newbie who doesn't understand why you're being downvoted: if you only want to express approval/disapproval for a post, you can use the thumbs up/down at the bottom of articles. Please try to keep comments for something more substantive. :)

Comment by maxdalton on Talent gaps from the perspective of a talent limited organization. · 2017-11-07T08:54:44.891Z · score: 3 (3 votes) · EA · GW

[My views, not my employer's.] Just a data point, but I interpreted "We have experimented with different levels of salaries between 10k and 50k USD and have not found increasing the salary increases the talent pool in the traits we would like to see more of." to mean that you had advertised salaries between 10k and 50k. I don't know if others would have misinterpreted it in the same way.

Does that statement instead mean "When we asked people who made it through to late interview stages what salary they required, candidates who asked for 50k salaries were not on average better qualified than those who asked for 10k salaries."? If it does, this suggests that of relatively well-qualified candidates who thought that CS would meet their salary requirements, salary didn't seem to affect quality between 10k and 50k. But you might be missing some better-qualified candidates who required a 45k salary, but thought that CS wouldn't be able to meet that requirement, or who felt uncomfortable asking for such a salary given that they knew other people at the organisation took lower salaries. So I worry that there will still be some effect of shrinking the applicant pool that you're not accounting for.

(Maybe you advertised the salary as a range (20k to 50k), then asked candidates where they wanted to be on the range. In that case, I think my worry is slightly weakened, but that people might still feel uncomfortable asking for the higher end, given CS's reputation for people taking low salaries.)

Comment by maxdalton on Rob Wiblin's top EconTalk episode recommendations · 2017-10-21T06:25:46.383Z · score: 3 (3 votes) · EA · GW

Watch this space. CEA is working on putting together a set of ~20 interesting articles and talks that have come out of EA in recent years.

Speaking for myself, not CEA, I'd also encourage you and others to use the EA forum as a place for linking to great EA content. I don't think we should just flood the forum with content - one of the great things about the forum is that it tends to have higher quality posts than e.g. Facebook. But linking to good content allows both for curation and discussion.

Comment by maxdalton on Effective Altruism Grants project update · 2017-10-09T17:15:53.909Z · score: 2 (2 votes) · EA · GW

With regard to animal welfare, we passed several applications that we found promising but couldn't fully assess on to the Open Philanthropy Project, so we may eventually facilitate more grants in this area.

I would not have predicted such an extreme resource split going in: we received fewer high-quality, EA-aligned applications in the global development space than we expected. However, CEA is currently prioritising work on improving the long-term future, so I would have expected the EA community and long-term future categories to receive more funding than global development or animal welfare.

Comment by maxdalton on EAGx Relaunch · 2017-07-25T07:44:32.263Z · score: 10 (10 votes) · EA · GW

The main reason that we could not interview more people for EA Grants at this stage is that we had a limited amount of staff time to conduct interviews, not that we faced funding constraints.

I think you are right that the number of excellent EA Grants proposals suggests that small projects are currently often restricted by access to funding. However, I think that this is less because there is not enough money, and more because there aren't good mechanisms for matching small projects up with money. You could say the space is funding-mechanism-constrained. Of course, EA Grants is trying to address this. This was a smaller trial round, to see how promising the project is, and to work out how to run it well. We will reassess after we've completed this round, but I think that it's very possible that we will scale the program up to begin to address this issue.

[I'm working for CEA on EA Grants.]

Comment by maxdalton on EAGx Relaunch · 2017-07-25T07:26:32.933Z · score: 4 (4 votes) · EA · GW

[I work for CEA on EA Grants.] We received over 700 applications, and we only offered interviews to roughly the top 10% of applicants. (We'll do a more detailed writeup once the process is over.)

Comment by maxdalton on Announcing Effective Altruism Grants · 2017-06-13T13:58:46.537Z · score: 3 (3 votes) · EA · GW

You should write it briefly in the application. As the form mentions, the character limit is deliberately strict to encourage you to focus on the most important issues.

Comment by maxdalton on Announcing Effective Altruism Grants · 2017-06-13T11:14:21.230Z · score: 1 (1 votes) · EA · GW

Yes, this project is fully funded by donations that a large donor gave for this purpose.

Comment by maxdalton on Announcing Effective Altruism Grants · 2017-06-12T07:21:16.263Z · score: 1 (1 votes) · EA · GW

Thanks! Please use the resume.

Announcing Effective Altruism Grants

2017-06-09T10:28:15.441Z · score: 20 (22 votes)

Returns Functions and Funding Gaps

2017-05-23T08:22:44.935Z · score: 15 (15 votes)

Should we give to SCI or fund research into schistosomiasis?

2015-09-23T15:00:36.615Z · score: 10 (10 votes)