Open Thread #40

post by Peter Wildeford (Peter_Hurford) · 2018-07-08T17:51:47.777Z · EA · GW · Legacy · 88 comments

The last Open Thread was in October 2017, so I thought we were overdue for a new one.

Use this thread to post things that are awesome, but not awesome enough to be full posts. This is also a great place to post if you don't have enough karma to post on the main forum.

Consider giving your post a brief title to improve readability.


Comments sorted by top scores.

comment by remmelt · 2018-07-08T20:24:24.633Z · EA(p) · GW(p)

The EA Forum Needs More Sub-Forums

EDIT: please go to the recent announcement post on the new EA Forum to comment

The traditional discussion forum has sub-forums and sub-sub-forums where people in communities can discuss areas that they’re particularly interested in. The EA Forum doesn’t have these, and this makes it hard to filter for what you’re looking for.

On Facebook, on the other hand, there are hundreds of groups based around different cause areas, local groups and organisations, and subpopulations. There it’s also hard to start rigorous discussions around certain topics because many groups are inactive and poorly moderated.

Then there are lots of other small communication platforms launched by organisations that range in their accessibility, quality standards, and moderation. It all kind of works but it’s messy and hard to sort through.

It’s hard to start productive conversations on specialised niche topics with international people because:

  • 1) Relevant people won’t find you easily within the mass of posts

  • 2) You’ll contribute to that mass and thus distract everyone else.

Perhaps this is a reason why some posts on specific topics get only a few comments even though the quality of the insights and writing seems high.

Examples of posts that we’re missing out on now:

  • Local group organiser Kate tried X career workshop format X times and found that it underperformed other formats

  • Private donor Bob dug into the documents of start-up vaccination charity X and wants to share preliminary findings with other donors in the global poverty space

  • Machine learning student Jenna would like to ask some specific questions on how the deep reinforcement learning algorithm of AlphaGo functions

  • The leader of animal welfare advocacy org X would like to share some local engagement statistics on vegan flyering and 3D headset demos before sending them off in a more polished form to ACE.

I’d be interested in any other examples you have. :-)

What to do about it?

I don’t have any clear solutions in mind for this (perhaps this could be made a key focus in the transition to using the forum architecture of LessWrong 2.0). I just want to plant a flag here: given how much the community has grown over the past 3 years, people should start specialising more in the work they do, and our current platforms are woefully behind in facilitating discussions around that.

It would be impossible for one forum to handle all this adequately, and it seems useful for people to experiment with different interfaces, communication processes, and guidelines. Nevertheless, our current state seems far from optimal. I think some people should consider tracking down and paying additional thoughtful, capable web developers to adjust the forum to our changing needs.

UPDATE: After reading @John Maxwell IV's comments below, I've changed my mind from a naive 'we should overhaul the entire system' view to 'we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do' view.

Replies from: John_Maxwell_IV, MarekDuda, Denkenberger, Julia_Wise, saulius
comment by John_Maxwell (John_Maxwell_IV) · 2018-07-13T10:02:35.845Z · EA(p) · GW(p)

This sounds like it might be a bad idea to me. I just wrote a long comment about the difficulty the EA community has in establishing Schelling points. This forum strikes me as one of the few successful Schelling points in EA. I worry that if subforums are done in a careless way, dividing a single reasonably high-traffic forum into lots of smaller low-traffic ones, one of the few Schelling points we have will be destroyed.

Replies from: remmelt, remmelt
comment by remmelt · 2018-07-17T10:58:44.346Z · EA(p) · GW(p)

Another problem would be if creating extra sub-forums resulted in people splitting their conversations up further between those and the Facebook and Google groups. It reminds me of the XKCD comic on the problem of creating a new universal standard.

I think you made a great point in your comment that people need to do ‘intensive networking and find compromises’ before attempting to establish new Schelling points.

comment by remmelt · 2018-07-17T10:32:57.474Z · EA(p) · GW(p)

Hmm, do you think Schelling points would still be destroyed if it were just clearer where people could meet to discuss certain specific topics, besides a ‘common space’ where people could post on topics that are relevant to many people?

I find the comment you link to really insightful, but I doubt whether it neatly applies here. Personally, the problem I see is that we should have more well-defined Schelling points as the community grows, but currently the EA Forum is a vague place to go ‘to read and write posts on EA’. Other places for gathering to talk about more specific topics are widely dispersed over the internet – they’re both hard to find and disconnected from each other (i.e. it’s hard to zoom in and out of topics, as well as explore parallel topics that one can work on and discuss).

I think you’re right that you don’t want to accidentally kill off a communication platform that actually kind of works. So perhaps a way of dealing with this is to maintain the current EA Forum structure but then also test giving groups of people the ability to start sub-forums where they can coordinate around more specific Schelling points on ethical views, problem areas, interventions, projects, roles, etc. – conversations that would add noise for others if they did it on the main forum instead.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2018-07-17T22:43:12.389Z · EA(p) · GW(p)

Yeah. I feel like the EA community already has a discussion platform with very granular topic divisions in Facebook, and yet here we are. I'm not exactly sure why the EA forum seems to me like it's working better than Facebook, but I figure if it's not broken, don't fix it. Also, I think something like the EA Forum is inherently a bit more fragile than Facebook... any Facebook group is going to benefit from Facebook's ubiquity as a communication tool/online distraction.

You made a list of posts that we’re missing out on now... those kinda seem like the sort of posts I see on EA facebook groups, but maybe you disagree?

Replies from: remmelt
comment by remmelt · 2018-07-18T21:39:56.058Z · EA(p) · GW(p)

Could you give a few reasons why the EA Forum seems to work better than the Facebook groups, in your view?

The example posts I gave are on the extreme end of the kind of granularity I'd personally like to see more of (I deliberately made them extra specific to make a clear case). I agree those kinds of posts tend to show up more in the Facebook groups (though the writing tends to be short there). Then there seems to be stuff in the middle that might not fit well anywhere.

I feel now that the sub-forum approach should be explored much more carefully than I did when I wrote the comment at the top. In my opinion, we (or rather, Marek :-) should definitely still run contained experiments on this, because on our current platform it's too hard to gather around topics narrower than a general interest in EA work (maybe even test a hybrid model that allows for crossover between the forum and the Facebook groups).

So I've changed my mind from a naive 'we should overhaul the entire system' view to 'we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do' view.

Thanks for your points!

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2018-07-20T02:48:53.316Z · EA(p) · GW(p)

Could you give a few reasons why the EA Forum seems to work better than the Facebook groups, in your view?

Lol, like I said, I'm not completely sure. Posts & comments seem to go into greater depth, posts sometimes get referenced long after they are written?

I'm not certain subfora are a terrible idea, I just wanted this risk to be on peoples' radar. One possible compromise is to let people tag their posts (perhaps restricted to a set of tags chosen by moderators) and allow users to subscribe to RSS feeds associated with particular tags.
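The tags-plus-feeds compromise needs very little machinery. A minimal sketch in Python (tag names, post fields, and the moderator-approved tag set are all invented for illustration, not any actual forum's API):

```python
# Sketch of the compromise: posts carry moderator-approved tags, and each
# tag exposes its own feed that users can subscribe to.

APPROVED_TAGS = {"ai-policy", "community-building", "global-health"}

def tag_feed(posts, tag):
    """Return the posts carrying `tag`, newest first, for a per-tag feed."""
    if tag not in APPROVED_TAGS:
        raise ValueError(f"unknown tag: {tag}")
    tagged = [p for p in posts if tag in p["tags"]]
    return sorted(tagged, key=lambda p: p["date"], reverse=True)

posts = [
    {"title": "Local group retrospective", "tags": {"community-building"}, "date": "2018-07-10"},
    {"title": "AI policy reading list", "tags": {"ai-policy"}, "date": "2018-07-12"},
    {"title": "Workshop formats compared", "tags": {"community-building"}, "date": "2018-07-14"},
]

feed = tag_feed(posts, "community-building")
```

The key design point is that the main forum stays a single stream (preserving the Schelling point) while filtering happens only on the reader's side.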

comment by MarekDuda · 2018-07-11T15:08:14.210Z · EA(p) · GW(p)

As Julia mentions below, over the last few months we have been putting a lot of thought into how to improve the Forum ahead of its re-launch later this year. The ‘sub-forum model’ was also what we arrived at as a desirable potential vision.

Because we hope to relaunch the Forum in a relatively short timeframe, and because the LW2 codebase is available for us to work with, our initial goal is to release a direct clone of LW2, rebranded for use as the EA Forum 2.0. The LW2 format already addresses some of the issues and feedback we have had about the current functionality. However, over the medium term (after we release the new version in the next few months), we expect to do further work on implementing various functionality improvements, including investigating the viability of a sub-forum model.

We will be publishing an official announcement regarding the EA Forum relaunch in the next few days, and I would hope we could use the comments section there as the main Schelling point for user feedback and ideas on what we should focus on after the initial release.

comment by Denkenberger · 2018-07-11T13:46:23.180Z · EA(p) · GW(p)

I like that the forum is not sorted so one can keep abreast of the major developments and debates in all of EA. I don't think there is so much content as to be overwhelming.

comment by Julia_Wise · 2018-07-10T15:34:40.192Z · EA(p) · GW(p)

CEA is thinking along these same lines for the new version of the Forum! The project manager is planning to reply with more detail in the next day or so.

Replies from: remmelt
comment by remmelt · 2018-07-10T16:59:51.235Z · EA(p) · GW(p)

Wow, nice! Would love to learn more.

comment by saulius · 2018-07-08T23:25:32.070Z · EA(p) · GW(p)

It seems that what we need in this forum is categories/subforums. What we currently have is one subreddit. Conceptually, there’s little difference between reddit and this forum; people just use them differently. What I think we need is a whole new website, like reddit, that would have subreddits like “AI policy” and “Community building”. Your homepage would be customised based on the subreddits you subscribed to. Maybe there could even be subreddits like "Newcomer questions" and "Editing & Review" on the same website that do not contain novel thoughts like posts on this forum. And there would be a subreddit “Old EA forum” that would contain all posts in the current forum but no new posts. Perhaps that is too complicated; maybe we just need a few categories that you could filter by (and the webpage would remember the user’s filter). I haven’t thought much about this; these are just my first thoughts.

comment by Scott_Alexander · 2018-07-13T02:20:50.616Z · EA(p) · GW(p)

Vox is looking for EA journalists. This is an opportunity to publicize EA and help shape its public perception. Their ad hints that they want people who are already in the movement, so take a look if you have any writing- or journalism-related skills.

Replies from: Ro-bot-tens
comment by Ro-bot-tens · 2018-07-25T02:10:22.260Z · EA(p) · GW(p)

I think this is so huge. I was going to post it but saw you got to it first.

comment by [deleted] · 2018-07-09T13:35:34.155Z · EA(p) · GW(p)

Forum searching tip

Searching the forum by typing " [your search query]" into the URL bar gives you more results - including some very relevant ones - than using the built-in search bar on the top-right of the forum itself (at least for me).

comment by Milan_Griffes · 2018-07-10T15:15:37.863Z · EA(p) · GW(p)

Why I'm skeptical of cost-effectiveness analysis

Reposting as comment because mods told me this wasn't thorough enough to be a post.


  • The entire course of the future matters (more)
  • Present-day interventions will bear on the entire course of the future, out to the far future
  • The effects of present-day interventions on far-future outcomes are very hard to predict
  • Any model of an intervention's effectiveness that doesn't include far-future effects isn't taking into account the bulk of the effects of the intervention
  • Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately
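The tension in the last two bullets can be made concrete with a toy simulation (all numbers invented): a tightly estimated near-term effect is swamped once a high-variance far-future term enters the model.

```python
# Toy model: total impact = near-term effect (well estimated, tight spread)
# + far-future effect (centered on zero, huge spread). The numbers are
# illustrative only.
import random

random.seed(0)

def simulate(n=10_000, far_future_sd=0.0):
    """Monte Carlo the total impact; return (mean, standard deviation)."""
    totals = [random.gauss(10, 1) + random.gauss(0, far_future_sd) for _ in range(n)]
    mean = sum(totals) / n
    sd = (sum((t - mean) ** 2 for t in totals) / n) ** 0.5
    return mean, sd

near_only_mean, near_only_sd = simulate(far_future_sd=0)    # far future excluded
with_ff_mean, with_ff_sd = simulate(far_future_sd=100)      # far future included
# The near-term-only model is precise (sd ~1) but ignores most of the impact;
# including the far-future term blows the spread up (sd ~100), so even the
# sign of the total becomes essentially unknowable.
```

This is only an illustration of the dilemma, not an argument that the far-future variance really is this large for any particular intervention.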
Replies from: Peter_Hurford, John_Maxwell_IV, saulius
comment by Peter Wildeford (Peter_Hurford) · 2018-07-10T19:14:20.652Z · EA(p) · GW(p)

I'm glad you reposted this.

Any model of an intervention's effectiveness that doesn't include far-future effects isn't taking into account the bulk of the effects of the intervention

I'd argue we don't necessarily know yet whether this is true. It may well be true, but it may well be false.

Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately

This doesn't account for the fact that there's still gradients of relative believability here, even if the absolute believability is low. There's also an interesting meta-question of what to do when under various levels and kinds of uncertainty (and getting a better handle just how bad the uncertainty is).

Replies from: Milan_Griffes, Milan_Griffes
comment by Milan_Griffes · 2018-07-11T00:16:23.113Z · EA(p) · GW(p)

absolute believability is low. There's also an interesting meta-question...

I think the crux here is that absolute believability is low, such that you can't really trust the output of your analysis.

Agree the meta-question is interesting :-)

comment by Milan_Griffes · 2018-07-11T00:15:00.397Z · EA(p) · GW(p)

I'd argue we don't necessarily know yet whether this is true. It may well be true, but it may well be false.

I think it's almost certainly true (confidence ~90%) that far future effects account for the bulk of impact for at least a substantial minority of interventions (like at least 20%? But very difficult to quantify believably).

Also seems almost certainly true that we don't know for which interventions far future effects account for the bulk of impact.

Replies from: Peter_Hurford, Peter_Hurford
comment by Peter Wildeford (Peter_Hurford) · 2018-07-11T01:54:55.646Z · EA(p) · GW(p)

Separately, I feel pretty confident that, taking into account all the possible long-term effects I can think of (population ethics, meat eating, economic development, differential technological development), the effect of AMF is still net positive. I wonder if you really can model all these things? I previously wrote about five ways to handle flow-through effects in analysis and like this kind of weighted quantitative modeling.

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-11T04:17:51.088Z · EA(p) · GW(p)

I suspect it's basically impossible to model all the relevant far-future considerations in a way that feels believable (i.e. high confidence that the sign of all considerations is correct, plus high confidence that you're not missing anything crucial).

...the effect of AMF is still net positive.

I share this intuition, but "still net positive" is a long way off from "most cost-effective."

AMF has received so much scrutiny because it's a contender for the most cost-effective way to give money – I'm skeptical we can make believable claims about cost-effect when we take the far future into account.

I'm more bullish about assessing the sign of interventions while taking the far future into account, though that still feels fraught.

comment by Peter Wildeford (Peter_Hurford) · 2018-07-11T01:51:52.041Z · EA(p) · GW(p)

I recently played two different video games with heavy time-travel elements. One of the games heavily implied that choosing differently made small differences for a little while but ultimately didn't matter in the grand scheme of things. The other game heavily implied that even the smallest of changes could butterfly effect into dramatically different changes. I kind of find both intuitions plausible so I'm just pretty confused about how confused I should be.

I wish there was a way to empirically test this, other than with time travel.

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-11T04:12:44.890Z · EA(p) · GW(p)

A lot of big events in my life have had things that seemed trivial in the moment in the causal chains leading up to them. (And the big events appear contingent on the trivial-seeming parts of the chain.)

I think this is the case for a lot of stuff in my friends' lives as well, and appears to happen a lot in history too.

It's not the far future, but the experience of regularly having trivial-seeming things turn out to be important later on has built my intuition here.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2018-07-16T00:47:29.831Z · EA(p) · GW(p)

It's surely true that trivial-seeming events sometimes end up being pivotal. But it sounds like you are making a much stronger claim: That there's no signal whatsoever and it's all noise. I think this is pretty unlikely. Humans evolved intelligence because the world has predictable aspects to it. Using science, we've managed to document regularities in how the world works. It's true that as you move "up the stack", say from physics to macroeconomics, you see the signal decrease and the noise increase. But the claim that there are no regularities whatsoever seems like a really strong claim that needs a lot more justification.

Anyway, insofar as this is relevant to EA, I tend to agree with Dwight Eisenhower: Plans are useless, but planning is indispensable.

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-16T15:20:02.603Z · EA(p) · GW(p)

...you are making a much stronger claim: That there's no signal whatsoever and it's all noise. I think this is pretty unlikely.

I'm making the claim that with regard to the far future, it's mostly noise and very little signal.

I think there's some signal re: the far future. E.g. probably true that fewer nuclear weapons on the planet today is better for very distant outcomes.

But I don't think most things are like this re: the far future.

I think the signal:noise ratio is much better in other domains.

Humans evolved intelligence because the world has predictable aspects to it.

I don't know very much about evolution, but I suspect that humans evolved the ability to make accurate predictions on short time horizons (i.e. 40 years or less).

comment by John_Maxwell (John_Maxwell_IV) · 2018-07-13T19:40:21.957Z · EA(p) · GW(p)

Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately

"Anything you need to quantify can be measured in some way that is superior to not measuring it at all."

Replies from: Milan_Griffes, saulius
comment by Milan_Griffes · 2018-07-15T16:39:56.322Z · EA(p) · GW(p)

My post is basically contesting the claim that any measurement is superior to no measurement in all domains.

Replies from: WillPearson
comment by WillPearson · 2018-07-15T18:55:10.208Z · EA(p) · GW(p)

It might be worth looking at the domains where it might be less worthwhile (formal chaotic systems, or systems with many sign flipping crucial considerations). If you can show that trying to make cost-effectiveness based decisions in such environments is not worth it, that might strengthen your case.

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-15T19:14:45.896Z · EA(p) · GW(p)

...systems with many sign flipping crucial considerations

Yeah, I'm continuing to think about this, and would like to get more specific about which domains are most amenable to cost-effectiveness analysis (some related thinking here).

I think it's very hard to identify which domains have the most crucial considerations, because such considerations are unveiled over long time frames.

A hypothesis that seems plausible: cost-effectiveness is good for deciding about which interventions to focus on within a given domain (e.g. "want to best reduce worldwide poverty in the next 20 years? These interventions should yield the biggest bang for buck...")

But not so good for deciding about which domain to focus on, if you're trying to select the domain that most helps the world over the entire course of the future. For that, comparing theories of change probably works better.

Replies from: markus_over
comment by markus_over · 2018-07-19T10:06:13.316Z · EA(p) · GW(p)

Aren't there interventions that could be considered (with relatively high probability) robustly positive with regard to the long-term future? Somewhat more abstract things such as "increasing empathy" or "improving human rationality" come to mind, though I guess one could argue about how they might have a negative impact on the future in some plausible way. Another one is certainly "reducing existential risks" – unless you weigh suffering risks so heavily that it's unclear whether preventing existential risk is good or bad in the first place.

Regarding such causes – given we can identify robust ones – it may then still be valuable to analyze cost-effectiveness, as there would likely be a (high?) correlation between cost-effectiveness and positive impact on the future.

If you were to agree with that, then maybe we could reframe your argument from "cost-effectiveness may be of low value" to "cause areas outside of far future considerations are overrated (and hence their cost-effectiveness is measured in a way that is of little use)" or something like that.

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-19T14:08:39.283Z · EA(p) · GW(p)

Aren't there interventions that could be considered (with relatively high probability) robustly positive with regards to the long term future?

I agree that interventions like this exist, and I think we identify them by making theoretical cases for & against.

Regarding such causes - given we can identify robust ones - it then may still be valuable to analyze cost-effectiveness

As above, I think cost-effectiveness can be useful for determining which intervention to focus on within a specific domain (e.g. "which intervention most increases empathy?" could benefit from a cost-effect analysis).

But for questions about which domain to focus on, I don't think cost-effectiveness gives much lift (e.g. "is it better to focus on increasing empathy or improving nuclear security?" is the kind of question that seems intractable to cost-effect analysis).

comment by saulius · 2018-07-15T10:44:53.030Z · EA(p) · GW(p)

Another way of saying it is “Sometimes pulling numbers out of your arse and using them to make a decision is better than pulling a decision out of your arse.” It's taken from which is relevant here.

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-15T16:38:24.470Z · EA(p) · GW(p)

Sure, but I don't think those are the only options.

Possible alternative option: come up with a granular theory of change; use that theory to inform decision-making.

I think this is basically what MIRI does. As far as I know, MIRI didn't use cost-effectiveness analysis to decide on its research agenda (apart from very zoomed-out astronomical waste considerations).

Instead, it used a chain of theoretical reasoning to arrive at the intervention it's focusing on.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2018-07-16T00:30:48.435Z · EA(p) · GW(p)

I'm not sure I understand the distinction you're making. In what sense is this compatible with your contention that "Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately"? Is this "chain of theoretical reasoning" a "model that includes far-future effects"?

We do have a fair amount of documentation regarding successful forecasters, see e.g. the book Superforecasting. The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models (hedgehogs vs foxes, to use Phil Tetlock's terminology). Ensembles of models are also essential for winning machine learning competitions. (A big part of the reason I am studying machine learning, aside from AI safety, is its relevance to forecasting. Several of the top forecasters on Metaculus seem to be stats/ML folks, which makes sense because stats/ML is the closest thing we have to "the math of forecasting".)
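The ensemble point admits a small worked example (the forecast numbers are invented): averaging probability forecasts can never give a worse Brier score than the forecasters' average score, by convexity of squared error.

```python
# Minimal sketch of the fox-style "ensemble of models" idea: combine several
# forecasts by simple averaging, the most basic ensembling method.
def ensemble_forecast(forecasts):
    """Combine probability forecasts by simple averaging."""
    return sum(forecasts) / len(forecasts)

def brier(p, outcome):
    """Brier score for one event: lower is better; outcome is 0 or 1."""
    return (p - outcome) ** 2

# Three hypothetical forecasters on an event that occurred (outcome = 1):
forecasts = [0.9, 0.6, 0.75]
combined = ensemble_forecast(forecasts)

avg_individual_score = sum(brier(p, 1) for p in forecasts) / len(forecasts)
ensemble_score = brier(combined, 1)
# Jensen's inequality guarantees ensemble_score <= avg_individual_score,
# which is one reason averaging diverse models tends to pay off.
```

In practice competition-winning ensembles weight and stack diverse models rather than averaging uniformly, but the convexity argument is the core of why combining helps at all.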

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-16T15:12:24.222Z · EA(p) · GW(p)

I'm not sure I understand the distinction you're making...

I'm trying to distinguish between cost-effectiveness analyses (quantitative work that takes a bunch of inputs and arrives at an output, usually in the form of a best-guess cost-per-outcome), and theoretical reasoning (often qualitative, doesn't arrive at a numerical cost-per-outcome, instead arrives at something like "...and so this thing is probably best").

Perhaps all theoretical reasoning is just a kind of imprecise cost-effect analysis, but I think they're actually using pretty different mental processes.

The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models...

Sure, but forecasters are working with pretty tight time horizons. I've never heard of a forecaster making predictions about what will happen 1000 years from now. (And even if one did, what could we make of such a prediction?)

My argument is that what we care about (the entire course of the future) extends far beyond what we can predict (the next few years, perhaps the next few decades).

comment by saulius · 2018-07-15T11:15:37.202Z · EA(p) · GW(p)

I wanted to ask what kind of conclusions this line of reasoning leads you to make. But am I right to think that this is a very short summary of your series of posts exploring consequentialist cluelessness? In that case, the answer is in the last post of the series, right?

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-15T16:33:10.839Z · EA(p) · GW(p)

Yeah, my conclusions here definitely overlap with the cluelessness stuff. Here I'm thinking specifically about cost-effectiveness.

My main takeaway so far: cost-effect estimates should be weighted less & theoretical models of change should be weighted more when deciding what interventions have the most impact.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2018-07-16T01:02:38.912Z · EA(p) · GW(p)

Do you think you're in significant disagreement with this Givewell blog post?

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-16T05:39:54.053Z · EA(p) · GW(p)

I basically agree with that post, though GiveWell's cost-effectiveness work is about comparing different interventions within the domain of improving global health & development over the next 20-50 years.

As far as I know, GiveWell hasn't used cost-effectiveness analysis to determine that global health & development is a domain worth focusing on (perhaps they did some of this early on, before far-future considerations were salient).

The complication I'm pointing at arises when cost-effectiveness is used to compare across very different domains.

comment by Farhan · 2018-07-21T17:55:00.661Z · EA(p) · GW(p)

Just a newbie exploring the forum!

Hi, I'm new here!

By the looks of it, there is SO much to learn about effective altruism, and I absolutely love that. I've really come to accept learning as a never-ending process, and it's liberating to look at learning that way.

I'm hoping to earn some Karma points so I can make my own posts here and interact with members of this lovely forum, to continuously learn and maybe contribute sometimes along the way.

I've totally bought into the concepts of effective altruism, with the ideals of working as a community to edge closer to a better society resonating with me. I'm so excited about EA that I've decided that I want to help host an Effective Altruism Global X in my home country, Bangladesh.

I know for a fact that not many people know about effective altruism in Bangladesh. I've seen there is a group from Bangladesh listed on the Effective Altruism Hub and I've emailed them to get in touch, but I have not detected much activity from them prior to sending the email.

I just feel that concepts such as "Earning To Give" and "Cause Neutrality" are ideas that more people should know about. So many people do not fully understand the potential for impact each individual holds, they underestimate their potential to do good and do not invest their time in finding out what they can do with their career to have more impact. So many incredibly intelligent people, due to lack of information that could have been easily available, prematurely decide that earning money is the best they can do with their lives.

I absolutely believe that coming across 80,000 Hours was one of the luckiest moments in my life. The way they use scientific evidence to make a person understand the sheer capacity in one's hands, whether through donating effectively, advocacy, or direct work as explained in parts 2 and 3 of the career guide, inspires people to go out there and dedicate their lives to learning and doing good better.

Bangladesh, a lower-middle income country, is an area that desperately needs more effective altruists. As the career guide also notes, proper problem areas should be large in scale, neglected, and solvable. Dhaka, the capital of Bangladesh, is the most overcrowded city in the world according to a 2017 piece in the Telegraph, and 5th in the world in population density according to Wikipedia. Dhaka is a regular in the Economist Intelligence Unit's annual rankings of the "Least Liveable Cities" in the world; it came 2nd in 2014. Whether the misfortune of the people of Bangladesh is due to a weak government swayed by corruption or a lack of unity among the people, one thing is clear: spreading the ideals of effective altruism has potential for massive impact in Bangladesh. Even if an effective altruist group exists in Dhaka, it has not been active, and there is a huge opportunity to turn some heads and galvanize the doers of society with the effective altruism movement, so that we can stride towards the long-overdue development of our society.

I'm very excited to meet more wonderful people, and to learn many new things. It keeps me awake at night to think of what a success an effective altruist community fostered in Bangladesh could prove to be. I have not yet applied to be an organizer, because I thought I should maybe get at least a little bit involved with the community first.

Lots of things to look forward to, and that is always how life should be.



Replies from: Peter_Hurford
comment by Peter Wildeford (Peter_Hurford) · 2018-07-22T21:55:09.891Z · EA(p) · GW(p)

This is really sweet to hear. :) I wish you the best and hope you find a lot in the effective altruism community.

Replies from: Farhan
comment by Farhan · 2018-07-29T04:32:40.595Z · EA(p) · GW(p)

Thank you so much Peter! :)

comment by nongiga · 2018-07-24T20:04:19.318Z · EA(p) · GW(p)

I was terrified of pursuing an EA career

For 3 years after joining EA I was still set on going to medical school. I knew I could do more, but I was just terrified of switching. Even when an opportunity was presented to me, I was very torn between pursuing it and staying in my comfort zone. Now I'm having the best summer of my life in a biosecurity internship. I'm more motivated, I'm more productive, I'm going on more adventures, and I have more and better connections than before.

EA was amazing in that this network made it easier to go into an effective field than any other option I had, and for the first time in my life I'm doing something I'm passionate about.

So if any of you reading this are on the fence about a big career change, just know that it might be harder than your current plan, but it might also be easier!

comment by RandomEA · 2018-07-09T00:34:23.004Z · EA(p) · GW(p)

Requesting Help for a Compilation of Top EA Facebook Posts

In December 2015, Claire Zabel posted links to all posts in the EA Facebook group with 50 or more likes or comments. I think it's time for a similar post. From what I understand, the most liked and most commented on posts can be found using the "My groups dashboard" feature on Facebook. Unfortunately, I do not have a Facebook account. I am posting in this thread to request that someone with a Facebook account post the most liked and most commented on posts as a reply to this comment. I can then go through each of them and extract the key information about each (see below) so people can see if there are any they want to read without clicking every single one. I would then post this information as its own forum post. Alternatively, you can do the extracting yourself and post it as a forum post yourself.


  • Author: Initials are used to prevent future employers from easily associating the post with the author (unless the person is a prominent EA who is likely to remain in EA, in which case the full name is used).
  • Year: This can give people context as various ideas have become more or less accepted over time.
  • Text: If the full text is too long, an excerpt is chosen that encapsulates the post.
  • URL: This allows people to read the post for themselves.
  • Link Title: This helps people decide whether to click on the link.
  • Link Author: This is included when the identity of the author is relevant (generally only when the author is an EA).
  • Link URL: This allows people to go directly to the link without having to go to the post first.

You can see examples of this formatting below.

Posts with the Most Likes as of December 2015 (based on Claire Zabel's comment)






Posts with the Most Comments as of December 2015 (based on Claire Zabel's comment)

1) Unable to access





Replies from: joel_duplicate0.5816669276037654, MichaelPlant
comment by joel_duplicate0.5816669276037654 · 2018-08-13T22:00:36.106Z · EA(p) · GW(p)

As a learning exercise, I've been working on a web scraper to compile this info from the FB group.

Doing this in spare time, so it will likely be another week or two before I have a post put together, but posting here in the meantime as an FYI and to potentially gain 5 karma so I can post once it's ready :)

Replies from: joel_duplicate0.5816669276037654
comment by joel_duplicate0.5816669276037654 · 2018-08-14T04:24:34.388Z · EA(p) · GW(p)

UPDATE: After scraping the initial post data, there are 200+ posts with 50 or more likes. (Obviously the group has gotten quite a bit more active over the past couple years!)

Not sure if there's a maximum length for a forum post, but regardless, this strikes me as probably too many "top posts" to feature. Would it be better to limit it to the top 50 posts? Top 100? Welcome any input on this.

Replies from: Peter_Hurford, joel_duplicate0.5816669276037654
comment by Peter Wildeford (Peter_Hurford) · 2018-08-14T05:50:57.130Z · EA(p) · GW(p)

Top 50 sounds good to me. Thanks for doing this.

comment by MichaelPlant · 2018-07-09T09:39:58.101Z · EA(p) · GW(p)

It seems you need the Grytics tool to do this; I can't work out how to do it in Facebook itself. I would also be interested to see this.

comment by RandomEA · 2018-07-12T16:03:49.203Z · EA(p) · GW(p)

Should EAs work on reducing food waste?

According to USDA statistics, a significant percent of food purchased by consumers goes uneaten (15% of chicken, 35% of turkey, 20% of beef, 29% of pork, and 23% of the edible portion of eggs). If consumers wasted less food, they would purchase less meat/eggs/dairy, which would lead to fewer animals suffering on factory farms.

One factor that could be driving food waste is confusing date labeling. For example, an egg container may have a 'Sell By' date meant to help retailers manage their inventory, but a consumer who sees the label and date some time after purchasing might throw the eggs away thinking they are no longer safe to eat. One possible solution is a federal labeling law that limits producers to listing the freshness date and the expiration date (and requires them to use specific easy to understand phrases when listing either). However, there are several reasons that working towards such a law may be a bad use of resources. First, legal change may be unnecessary as it appears the food industry may voluntarily adopt such a system. Second, it's unclear how much labeling reform reduces food waste (I was unable to find any studies in my brief search). Third, it may be that the primary benefits of reducing animal product consumption are the long term effects, in which case reductions in consumption driven by factors other than concern for animals may be much less impactful. Of course, there may also be other ways to reduce food waste (to which the first two concerns would not apply).

Replies from: saulius, DavidNash
comment by saulius · 2018-07-17T00:02:27.185Z · EA(p) · GW(p)

Interesting. It's strange that I've never heard anyone talk about decreasing animal suffering by decreasing food waste before. I wonder if anyone has investigated such possibilities; I couldn't find anything by googling. I happened to talk with an ACE researcher today and he didn't know of any such research either. I think it's possible that there are some effective interventions in this area, because there are many ways to reduce waste. For example:

  • Vacuum-packaging meat products, which can extend the shelf life of some products by up to 9 days compared to conventional packaging
  • Getting rid of ‘buy one get one free’ promotions at groceries
  • Helping with redistribution of surplus food

It can be complicated though. For example, it's possible some people don't buy eggs because they look at the "Sell by" date and think that they will expire soon.

I wonder what the next steps could be to increase the probability that someone looks into this. It could be added to but that would have a low probability of changing anything. The EA Animal Welfare Fund may want to fund such research if there were someone to do it, but a more concrete topic would be needed.

comment by DavidNash · 2018-07-13T09:10:54.482Z · EA(p) · GW(p)

I think there was some data that showed the majority of waste happened before a product got to a supermarket, and that switching to plant based/clean meat would be more efficient than cutting waste between shop and bin.

On page 37 of this report it says, for poultry, 11% of feed energy gets converted into human food.

If 15% of that 11% gets wasted, that seems less of a priority than the 89% that is lost in conversion, although it may be a more tractable and neglected area to work on.
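A quick back-of-the-envelope sketch of this comparison, taking the report's 11% conversion figure and the 15% consumer-waste figure for chicken at face value:

```python
feed_energy = 1.0                # normalize total poultry feed energy to 1 unit
conversion_efficiency = 0.11     # share of feed energy that becomes human food (report figure)
retail_waste = 0.15              # share of purchased chicken wasted by consumers (USDA figure)

# Energy lost converting feed into meat vs. energy wasted after retail
lost_in_conversion = feed_energy * (1 - conversion_efficiency)            # 0.89
wasted_after_retail = feed_energy * conversion_efficiency * retail_waste  # 0.0165

print(lost_in_conversion / wasted_after_retail)  # conversion losses are ~54x larger
```

On these numbers, conversion losses dwarf post-retail waste, though as noted above tractability and neglectedness could still favor waste reduction.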

Replies from: RandomEA
comment by RandomEA · 2018-07-13T14:00:28.315Z · EA(p) · GW(p)

My comment was concerned with the impact of food waste on the number of animals suffering on factory farms. The report you cite seems to be discussing feed that is 'wasted' in the conversion process. But since this feed is likely to be mostly plants, improving the conversion ratio would probably not have a large effect on the number of animals on factory farms. (If anything, improving the conversion ratio might increase the number of factory farmed animals by reducing how much it costs to raise animals.)

comment by vegjosh · 2018-08-07T22:15:10.534Z · EA(p) · GW(p)

Seeking 5 karma so I can post about the recent WASR grant competition! :)

comment by Raltune · 2018-07-14T22:36:07.523Z · EA(p) · GW(p)

New here. Hoping to get some karma points so that I can ask specific questions for the local community development project I have planned.

I just finished reading "The Nobel Laureates' Guide To The Smartest Targets For The World" and cannot find the specific methods that can be employed to achieve the proposed targets. For example, with regard to coral reef loss: if the research is accurate and there is a $24 economic return for every $1 spent, through what organizations or processes can this be achieved? The specific dollar figure must imply that the process is known. Is there a separate resource of footnotes that describes how to achieve those returns? The short book was very interesting as a navigation tool towards the initiatives that may have the greatest economic return and resultant prosperity for humankind.

Thanks for any insights if you get the chance. -Tom

Replies from: PeterMcCluskey, Milan_Griffes
comment by PeterMcCluskey · 2018-07-17T15:13:02.230Z · EA(p) · GW(p)

I don't trust the author (Lomborg), based on the exaggerations I found in his book Cool It.

I reviewed that book here.

comment by Milan_Griffes · 2018-07-15T16:43:42.272Z · EA(p) · GW(p)

New here.


For example: with regard to coral reef loss, if the research is accurate and there is a 24$ economic return for every 1$ spent

If there were a $24 total return for every dollar spent, and an actor could capture even a small fraction of this return, I'd expect that a for-profit enterprise would already be doing this.

But I'm not familiar with the domain; maybe there's no way for a for-profit to capture the return, or maybe the 24:1 ratio is incorrect.

Replies from: Raltune
comment by Raltune · 2018-07-16T15:21:31.943Z · EA(p) · GW(p)

Thanks, Milan. I think the economics are such that the return does not necessarily go to the person/org that donated the money. The $24 return per $1 invested is seen in sustainable fisheries and the taxes they generate, and in tourism for that region with all its jobs and auxiliary benefits, taxes, decreased welfare spending, etc. So it's a great return, but it does not accrue to the donor, per se. But it's a great investment for governments and for charities that are looking to maximize well-being.

Other examples from the book include family planning/sex education at a $120 return per $1 invested, and campaigns against malaria at $36:$1. And these ideas are vetted and calculated by teams of economists trying to decide where the trillions of dollars that will be spent on aid over the next 15 years should go.

Does that make sense?

If anyone found this useful I could use a couple karma points to start threads in the regular forum. Thanks. :) -Tom

Replies from: Milan_Griffes
comment by Milan_Griffes · 2018-07-16T15:24:30.448Z · EA(p) · GW(p)

Hm, could you link to the place where you're getting these figures? I'm curious :-)

(Or give page numbers if it's a book.)

Replies from: Raltune
comment by Raltune · 2018-07-16T19:22:46.181Z · EA(p) · GW(p)

It's only 145 pages and very interesting, in my opinion. Well worth the short read. I love the concept of interventions that pay for themselves. An insecticide-treated bed net for $5, including delivery, on average pays for itself by preventing malaria and fostering a culture with less societal burden down the road: lower hospital costs for the sick, more taxes generated by healthy workers, healthy kids from those parents, etc. An economically virtuous circle.


comment by throwaway · 2018-08-03T04:51:27.343Z · EA(p) · GW(p)

Seeking 5 upvotes in order to make a post.

comment by Ruth_Freiling_duplicate0.9597937224729234 · 2018-08-01T09:36:53.935Z · EA(p) · GW(p)

Looking for Karma points.

Hi all, I would like to post a critical perspective on maximizing happiness. It includes an alternative approach, mental health issues and burnout. I would love to see a discussion about it, but not only on FB :) Anyone interested and willing to give me some karma to enable my post?

Cheers :)

comment by RandomEA · 2018-07-15T10:09:52.788Z · EA(p) · GW(p)

Frequency of Open Threads

What do people think would be the optimal frequency for open threads? Monthly? Quarterly? Semi-annually?

Replies from: Milan_Griffes, RyanCarey
comment by Milan_Griffes · 2018-07-15T16:40:59.357Z · EA(p) · GW(p)

Every 2-3 months seems good (weakly held).

comment by RyanCarey · 2018-07-16T06:53:29.938Z · EA(p) · GW(p)

Every 2-3 months seems good.

comment by Naryan · 2018-07-09T22:19:58.005Z · EA(p) · GW(p)

Impact Investing from an EA Perspective

This is just a teaser, since I don't have enough karma for a full post yet!

Picture a scale that has charity on one side (good social utility, -100% financial return) and investing on the other (zero social utility, 7% financial return). Impact investing is a space that can offer similar risk-adjusted market returns to traditional investments while also providing social utility.

In my research, I've found several factors that make me excited about this area:

  • Impact investing is about 5% the size of charitable donations ($22B vs $410B in 2016), and is growing much faster (17% vs 4% annually)

  • Impact investing makes up only 0.16% of the total capital markets - huge room for growth

  • Philanthropic enterprises with sustainable business models can use existing capital markets to get funded on a large scale

  • Due to the market's current inability to accurately value the 'social utility' provided, there are many greatly undervalued investment opportunities providing social utility similar to that of comparable charities
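For a sense of scale, here's a rough extrapolation from the first bullet's figures. This is only a sketch: it assumes both growth rates stay constant, which is a strong assumption.

```python
import math

impact_investing = 22.0   # $B in 2016
charity = 410.0           # $B in 2016
g_impact, g_charity = 0.17, 0.04  # annual growth rates

# Years until impact investing matches charitable donations, if growth held
years_to_parity = math.log(charity / impact_investing) / math.log((1 + g_impact) / (1 + g_charity))
print(round(years_to_parity))  # ~25 years
```

So even on these optimistic numbers, impact investing would take decades to rival charitable giving in size.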

I've got more detail, logic, and sources in the full post, but in the meantime, I'll tell you about one example opportunity that I've zoomed in on.

WorldTree is a company that lets you buy an acre of fast-growing Empress Splendor trees. Its goal is to generate income from the harvest of the trees and offset the carbon footprint of investors:

  • $2500 CAD minimum investment, enough to plant 1 acre of trees
  • One acre is enough to offset your lifetime carbon footprint
  • The timber is sold after 10 years, conservative return to the investor is $20k

From an EA perspective, I compared the stated carbon cost of WorldTree ($1.72/tonne) to Cool Earth ($1.34/tonne) and traditional carbon offset programs ($10/tonne). This investment could return 23% annually, while the Cool Earth 'investment' would be a loss of 100%. On its surface, this example looks quite promising when counting both the social utility generated and the future utility my $20k could do in 10 years' time.
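As a sanity check on the 23% figure (my own arithmetic, not WorldTree's), the annualized return implied by the bullet points above works out as:

```python
investment = 2500.0   # CAD minimum, one acre of trees
payout = 20000.0      # conservative stated return after the 10-year timber sale
years = 10

# Compound annual growth rate implied by the 10-year payout
annualized = (payout / investment) ** (1 / years) - 1
print(f"{annualized:.1%}")  # ≈ 23.1%
```

So the stated figures are internally consistent, though of course the $20k payout itself is the uncertain part.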

Looking forward to posting a more detailed write-up on the space once I'm able, and to hearing your feedback on these ideas!

Replies from: Peter_Hurford, Heteric
comment by Peter Wildeford (Peter_Hurford) · 2018-07-10T05:50:58.192Z · EA(p) · GW(p)

This investment could return a 23% annual return

That's insanely high... social arguments would be irrelevant if you could safely get that kind of return. Every investor would want in.

Replies from: Naryan
comment by Naryan · 2018-07-10T10:51:19.379Z · EA(p) · GW(p)

The key word is "safely". This kind of investment would be considered high risk: the company only started this program three years ago, and the first trees haven't yet produced a profit. Additionally, the 10-year duration is unattractive to many investors, and there isn't really a market for this type of wood in North America yet. They need to offer a big reward to entice investors to fund their venture at this early stage.

I suspect other early-stage ventures would have a similar high-risk, high-potential-return profile, which is why they are typically limited to accredited investors.

comment by Heteric · 2018-07-10T01:53:37.256Z · EA(p) · GW(p)

I'm a huge fan of this concept. Have you done a lot of research on this? Do you like WorldTree specifically, or are there other Impact Investing orgs you're aware of?

Replies from: Naryan
comment by Naryan · 2018-07-10T11:01:10.010Z · EA(p) · GW(p)

This field is really interesting, and there is a lot of research out there on it. The Global Impact Investing Network (GIIN) is a good starting place, but I've spent about a week pulling together stats from several sources to build my view on this space, and the Canadian options in particular.

I do like World Tree in particular because it produces high-impact social utility, has a high expected financial return, and I can actually buy in without being accredited. Unfortunately, for people with less than $1M, the options for impact investing are very slim at the moment.

Typical options include Green Bonds with a 4-5% return over 5 years, or investments in smaller community funds with a fairly small impact.

Check out a few Canadian options at OpenImpact

comment by Jared_Winslow · 2018-07-30T23:41:48.614Z · EA(p) · GW(p)

Hello everyone! I'm new to the EA Forum and it'd be great if I could get some karma so I can start contributing more. :)

This next fall I am running a university EA group. Is there anyone who has run an EA group that has any advice for me other than the basic information on EA Hub? What types of events were the most fun? What types of events were the most effective in gaining members or discussing issues?

Replies from: DavidNash
comment by DavidNash · 2018-07-31T11:15:20.570Z · EA(p) · GW(p)

Hey Jared, you may get more of a response in the group organisers group.

comment by Jonas Vollmer · 2018-07-09T06:50:16.663Z · EA(p) · GW(p)

Side note: I'd encourage commenters to put a title at the top of their comments (maybe this can be done in the OP).

Replies from: Peter_Hurford, remmelt
comment by Peter Wildeford (Peter_Hurford) · 2018-07-09T17:02:27.987Z · EA(p) · GW(p)

I edited the OP to mention it.

comment by remmelt · 2018-07-09T07:51:21.719Z · EA(p) · GW(p)

Thanks, done!

comment by nadia_mir-montazeri · 2018-08-06T16:08:15.061Z · EA(p) · GW(p)

Hello everyone, I've been thinking about menstrual pain as a topic for EA-related biomedical research, as there are so many people with a uterus who cannot go to work or feel less capable due to cramps and an overall low state of well-being. It also seems neglected: after a quick search, I could find only one scientific journal dedicated to publishing research on PMS, menstrual pain, and period blood itself (named Periodical, no joke). What do you think?

comment by Ramiro · 2019-01-23T14:10:57.658Z · EA(p) · GW(p)

Has anyone reframed priorities choices (such as x-risk vs. poverty) as losses to check if they’re really biased?

I’m new here. Since I suspect someone has probably already made a similar question somewhere (but I couldn’t find it, sorry), I’m mostly trying to satisfy my curiosity; however, there’s a small probability that it touches an important unsolved dilemma about global priorities and the x-risk vs. safe causes.

I’ve read a little bit about the possibility that preferences for poverty reduction/global health/animal welfare causes over x-risk reduction may be due to some kind of ambiguity-aversion bias. Between donating U$3,000 for (A) saving a life (high certainty, presently) or (B) potentially saving 10^20 future lives (I know, this may be a conservative guess), by making something like a marginal 10^-5 contribution to reducing in 10^-5 some extinction risk, people would prefer the first safe option, despite the large pay-off of the second one. However, such bias is sensitive to framing effects: people usually prefer sure gains and uncertain losses. So, I was trying to find out, without success, if anyone had reframed this decision as matter of losses, to see if one prefers, e.g., (A’) reducing deaths by malaria from 478,001 to 478,000 or (B’) reducing the odds of extinction (minus 10^20 lives) in 10^-10.

Perhaps there’s a better way to reframe this choice, but I’m not interested in discussing one particular example (however, I’m concerned with the possibility that there’s no bias-free way of framing it). My point is that, if one chooses something like A-B’, then we have a strong case for the existence of a bias.

(I’m aware of other objections against x-risk causes, such as Pascal’s mugging and discount rates arguments – but I think they’ve received due attention, and should be discussed separately. Also, I’m mostly thinking about donation choices, not about policy or career decisions, which is a completely different decision; however, IF this experiment confirmed the existence of a bias, it could influence the latter, too.)

comment by zepedad · 2018-08-08T18:37:03.043Z · EA(p) · GW(p)

Animal v. Human Prioritization

Hi all,

A person involved with EA said I should get involved with the forum, so here I am.

Here is/are my question(s).

1) Is morality (and should it be) based on a combination of biology and strict logical induction?

If yes to 1), then here’s my deal.

I have a preference for valuing human life over animal life. However, if some animal species are more likely to live longer than the human species will, then would I be doing more good by prioritizing helping those animals out first and foremost?

This article— — mentions crocodiles and sand sharks living under 1000 ppm CO2eq conditions. I’m not sure that humans can. Would I be doing more good trying to make sure that crocodiles and such animals can survive, and consider the human species a sunk cost at this point?

Let me know your thoughts as you’re able to, please. Thank you,

Donald Zepeda