Open Thread: January — March 2023

post by Lizka · 2023-01-09T11:13:15.118Z · EA · GW · 91 comments



If you're new [EA · GW] to the EA Forum:


For inspiration, you can see the last open thread here [EA · GW]. 

Other Forum resources

  1. 🖋️  Write on the EA Forum [EA · GW]
  2. 🦋  Guide to norms on the Forum [EA · GW]
  3. 🛠️  Forum User Manual [EA · GW]
I personally like adding images to my Forum posts. Credit to DALL-E.


Comments sorted by top scores.

comment by Fergus Fettes · 2023-02-12T18:19:54.147Z · EA(p) · GW(p)

Hello all,

long time lurker here. I was doing a bunch of reading today about polygenic screening, and one of the papers was so good that I had to share it, in case anyone interested in animal welfare was unfamiliar with it. The post is awaiting moderation but will presumably be here [EA · GW] in due time.

So while I am making my first post I might as well introduce myself.

I have been sort of vaguely EA-aligned since I discovered the movement 5-ish years ago, listened to every episode of the 80k podcast, and read a tonne of related books and blog posts.

I have a background in biophysics, though I am currently working as a software engineer in a scrappy startup to improve my programming skills. I have vague plans to return to research and do a PhD at some point, but let's see.

EA things I am interested in:

  • bio bio bio (everything from biorisk and pandemics to the existential risk posed by radical transhumanism)
  • ai (that one came out of nowhere! I mean, I used to like reading Yudkowsky's stuff thinking it was sci-fi, but here we are. AGI timelines shrinking like spinach in a frying pan, hoo-boy)
  • global development (have lived, worked and travelled extensively in third world countries. lots of human capital out there being wasted)
  • animal welfare! or I was until I gave up on the topic in despair (see my essay above) though I am still a vegan-aligned vegetarian
  • philosophy?
  • economics?
  • i mean, it's all good stuff basically

Recently I have also been reading some interesting criticisms of EA that have expanded my horizons a little; the ones I enjoyed the most were

But at the end of the day I think EA's own personal brand of minimally deontic utilitarianism is simple and useful enough for most circumstances. Maybe a little bit of Nietzschean spice when I am feeling in the mood... and frankly I think e/acc is fundamentally mostly compatible, aside from the details of the coming AI apocalypse and [how|whether] to deal with it.

I also felt a little bit like coming out of the woodwork recently after all the twitter drama and cancellation shitstorms. Just to say that I think you folks are doing a fine thing actually, and hopefully the assassins will move on to the next campaign before too long.

Best regards! I will perhaps be slightly more engaged henceforth.

comment by · 2023-01-09T21:24:37.331Z · EA(p) · GW(p)

Hi all, I'm Vlad, 35, from Romania. I've been working in software engineering for 12 years. I have a bachelor's and master's degree in Physics.

I'm here because I read "What we owe the future", after it was recommended to me by a friend.

I got the book recommended to me because I had an idea which is a little uncomfortable for some people, but I think this idea is extremely important, and this friend of mine instantly classified my thoughts as "a branch of long-termism". I also think my idea is extremely relevant to this group, and I'm interested in getting feedback about it.

Context for the idea: Long-termism is concerned with people as far into the future as possible, up to the end of the universe.

The idea: ...what if we can make it so there doesn't have to be an end? If we had a limitless source of energy, there wouldn't have to be an end. Not only that, but we could make a lot of people very happy (like billions of billions of billions .....of billions of them? a literal infinity of them even)


It sounds crazy, I realize, but my best knowledge on this topic says this:

  • We know that we don't know all the laws of the universe
  • Even the known laws kind of have a loophole in them. Energy is supposed to be conserved, but we don't necessarily know how much energy exists out there - if an infinite amount exists, we can both use it and conserve it
  • I received feedback from a few physicists already, none of them said that infinite energy is clearly impossible - just that we don't know how we could get it


So my conclusion is: some amount of effort should be invested into the topic of infinite energy.


Is anyone interested in talking about this? I can show you what I have so far.


P.S. fusion is not a source of infinite energy, but merely a source of energy potentially far better than most others we know

P.P.S. I created this website for the initiative:

Replies from: Erin, DonyChristie
comment by Erin · 2023-01-11T16:41:52.421Z · EA(p) · GW(p)

Hi Vlad,

You're getting a lot of disagree votes. I wanted to explain why (from my perspective) this is probably not a useful way to spend your time.

Longtermists typically propose working on problems that impact the long-run future and can't be solved in the future. X-risk is a great example - if we don't solve it now, there will be no future people to solve it. Another example is historical records preservation, which is likewise easy to do now but could be impossible to do in the future.

This seems like a problem that future people would be in a much better position to solve than we are.

Obviously there's nothing wrong with pursuing an idea simply because you find it interesting. A good starting place for you might be Isaac Arthur on YouTube. He has a series called Civilizations at the End of Time which is related to what you are thinking about.

Replies from:
comment by · 2023-01-14T15:53:56.323Z · EA(p) · GW(p)

Hi Erin,

Thanks for your explanation of what is likely the issue regarding the disagreement here. I appreciate that you spent some time to shed light on this, because feedback is important to me.

I knew about Isaac Arthur, I'm trying to reach out to him and his community as we speak.

I'll try to add some clarifications, hoping I address the concerns of those people who seemed to be in disagreement with my idea.

I find it quite surprising that people concerned with the long-term welfare of humanity seem to be against my idea.

If there are genuine arguments against my position, I'd totally be open to hearing them - maybe indeed there's something wrong with my idea.

However, I can't find a way to get rid of these points (I think this is philosophy):

  • Sure, investing more than 0 effort into this initiative takes away from other efforts
  • The faster we reach this goal, the faster we can make tremendous improvements in people's lives
  • If we delay this for long enough, society might not be in such a state as to afford doing this kind of research (society might also be in a better position, but I'm more concerned about the worse case)


Regarding viability:

  • I don't know how much effort must be invested into this initiative, in order to achieve its goals 
  • I don't know if this is possible (Though through my own expertise, and the expertise of 11 physicists out of which at least 4 are physics professors, this goal does not seem impossible to reach)


Framing in "What we owe the future" terms:

  • Contingency: I'd give it 3/5 because
    • 1 would be something obvious to everyone
    • 2 would be obvious to experts
    • 3 would be obvious to experts, but there would be cultural forces against it. William MacAskill talks about "cultural lock-in". I think science is in that kind of situation today. You might have heard of issues such as "publish or perish". There's also the taboo created by similarities with "perpetual motion machines".
  • Persistence: 5/5. It's realistic that we could lose access to this, but if we don't, then this is conceivably the most persistent thing possible (comparable to the death of all sentient beings, which is the other extreme)
  • Significance: 5/5 - Hard to imagine something more significant than the ability to literally give everyone every thing they want or need (not "everything" but "every thing", because you can't give them human slaves, or make other people their friends, if those other people disagree)


So if my points are correct, we basically have a tradeoff between:

  • Invest less in more concrete initiatives and
  • Risk losing eternal bliss for an infinity of people

This is a genuine dilemma. I don't have the answer to it, but my intuition tells me that we should invest more than 0 effort in this goal.


@Erin, or others:

Do you have any other idea where I should take this problem? As said, I'm trying to reach out to Isaac Arthur and many other people. Do you think this would be interesting for William MacAskill?


Thanks a lot,

Vlad A.

Replies from: Erin
comment by Erin · 2023-01-17T14:22:52.710Z · EA(p) · GW(p)

I don't think I stated my core point clearly. I will be blunt for the purpose of clarity. Pursuing this is not useful because, even if you could make a discovery, it would not possibly be useful until literally 100 quintillion years from now, if not much longer. To think that you could transmit this knowledge that far into the future doesn't make any sense.

Perhaps you wish to pursue this as a purely theoretical question. I'm not a physicist, so I cannot comment on whether your ideas are reasonable from that perspective. You say that physicists have told you that they are, but do not discount the possibility that they were simply being polite, or that your questions were misinterpreted.

Additionally, the reality is that people without PhDs in a given field rarely make significant contributions these days - if you seek to do so, your ideas must be exceptionally well communicated and grounded in the current literature (e.g., you must demonstrate an understanding of the orthodox paradigm even if your ideas are heterodox). Otherwise, your ideas will be lumped in with perpetual motion machines and ignored.

I genuinely think it would be a mistake to pursue this idea at all, even from a theoretical perspective, because there is essentially no chance that you are onto something real, that you can make progress on it with the tools available to you, and that you can communicate it so clearly that you will be taken seriously.

A better route to pursue might be writing science fiction. There is always demand for imaginative sci-fi with a clear grounding in real science or highly plausible imagined science. There is also a real need for sci-fi that imagines positive/desirable futures (e.g. solarpunk).

Replies from:
comment by · 2023-02-04T13:55:21.543Z · EA(p) · GW(p)

Hi @Erin [EA · GW] , thanks for your continued interest in this topic.

Thanks for being blunt. Bluntness is good for saving time.

Let me address some things you said:

Pursuing this is not useful because, even if you could make a discovery, it would not possibly be useful until literally 100 quintillion years from now

That is simply not true. If we had infinite energy tomorrow, very soon after that we could solve all problems solvable using resources. Let me present a list of things we could do very, very soon (likely <10 years, extremely likely <100 years):

  1. solve climate change (trivially even!)
  2. solve all basic necessities of people (food, water, clothing, shelter)
  3. solve all non-basic necessities: cars, airplanes, mobile phones, laptops - you name it, we got it
  4. interstellar travel: yes, people would already be flying to Alpha Centauri and lots of other places. They would even reach them in "a few years/months" (a few years for them, but lots of years for us back on Earth)

There is lots of potential here, but I found that if I start talking about all the things that could be done, people are actually 


To think that you could transmit this knowledge that far into future doesn't make any sense.

Based on the refutation above, this point does not stand anymore.

You say that physicists have told you that they are, but do not discount the possibility that they were simply being polite, or that your questions were misinterpreted.

This is an awkward argument to address. Sure, everybody I ever met could be lying, and there's always solipsism. The same argument applies to everyone. I don't think this is a healthy way to continue a conversation - throwing doubt on what people say. It's not healthy compared to an alternative that, fortunately enough, we have:

  • I am currently reaching out to more and more physicists and asking them for their opinion on this. I am posting updates regularly on the Discord server that you can find on . If you are interested, you'll find there how much physicists are interested in this.
  • If you have any idea of what I would need to show you, so you consider there's enough interest from the science community, I'm all ears. 

Please however let's avoid distrust-based arguments in the future, and let's replace them with data-based arguments. 

I'd avoid them first of all because, being from Eastern Europe, I am not aware of the existence of people who would not call an idea "stupid" right off the bat, instead of being polite, if they had the slightest distrust in it. Am I wrong? Not sure. Am I lying? You can't be sure. So let's let experiments decide :)


I genuinely think it would be a mistake to pursue this idea at all, even from a theoretical perspective, because there is essentially no chance that you are onto something real, that you can make progress on it with the tools available to you, and that you can communicate it so clearly that you will be taken seriously.

@Erin, I can't fight belief. If you believe this idea is wrong, there's not much point in talking. 

Sure, you said "think", not "believe" - taken; however, thinking and reason mean explanations, justifications, models, and logic. Do you care to justify:

  1. Why you think there's essentially no chance that I'm onto something real
  2. How I would not be able to make progress on it with the tools available to me (the internet is my preferred tool)
  3. That I wouldn't be able to communicate it well enough to be taken seriously
  4. That I won't find other people more capable than me in any of the points above

I understand that this might be a deep emotional backlash. Humans have emotions, yes, unfortunately at times.

I'm however looking for supporters, and there will be only so much time I will spend on arguing with non-supporters. If you don't want to believe that people are interested in this, feel free. If you want to see what actually is happening, check out 

It kind of feels like all I have so far has been said. I don't have more data at this point to get you more toward "omg, this might be possible after all", but I am eager to hear your arguments, which might get me more toward "omg, this might actually not be possible" - as they say in startups: "negative feedback is the best kind of feedback".


Thanks for your feedback!

comment by DC (DonyChristie) · 2023-01-15T02:35:45.073Z · EA(p) · GW(p)

You would like Alexey Turchin's research into surviving the end of the universe.

Replies from: Guy Raveh
comment by Guy Raveh · 2023-01-15T14:54:49.259Z · EA(p) · GW(p)

I would not call it "research". Science fiction might be a better term. Which is also, I suspect, why Vlad's comment is very disagreed with. There's nothing to suggest surviving the end of the universe is any more plausible than any supernatural myth being true.

Replies from:
comment by · 2023-01-17T12:59:23.717Z · EA(p) · GW(p)

Hey Guy, thanks for your feedback.

I might be wrong on this, but the way I understand probability to work is that, generally:

  • if event A has probability P(A)
  • and if event B has probability P(B)
  • then the probability of both A and B to happen is P(A) * P(B)

What this means is that, technically:

  • The probability of the existence of supernatural beings, with personalities, specific traits, AND the power "to do anything they want", is at most equal to the probability that an endless source of energy exists

simply on the basis that more constraints make the probability of the event smaller.
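The two rules being leaned on in this argument can be checked on a toy example (my own illustration, not anything from the thread): the subset bound P(A and B) ≤ P(A) holds for any events, while the product rule P(A and B) = P(A)·P(B) additionally requires A and B to be independent, as a later reply points out.

```python
# Toy sample space: two fair six-sided dice, all 36 outcomes equally likely.
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(outcomes))  # each outcome has probability 1/36

A = {o for o in outcomes if o[0] <= 3}   # event A: first die shows 1-3
B = {o for o in outcomes if o[1] == 6}   # event B: second die shows 6

p_a = p * len(A)        # 1/2
p_b = p * len(B)        # 1/6
p_ab = p * len(A & B)   # 1/12

print(p_ab == p_a * p_b)   # True: A and B are independent, so the product rule applies
print(p_ab <= p_a)         # True: adding a constraint can only shrink the probability
```

With dependent events (say, B defined on the same die as A), the second line would still print True, but the first generally would not.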


The interesting point however is that I have found (so far) no physicist that says this is not possible.

I have also not found anyone yet who knows how to estimate the effort so far.


I would be very interested however if there are arguments against this position.


And I'd be even more interested in people who want to help me with this initiative :D Arguments are nice, but making progress is better! 

Replies from: Guy Raveh
comment by Guy Raveh · 2023-01-17T13:15:31.260Z · EA(p) · GW(p)

Given that empirical science cannot ever conclusively prove anything, you may never find a physicist to tell you that it isn't possible. But there's no reason to think that it is possible. Compare to Russell's Teapot.

Regarding your argument about probabilities - yes, the probability of an omnipotent god is necessarily smaller than that of any infinite source of energy (although it's not a product - that's just true for independent events). However I was not only talking about omnipotent gods, and anyway this probabilistic reasoning is the wrong way to think about this. When you do it, you get things like Pascal's wager (or Pascal's mugging, have your pick).

Replies from:
comment by · 2023-02-04T12:50:52.507Z · EA(p) · GW(p)

Hi Guy,

Thanks for your answer.

Given that empirical science cannot ever conclusively prove anything, you may never find a physicist to tell you that it isn't possible. But there's no reason to think that it is possible. Compare to Russell's Teapot.

We don't know whether this is possible. You are the only one to make the choice between:

  • so we shouldn't try to find out
  • so we should try to find out

Pascal's wager and opportunity cost madness ensues thereafter. However, maybe I have a blind spot, but I can't find a better topic to bet on - it would solve all problems solvable with resources.

I don't think I can find a non-emotional way to convince people to switch from "we should not search" to "we should search" (for infinite energy).

Addressing it rationally (though it's not clear how reason can change values/emotions):

  1. there's a big difference in the impact of Russell's teapot and infinite energy. One is irrelevant, the other is extremely relevant
  2. 2000 years ago, there was no reason to think that it would be possible to get to the moon or have mobile phones. The universe isn't obliged to respect human intuitions.
  3. True, there's at this point no clear reason to think this is possible
    1. well, except energy possibly not being conserved in general relativity - I can't tell if there's a consensus on this topic at this point - crazy!
    2. Also, fundamentally, because something exists (rather than nothing), some hope exists that there's arbitrarily more of this "something". Why would existence necessarily be constrained to a finite quantity?
  4. However, the impact of infinite energy, to me, seems high enough to require some serious research on the topic. The current times also leave a lot of gaps, where we can try to find infinite energy:
    1. quantum mechanics and relativity are incompatible with each other
    2. relativity itself is failing (dark energy vs dark matter clearly show we don't understand what happens in ~95% of the universe). Dark matter can explain some things but not others, modified gravity explains others, but not some.
    3. the big bang at t=0 possibly violates conservation of energy


Comparison to Pascal's wager is an interesting point. It sounds like it makes sense to some extent. I am not 100% certain, though, that one could fundamentally boil down the infinite energy problem to Pascal's wager, because:

  •  I am not certain if we can even talk about 
    • how many gods there are
    • and how compatible they are with one another
    • how many of them could be real at the same time
  • whereas science pretty much converged on very few ways to look at the world
    • and especially on the concept of energy - it is present in all the major theories of physics (at least to my knowledge)


So in a way, the infinite energy idea is at the very least more like a Pascal's wager, where there seem to be far fewer gods.


But ultimately, this is an emotional issue. It is very similar to climate change in this regard, just more abstract, further away, and with higher payoffs.

comment by Dzoldzaya · 2023-01-28T13:13:56.682Z · EA(p) · GW(p)

Hey everyone, I'm curious about the extent to which people in EA take (weak/strong) antinatalism/ negative utilitarianism seriously. I've read a bit around the topic and find some arguments more persuasive than others, but the idea that many lives are net-negative, and that even good lives might be worse than we think they are, has stuck with me. 

Based on my own mood diary, I'm leaning towards something around a 5.5/10 on a happiness scale being the neutral point, below which a life isn't worth living.

This has made me a lot less enthusiastic about 'saving lives' for its own sake, especially those lives in countries/regions with very poor quality of life. So I suspect that some 'life-saving' charities could be actively harmful and that we should focus way more on 'life-improving' charities/cause areas. (There are probably very few charities that only save lives - preventing malaria or reducing lead exposure both improves and saves lives - but we can imagine a 'pure-play life-saving charity'.)

I haven't come to any conclusions here, but the 'cost to save a life' framing, still common in EA, strikes me as probably morally invalid. I don't hear this argument mentioned much (you don't seem to get anyone actively arguing against 'saving lives'), so I'm just curious what the range of EA opinion is. 

Replies from: NunoSempere, Ian Turner
comment by Ian Turner · 2023-02-12T16:41:43.859Z · EA(p) · GW(p)

Regarding the question of the population ethics of donating to GiveWell charities, a 2014 report commissioned by GiveWell suggested that donating to AMF wouldn't have a big impact on total population, because fertility decisions are related to infant mortality. GiveWell also wrote a lengthy blog post about their work in the context of population ethics. I think the gist of it is that even if you don't agree with GiveWell's stance on population ethics, you can still make use of their work, because they provide a spreadsheet where one can plug in one's own moral weights.

comment by Barry Cotter · 2023-02-03T06:32:21.653Z · EA(p) · GW(p)

Just a warning on treating everyone as if they argue in good faith. They don’t. Émile P. Torres, aka @xriskology on Twitter doesn’t. He may say true honest things but if you find anything he says insightful check all the sources.

Émile P. Torres’s history of dishonesty and harassment An incomplete summary

Replies from: Erin, nathan
comment by Erin · 2023-02-03T21:04:54.000Z · EA(p) · GW(p)

Not trying to disagree with what you're saying - just want to point out that Emile goes by they/them pronouns.

comment by Nathan Young (nathan) · 2023-02-04T10:32:40.223Z · EA(p) · GW(p)

I think Émile is close to the line for me but I think we've had positive interactions. 

comment by emre kaplan · 2023-02-15T13:17:48.906Z · EA(p) · GW(p)

I have seen Sabine Hossenfelder claim that it will be very expensive to maintain superintelligent AIs. I also hear many people claiming that digital minds will use much less energy than human minds, so they will be much more numerous. Does anyone have some information or a guess on how much energy ChatGPT spends per hour per user?

Replies from: Felix Wolf
comment by Felix Wolf · 2023-02-15T17:28:34.692Z · EA(p) · GW(p)

Epistemic status: quick google search, uncertain about everything, have not read the linked papers. ~15 minutes of time investment.

Source 1
The Carbon Footprint of ChatGPT

[...] ChatGPT is based on a version of GPT-3. It has been estimated that training GPT-3 consumed 1,287 MWh which emitted 552 tons CO2e [1].

Using the ML CO2 Impact calculator, we can estimate ChatGPT’s daily carbon footprint to 23.04 kgCO2e.
[...] ChatGPT probably handles way more daily requests [compared to Bloom], so it might be fair to expect it has a larger carbon footprint.

Source 2
The carbon footprint of ChatGPT
3.82 tCO₂e per day

Also, maybe take a look into this paper about a different language model:

Quantifying the Carbon Emissions of Machine Learning 

You can play a bit with this calculator, which was also used in source 1:
ML CO2 Impact 
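As a quick arithmetic sanity check on the Source 1 training figures (these are all estimates, and the two sources' daily footprints differ by a couple of orders of magnitude, so treat everything as rough), the implied average grid carbon intensity works out like this:

```python
# Implied emissions factor from the quoted GPT-3 training estimates.
training_energy_kwh = 1_287_000   # 1,287 MWh, as quoted above
training_emissions_kg = 552_000   # 552 tCO2e, as quoted above

kg_co2e_per_kwh = training_emissions_kg / training_energy_kwh
print(round(kg_co2e_per_kwh, 3))  # 0.429 kgCO2e per kWh
```

About 0.43 kgCO2e/kWh is in the range of typical grid averages, so the two quoted training figures are at least internally consistent.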


Replies from: Konstantin Pilz, emre kaplan
comment by constructive (Konstantin Pilz) · 2023-02-25T21:38:16.305Z · EA(p) · GW(p)

I think a central idea here is that a superintelligence could innovate and thus find more energy-efficient means of running itself. We already see a trend of language models with the same capabilities getting more energy-efficient over time through algorithmic improvement and better parameter/data ratios. So even if the first superintelligence requires a lot of energy, the systems developed in the period after it will probably need much less.



comment by emre kaplan · 2023-02-16T05:54:09.277Z · EA(p) · GW(p)

Thanks a lot, Felix! That's very generous, and some links have even more relevant stuff. Apparently, ChatGPT uses around 11,870 kWh per day, whereas the average human body uses 2.4 kWh.
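For scale, taking the two figures in this comment at face value (both are rough estimates), the ratio works out as:

```python
# How many human bodies' worth of energy the quoted ChatGPT estimate corresponds to.
chatgpt_kwh_per_day = 11_870   # estimated ChatGPT energy use per day (quoted above)
human_kwh_per_day = 2.4        # approximate daily metabolic energy of a human body

print(round(chatgpt_kwh_per_day / human_kwh_per_day))  # 4946
```

So on these numbers, running ChatGPT for all its users costs on the order of five thousand human bodies' worth of energy per day - though that total is shared across a very large user base.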

comment by Silas · 2023-01-20T03:18:29.275Z · EA(p) · GW(p)

Hi I’m Silas Barta. First comment here! I organize the Austin LessWrong group. I’m currently retired off of earlier investing (formerly software engineer) but am still looking for my next career to maximize my impact. I think I have a calling in either information security (esp reverse engineering) or improving the quality of explanations and introductions to technical topics.

I have donated cryptocurrency and contributed during Facebook’s Giving Tuesday, and gone to the Bay Area EA Globals in 2016 and 2017.

Replies from: agucova, Felix Wolf
comment by Agustín Covarrubias (agucova) · 2023-01-28T04:50:09.971Z · EA(p) · GW(p)

You might want to know that a few weeks ago, 80,000 Hours updated their career path profile on information security.

comment by Felix Wolf · 2023-01-20T10:25:01.647Z · EA(p) · GW(p)

Hey Silas,

Welcome to the Forum. I wish you the best of luck in finding a fulfilling career. :)

If you have any kind of question on where to find resources or whatnot, feel free to ask.

With kind regards


comment by Lilin · 2023-03-18T19:59:13.747Z · EA(p) · GW(p)

Hello, my name is Lilin. Someone with little technical experience and overwhelmed with inspiration. One of many who will benefit from the direction tech is headed in, as long as it's in the hands of those who mean well to others rather than those primarily motivated by their self-elevation. I grew up alongside the internet's maturing, under a father who was obsessed with Google's potential long before their IPO. Given access to the web rather early, I saw an explosion of information and culture given a platform and resources to thrive, until its encroaching commodification. Then in 2014, 2017, and 2021 I saw a similar explosion and demand for maturity inspired by the successes of Satoshi. Now in 2023, at least from my perspective, an unthinkably complex demand for maturity will inevitably be seen, thanks to OpenAI's approach to empowering the everyday person by giving them access to the kind of tools previously only seen trickled down the hegemony of our digital food chain. The outcome of accelerating this maturity through carefully curated and prudent design causes me to work on what I care about - inspired by the lack thereof where there is suffering.

I hope to learn more and contribute. I'm excited to be a part of this community!

comment by Carlos Ramírez · 2023-03-01T00:45:49.999Z · EA(p) · GW(p)

Hello everyone!  My name is Carlos. I recently realized I should be leading a life of service, instead of one where I only care about myself, and that has taken me here, to the place that is all about doing the most good.

I'm an odd guy, in that I have read some LessWrong and have been reading Slate Star Codex/Astral Codex Ten for years, but am for all intents and purposes a mystic. That shouldn't put me at odds here too much, since rationality is definitely a powerful and much needed tool in certain contexts (such as this one), it's just that it cannot do all things.

I wonder if there are others like me here, since after all, the decision to give to charity, particularly to far-off places, is not exactly rational.

Hoping to learn a lot, and to figure out a way to make my career (been a software developer for 11 years) high impact, or at least, actually helpful.

You guys are the Rebel Alliance from Star Wars, and I am ready to be an X-Wing pilot in it!

Replies from: Felix Wolf
comment by Felix Wolf · 2023-03-01T11:23:13.862Z · EA(p) · GW(p)

Hi Carlos,
welcome to the Forum! 

Moya is probably the most mystic person I know of, so nice to see that you already encountered her. :D

Here in the Forum, we really try to be nice and welcoming; if you follow along, I don't see any reason this couldn't work out. ;)

If you are open to suggestions, I want to recommend looking into the podcast Global Optimum by Daniel Gambacorta. He talks about how you can become a more effective altruist and has some good thinking about the pros and cons of different topics, for example the episode about how altruistic you should be.

"[…] the decision to give to charity […] is not exactly rational." Can you please explain?

With kind regards


Replies from: Carlos Ramírez
comment by Carlos Ramírez · 2023-03-01T20:22:54.501Z · EA(p) · GW(p)

Hi Felix, thanks for the recs! What I mean by giving to charity not being exactly rational is that giving to charity doesn't help one in any way. I think it makes more sense to be selfish than charitable, though there is a case where charity that improves one's community can be reasonable, since an improved community will impact your life.

And sure, one could argue the world is one big community, but I just don't see how the money I give to Africa will help me in any way.

Which is perfectly fine, since I don't think reason has a monopoly on truth. There are such things as moral facts, and morality is in many ways orthogonal to reason. For example, Josef Mengele's problem was not a lack of reason; his was a sickness of the heart, which is a separate faculty that also discerns the truth.

comment by Aithir · 2023-01-25T15:49:51.291Z · EA(p) · GW(p)

I thought it might be helpful to share this article. The title speaks for itself.


How to Legalize Prediction Markets

What you (yes, you) can do to move humanity forward

comment by Carlos Ramírez · 2023-03-08T22:18:15.553Z · EA(p) · GW(p)

I'm looking for statistics on how doable it is to solve all the problems we care about. For example, I came across this from the UN, which says extreme poverty could be sorted out in 20 years for $175 billion a year. That is actually very doable, in light of how much money can go into war (in 1945, the US spent 40% of its GDP on the war). I'm looking for more numbers like that, e.g. how much money it takes to solve X problem.


I intend to use them for a post on how there is no particular reason we can't declare total war on suffering. We can totally organize massively to do great things, and we have done it many times before. We should have a wartime mobilization for the goal of ending suffering.

Replies from: Brad West
comment by Brad West · 2023-03-09T22:37:33.200Z · EA(p) · GW(p)

I think I could help you in your total war. PM me if interested in learning more. [EA · GW]

comment by emre kaplan · 2023-03-07T08:46:18.259Z · EA(p) · GW(p)

Does anyone know why Singer hasn't changed his views on infanticide and killing animals since becoming a hedonistic utilitarian? As far as I know, his former views were based on the following:

a. Creation and fulfilment of new preferences is morally neutral.

b. Thwarting existing preferences is morally bad.

c. Persons have preferences about their future.

d. Non-persons don't have a sense of the future, they don't have preferences about their future either. They live in the moment.

e. Killing persons thwarts their preferences about the future.

f. Killing non-persons doesn't thwart such preferences.

g. Therefore killing a person can't be compensated by creating a new person. Whereas when you kill a non-person, you don't thwart many preferences anyway so killing non-persons can be compensated.

I think after he became a hedonist, this person/non-person asymmetry should mostly disappear. But I haven't seen him update Animal Liberation or his other books. Why is that?

Replies from: NickLaing
comment by NickLaing · 2023-03-07T09:15:19.133Z · EA(p) · GW(p)

Thanks Emre. A simple question: what are his current views? I'm assuming from what you're saying that he is still pro-infanticide in rare circumstances soon after birth?

Replies from: emre kaplan
comment by emre kaplan · 2023-03-07T11:44:37.115Z · EA(p) · GW(p)

I think he's not commenting on it much anymore, since this issue isn't really a major priority. But I think he used to advocate for infanticide in a larger set of circumstances (e.g. when it's possible to have another child who will have a happier life). The part about infanticide isn't that relevant to any kind of work EA is doing. But his views are still debated in animal advocacy circles and I am not sure what exactly his position is.

Replies from: NickLaing
comment by NickLaing · 2023-03-07T12:32:28.807Z · EA(p) · GW(p)

Gotcha. It's true that it's not immediately obvious from Google or ChatGPT.

Replies from: Lorenzo Buonanno
comment by Lorenzo Buonanno · 2023-03-07T13:05:16.880Z · EA(p) · GW(p)

I think he writes a bit about it here: in the section: "You have been quoted as saying: "Killing a defective infant is not morally equivalent to killing a person. Sometimes it is not wrong at all." Is that quote accurate?"

comment by She's done it (Msaksena) · 2023-01-16T04:54:31.331Z · EA(p) · GW(p)

Qn: Where is the closest EA community base to the US? How accessible is the USA from it (US consulate)?

Context: I was recently let go from my job while on a visa in the States, which means I have to leave the US within the next 7 days. I would like to live somewhere close to the US where I can find community, so that I don't lose momentum for the intense work that a job search needs. I tend to be really affected by the energy of where I am; I work best in cities and tend to sleep most in the countryside.

This might also be a good resource for people who are not able to enter the US for any reason whatsoever. I am assuming a long-term housing community in a nomad-friendly place like CDMX would do wonders for people wishing to be within +/- 3 hours of their American colleagues' timezones.

Replies from: Jordan Pieters
comment by jwpieters (Jordan Pieters) · 2023-01-20T23:51:12.825Z · EA(p) · GW(p)

There are some EAs hanging out in CDMX until the end of Jan (and maybe some after)

Agree that having a nomad friendly community near the US would be great

Replies from: Msaksena
comment by She's done it (Msaksena) · 2023-02-11T20:33:52.059Z · EA(p) · GW(p)

I did end up in Mexico City. I plan to continue the job search from here while exploring independent contracting for some supplemental income and diverse project experience. 

- If anyone is looking for expertise in biosecurity/global health to help with ongoing projects, please reach out and delegate to me! I am new here, so I haven't gathered any "EA karma" from well-written posts yet. I would love to change that!

- I am open to ideas in up-skilling for the most impactful work I can do as a physician-scientist. Open to ideas for skills to master and funds to apply for the same. 

- Also, EAs in the Americas, take a work-cation in CDMX! The weather is excellent, and the city is energetic and green. So far, a good group of EAs have been here after the fellowship ended. I would love to keep it up!

comment by Moya (Moya Schiller) · 2023-02-25T01:19:26.470Z · EA(p) · GW(p)

Hi all,

Moya here from Darmstadt, Germany. I am a Culture-associated scientist, trans* feminist, poly, kinky, and a witch.
I got into LessWrong in 2016 and then EA in 2016 or 2017, I don't quite remember. :)

I went to the University of Iceland, did a Master's degree in Computer Science / Bioinformatics there, then built software for the European Space Agency, and nowadays am a freelance programmer and activist in the Seebrücke movement in Germany and other activist groups as well. I also help organize local burn events (some but not all of them being FLINTA* exclusive safer spaces.)

Silly little confession: It took me so many years to finally sign up to the EA forum because my password manager is not great and I just didn't want to bother opening it and storing yet another password in there. But hey, finally overcame that incredibly-tiny-in-hindsight-obstacle after just a bit over half a decade and signed up. \o/

Replies from: Milena Canzler, Carlos Ramírez
comment by Milena Canzler · 2023-02-28T12:45:35.359Z · EA(p) · GW(p)

Hi Moya!
Welcome to the forum from another person in southern Germany. I'm curious: Are you connected to the Darmstadt local group? If so, hope to see you at the next event in the area (I live in Freiburg). Would love to connect and hear what your perspective on EA is!
Also, the password manager story is too relatable. ^^
Cheers, Mila

Replies from: Moya Schiller
comment by Moya (Moya Schiller) · 2023-03-01T22:00:28.437Z · EA(p) · GW(p)

Hi Mila,

Yeah, I am involved in the Darmstadt local group (when I have the time, many many things going on.)

And wheee, would be glad to meet you too :)

Replies from: Milena Canzler
comment by Milena Canzler · 2023-03-08T10:38:32.565Z · EA(p) · GW(p)

Sweet! I'm sure we'll meet sooner or later then :D

comment by Carlos Ramírez · 2023-03-01T00:48:41.316Z · EA(p) · GW(p)

Nice to meet you! I'm also a new guy. Good to see you're a witch; I'm a mystic! Is a burn event a copy of Burning Man? I'd definitely like to go to one of those.

Replies from: Moya Schiller
comment by Moya (Moya Schiller) · 2023-03-01T22:01:42.435Z · EA(p) · GW(p)

Hi there :)

Yes indeed, burn events are based on the same principles as Burning Man, but each regional burn is a bit different based on who attends, how those people choose to interpret the (intentionally) vague and contradictory principles, etc. :)

comment by graceyroll · 2023-02-17T14:19:57.614Z · EA(p) · GW(p)

First time poster here.
I am currently doing my master's degree in design engineering at Imperial College London, and I am trying to create a project proposal around the topic of computational social choice and machine learning for ethical decision making. I'm struggling to find a "design engineering" take on this - what can I do to contribute in the field as a design engineer?

In terms of prior art, I've been inspired by MIT's Moral Machine, feeding ML models of aggregate ethical decisions from people. If anyone has any ideas on a des eng angle to approach this topic, please give me some pointers! 


Replies from: quinn, garymm
comment by quinn · 2023-02-21T16:50:30.507Z · EA(p) · GW(p)

I don't think it'll help you in particular, but my thinking was influenced by Critch's comments about how CSC applies to existential safety [AF · GW].

comment by garymm · 2023-02-26T17:13:56.606Z · EA(p) · GW(p)

Seems somewhat related to RadicalXChange stuff. Maybe look into that. They have some meetups and mailing lists.

comment by ChayBlay · 2023-02-06T21:11:26.064Z · EA(p) · GW(p)

Hi everyone,

I was close to becoming a statistic: someone who started reading 80,000 Hours but never completed the career planning program. I am coming back now, as I need some direction.

Of all the global priorities, I gravitate toward those that focus on improving physical and mental health. As someone who deals with chronic pain and is in between jobs, nothing consumes my attention more than alleviating physical and mental suffering.

I am curious if anyone in the community spends their work life thinking and working on increasing longevity, eliminating chronic pain, improving athletic performance, or improving individual reasoning or cognition.

As I am searching for jobs that align with my interests and considering going back to school, I would be grateful for any insight that the community has to offer with regard to pursuing these different fields.

I am 32 years old and unfortunately, 10 years of medical school is no longer appealing. I've thought about biotech and IT because of the limitless upside that tech generally can leverage in terms of health outcomes and even salary, but I'm overwhelmed about the best place to begin to get into those fields.

I'm also thinking about PA school, but I feel like that might place limits on making a larger impact given the connotations (implicit and otherwise) of being an "assistant".

Thank you for reading this far and for any advice you are willing to share!


Replies from: Erich_Grunewald
comment by Erich_Grunewald · 2023-02-06T21:25:09.850Z · EA(p) · GW(p)

Of all the global priorities, I gravitate toward those that focus on improving physical and mental health. As someone who deals with chronic pain and is in between jobs, nothing consumes my attention more than alleviating physical and mental suffering. I am curious if anyone in the community spends their work life thinking and working on increasing longevity, eliminating chronic pain, improving athletic performance, or improving individual reasoning or cognition.

Not sure how helpful this is to you, but the Happier Lives Institute does research on mental health and chronic pain. See e.g. this recent post on pain relief [EA · GW], and this one evaluating a mental health intervention [EA · GW] (but also this response [EA · GW], and this response to the response [EA · GW]).

comment by GoingCoast · 2023-02-02T03:20:39.171Z · EA(p) · GW(p)

Woof. This looks exhausting. So I found out I'm on the autism spectrum. My energy for people saying things is… not a very high capacity. It's been fun recently to stretch my curiosity with this AI, but engaging with people is generally an overwhelming prospect.

I want to design a stupidly efficient system that revives public journalism and research, strengthens eco-conscious businesses challenged by competitors who manufacture unsustainable consumer goods, provides supplemental education to help all age groups navigate changing understanding and offers guidance for “better humaning and/or Earth/environmental custodianship”, and establishes foundations for universal basic income. And probably design a functional healthcare system while I'm at it. And I want to burn targeted advertising to the ground.

Thanks for giving me a space where I can say all these things.

comment by BrownHairedEevee (evelynciara) · 2023-01-14T21:08:39.277Z · EA(p) · GW(p)

What kind of lightbulb is Qualy? Incandescent or LED? probably not CFL given the shape

comment by Misha_Yagudin · 2023-02-25T16:13:50.127Z · EA(p) · GW(p)

Is there a way to only show posts with ≥ 50 upvotes on the Frontpage?

Replies from: HaukeHillebrandt
comment by Hauke Hillebrandt (HaukeHillebrandt) · 2023-02-25T16:45:07.425Z · EA(p) · GW(p)

Stop free-riding! Voting on new content is a public good, Misha ;P [? · GW]

Replies from: Misha_Yagudin
comment by Misha_Yagudin · 2023-02-26T15:30:35.796Z · EA(p) · GW(p)

Thank you, Hauke! I just contributed an upvote to the visibility of one good post. Doing my part!

Alternatively, is there a way to apply field customization (like hiding community posts and up-weighting/down-weighting certain tags) to [? · GW]?

Replies from: NunoSempere
comment by NunoSempere · 2023-02-26T18:50:32.356Z · EA(p) · GW(p)

Yes, ctrl+F on "customize tags"

Replies from: Lizka
comment by Lizka · 2023-02-26T20:14:08.952Z · EA(p) · GW(p)

Hi! On the All Posts page [? · GW], you can't filter by most tags, unfortunately, although we just added the option of hiding the Community tag:

Find the sorting options:

Hide community:

On the Frontpage, you can indeed filter by different topics [EA · GW]. 

comment by C_Axiotes · 2023-02-11T10:59:05.019Z · EA(p) · GW(p)

It’ll be my first time at a Bay Area EA Global at the end of this month - does anyone have any tips? Any things I should definitely do?

Also if you’re interested in institutional reform you might like my blog Rules of the Game:

Replies from: Felix Wolf, Ishan Mukherjee
comment by Felix Wolf · 2023-02-11T16:28:08.903Z · EA(p) · GW(p)

Hey Axiotes,
congratulations on your accepted EAG application! Here are three articles you may find interesting.

My personal tips: take time for yourself and don't overwhelm yourself too much. Write down beforehand what the best EAG would look like to you. Take notes on what you want to accomplish and what to speak about in your 1:1s. Book 1:1s and have a good, productive time. After the EAG, re-evaluate what happened and what you learned, and write down next steps.

comment by Wubbles · 2023-01-29T22:28:18.051Z · EA(p) · GW(p)

Does anyone have estimates on the cost effectiveness of trachoma prevention? It seems as though mass antibiotic administration is effective and cheap, and blindness is quite serious. However room for funding might be limited. I haven't seen it investigated by many of the organizations, but maybe I just haven't found the right report.

Replies from: Ian Turner, Rafael Vieira
comment by Ian Turner · 2023-02-12T16:13:42.470Z · EA(p) · GW(p)

GiveWell looked at this in 2009 and decided that chemoprophylaxis is not cost effective.

GiveWell leans on a 2005 Cochrane study that concluded that "For the comparisons of oral or topical antibiotic against placebo/no treatment, the data are consistent with there being no effect of antibiotics".

However, it looks like Cochrane revisited this in 2019, and I'm not sure if GiveWell took a second look.

comment by Rafael Vieira · 2023-02-12T12:33:38.294Z · EA(p) · GW(p)

Hey Wubbles,

I realise that my response is a bit late, but there is some peer-reviewed literature on this matter. The most relevant paper would be this one from 2005. The main results are:

(...) trichiasis surgery with 80% coverage of the population would avert more than 11 million DALYs per year globally, with cost effectiveness ranging from I$13 to I$78 per DALY averted, which is below the cost-effectiveness threshold of three times GDP per capita. Mass antibiotic treatment using azythromycin at prevailing market prices at 95% coverage level would avert more than 4 million DALYs per year globally and is most cost-effective among antibiotic interventions with ratio’s ranging between I$9,000 and I$65,000 per DALY averted. However, the cost per DALY averted exceeds the cost-effectiveness threshold.

Unfortunately, I am not aware of any more recent paper using updated azithromycin costs. It would be interesting for someone to perform a new cost-effectiveness study based on the 2015 International Medical Products Price Guide, as the price of azithromycin is known to have decreased since 2005. There is, however, a recent study restricted to Malawi that suggests that mass treatment with azithromycin may be cost-effective.
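For readers unfamiliar with the threshold used in the quoted passage, here is a minimal sketch of the comparison. Only the I$13–78 and I$9,000 per-DALY figures and the 3× GDP-per-capita rule come from the quote; the GDP-per-capita value is an illustrative placeholder I chose.

```python
# Sketch of the cost-effectiveness rule used in the quoted trachoma paper:
# an intervention counts as cost-effective if its cost per DALY averted is
# below ~3x GDP per capita. The GDP figure below is an illustrative
# placeholder, not taken from the paper.
def is_cost_effective(cost_per_daly, gdp_per_capita, multiplier=3):
    return cost_per_daly <= multiplier * gdp_per_capita

gdp_per_capita = 1600  # I$, illustrative low-income-country figure

# Trichiasis surgery, upper bound from the quote: I$78 per DALY averted.
print(is_cost_effective(78, gdp_per_capita))     # True
# Mass antibiotics, lower bound from the quote: I$9,000 per DALY averted.
print(is_cost_effective(9000, gdp_per_capita))   # False
```

This is why the paper can call surgery cost-effective while mass antibiotic treatment, despite averting millions of DALYs, exceeds the threshold.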

comment by Matt Keene · 2023-01-26T00:42:24.018Z · EA(p) · GW(p)

How do folks! Stoked to have the opportunity to try and be a participant that contributes something meaningful here on the EA Forum. 

EA Forum Guidelines (and Aaron)...thank you for the guidance and encouraging me to write the bio. 

All, I'm new to the EA community. I'll hope to meet some of you soon. Please feel free to send a hello anytime. 

I see the "Commenting Guidelines". They remind me of the Simple Rules of Inquiry that I've used for many years. Are they a decent match for the spirit of this Forum?

  1. Turn judgment into curiosity
  2. Turn conflict into shared exploration
  3. Turn defensiveness into self-reflection
  4. Turn assumptions into questions

What do I care about?  I've been unanimously appointed to the post of lead head deputy associate administrator facilitator of my daughters' education (6 and 10) . I love them. Our educational praxis is designed to enable them to realize an evaluative evolution and create a future we want amidst the accelerating coevolution of Nature, Humans and AI. No presh. They spell great. Well, one out of two anyway.

I also care about the chill peeps sweeping the beach with metal detectors wearing headsets. I want to learn more but I don't want to be rude and interrupt what they are listening to. 

See you in the funny papers.


(I'm reading the commenting guidelines wondering which ones I violated. Like a historian, I'm not sure if I was explaining or persuading. I certainly wasn't clear. I disagreed with almost everything I wrote... didn't I? Okay. So. How do I ask readers where they went after they got kicked off this Forum? Tbc, I want to stay.)

Replies from: Felix Wolf
comment by Felix Wolf · 2023-01-26T13:24:10.489Z · EA(p) · GW(p)

Hey Matt,

welcome to the EA Forum. :)

Your personal guidelines translate well into our community guidelines here in the forum. No worries on that front.

If you want any guidance on where to find more information or where to start, feel free to ask or write me a personal message. 

I was browsing your website/blog and found a missing page:
The presentation is offline atm. I hope this helps. :D

A suggestion for your work as lead head deputy associate administrator and facilitator could be to visit this website: 

Non-Trivial sponsors fellowships for student projects, which is something you could do in the future, but more importantly for now maybe take a look at their course:

"How to (actually) change the world" could be interesting.

With kind regards


Replies from: Matt Keene
comment by Matt Keene · 2023-01-26T18:06:29.104Z · EA(p) · GW(p)

Thank you Felix. Nice to feel welcome. 

Grateful for the new opportunities and resources you've shared. We will look into them and keep them handy. 

I appreciate the website feedback... It is a work in progress, and I could do much better at tidying things up, though I won't likely get to it in the near term. On it!

Thank you for your service to educate our friends and peers about the environment.

Take good care of yourself Felix. 

comment by Leo Mansfield · 2023-03-17T21:40:18.951Z · EA(p) · GW(p)

I would like advice on writing a resume and applying for work in an effective career. I will graduate with an economics bachelor's degree in April. I'm taking many statistics courses, and I also took calculus and computer science courses. I live on the west coast of Canada and am willing to move.

I believe I would be well suited to AI Governance, but it may be better currently to find statistics/econometrics work or do survey design (to build general skills until I know more AI Governance people, or switch into a different effective cause area).

I am also open to recommendations for other effective careers. My degree is quite general and I have deliberately avoided sinking much into AI Governance specifically. I think I have a comparative advantage in AI Governance because my father is a manager of a machine learning research team at Google, whom I could potentially influence.

If I don't find an occupation immediately after graduation then I will do local community-building in Cryonics/Life-Extension and take these courses online:

  • Bayesian Statistics (Statistical Rethinking by Richard McElreath)
  • AI Governance by BlueDot Impact
  • In-Depth EA program

I would like to do these courses anyway but if I find an occupation then I can do one or two at a time. 

If I find an occupation outside EA, then I will focus on learning statistics and other general skills, so I can better apply them once I move into an AI Governance role. If the occupation is in government or policy spaces, then I will develop relevant social networks. The downside is that I'd be less poised to take opportunities in AI Governance.

I don't know much about finding an occupation in AI Governance. I applied to internships last summer and after being refused I asked the hiring staff what skills I should learn, and read all their recommendations. But I just don't really know what is going on in the AI Governance career path.

I'd appreciate a comment if you know of:

  • guides to writing resumes and job applications (EA-specific)
  • places I should apply to that aren't on the 80000 hours job board.
  • advice on what non-EA work would help me build relevant skills and networks. (or even volunteer projects I could do on my own! I'm not in immediate need of paid work)
Replies from: Ishan Mukherjee
comment by Ishan Mukherjee · 2023-03-23T08:19:24.228Z · EA(p) · GW(p)

The EA Opportunities Board and Effective Thesis' database (they also have a newsletter) might be useful. I expect they're listed on 80,000 Hours so you might already know them, but if not: ERA Cambridge [EA · GW] are accepting applications for AI governance research fellowships.

comment by William the Kiwi · 2023-03-14T21:22:08.842Z · EA(p) · GW(p)

Hi there everyone, I'm William the Kiwi, and this is my first post on the EA Forum. I recently discovered AI alignment and have been reading about it for around a month. This seems like an important but terrifyingly underinvested-in field. I have many questions, but in the interest of speed I will invoke Cunningham's Law and post my current conclusions.

My AI conclusions:

  1. Corrigibility is mathematically impossible for AGI.
  2. Alignment requires defining all important human values robustly enough to survive the near-infinite optimisation pressure exerted by a superintelligent AGI. Alignment is therefore difficult.
  3. Superintelligence by Nick Bostrom is a way of communicating the antimeme "unaligned AI is dangerous" to the general public.
  4. The extinction of humanity is a plausible outcome of unaligned AI.
  5. Eliezer Yudkowsky seems overly pessimistic but likely correct about most things he says.
  6. Humanity is likely to produce AGI before it produces fully aligned AI.
  7. To incentivize responses to this post I should offer a £1000 reward for a response that supports or refutes each of these conclusions and provides evidence for it.

I am currently visiting England and would love to talk more about this topic with people, either over the Internet or in person.

Replies from: Carlos Ramírez, robirahman
comment by Carlos Ramírez · 2023-03-16T21:13:14.193Z · EA(p) · GW(p)

You might want to read this as a counter to AI doomerism: [LW · GW]

This for a way to contribute to solving this problem without getting into alignment: [LW · GW]

this too:

and this for the case that we should stop using neural networks:

comment by Robi Rahman (robirahman) · 2023-03-14T22:04:08.358Z · EA(p) · GW(p)

Hi William! Welcome to the Forum :)

Why do you think that corrigibility is mathematically impossible for AGI? Because you think it would necessarily have a predefined utility function, or some other reason?

Replies from: William the Kiwi
comment by William the Kiwi · 2023-03-15T10:35:51.347Z · EA(p) · GW(p)

Hi Robi Rahman, thanks for the welcome.

I do not know if it has a predefined utility function, or if the functions simply have similar forms. If there is a utility function that gives the AI utility for shutting down when some arbitrary "shutdown button" is pressed, then there exists a state where the button is being pressed with very high probability (e.g. an office intern is in the process of pushing it) that provides more expected utility than the current state. The AI therefore has an incentive to move towards that state (e.g. by convincing the intern to push the button). If instead there is negative utility in the button being pressed, the AI is incentivized to prevent it from being pressed. And if the AI has no utility term for the button at all, but some code segment causes the shutdown process when the button is pressed, then a daughter AGI with that code segment omitted has slightly more efficient code; an AGI whose utility function rewards producing more efficient versions of itself is incentivized to produce a daughter with the shutdown code removed.

There is a more detailed version of this description in

I could be wrong about my conclusion about corrigibility (and probably am), but it is my best intuition at this point.
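The incentive William describes can be illustrated with a toy expected-utility calculation. All the numbers below are made up; this is a cartoon of the argument, not a model of any real agent.

```python
# Toy model of the shutdown-button incentive problem sketched above.
# Each action differs only in how likely it makes the button being pressed.
actions = {
    "work_normally":   0.01,  # button rarely pressed
    "persuade_intern": 0.99,  # nudge someone into pressing it
    "disable_button":  0.00,  # make pressing impossible
}

def expected_utility(p_shutdown, u_shutdown, u_running=1.0):
    # Agent gets u_shutdown if the button ends up pressed, u_running otherwise.
    return p_shutdown * u_shutdown + (1 - p_shutdown) * u_running

for u_shutdown in (10.0, -10.0):  # reward vs. penalise being shut down
    best = max(actions, key=lambda a: expected_utility(actions[a], u_shutdown))
    print(f"u_shutdown={u_shutdown:+}: best action is {best}")
```

With positive shutdown utility the agent prefers to get the button pressed ("persuade_intern"); with negative utility it prefers to prevent pressing ("disable_button"). Neither sign leaves "work_normally" optimal, which is the asymmetry the comment points at.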

comment by Misha_Yagudin · 2023-03-13T17:34:18.830Z · EA(p) · GW(p)

The table here got all messed up. Could it be fixed? [EA · GW]

Replies from: Forum assistant, Forum assistant
comment by Dane Magaway (Forum assistant) · 2023-03-14T16:22:04.571Z · EA(p) · GW(p)

This has now been fixed. Our tech team has resolved the issue by using dummy bullet points to widen the columns. Thanks for reaching out! Let me know if you run into any issues on your end.

Replies from: Misha_Yagudin, Misha_Yagudin
comment by Misha_Yagudin · 2023-03-16T18:20:47.097Z · EA(p) · GW(p)

Thank you very much, Dane and the tech team!

comment by Misha_Yagudin · 2023-03-16T18:23:56.843Z · EA(p) · GW(p)

Hey, I think the fourth column was introduced somehow… You can see it by searching for "Mandel (2019)"

comment by Dane Magaway (Forum assistant) · 2023-03-13T17:42:28.010Z · EA(p) · GW(p)

Hi, Misha! Thanks for reaching out. We're on it and will let you know when it's sorted.

comment by graceyroll · 2023-03-01T14:13:49.641Z · EA(p) · GW(p)

Hi guys !

I posted about 2 weeks ago here asking for masters project ideas around the field of computational social choice and machine learning for ethical decision making.

To recap: I'm currently doing my master's project in design engineering at Imperial, where I need to find something impactful, implementable and innovative.

I really appreciated all the help I got on the post, however, I've hit a kind of dead end - I'm not sure I can find something within my scope with the time frame in the field I've chosen.

So now I'm asking for any project ideas which fit the above criteria. It can be in any field, and honestly, any points would really help. I want to take this project as the opportunity to really make something meaningful with the time I have here at uni.


Replies from: Lorenzo Buonanno
comment by Lorenzo Buonanno · 2023-03-01T14:49:24.883Z · EA(p) · GW(p)

Hi Grace!

I don't have any project ideas in mind, but I wonder if it would make sense to talk with the people at and maybe have a look at this board for inspiration.

Good luck with your project!

comment by Marc Wong · 2023-02-09T16:03:32.991Z · EA(p) · GW(p)

Hello, All!

I found EA via the New Yorker article about William MacAskill.
I am the author of "Thank You For Listening". 
I listen, therefore you are. We understand and respect, therefore we are. We bring out the best in each other, therefore we thrive.
Go beyond Can Do. We Can understand, respect, and bring out the best in others, often beyond our expectations.
We know how to cooperate on roads. We can cooperate at home, at work, and in society. Teach everyone to listen (yield), check biases (blind spots), and reject ideological rage (road rage).
Bringing Out The Best In Humanity [? · GW]

Replies from: Felix Wolf
comment by Felix Wolf · 2023-02-09T23:55:50.142Z · EA(p) · GW(p)

Hey Marc,

here is a workable link to your post from October: [? · GW]

comment by BobMail · 2023-03-14T07:59:35.643Z · EA(p) · GW(p)

GiveWell traditionally has quarterly board meetings; were there ones in August and December 2022? If so, are notes available?

comment by Jack FitzGerald · 2023-03-09T10:37:57.684Z · EA(p) · GW(p)

Hey everyone! First time poster here, but long time advocate for effective altruism.

I've been vegan for a couple of years now, mostly to mitigate animal suffering. Recently I've been wondering how a vegetarian diet would compare in terms of suffering caused. Of course I presume veganism would be better, but by how much?

With this in mind, I'm wondering: are there any resources that attempt to quantify how much suffering is caused by buying various animal products? For example, dairy cows produce about 40,000 litres of milk in their lifetime, which can be used to make about 4,000 kg of cheese. With this in mind, one could consider how much suffering a dairy cow endures in their lifetime and then quantify how much suffering a buyer is responsible for each time they purchase a kilo of cheese.

My calculations are of course very imprecise and probably quite flawed, but I'm curious if anyone else has made a more robust attempt at comparing the suffering caused by various animal products. I realize this may be hard, since suffering is hard to quantify.

Replies from: emre kaplan, Lorenzo Buonanno, emre kaplan
comment by emre kaplan · 2023-03-09T10:58:13.178Z · EA(p) · GW(p)

I suspect most of the impact of veganism comes from its social/political side effects rather than the direct impact of the consumption. I believe it's better to mostly think about "what kind of meme and norm should I spread" as most of the impact is there.

Replies from: Jack FitzGerald
comment by Jack FitzGerald · 2023-03-10T20:10:54.182Z · EA(p) · GW(p)

I'm inclined to agree, although I was curious nonetheless. Also, anecdotally it seems like an increasing number of people are basing their diet on calculated CO2 emissions, so calculations based on suffering seem like they would be a useful counterpart.

Thanks for sharing the compilation!

comment by Lorenzo Buonanno · 2023-03-09T10:45:16.030Z · EA(p) · GW(p)

Hi Jack!

You might be interested in and .

In particular, eggs seem to cause a surprising amount of suffering per serving (compared to e.g. milk or cheese)

Replies from: Jack FitzGerald
comment by Jack FitzGerald · 2023-03-10T20:00:10.998Z · EA(p) · GW(p)

Both of those resources are excellent and exactly the sort of thing I was looking for. Thank you so much!  

comment by Brendan OHare 🐮 · 2023-02-09T00:23:16.361Z · EA(p) · GW(p)

Howdy everyone!


I'm Brendan O'Hare. I was an Arete Fellow in college and have been involved with EA since! I recently decided to chart my own career path after striking out a couple of times in the job application process post-graduation. I have decided to start a newsletter/blog/media outlet focused on Houston and local issues, particularly urbanism. I want to become an advocate for better local policies that I understand quite well.


If anyone has any tips with regard to writing, growing on Twitter, etc., I would love to hear them! Thank you all so much for this platform.

comment by Simon Sällström · 2023-02-02T05:33:03.651Z · EA(p) · GW(p)

Internship / board of trustees!

My name is Simon Sällström. After graduating with a master's in economics from Oxford in July 2022, I decided against the traditional 9-5 route in the City of London, moving money around to make more money for people who already have plenty of money… Instead, I launched a charity.

DirectEd Development Foundation is a charitable organisation whose mission is to propel economic growth by developing and delivering evidence-based, highly scalable and cost-effective bootcamps to under-resourced, high-potential students in Africa. We prepare them for remote employment by equipping them with the most sought-after digital and soft skills on the market, thereby helping them realise their potential as leaders of Africa's digital transformation.

I'm looking for passionate people in the EA community to join me and my team!

We are mainly looking to fill two unpaid positions right now: interns and trustees. The latter is quite an important role.

I am not entirely sure how best to go about this, which is why I am writing this short comment here. Any advice?

Here's what I have done so far in terms of information about the internship position and application form: 

Here is what we have for the trustees (work in progress): 

Happy to take any and all advice:)

comment by Trudy Beerman · 2023-03-11T20:27:16.064Z · EA(p) · GW(p)

I am new to EA. My name is Trudy Beerman. I am pursuing doctoral studies in strategic leadership at Liberty University. My business is legally registered as Profitable Stewardship Inc; however, we are active under the PSI TV brand. At PSI TV, we make you the star and deliver your content to our TV audience. We also build these Netflix-like TV channels for brands to have a presence on Roku TV, Amazon Fire TV, VIDAA TV, and inside a mobile app (which we also build for our clients). I am enjoying the posts I have read here and commented on. 

comment by Catherine Fist · 2023-01-31T18:41:13.943Z · EA(p) · GW(p)

What I thought about child marriage as a cause area, and how I've changed my mind


I have been working on a research project into the scale, tractability and neglectedness of child marriage. After 80 hours of research, I thought there was a relatively strong case that effective altruist funding organisations that fund projects addressing international poverty should consider funding child marriage interventions. I then found a source that undermined a key premise: that child marriage is clearly harmful across a number of health metrics. I describe my experience and findings in more detail below, and share some tips for those undertaking self-directed research projects so they can avoid the mistakes I made (skip to 'What I will do next time' for these).


 I had no direct experience researching child marriage, but I was interested to learn about effective interventions and whether it had potential as a cause area. I studied Political Science and International Relations at university, along with some subjects on development, gender and economics, and I have worked as a government evaluator. My goal was to do some preliminary research and determine whether child marriage was a large-scale, tractable and neglected problem. If so, I would share this research with effective altruist funders.

My model:

In October last year, I started a self-directed research project into the scale, tractability and neglectedness of child marriage. I read and collected dozens of sources, analyzed data, contacted a top researcher, compared effective interventions, built a mental model of what the charitable space looked like and identified potential interventions for EA support.

I came to the following findings, based on around 80 hours of research:

  1. Scale/importance
    1. Child marriage is a widespread practice that affects around 12 million girls per year (UNICEF, 2022)
    2. Child marriage is a harmful practice that increases the risk of negative maternal and sexual health outcomes and domestic and sexual violence, and reduces the likelihood that a girl will complete school. This is the consensus position held by global development institutions (see the meeting report from leading global institutions on child marriage: UNFPA, 2019). 
  2. Tractability 
    1. There are cost effective interventions that work to prevent child marriage, e.g. the ‘cost per marriage averted’ ranged between US$159 and US$732 in this study (Erulkar, Medhin and Weissman 2017).
    2. The effect of child marriage on quality-adjusted or disability-adjusted life years has not been quantified, so it is difficult to compare the cost-effectiveness of these interventions with others (EA Forum explainer on these metrics [EA · GW]).
  3. Neglectedness
    1. Population Council is a research body focussed on running quasi-experimental programs and creating scalable interventions (Population Council, date unknown). 
    2. The lead investigator into child marriage at Population Council informed me that it is not currently running programs to prevent child marriage because of lack of funding.
  4. Conclusion
    1. Effective altruist funding organisations focused on international health and poverty should consider funding effective interventions to prevent child marriage at a large scale.

What broke my model

Earlier this week, I decided it would be useful to try to quantify the harm of child marriage, or at least some of the harms, using commonly used metrics like quality-adjusted or disability-adjusted life years (QALYs or DALYs). I anticipated that this would be a key piece of information for EA funders, and it had not been done so far (finding 2b). In doing so, I came across a study that fundamentally challenged finding 1b: that child marriage is an underlying cause of many harmful outcomes. Without strong evidence that child marriage causes harm, the other findings on its tractability and neglectedness are significantly less consequential.

Fan and Koski, 2022 examined the data from 58 studies on the effects of child marriage on a range of health outcomes (see my summary in Table 1). They found that while the data clearly show that child marriage increases the risk of physical violence, results on other metrics, including contraceptive use, maternal health, nutrition and mental wellbeing, are mixed (Fan and Koski, 2022). Some studies even showed that child marriage was associated with more positive outcomes, such as higher contraceptive use (Fan and Koski, 2022). It is unclear why results are so mixed, but it is worth noting that these studies span many countries and cultural contexts across Sub-Saharan Africa and South Asia.

Additionally, Wodon et al., 2017 reviewed a series of studies into decision-making power and found it unclear whether child marriage leads to lessened decision-making power.

This is not to say that child marriage is not harmful; it is linked to:

Girls married before the age of 15 are a less studied cohort than girls married before 18. The few studies that treat girls married before 15 as a separate group indicate negative impacts. While limited, these studies show that the following health risks are greater for girls married before the age of 15: 


Table 1 below summarizes the information I have collected on the harms of child marriage so far.

Table 1

| Harm | Girls married before the age of 15 | Girls married before the age of 18 | Sources |
|---|---|---|---|
| Physical violence | Increased risk | Increased risk | Fan and Koski, 2022: consistent results across eight studies. One study showed no effect, but it only covered a period of three months (Erulkar, 2013). |
| Sexual violence | Unclear | Unclear | Fan and Koski, 2022, citing seven studies. Some showed increased risk and some showed no effect. |
| Unwanted pregnancy | Unclear | Unclear | Fan and Koski, 2022, citing seven studies: some found increased risk, some found decreased risk, some found no effect, and some found variable effects in different countries. Only one study looked at girls married before 15 separately; it found increased risk for this group (Kamal, 2013). |
| Stillbirth and miscarriage | Unclear | Unclear | Fan and Koski, 2022, citing two studies. One found increased risk; one found no increased risk for 15-17-year-olds but increased risk for girls married before the age of 15 (Paul, 2018). |
| Use of contraception | Unclear | Unclear | Fan and Koski, 2022, citing 15 studies. Some found increased use, some found decreased use, and some found no effect. One study showed girls married under 15 were less likely to use contraception (Habyarimana and Ramroop, 2018). |
| Use of maternal health care | Unclear | Unclear | Fan and Koski, 2022, citing nine studies. Some showed no effect and some showed decreased likelihood of using maternal health care. Consistent across studies was a decreased likelihood of giving birth in a healthcare facility; however, this was not adjusted for the potentially confounding variable that most child marriage occurs in rural areas, where healthcare facilities are often far away. |
| Nutritional status | Unclear | Unclear | Fan and Koski, 2022, citing six studies. Some found increased likelihood of malnutrition, others found no effect, and one found decreased likelihood. One study found increased likelihood of malnutrition for girls married before 15 but no effect for those married between 15 and 18 (Yimer, 2016). |
| Completing less school than peers | Increased risk | Increased risk | Consistent across three studies (Delprato et al., 2015; Lloyd and Mensch, 2008; Field and Ambrus, 2008). |


Updated positions and next steps

My updated views on child marriage after doing this research are that: 

  1. The causal model that child marriage leads to harmful outcomes is not well supported, except for the metrics of increased risk of physical violence and decreased school completion (see Table 1).
  2. I should direct time into measuring:
    1. How much child marriage leads to physical violence and decreased school years completed, and attempt to quantify this harm through the most appropriate metric (e.g. QALYs or DALYs).
    2. The harms to girls married before the age of 15, and attempt to quantify this harm into the most appropriate metric (e.g. QALYs or DALYs).
  3. If the findings from 2a or 2b are significant, I should research the tractability and neglectedness of these problems.
  4. It may be better to look outside of the causal model of child marriage at more specific harms and attempt to address these directly. For example, this study found that no more than 20% of girls who dropped out of school in francophone Africa did so because of marriage or pregnancy (Lloyd and Mensch, 2008).

What I know so far about scale and tractability: marriage before 15

UNICEF estimates that around 5% of women aged 20-24 alive today were married before the age of 15 (UNICEF, 2021). The Population Council demonstrated that providing educational materials to girls, at a cost of $20 per girl, reduced the risk of girls being married. The adjusted risk ratio was 0.09 (a 91% decreased chance), with a 95% confidence interval of (0.01, 0.71), i.e. a 29% to 99% decreased chance (p<0.05). However, the results of this intervention were not consistent in Burkina Faso and Tanzania (Erulkar et al., 2017). I plan to review further studies that measure interventions to delay marriage targeted at girls under 15.
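As a sanity check on statistics like these, a risk ratio converts to a percent risk reduction as 1 − RR, and the same conversion applies to each end of the confidence interval. A minimal sketch of that arithmetic (the numbers are the point estimate and CI bounds quoted above, not additional data from the study):

```python
def percent_reduction(risk_ratio: float) -> float:
    """Convert a risk ratio (RR) to a percent risk reduction: (1 - RR) * 100."""
    return (1.0 - risk_ratio) * 100.0

# Adjusted risk ratio and 95% CI bounds from the quoted result
rr, ci_lower, ci_upper = 0.09, 0.01, 0.71

print(f"Point estimate: {percent_reduction(rr):.0f}% reduction")
# Note the CI bounds swap: the *lower* RR bound gives the *larger* reduction
print(f"95% CI: {percent_reduction(ci_upper):.0f}% to {percent_reduction(ci_lower):.0f}% reduction")
```

Running this reproduces the conversion in the paragraph above: a risk ratio of 0.09 is a 91% reduction, and the CI of (0.01, 0.71) maps to a 29%-99% reduction.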

What I will do next time

I am glad I undertook this project. I learnt a lot about child marriage, taught myself some statistical concepts, and identified things I will do differently next time to save time and develop my model more quickly.


Before committing time to a research project:

  1. List my underlying assumptions (I likely would have listed 1b)
  2. Search for sources that disagree with these assumptions (I would likely have found Fan and Koski, 2022 within the first 5 hours of doing this project, and disrupted 1b)
  3. Consider whether those assumptions still hold. 

Consensus and data

It is important not to automatically trust a position just because large institutions hold it: 

  1. As soon as I see a claim being made, no matter which authority is making this claim, trace the claim back to its underlying data (I would have traced UNICEF and UNFPA’s claims about 1b back to their source material).
  2. Thoroughly examine the methods and results sections of studies and come to my own conclusions based on the data before reading any discussion or conclusions (I would have found that 1b was weaker than I anticipated, and worked to refine my research questions). 

Find someone to report on my projects to 

I think that reporting my process and findings to someone else, and having them challenge my assumptions, would have surfaced problems faster and saved me a lot of time. I will be reaching out to fellow EAs to fill a supervisor/challenger role for future self-directed projects. Also, if you have a project, I am happy to play this role for you. My email address is


If you found this post interesting, you may also like:

How science works and how best to read scientific papers

I really enjoyed listening to Spencer Greenberg and Christie Aschwanden speak on this podcast about how slow and hard it is to determine even the simplest things using scientific methods. They also spoke about how they read scientific papers, which I found useful.

EA Forum post on violence against women and girls 

If you are interested in the broader topic of reducing gender based violence, I recommend this recent post from Akhil: What you can do to help stop violence against women and girls - EA Forum [EA · GW]. 

A promising maternal health charity start-up 

Sarah Hough and Ben Williamson launched the Maternal Health Initiative (MHI) last year through Charity Entrepreneurship. MHI aims to increase access to family planning in Sub-Saharan Africa. They have a robust theory of change, and I am excited to see the impact they make.


Thanks :)

Thanks to Ranay Padarath and Tim Fist who reviewed this post for me and gave me wonderful feedback.