Posts

How to find *reliable* ways to improve the future? 2022-08-18T12:47:04.271Z
No Animals Were Harmed 2022-04-06T05:57:08.327Z

Comments

Comment by Sjlver on Who wants to be hired? (May-September 2022) · 2022-09-12T08:46:45.080Z · EA · GW

Hi, would you be interested in AMF's software engineer positions? We have Python-based data analysis tasks that you might find fun, and I bet you could pick up the rest of the tech stack quickly. I came to AMF from a similar background as you (Python/C++ @ Google) and found that many of the skills translated well into the new environment.

Comment by Sjlver on Who wants to be hired? (May-September 2022) · 2022-09-12T08:13:58.076Z · EA · GW

Maybe check out the Operations Manager positions at AMF?

Comment by Sjlver on Who wants to be hired? (May-September 2022) · 2022-09-12T08:09:01.701Z · EA · GW

AMF is hiring software engineers. Our tech stack is a bit more Microsoft-centric than your skills (using .NET Core, SQL Server, ...), but I guess you could quickly pick up new skills and be effective. I came from quite a similar background to yours when I joined AMF.

Comment by Sjlver on Who's hiring? (May-September 2022) · 2022-09-12T07:57:10.455Z · EA · GW

The Against Malaria Foundation is hiring for several positions. We distribute bednets to protect people from malaria. We aim to be one of the world's most effective charities and hold ourselves to high standards in transparency, accountability, and efficiency.

AMF is a small team of currently ten people, so this round of hiring represents a big milestone. Anyone who joins us will make a significant difference to the organization.

Open positions:

For questions about these roles, reach out to AndrewGarner@AgainstMalaria.com (software engineers) or RMather@AgainstMalaria.com (all roles).

Comment by Sjlver on Could it be a (bad) lock-in to replace factory farming with alternative protein? · 2022-09-11T05:15:19.424Z · EA · GW

Thanks a lot for writing this up! This post contains many good thoughts; for example, I was intrigued by the thought that how we treat animals today might matter to how AI treats us in the future.

This post reminded me strongly of "How to create a vegan world" by Tobias Leenaert. In that book, Tobias argues that all progress toward veganism matters, be it people who reduce their meat consumption, the availability of good meat alternatives, moral progress, etc. Tobias compares the road to a vegan world with a long, stony, uphill path to a mountaintop. He says that it's hard to get to the top directly or using just one motivation (e.g., an exclusively moral motivation). Instead, we need every small step on the way and all sources of support that we can get.

One reason that Tobias provides is that motivation often follows action[1]. It's a lot easier to think kindly of animals after you treat them kindly. I believe that this has been true for me personally -- I feel more morally responsible toward animals (and more critical of the other forms of exploitation that you mention in the post) after having changed my diet, even though that change was partially motivated by ecological reasons.

"How to create a vegan world" changed my thinking toward a more consequentialist approach. I'm putting more emphasis on the direct consequences that an action has on animals, and less on the motivation behind this action. In fact, the book made me a bit wary of people who think that moral reasons are the only valid way of helping animals.

Overall, I think that the intuitions expressed in this post, if true, should cause us to rethink our approach to meat alternatives. However, I'm currently inclined to think that meat alternatives support rather than hinder the moral case for stopping animal exploitation.


  1. The book backs this claim with several sources; I just don't have the book at hand at the moment and have to write this from memory. ↩︎

Comment by Sjlver on Preventing an AI-related catastrophe - Problem profile · 2022-09-05T19:11:21.625Z · EA · GW

Thanks for pointing out these two places!

"You seem much more confident than I am that work on AI that is unrelated to AI safety is in fact negative in sign."

Work on AI drives AI risk. This is not equally true of all AI work, but the overall correlation is clear. There are good arguments that AI will not be aligned by default, and that current methods can produce bad outcomes if naively scaled up. These are cited in your problem profile. With that in mind, I would not say that I'm confident that AI work is net-negative... but the risk of negative outcomes is too large for me to feel comfortable.

"It seems hard to conclude that the counterfactual where any one or more of 'no work on AI safety / no interpretability work / no robustness work / no forecasting work' were true is in fact a world with less x-risk from AI overall."

A world with more interpretability / robustness work is a world where powerful AI arrives faster (maybe good, maybe bad, certainly risky). I am echoing section 2 of the problem profile, which argues that the sheer speed of AI advances is cause for concern. Moreover, because interpretability and robustness work advances AI, traditional AI companies are likely to pursue such work even without an 80000hours problem profile. This could be an opportunity for 80000hours to direct people to work that is even more central to safety.

As you say, these are currently just intuitions, not concretely evaluated claims. It's completely OK if you don't put much weight on them. Nevertheless, I think these are real concerns shared by others (e.g. Alexander Berger, Michael Nielsen, Kerry Vaughan), and I would appreciate a brief discussion, FAQ entry, or similar in the problem profile.

And now I'll stop bothering you :) Thanks for having written the problem profile. It's really nice work overall.

Comment by Sjlver on Preventing an AI-related catastrophe - Problem profile · 2022-08-31T19:19:47.864Z · EA · GW

I appreciate the response, and I think I agree with your personal view, at least partially. "AI capabilities are racing forward regardless" is a strong argument, and it would mean that AI safety's contribution to AI progress would be small, in relative terms.

That said, it seems that the AI safety field might be particularly prone to work that's risky or neutral, for example:

  • Interpretability research: interpretability is a quasi-requirement for deploying powerful models. Research in this direction is likely to produce tools that increase confidence in AI models and lead to more of them being deployed, earlier.
  • Robustness research: Similar to interpretability, robustness is a very useful property of all AI models. It makes them more applicable and will likely increase use of AI.
  • AI forecasting: Probably neutral, maybe negative since it creates buzz about AI and increases investments.

It's puzzling that there is so much concern about AI risk, and yet so little awareness of the dual-use nature of all AI research. I would appreciate a stronger discussion of how we can make AI actually safer, as opposed to more interpretable, more robust, etc.

Comment by Sjlver on Preventing an AI-related catastrophe - Problem profile · 2022-08-30T10:51:07.936Z · EA · GW

Thanks a lot for this profile!

It leaves me with a question: what is the possibility that the work outlined in the article makes things worse rather than better? These concerns are fleshed out in more detail in this question and its comment threads, but the TL;DR is:

  • AI safety work is difficult: there are lots of hypotheses, experiments are hard to design, we can't do RCTs to measure whether it works, etc. Thus, there is uncertainty even about the sign of the impact.
  • AI safety work could plausibly speed up AI development, create information hazards, be used for greenwashing regular AI companies... thereby increasing rather than decreasing AI risk.

I'd love to see a discussion of this concern, for example in the form of an entry under "Arguments against working on AI risk to which we think there are strong responses", or some content about how to make sure that the work is actually beneficial.

Final note: I hope this didn't sound too adversarial. My question is not meant as a critique of the article, but rather a genuine question that makes me hesitant to switch to AI safety work.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-29T21:15:42.536Z · EA · GW

Oh, society can delay death by a lot [1]. GiveWell computes that it costs only in the low hundreds of dollars to delay someone's death by a year. I think this is something very meaningful to do; it generates a lot of happiness and eliminates a lot of suffering.

My original post is about how we could do even better, by doing work targeted at the far future, rather than work in the global health space.

But these abstract considerations aside: I'm sorry to hear about the death of your mother and the Parkinson's in your family. It is good to read that you seem to be coping well and spending a lot of time in the forest. Thank you for your thoughts.


  1. Whether we can delay death indefinitely depends on many things, e.g., whether you believe in sentient digital beings, but even that might be possible. ↩︎

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-29T21:03:48.374Z · EA · GW

It's an interesting question to what degree AI and related technologies will strengthen offensive vs defensive capabilities.

You seem to think that they strengthen offensive capabilities a lot more, leading to "ever larger threats". If true, this would be markedly different from other areas. For example, in information security, techniques like fuzz testing led to better exploits, but also made software a lot safer overall. In biosecurity, new technologies contribute to new threats, but also speed up detection and make vaccine development cheaper. On the 80000hours.org podcast, Andy Weber discusses how bioweapons might become obsolete. Similar trends might apply to AI.

Overall, it seems this is not such a clear case as you believe it to be.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-29T12:36:47.458Z · EA · GW

Why do you think that?

Your philosophy implies (if I understand correctly) that we should be indifferent between being alive and dead, but I've never once encountered a person who was indifferent. That would have very strange implications. The concepts of happiness and suffering would be hard to define in such a philosophy...

If you want me to benefit from your answer, I think you'd need to explain a bit more what you mean, since the answer is so detached from my own experience. And maybe write more directly about the practical implications.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-29T12:29:57.144Z · EA · GW

Thanks!

It's clear to me that I want to help people. I think my problem isn't that help is abstract. My current work is in global health, and it's a great joy to be able to observe the positive effects of that work.

My question is about what would be the best use of my time and work. I consider the possibility that this work should target improving the far future, but that kind of work seems intractable, indirect, conditional on many assumptions, etc. I'd appreciate good pointers to concrete avenues for improving the future that don't suffer from these problems. Helping old ladies and introspection probably won't help me with that.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-29T12:18:35.827Z · EA · GW

Why do you think this is true?

Currently, only a few organizations can build large AI models (it costs millions of dollars in energy, computation, and equipment). This will remain the case for a few years. These organizations do seem interested in AI safety research. A lot of things will happen before AI is so commonplace that small actors like "amateur civilian hacker boys" will be able to deploy powerful models. By that time, our capabilities for safety and defense will look quite different from today -- largely thanks to people working in AI safety now.

I think there is a case for defending against the use of AI by malicious actors. I just don't follow your argument that this would invalidate all of AI safety research.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-29T12:11:52.654Z · EA · GW

Cool! Thanks for the link to these papers. I'll study them.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-20T10:03:39.190Z · EA · GW

Thank you. This is valuable to hear.

Maybe my post simplified things too much, but I'm actually quite open to learn about possibilities for improving the long term future, even those that are hard to understand or difficult to talk about. I sympathize with longtermism, but can't shake off the feeling that epistemic uncertainty is an underrated objection.

When it comes to your linked question about how near-termist interventions affect the far future, I sympathize with Arepo's answer. I think the effect of many such actions decays towards zero somewhat quickly. This is potentially different for actions that explicitly try to affect the long-term, such as many types of AI work. That's why I would like high confidence in the sign of such an action's impact. Is that too strong a demand?

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-20T09:47:14.424Z · EA · GW

I don't mean to set an unreasonably high bar. Sorry if my comment came across that way.

It's important to use the right counterfactual because work for the long-term future competes with GiveWell-style charities. This is clearly the message of 80000hours.org, for example. After all, we want to do the most good we can, and it's not enough to do better than zero.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-19T15:34:16.633Z · EA · GW

Thanks a lot for your responses!

I share your sentiment: there must be some form of alignment work that is not speeding up capabilities, some form of longtermist work that isn't risky... right?

Why are the examples so elusive? I think this is the core of the present forum post.

15 years ago, when GiveWell started, the search for good interventions was difficult. It required a lot of research, trials, reasoning etc. to find the current recommendations. We are at a similar point for work targeting the far future... except that we can't do experiments, don't have feedback, don't have historical examples[1], etc. This makes the question a much harder one. It also means that "do research on good interventions" isn't a good answer either, since this research is so intractable.


  1. In this podcast episode, Ian Morris discusses to what degree history is contingent, i.e., to what degree past events have shaped the future over long timescales. ↩︎

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-19T14:28:06.966Z · EA · GW

"My point was that the alignment goal, from the human perspective, is an enslavement goal, whether the goal succeeds or not."

Really? I think it's about making machines that have good values, e.g., are altruistic rather than selfish. A better analogy than slavery might be raising children. All parents want their children to become good people, and no parent wants to make slaves out of them.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-19T14:23:43.398Z · EA · GW

I'm coming back after thinking a bit more about improving human genes. I think there are three cases to consider:

  1. Improving a living person, e.g., stem cell treatments or improved gut bacteria: These are firmly in the realm of near-term health interventions, and so we should compare their cost-effectiveness to that of bednets, vaccines, deworming pills etc. There is no first-order effect on the far future.

  2. Heritable improvements: These are actually similar, since the number of people with a given gene stays constant in a stable population (each woman has two children on average, one of whom inherits the gene, so there is one copy in each generation[1]; see the short sketch after this list). This holds unless there's a fitness advantage, but human fitness seems increasingly disconnected from our genes. We also have a long generation time of ~30 years, so genes spread slowly.

  3. Wild stuff: Gene drives, clones, influencing the genes on a seed spaceship... I think these again belong to the intractable, potentially-negative interventions.
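
To make the "one copy per generation" point concrete, here is a minimal simulation sketch (my own illustration, not part of the original comment). It assumes a stable population in which each carrier has two children and each child inherits the allele with probability 1/2, which makes the number of copies a critical branching process whose average stays at one:

```python
import random

def average_copies(generations=10, trials=100_000):
    """Average number of copies of a rare allele, starting from one carrier,
    when each carrier has two children and each child inherits the allele
    with probability 1/2."""
    totals = [0.0] * (generations + 1)
    for _ in range(trials):
        copies = 1  # one initial carrier
        totals[0] += copies
        for g in range(1, generations + 1):
            # each copy is passed to each of two children with probability 1/2
            copies = sum(1 for _ in range(2 * copies) if random.random() < 0.5)
            totals[g] += copies
    return [round(t / trials, 3) for t in totals]

print(average_copies())  # the averages hover around 1.0 in every generation
```

(Being a critical branching process, any single lineage eventually loses the allele by chance; without a fitness advantage, the improvement persists at best, it does not spread.)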

To sum up, I don't think human gene improvement is one of the reliable ways to improve the future that I'm looking for in this question :(


  1. Maybe that would be different for inheritable bacterial populations... I don't know how these work. ↩︎

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-19T07:53:46.469Z · EA · GW

(This is a separate reply to the "AI enslavement" point. It's a bit of a tangent, feel free to ignore.)

"It's clear to me that the AI alignment problem is a robot-enslavement problem as well, but it's a trope, fairly obvious."

I don't follow. In most AGI scenarios, the AGIs end up smarter than humans. Because of this, they could presumably break out of any kind of enslavement (cf. the AI containment problem). It seems to me that an AGI world works only if the AGI is truly aligned (as in, shares human values without resentment toward humans). That's why I find it hard to envision a world where humans enslave sentient AGIs.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-19T07:49:58.027Z · EA · GW

Thank you for this detailed reply. I really appreciate it.

I overall like the point of preventing harm. It seems that there are two kinds: (1) small harms like breaking a glass bottle. I absolutely agree that this is good, but I think that typical longtermist arguments don't apply here, because such actions do not have a lasting effect on the future. (2) large, irreversible harms like ocean pollution. Here, I think we are back to the tractability issues that I write about in the post. It is extremely difficult to reliably improve ocean health. Much of the work is indirect (e.g., write a book to promote veganism ⇒ fewer people eat fish ⇒ reduced demand causes less fishing ⇒ fish populations improve).

Projects that preserve knowledge for the future (like the Lunar Library) are probably net positive. I agree with you on this. However, the scenarios where these projects have a large impact are very exotic; many improbable conditions would need to happen together. So again, this is very indirect work, and it's quite likely to have zero benefit.

Improving human genes and physical experiences is intriguing. I haven't thought much about it before. Thank you for the idea. I'll do more thinking, but would like to mention that past efforts in this area have often gone horribly wrong, for example the eugenics movement in the Nazi era. There is also positive precedent, though: I believe GMO crops are probably a net win for agriculture.

In the last part of your answer, you mention coordination problems, misaligned incentives, errors... I think we agree 100% here. These problems are a big part of why I think work for improving the far future is so intractable. Even work for improving today's world is difficult, but at least this work has data, experiments, and fast feedback (like in the deworming case).

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-18T20:06:10.106Z · EA · GW

I'm quite agnostic here (or maybe I don't fully understand your comment).

My question is about ways to improve the future. Presumably, improvement implies that people are treated morally. Depending on the ethical framework, "people" might include sentient AIs... but I see that debate as outside the scope of my question.

I'd be happy to receive responses with reliable ways to improve the future under any value framework, including frameworks where AIs are sentient (but I'd ask for more thorough explanations if the framework was unknown to me or particularly outlandish).

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-18T18:11:38.879Z · EA · GW

"Failed deworming is not causing direct harm. It is still better to give money to ineffective deworming than to do nothing."

Apologies in advance for being nitpicky. But you could consider the counterfactual where the money would instead go to another effective charity. A similar point holds for AI safety outreach: it may cause people to switch careers and move away from other promising areas, or cause people to stop earning to give.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-18T18:07:12.082Z · EA · GW

This is valuable, thank you. I really like the point on early warning systems for pandemics.

Regarding the bioweapons convention, I intuitively agree. I do have some concerns about how it could tip power balances (akin to how abortion bans tend to increase illegal abortions and put women at risk, but that's a weak analogy). There is also a historical example of how the Geneva Disarmament Conference inspired Japan's bioweapons program.

Predicting how fast powerful AI is going to be developed: That one seems value-neutral to me. It could help regular AI as much as AI safety. Why do you think it's 10x more likely to be beneficial?

AI alignment research and AI governance: I would like to agree with you, and part of me does... I've outlined my hesitations in the comment below.

Comment by Sjlver on How to find *reliable* ways to improve the future? · 2022-08-18T17:55:33.221Z · EA · GW

Several people whom I respect hold the view that AI safety might be dangerous. For example, here's Alexander Berger tweeting about it.

A brief list of potential risks:

  • Conflicts of interest: Much AI safety work is done by companies that develop AI. Max Tegmark makes this analogy: what would we think if a large part of climate change research were done by oil companies, or a large part of lung cancer research by tobacco companies? This situation probably makes AI safety research weaker. There is also the risk that it improves the reputation of AI companies, so that their non-safety work can advance faster and more boldly. And it means safety is delegated to a subteam rather than being everyone's responsibility (unlike, say, information security).

  • Speeding up AI: Even well-meaning safety work likely speeds up the overall development of AI. For example, interpretability seems really promising for safety, but at the same time it is a quasi-necessary condition to deploy a powerful AI system. If you look at (for example) the recent papers from anthropic.com, you will find many techniques that are generally useful to build AIs.

  • Information hazard: I admire work like the Slaughterbots video from the Future of Life Institute. Yet it has clear infohazard potential. Similarly, Michael Nielsen writes "Afaict talking a lot about AI risk has clearly increased it quite a bit (many of the most talented people I know working on actual AI were influenced to by Bostrom.)"

  • Other failure modes mentioned by MichaelStJules:

    1. creating a false sense of security,
    2. publishing the results of the GPT models, demonstrating AI capabilities and showing the world how much further we can already push it, and therefore accelerating AI development, or
    3. slowing AI development more in countries that care more about safety than those that don't care much, risking a much worse AGI takeover if it matters who builds it first.

Comment by Sjlver on Are AGI timelines ignored in EA work on other cause areas? · 2022-08-18T15:11:24.502Z · EA · GW

In my day-to-day work for AMF[1], AGI timelines don't matter. My work is about making bednet distributions more efficient and transparent, and AGIs simply don't help with that yet.

Of course, I'm open to suggestions regarding how my work should be influenced by AGI timelines (apart from the obvious "stop it and work on AI safety instead").

There are other technological developments that I think will affect AMF's work, for example vaccines, gene drives, and new types of insecticides. We follow these closely. AGI itself will certainly affect bednet distributions too, once it arrives. Until then, the right thing to do IMO is to continue working hard to help as many people as possible in their fight against malaria today.


  1. I'm speaking for myself here and not on behalf of AMF. ↩︎

Comment by Sjlver on Leaning into EA Disillusionment · 2022-07-27T16:28:58.022Z · EA · GW

To be honest, this thread does not fit my view: talking about "the community" as a single body with an "official" stance, talking about "EA being utilitarian"...

EA is, at least for me, a set of ideas much more than an identity. Certainly, these ideas influence my life a lot, have caused me to change jobs, etc.; yet I would still describe EA as a diverse group of people with many stances, backgrounds, religions, ethical paradigms, united by thinking about the best ways for doing good.

In my life, I've always been interested in doing good. I think most humans are. At some point, I've found out that there are people who have thought deeply about this, and found really effective ways to do good. This was, and still is, very welcome to me, even if some conclusions are hard to digest. I see EA ideas as ways to get better at doing what I always wanted, and this seems like a good way to avoid disillusionment.

(Charles_Guthmann, sorry for having taken your thread into a tangent. This post and many of the comments hinge somewhat on "EA as part of people's identity" and "EA as a single body with an official stance", and your thread was where this became most apparent to me.)

Comment by Sjlver on Open Philanthropy Shallow Investigation: Telecommunications in LMICs · 2022-07-25T08:17:06.603Z · EA · GW

Very interesting read!

I'd just like to add that all other humanitarian work would become easier if there were better network coverage and access to smartphones. In my work with AMF, we put a strong emphasis on electronic collection of mosquito net campaign data, and we often work around problems related to network coverage / device availability / ...

To give a concrete example, this year's mosquito net campaign in Guinea has faced delays because it was difficult to obtain the several thousand tablets needed for electronic data collection, and because the system did not work well in areas with no network coverage.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-17T13:30:44.227Z · EA · GW

This is an updated version of my initial comment, hopefully more polite and fact-based.

I would agree with Question Mark that it is worth exploring opportunities to reduce violence against men, in addition to what the present post does for violence against women. Like Question Mark writes, the scale of this problem is large. Presumably, males experience violence more often than females, albeit for different reasons.

That said, I think the comparisons put forward by Question Mark are creating a biased impression. Here are a few points to keep in mind for a balanced picture:

  • While this post focuses on interventions to prevent intimate partner violence, the homicide statistics by gender look at a different type of violence. This is not an apples-for-apples comparison. If we instead consider sexual violence and intimate partner violence, we find that 90% of (US) adult rape victims are female, and that women are more affected than men in all categories of intimate partner violence.

    Keeping the focus on intimate partner violence rather than general violence also makes interventions more tractable. General violence / homicide are broad topics with complex reasons for why men are more affected, including reasons that have to do with male behavior.

  • Question Mark's comment also compares female genital mutilation (FGM) with male circumcision. My impression was that the comment considered them comparably harmful (but maybe this is just an uncharitable reading on my part; apologies if so). I believe that there are good reasons to think of FGM as a larger problem, such as:

With these considerations in mind, I think that interventions focused specifically on violence against girls and women make sense. Girls and women often suffer from particularly gruesome forms of violence, which are also tractable to address, as shown by the interventions in this post.

Comment by Sjlver on The Strange Shortage of Moral Optimizers · 2022-06-16T05:24:37.790Z · EA · GW

Whether you'd enjoy the book and benefit from it depends strongly on your background, I think.

To me, this was a good read because I learned about a broad range of interventions for helping people -- graduation programs and child sponsorships being probably the most notable examples. The book really changed my mind on child sponsorships. I had thought of them as a rather high-overhead intervention that is popular because it appeals to emotion to attract donors' money... but now I think they can be cost-effective when done well.

That said, if your goal is to learn about various effective interventions (beyond the few that GiveWell writes about), then a good and free resource would be The Life You Can Save book.

The second reason to recommend the book is its good discussion on "flourishing", that is, a holistic view of health, wellbeing, and prosperity. Finally, a third reason to read it is to get a Christian perspective on the subject, or give the book to Christian friends.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-09T12:55:10.626Z · EA · GW

These issues are indeed difficult to talk about. And I admit that I haven't been very friendly in this discussion so far. Apologies for that.

Even with nuance, the difference between FGM and male circumcision seems staggering to me. Here's an example of a study that estimates a 3% quality-of-life loss due to FGM. Over an entire life, that amounts to more than 1 QALY lost due to the mutilation. Granted, there are less severe forms... but I find 1 QALY a horrifying amount.
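
(For transparency, here is the rough arithmetic behind "more than 1 QALY". The number of affected life-years is my own assumption, based on FGM typically being performed in childhood:

$$0.03 \times 40 \text{ years} \approx 1.2 \text{ QALYs}, \qquad 0.03 \times 70 \text{ years} \approx 2.1 \text{ QALYs}$$

so even at the low end the loss exceeds one QALY.)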

Male circumcision, on the other hand, has positive effects as well as negative ones. I don't want to downplay the negative effects... but circumcision is probably legal nearly everywhere because these effects are small.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-09T05:55:28.680Z · EA · GW

... which arguably gives circumcised males the benefit of longer sex ;-)

More seriously: FGM can cause severe bleeding and problems urinating, and later cysts, infections, as well as complications in childbirth and increased risk of newborn deaths (WHO).

I stand by my point, even after all the downvotes: claiming that FGM is comparable in harm to male circumcision is an offense to all the FGM survivors out there.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-08T14:32:34.609Z · EA · GW

Doesn't this get the burden of proof wrong? I find it awful that we so often ask victims to prove that they really are victims.

After this rant, here you go:

rainn.org has statistics for the US. They say for example that:

  • 82% of all juvenile rape victims are female.
  • 90% of adult rape victims are female.
  • About 3% of American men have experienced an attempted or completed rape in their lifetime, vs ~19% of women.

https://ncadv.org/STATISTICS has data specifically on intimate partner violence, that is, the subject of the interventions in this post. Women are more affected than men in all categories of violence listed there.

These are just two data sources... but that's enough gruesome stats for a day.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-08T08:57:56.617Z · EA · GW

Edit: see below for a version of this comment that is hopefully better, more balanced, and more charitable. I leave the original version here for posterity, and to remind myself to write less emotionally ;)

It is very clear that violence against men is less of an issue than violence against women. It is probably good for this question to be discussed, but let's not belittle violence against women by implying that it would be comparable in magnitude to violence against men. In the same vein, comparing female genital mutilation to forced circumcision is... let's say ignorant of the effects of FGM.

Akhil's post does a good job of explaining how big the problem of violence against women is, and I have never seen anything that comes close for violence against men. And among my own friends, the people who have experienced serious violence are all women.

Comment by Sjlver on The Strange Shortage of Moral Optimizers · 2022-06-08T08:49:35.470Z · EA · GW

A bit more generally: I think we can look at religions as a set of Alt-EA movements.

Most religions have strong prescriptions and incentives for their members to do good. Many of them also advocate for donating a part of one's income.

All these religions also have members who think hard about how to do the most good in a cost-effective way. Here, "good" follows the definition of the religion and might include aspects such as bringing people closer to God. However, it is usually correlated with EA notions of utility or wellbeing or freedom from suffering. And indeed one can find faith-based organizations with large positive effects: for example, AMF could not distribute its bednets without local partner organizations, and in that list are many faith-based ones like IMA or World Vision.

I'm not claiming that the effect of religion overall is robustly positive -- that's a very difficult question to answer -- but that EA-like intentions, and sometimes actions, can be found in many religious people and organizations.

Comment by Sjlver on The Strange Shortage of Moral Optimizers · 2022-06-08T08:27:55.036Z · EA · GW

I've read one alternative approach that is well written and made in good faith: Bruce Wydick's book "Shrewd Samaritan".

It's a Christian perspective on doing good, and arrives at many conclusions that are similar to effective altruism. The main difference is an emphasis on "flourishing" in a more holistic way than what is typically done by a narrowly-focused effective charity like AMF. Wydick relates this to the Hebrew concept of Shalom, that is, holistic peace and wellbeing and blessing.

In practical terms, this means that Wydick more strongly (compared to, say, GiveWell) recommends interventions that focus on more than one aspect of wellbeing. For example, child sponsorships or graduation approaches, where poor people get an asset (cash or a cow or similar) plus the ability to save (e.g., a bank account) plus training.

I believe that these approaches fare pretty well when evaluated, and indeed there are some RCTs evaluating them. These programs are more complex to evaluate, however, than programs that do one thing, like distributing bednets. That said, the rationale that "cash + saving + training > cash only" is intuitive to me, and so this might be an area where GiveWell/EA is a bit biased toward stuff that is more easily measurable.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-07T18:19:50.605Z · EA · GW

A clarification, after having read more about the interventions:

The studies asked women whether they experienced various forms of intimate partner violence over the last year. If a woman reported any form of violence, that was coded as a "case of IPV". Multiple or repeated experiences within the last year do not change the coding; it is still just one "case of IPV". The "Unite for a Better Life" intervention averts one case per US$194.

This means one woman more who did not experience violence in the last year. Which probably also means that she is in a lower-risk relationship, and that this state will persist for some time in the future.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-07T18:05:34.108Z · EA · GW

In addition to the difference in VAWG burden, there are also differences in implementation costs. Interventions will be cheaper in low-income countries than in high-income countries.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-07T16:03:21.359Z · EA · GW

Thank you for the explanation! I sincerely appreciate it, since I realized that my question could be perceived as trolling or nitpicking on the cost-effectiveness estimates. My intention is rather to understand the impact of these interventions better. Also, these DALY calculations are just hard (at least for me).

I think the answer makes sense.

Do you have a reference to the model that you've used (pardon if I missed the link)? I would be interested to look at it in a bit more detail. For example, my gut feeling is that even a single instance or a few instances of IPV might already cause chronic damage; so to avert this damage, we would be more interested in IPV-free lives than in IPV-free years.

EDITED to add: On the other hand, it would seem likely that the effect of an intervention lasts for longer than a year, and thus that the beneficiaries would benefit from a reduced IPV risk for much of their lives.

Comment by Sjlver on New cause area: Violence against women and girls · 2022-06-07T14:47:52.468Z · EA · GW

Just a quick question: You write that the cost for 1 year free from IPV is about US$194, and that this means the intervention costs US$78.4/DALY.

If I understand this correctly, it would mean that 1 year free from IPV is about 2.5 DALYs? Is that correct? It seems to imply that experiencing IPV is worse than death... which might well be true, but the more likely explanation is that I misunderstand the DALY conversion. Could you clarify?
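
(Spelling out the back-of-the-envelope arithmetic behind the 2.5 figure, using only the two numbers above:

$$\frac{\$194 \text{ per IPV-free year}}{\$78.4 \text{ per DALY}} \approx 2.5 \text{ DALYs averted per IPV-free year}$$

Since a year of life lost to death counts as one DALY, and a year lived with a disability counts as at most one, a per-year weight above 1 is what puzzles me.)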

Comment by Sjlver on Against immortality? · 2022-04-29T08:11:24.499Z · EA · GW

As a software engineer, I resonate with this post. In software engineering, I regularly have to make the decision of whether to improve existing software or replace it with a new solution.

Obvious caveats: humans are incomparable to pieces of software, and human genes evolve much more slowly than JavaScript frameworks.

I think software engineers succumb to the temptation to "start with a green field and clean slate" a bit too often. We tend to underestimate the value that lies in tried and tested software (and overestimate the difficulty of iterative improvements). Similarly, I think that I personally might underestimate the value, wisdom, and moral weight of existing people, particularly if we could solve most health problems. Yet I do believe that after some number of life-years, the value of a new life -- a newborn who can learn everything from scratch and benefit from all the goodness that humanity has accumulated before its birth -- exceeds the value of extending an existing life.


The trade-off is even more salient in animal agriculture. Obvious caveat: humans are incomparable to cows, and human genes evolve much more slowly than cow genes.

Because cows are bred to a quite extreme degree, a cow born today has substantially "better" genes than a cow born 10 years ago. This is one of the reasons why it is economically profitable to kill a dairy cow after 4-6 years rather than let it live to its full lifespan.

Comment by Sjlver on Ideal governance (for companies, countries and more) · 2022-04-15T11:02:52.082Z · EA · GW

There are a lot of approaches in software engineering that are really answers to governance problems. These got started, perhaps, with https://agilemanifesto.org/.

While the Manifesto only states a few generic principles, there are more applied approaches like Scrum, which mandate specific roles, processes, and decision mechanisms.

I'm far from being an expert here, so I won't add many more links... ask your favorite "Agile Coach"; they will point you to a lot of research into which of these approaches work and which ones don't. This research exists because questions of software engineering governance have a direct impact on the success of organizations. Software engineering is also a space where there is a lot more change and innovation than in political governance.

Comment by Sjlver on A tough career decision · 2022-04-11T11:55:09.310Z · EA · GW

Oh... and for some companies, all you need to do to start a community is get some EA-related stickers that people can put on their laptops ;-)

(It's a bit tongue-in-cheek, but I'm only half joking... most companies have things like this. At Google, laptop stickers were trendy, fashionable, and in high demand. I'm sure that after being at Xanadu for a while, you'll find an idea that works well for this particular company)

Comment by Sjlver on A tough career decision · 2022-04-11T11:50:25.828Z · EA · GW

At Google, most employees who came in touch with EA-related ideas did so thanks to Google's donation matching program. Essentially, Google has a system where people can report their donations, and then the company will donate the same amount to the same charity (there's an annual cap, but it's fairly high, like US$ 10k or so).

There is a yearly fundraising event called "giving week" to increase awareness of the donation matching. On multiple occasions during this week, we had people from the EA community come and give talks.

When considering starting an EA community, I might look for similar ideas to the ones mentioned above, in order to try making this part of company culture. There are selfish reasons for companies to do this sort of thing (employee satisfaction, tax "optimization"). Also, there might be an existing culture of "tech talks" that you can leverage to bring up EA topics.

Comment by Sjlver on A tough career decision · 2022-04-11T05:01:14.116Z · EA · GW

A lot of what you have written resonates with me. I think it is amazing that you have thought so deeply about this decision and spoken with many people. In that sense, it looks like a great decision, and I hope that the outcome will be fulfilling to you.

After I finished my PhD, I was torn between doing something entrepreneurial and altruistic, and accepting a "normal" well-paid job. In the end, I decided to accept an offer from Google for reasons similar to yours. It fit well with my growing relationship with my now-wife, and the geographic location was perfect.

I stayed at Google for three years and learned a lot during this time, both about software engineering and about effective altruism. After these three years, I felt ready for a job where I could have a more direct positive impact on the lives of people. I think I was also much better equipped for it; not only in terms of software engineering techniques... Google is also a great place to learn about collaboration across teams, best practices in almost any area, HR processes, communication, etc. One aspect that was absolutely fantastic: I had no pressure at all to leave Google. It was a good job that I could remain in for as long as I wanted, until the perfect opportunity came along.

I could imagine that a few years from now, you might similarly be in a good position to re-evaluate your decision. You will probably be much more stable financially, have a lot more negotiation power, and a lot less time pressure. Plus, I'm sure you'll be so good then that IBM/Google/Amazon can no longer ignore you ;-)

Comment by Sjlver on Tips for asking people for things · 2022-04-06T06:07:29.160Z · EA · GW

Just wanted to assure you that this has been really useful for me, and will improve the way I'll approach people at EA Global London.

To me, the message does not come across as entitled or as a status move at all. It is just plain helpful advice.

Comment by Sjlver on When did EA miss a great opportunity to do good? · 2022-03-17T07:06:17.693Z · EA · GW

My impression is that EA could do more to make existing philanthropy more effective.

There are many existing charities that process billions of dollars[1] per year. Many of these do not focus on effectiveness or have only recently become interested. I believe that a lot of good could result from making these charities more effective at what they do, or slightly moving their cause area to one that has more proven benefits.

My feeling is that EA has not interacted much with existing "classical" charities. Maybe there are differences in worldview that have prevented this? For example, many existing charities are faith-based, whereas EA seems explicitly secular. I think it would be desirable to bridge these worldview gaps if it allows EA to leverage the existing resources and networks of classical charities.

Several classical charities that I know of have recently become interested in effectiveness (and efficiency) due to donors caring more about these values. This might be another way for EA to have a large effect: influence donors so that they demand more effectiveness from their charities of choice. Organizations like The Life You Can Save do this to some extent, but focus on a few existing good charities rather than expanding the scope to the big-but-not-necessarily-effective players.

Another way of achieving this goal might be to influence development spending of countries more strongly. I know several cases where countries give part of their development budget to classic charities (e.g., Helvetas in Switzerland, Brot für die Welt in Germany). EA might be able to exert more influence in this area, similar to what EAF did for Zurich.

  1. Sorry for the sloppy imprecision here... I hope that this post conveys my idea even without real numbers. ↩︎

Comment by Sjlver on Some thoughts on vegetarianism and veganism · 2022-02-16T19:28:53.981Z · EA · GW

My experience is similar to Luke's.

One of the main benefits of becoming vegan was that it removed a cognitive dissonance from my life -- a sadness at the back of my head because my actions had been different from my values. After becoming vegan, my lifestyle and my convictions were more aligned. This was quite a liberating and joyful feeling.

I think that becoming vegan should be a win-win decision. If a vegan diet feels like a burden, or distracts from more important issues, or causes health problems, then by all means stop and eat whatever you like. But chances are that, after a while, you become a happier and healthier person.

My last bit of advice is to not be too dogmatic about veganism. The animal industry is surprisingly elastic*, and so each egg not bought will reduce demand and cause some fraction of a statistical chicken to not be born and not suffer. You don't have to be 100% vegan to have an impact ;-)


* I wish I had good numbers to back this claim...

Comment by Sjlver on Should GMOs (e.g. golden rice) be a cause area? · 2022-02-03T08:41:56.367Z · EA · GW

For German speakers, there is a fantastic podcast episode that discusses GMOs and potential altruistic uses such as Golden Rice: https://erklärmir.at/2021/06/01/167-erklaer-mir-gentechnik-martin-moder/

I'm posting this here because the episode radically changed my mind. I used to be very cautious when it comes to GMOs, full of reservations about potential unforeseen consequences for ecosystems. After hearing this episode, I understood how the advantages of GMOs massively outweigh the risks I was concerned about.

Comment by Sjlver on Global poverty questions you'd like answered? · 2022-01-28T22:22:57.826Z · EA · GW

At the Against Malaria Foundation, bednet distributions will soon start in regions where AMF has not worked before: Nigeria, various DRC provinces, and one other country to be announced. If you can choose the country for your papers, you could probably help AMF by focusing on one of these?

I think AMF would be particularly interested in questions about how poverty affects bednet distribution. For example, people might have seasonal jobs, which means they regularly migrate to another part of the country. This might cause them to miss the bednet distributions in their home area, or they might not be able to take their nets with them when moving. Another important aspect is housing: do people have houses with fixed sleeping spaces and solid walls where they can hang a net?