Comment by UriKatz on Alice Crary's philosophical-institutional critique of EA: "Why one should not be an effective altruist" · 2021-11-29T19:37:25.746Z · EA · GW

I am responding to the newer version of this critique, found here:

Someone needs to steelman Crary's critique for me, because as it stands I find it very weak. The way I understand this article:

  1. The institutional critique - Basically claims 2 things: a) EAs are searching for their keys only under the lamppost. This is a great warning for anyone doing quantitative research and evaluation. EAs are well aware of it and try to overcome the problem as much as possible; b) EA is addressing symptoms rather than underlying causes, i.e. distributing bed-nets instead of overthrowing corrupt governments. This is fair as far as it goes, but the move to tackling underlying causes does not necessarily require abandoning the quantitative methods EA champions, and it is not at all clear that we shouldn't attempt to alleviate symptoms as well as causes.

  2. The philosophical critique - Essentially amounts to arguing that there are people critical of consequentialism and abstract conceptions of reason. More power to them, but that fact in itself does not defeat consequentialism, so insofar as EA relies on consequentialism, it can continue to do so. A deeper dive is required to understand the criticisms in question, but there is little reason for me to assume at this point that they will defeat, or even greatly weaken, consequentialist theories of ethics. Crary actually admits that in academic circles they fail to convince many, but dismisses this because in her opinion it is "a function of ideological factors independent of [the arguments'] philosophical credentials".

  3. The composite critique - adds nothing substantial except to pit EA against woke ideology. I don't believe these two movements are necessarily at odds, but there is a power struggle going on in academia right now, and it is clear which side Crary is on.

  4. EA's moral corruption - EA is corrupt because it supports global capitalism. I am guilty as charged on that count, even as I see capitalism's many, many flaws and the need to make some drastic changes. Still, just like democracy, it is the least bad option until we come up with something better. Working within this system to improve the lives of others and solve some pressing worldwide problems seems perfectly reasonable to me.

As an aside I will mention that attacking "earning to give" without mentioning the concept of replaceability is attacking nothing at all. When doing good, try to be irreplaceable; when earning money on Wall Street, make sure you are completely replaceable. You might earn a little less, but you will minimize your harm.

Finally, it is telling that Crary does not once deal with longtermist ideas.

Comment by UriKatz on EA for Jews: Launch and Call for Volunteers · 2021-10-26T21:50:04.577Z · EA · GW

What would you say are the biggest benefits of being part of an EA faith group?

Comment by UriKatz on [deleted post] 2021-10-06T15:05:02.725Z

From a broad enough perspective no cause area EA deals with is neglected. Poverty? Billions donated annually. AI? Every other startup uses it. So we start narrowing it down: poverty -> malaria -> bednets.

There is every reason to believe mental health has neglected yet tractable and highly impactful areas, because of the size of the problem as you outline it, and because mental health touches all of us all the time in everything we do (when by health we don’t just mean the absence of disease but the maximization of wellbeing).

I think EA concepts are here to challenge us. Being a clinical psychiatrist is amazing, you can probably help hundreds of people. Could you do more? What’s going on in other parts of the globe? Where is humanity headed in the future? This challenge does not have to be burdensome, it can be inspiring. It should certainly not paralyze you and prevent you from doing any good at all. Like a mathematician obsessed with proving a theorem, or a physicist relentlessly searching for the theory of everything, they also do other work, but never give up the challenge.

Comment by UriKatz on [deleted post] 2021-10-05T19:26:41.284Z

Hey @Dvir, mental health is a (not-professional) passion of mine so I am grateful for any attention given to it in EA. I wonder if you think a version 2.0 of your pitch can be written, which takes into account the 3 criteria below. Right now you seem to have nailed down the 1st, but I don't see the case for 2 & 3:

  1. Great in scale (it affects many lives, by a great amount)
  2. Highly neglected (few other people are working on addressing the problem)
  3. Highly solvable or tractable (additional resources will do a great deal to address it)

I think that is what HLI is trying to do.

Comment by UriKatz on AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project · 2021-10-05T01:30:07.291Z · EA · GW

I am not sure about the etiquette of follow up questions in AMAs, but I’ll give it a go:

Why does being mainstream matter? If, for example, s-risk is the highest priority cause to work on, and the work of a few mad scientists is what is needed to solve the problem, why worry about the general public’s perception of EA as a movement, or EA ideas? We can look at growing the movement as growing the number of top performers and game-changers, in their respective industries, who share EA values. Let the rest of us enjoy the benefit of their labor.

Comment by UriKatz on Why I am probably not a longtermist · 2021-09-25T12:53:18.104Z · EA · GW

Well, it wouldn’t work if you said “I want a future with less suffering, so I am going to evaluate my impact based on how many paper clips exist in the world at a given time”. Bostrom selects collaboration, technology and wisdom because he thinks they are the most important indicators of a better future and reduced x-risk. You are welcome to suggest other parameters for the evaluation function of course, but not every parameter works. If you read the analogy to chess in the link I posted, it will become much clearer how Bostrom is thinking about this.

(if anyone reading this comment knows of evolutions in Bostrom’s thought since this lecture I would very much appreciate a reference)

Comment by UriKatz on Why I am probably not a longtermist · 2021-09-24T17:06:42.934Z · EA · GW

Hi Khorton,

If by “decide” you mean control the outcome in any meaningful way I agree, we cannot. However I think it is possible to make a best effort attempt to steer things towards a better future (in small and big ways). Mistakes will be made, progress is never linear and we may even fail altogether, but the attempt is really all we have, and there is reason to believe in a non-trivial probability that our efforts will bear fruit, especially compared to not trying or to aiming towards something else (like maximum power in the hands of a few).

For a great exploration of this topic I refer to this talk by Nick Bostrom. The tl;dr is that we can come up with evaluation functions for states of the world that, while not yet being our desired outcome, are indications that we are probably moving in the right direction. We can then figure out how we get to the very next state, in the near future. Once there, we will chart a course for the next state, and so on. Bostrom singles out technology, collaboration and wisdom as traits humanity will need a lot of in the better future we are envisioning, so he suggests we can measure them with our evaluation function.
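The steering idea above can be caricatured as a greedy search over world-states, like a chess engine picking the move with the best evaluation. A minimal sketch, where the three traits and all the scores are purely illustrative assumptions of mine, not numbers from the talk:

```python
# Toy model of an "evaluation function" over world-states.
# Traits and scores are illustrative assumptions only.

def evaluate(state):
    """Score a world-state by the traits a better future is expected to need."""
    return state["technology"] + state["collaboration"] + state["wisdom"]

def next_step(candidates):
    """Greedily pick the reachable near-future state with the best score."""
    return max(candidates, key=evaluate)

options = [
    {"technology": 5, "collaboration": 1, "wisdom": 1},  # unchecked tech race
    {"technology": 4, "collaboration": 3, "wisdom": 2},  # balanced progress
]
best = next_step(options)  # the balanced state scores 9 vs. 7
```

The point of the analogy is only that, as in chess, we never evaluate the final outcome directly; we score intermediate positions and repeatedly move toward the best reachable one.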

Comment by UriKatz on Why I am probably not a longtermist · 2021-09-24T01:51:17.666Z · EA · GW

I am largely sympathetic to the main thrust of your argument (borrowing from your own title: I am probably a negative utilitarian), but I have 2 disagreements that ultimately lead me to a very different conclusion on longtermism and global priorities:

  1. Why do you assume we cannot affect the future further than 100 years? There are numerous examples of humans doing just that: in science (inventing the wheel, electricity or gunpowder), government (the US constitution), religion (the Buddhist Pali canon, the Bible, the Quran), philosophy (utilitarianism), and so on. One can even argue that the works of Shakespeare have had an effect on people for hundreds of years.
  2. Though humanity is not inherently awesome, it does not inherently suck either. Humans have the potential to do amazing things, for good or evil. If we can build a world with a lot less war and crime and a lot more collaboration and generosity, isn't it worth a try? In Parfit's beautiful words: "Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea ... Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists."

Comment by UriKatz on Against opposing SJ activism/cancellations · 2020-06-20T12:58:00.414Z · EA · GW

I thought it worth pointing out that this statement from one of your comments I mostly agree with, while I strongly disagree with your main post. If this was the essence of your message, maybe it requires clarification:

"Politics is the mind killer." Better to treat it like the weather and focus on the things that actually matter and we have a chance of affecting, and that our movement has a comparative advantage in.

To be clear, I think justice does actually matter, and any movement that would look past it to “more important” considerations scares me a little, but I strongly agree with the “weather” and “comparative advantage” parts of your statement. We should practice patience and humility. By patience I mean not jumping into the hot topic conversation of the day, no matter how heated the debate. Humility means recognizing how much effort we spend learning about animal advocacy, malaria, x-risk factors, etc. That is why we can feel confident to speak/act on them. But this doesn’t automatically transfer to other issues. Merely recognizing how difficult it is to get altruism right, compared to how much ineffective altruism there is, should be a warning signal when we wade out of our domains of expertise.

I think the middle ground here is not to allow people to bully you out of speaking, but to only speak when you have something worth saying that you considered carefully (preferably with some input from peers). So basically, as others have already mentioned: “what would Peter Singer do?”.

Comment by UriKatz on Cause Prioritization in Light of Inspirational Disasters · 2020-06-08T12:06:11.323Z · EA · GW

I have similar objections to this post as Khorton & cwbakerlee. I think it shows how the limits of human reason make utilitarianism a very dangerous idea (which may nevertheless be correct), but I don’t want to discuss that further here. Rather, let’s assume for the sake of argument that you are factually & morally correct. What can we learn from disasters, and the world’s reaction to them, that we can reproduce without the negative effects of the disaster? I am thinking of anything from faking a disaster (wouldn’t the conspiracy theorists love that) to increasing international cooperation. What are the key characteristics of a pandemic or a war that make the world change for the better? Is the suffering an absolute necessity?

Comment by UriKatz on Climate Change Is Neglected By EA · 2020-05-26T20:59:14.641Z · EA · GW

Yes, you are correct and thank you for forcing me to further clarify my position (in what follows I leave out WAW since I know absolutely nothing about it):

  1. EA Funds, which I will assume is representative of EA priorities, has these funds: a) “Global Health and Development”; b) “Animal Welfare”; c) “Long-Term Future”; d) “EA Meta”. Let’s leave D aside for the purposes of this discussion.

  2. There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done enough research to determine if this is the case.

  3. The arguments in favor of C being the only area we should be concerned with, or the area we should be most concerned with, are:

I) reminiscent of other arguments in the history of thought that compel us (humans) because we do not account for the limits of our own rationality. I could say a lot more about this another time, suffice it to say here that in the end I cautiously accept these arguments and believe x-risk deserves a lot of our attention.

II) popular within this community for psychological as well as purely rational reasons. There is nothing wrong with that and it might even be needed to build a dedicated community.

III) For these reasons I think we are biased towards C, and should employ measures to correct for this bias.

  4. None of these priorities is neglected by the world, but certain interventions or research opportunities within them are. EA has spent an enormous amount of effort finding opportunities for marginal value add in A, B & C.

  5. Climate change should be researched just as much as A & B. One way of accounting for the bias I see in C is to divert a certain portion of resources to climate change research despite our strongly held beliefs. I simply cannot accept the conclusion that unless climate change renders our planet uninhabitable before we colonize Mars, we have better things to worry about. That sounds absurd in light of the fact that certain detrimental effects of climate change are already happening, and even the best-case future scenarios include a lot of suffering. It might still be right, but its absurdity means we need to give it more attention.

What surprises me the most from the discussion of this post (and I realize it’s readers are a tiny sample size of the larger community) is that no one has come back with: “we did the research years ago, we could find no marginal value add. Please read this article for all the details”.

Comment by UriKatz on Climate Change Is Neglected By EA · 2020-05-26T11:11:53.467Z · EA · GW

The assumption is not that people outside EA cannot do good, it is merely that we should not take it for granted that they are doing good, and doing it effectively, no matter their number. Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have and will continue to go in that direction even without GiveWell. So the claim that climate change work is or is not the most good has no merit without a deeper dive into the field and a search for exceptional giving/working opportunities. Any shallow dive into this cause reveals further attention and concern are warranted. I do not know what the results of a deeper dive might show, but am fairly confident we can at least be as effective working on climate change as working on some of the other present-day welfare causes.

I do believe that there is a strong bias towards the far future in many EA discussions. I am not unsympathetic to the rationale behind this, but since it seems to override everything else, and present-day welfare (as your reply implies) is merely tolerated, I am cautious about it.

Comment by UriKatz on Developing my inner self vs. doing external actions · 2020-05-26T09:59:59.761Z · EA · GW

This is a great question and one everyone struggles with.

TL;DR: work on self-improvement daily, but be open to opportunities for acting now.

My advice would indeed be to balance the two, but balance is not a 50-50 split. To be a top performer in anything you do: practice, practice, practice. The impact of a top performer can easily be 100x over the rest of us, so the effort put into self-improvement pays off. Professional sports is a prime example, but research, engineering, academia, management, and parenting all benefit from working on yourself.

The trap to avoid is not acting before you are perfect. Do not let opportunities for doing good slip by. Your first job, relationship, child will all suffer from your inexperience, but how else do you gain experience? In truth, the more experience you gain the greater the challenges you will allow yourself to tackle, so being comfortable acting with some doubt of your ability is critical to great achievements.

Comment by UriKatz on Climate Change Is Neglected By EA · 2020-05-25T18:27:19.110Z · EA · GW

This seems a bit of an obvious point to make, but there are many more people working on a) global poverty; b) animal welfare; c) wildlife conservation; d) nuclear proliferation; e) biosafety and f) tech safety than there are EAs in the world. This movement’s claim is that it can find ways to 100x the impact of skill and funding. In every other field it does so by researching the field in as much detail as possible and encouraging risk tolerance toward unproven interventions showing promise. It often finds neglected interventions / solutions / research areas, not causes. In climate change it counts the lawyers already engaged in changing the recycling laws of San Francisco as sufficient for the task at hand.

Comment by UriKatz on Climate Change Is Neglected By EA · 2020-05-25T14:33:31.590Z · EA · GW

I feel sometimes that the EA movement is starting to sound like metalheads (“climate change is too mainstream”), or evangelists (“in the days after the great climate change (Armageddon), mankind will colonize the galaxy (the 2nd coming), so the important work is the one that prevents x-risk (saves people’s souls)”). I say “amen” to that, and have supported AI safety financially in the past, but I remain skeptical that climate change can be ignored. What would you recommend as next steps for an EA member who wants to learn more and eventually act? What are the AMF or GD of climate change?

Comment by UriKatz on Climate Change Is Neglected By EA · 2020-05-25T14:23:40.831Z · EA · GW

I wonder how much of the assessment that climate change work is far less impactful than other work relies on the logic of “low probability, high impact”, which seems to be the most compelling argument for x-risk. Personally, I generally agree with this line of reasoning, but it leads to conclusions so far away from common sense and intuition, that I am a bit worried something is wrong with it. It wouldn’t be the first time people failed to recognize the limits of human rationality and were led astray. That error is no big deal as long as it does not have a high cost, but climate change, even if temperatures only rise by 1.5 degrees, is going to create a lot of suffering in this world.

In an 80,000 hours podcast with Peter Singer the question was raised whether EA should split into 2 movements: present welfare and longtermism. If we assume that concern with climate issues can grow the movement, that might be a good way to account for our long term bias, while continuing the work on x-risk at current and even higher levels.

Comment by UriKatz on Choosing the Zero Point · 2020-05-22T13:01:14.443Z · EA · GW

In my own mind I would file this post under “psychological hacks”, a set of tools that can be extremely useful when used correctly. I am already considering how to apply this hack to some moral dilemmas I am grappling with. I share this because I think it highlights two important points.

First off, the post is endorsing the common marketing technique of framing. I am not an expert in the field, but am fairly confident this technique can influence people’s thoughts, feelings & behavior. Importantly, the framing exercise is not merely confined to the conclusion of the post: “choosing a new zero point“. A big part of the framing is the language the post employs. I am referring to the use of terms like “utility functions” and “positive affine transformations”, and, more broadly, explaining Rob Bensinger’s quote using a popular framework in economics & philosophy. I suspect this is just as significant to the behavioral effect the framing hack produces as the final recommendation the post makes.

Secondly, I wonder whether you believe “choosing a new zero point“ is something we should do as often as possible, or whether there is a more limited scope of problems it applies to. Might we be normalizing the current state of the world, treating a brighter future as something we can, but are not obliged to, strive for? What if small incremental changes are not enough? One example of this would be climate change. Another would be problems like genocide or slavery. Is it enough to be slightly better than the average citizen in a society that permits slavery?

Comment by UriKatz on If you value future people, why do you consider near term effects? · 2020-04-09T22:13:33.285Z · EA · GW

Great post, thank you.

If one accepts your conclusion, how does one go about implementing it? There is the work on existential risk reduction, which you mention. Beyond that, however, predicting any long-term effect seems to be a work of fiction. If you think you might have a vague idea of how things will turn out in 1k years, you must realize that even longer-term effects (1m? 1b?) dominate these. An omniscient being might be able to see the causal chain from our present actions to the far future, but we certainly cannot.

A question this raises for me is whether we should adjust our moral theories in any way. Given your conclusions, classic utilitarianism becomes a great idea that can never be implemented by us mere mortals. A bounded implementation, as MichaelStJules mentions, is probably preferable to ignoring utilitarianism completely, but that only answers this question by side-stepping it. I have come across philosophical work on “The Nonidentity Problem” which suggests that our moral obligations more or less extend to our grandchildren, but personally I remain unconvinced by it.

I think there might be one area of human activity that, even given your conclusion, it is moral and rational to pursue - education. Not the contemporary kind which amounts to exercising our memories to pass standardized tests. More along the lines of what the ancient Greeks had in mind when they thought about education. The aim would be somewhere in the ballpark of producing critical thinking, compassionate, and physically fit people. These people will then be able to face the challenges they encounter, and which we cannot predict, in the best possible way. There is a real risk that humanity takes an unrecoverable turn for the worse, and while good education does not promise to prevent that, it increases the odds that we achieve the highest levels of human happiness and fulfillment as we set out to discover the farthest reaches of our galaxy.

I would love to hear your thoughts.

Comment by UriKatz on [Linkpost] - Mitigation versus Supression for COVID-19 · 2020-03-17T15:30:30.525Z · EA · GW

I know there is a death toll associated with economic recessions. Basically, people get poorer and that results in worse mental and physical healthcare. Are there any studies weighing those numbers against these interventions? Seems like a classic QALY problem to me, but I am an amateur in any of the relevant fields.

Also, people keep suggesting to quarantine everyone above 50 or 60 and let everyone else catch the virus to create herd immunity. Is there any scientific validity behind such a course of action? Is it off the table simply because the "ageism" of the virus is only assumed at this point?

Comment by UriKatz on [deleted post] 2018-11-29T14:20:24.474Z

First of all, great article.

I just wanted to point out that I am looking for a robo-advisor; when I talked with WealthSimple, they wrote back the following:

"we do support the option to gift securities without selling the asset. There is a short form via docusign we'll send you anytime you'd like to take advantage of this option."

Comment by UriKatz on Working at EA organizations series: Effective Altruism Foundation · 2015-10-30T06:39:55.976Z · EA · GW

Could you by any chance use a few hours of software development each week from volunteers?

Comment by UriKatz on Effective Altruism and Religious Faiths: Mutually Exclusive Entities, or an Important Nexus to Explore? · 2015-09-21T10:58:52.973Z · EA · GW

I love the depth you went to with this post, and just wanted to share a bit of personal experience. In the past few years my religious practice has flourished, as has my involvement with EA. I doubt this is an accidental coincidence, especially since my highest aspirations in life are a combination I took from EA and religion (sometimes I refer to them as the guiding or organizing principles of my life). Religion gives me the emotional and spiritual support I need, EA fills in the intellectual side and provides practical advice I can implement here and now. As a side note, I also delve into general Western philosophy to fill in gaps from time to time.

Coming out of EA I heard some concern about the "eternal September" syndrome, i.e. the movement only appealing to the enthusiasm of youth, with the result that it replaces its members all the time. I also heard older members claim they have lost some of their passion and drive. I think we can surely look to religion and religious institutions to see how to avoid such pitfalls. My personal commitment keeps growing because I have a daily practice intended to do just that.

It is important to note that religion might not be strictly necessary; we might just need to adopt some of its better practices, as some atheists do.

Comment by UriKatz on Maximizing long-term impact · 2015-03-11T09:15:09.440Z · EA · GW

For the sake of argument I will start with your definition of good and add that what I want to happen is for all sentient beings to be free from suffering, or for all sentient beings to be happy (personally I don't see a distinction between these two propositions, but that is a topic for another discussion).

Being general in this way allows me to let go of my attachment to specific human qualities I think are valuable. Considering how different most people's values are from my own, and how different my needs are from Julie's (my canine companion), I think our rationality and imagination are too limited for us to know what will be good for more evolved beings in the far future.

A slightly better, though still far from complete, definition of "good" (in my opinion) would run along the line of: "what is happening is what those beings it is happening to want to happen". A future world may be one that is completely devoid of all human value and still be better (morally and in many other ways) than the current world. At least better for the beings living in it. In this way even happiness, or lack of suffering, can be tossed aside as mere human endeavors. John Stuart Mill famously wrote:

"It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, is of a different opinion, it is only because they only know their own side of the question."

And compared with the Super-Droids of tomorrow, we are the pigs...

Comment by UriKatz on Maximizing long-term impact · 2015-03-06T10:18:34.018Z · EA · GW

Great thought provoking post, which raises many questions.

My main concern is perhaps due to the limitations of my personal psychology: I cannot help but heavily prioritize present suffering over future suffering. I have heard many arguments why this is wrong, and use very similar arguments when faced with those who claim that "charity begins at home". Nevertheless, the compassion I have for people and animals in great suffering overrides my fear of a dystopian future. Rational risk / reward assessments leave me unconvinced (oh, why am I not a superintelligent droid). Your post does offer me some comfort, despite my (possible) limitation. Cultivating generosity and compassion within me, and within my society, could be classified as "cultural change" and so might be a highly effective intervention. However, then the question becomes whether the most effective ways to achieve this "cultural change" have anything to do with helping those in dire need today. Many attest that meditation and prayer improve their ability to be kind and loving, and I am one of those who are skeptical as to the effects of that on the life expectancy of infants in Africa.

My second concern is that you may be putting too much emphasis on the "human race". In the long-run, why is it bad if our race is superseded by more advanced life forms? Some of your scenarios do envision a human existence that can arguably be classified as "the next evolutionary step" (i.e. whole brain emulations), but their lives and interests still seem closely aligned to those of human beings. Significantly, if the transition from the current world to "Friendly Artificial Intelligence" or to "Unfriendly Artificial Intelligence" involves an equal amount of suffering, the end result seems equally good to me. After all, who is to say that our AI God doesn't wipe out the human race to make room for a universe full of sentient beings that are thousands of times more well off than we could ever be?

Comment by UriKatz on Outreaching Effective Altruism Locally – Resources and Guides · 2015-02-21T12:13:30.999Z · EA · GW

For anyone who might read this thread in the future I felt an update is in order. I revisited my numbers, and concluded that opening a local outreach EA chapter is very cost-effective. The reward/risk ratio is high, even when the alternative is entrepreneurship, assuming the time you invest in outreach does not severely hurt your chances of success and high profits.

Previously I wrote that: "Assuming after 1 year I get 10 people to take GWWC's pledge, which I consider phenomenal success, my guesstimates show the expected dollars given to charity will be more or less the same." My mistake was not factoring risk in correctly. When risk is factored in correctly, 1 lifetime pledge might be enough to tilt the balance in favor of investing time in outreach, and 3 - 5 pledges certainly do.
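The corrected arithmetic can be sketched as a simple expected-value comparison. All the numbers below (success probability, donation sizes, pledge follow-through) are illustrative assumptions of mine, not figures from my original guesstimate:

```python
def expected_value(outcomes):
    """Expected value over (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Illustrative assumptions only:
solo = expected_value([(0.10, 1_000_000)])     # risky venture: donations if it succeeds
per_pledge = expected_value([(0.50, 50_000)])  # a lifetime pledge honoured half the time
outreach = 5 * per_pledge                      # five pledges recruited over the year
```

With these made-up numbers, outreach yields an expected $125,000 against $100,000 for entrepreneurship alone; the point is that once the venture's failure probability is factored in, a handful of pledges can tilt the balance.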

Comment by UriKatz on Open Thread 6 · 2015-01-09T11:14:42.798Z · EA · GW

I will start my reply from the end. Your intuition is right. My investment will simply go into another shareholder's pocket, and the company, socially responsible or otherwise, will see none of it. However, this will also decrease the company's cost of capital: when they go to the markets for additional funds, investors will know there is a market for these stocks and will be willing to pay more for them. I have no data on the extent of this impact.

As for your AMF example, I have no way of quantifying the good my SRI (socially responsible investing) may do, unless I fall upon work that someone else did on this subject. My main concern, however, is more along the lines of facilitating harm. For example, am I endorsing, or even causing, suffering by buying stocks in a cosmetic company that does research on animals? My meager funds obviously have little effect, but there are good reasons to think that every penny counts, and besides the issue here is that of comparing different outcomes for these meager funds. At this moment, I think that for me the "do no harm" principle is a good enough reason to earn a little less. My main problem is that an SRI focused portfolio might require more attention and consume more of my time, time I may not have to spare.

Finally, here are a few more useful links on the subject: a short academic paper (there must be more recent ones, but this gives a pretty good overview); a chart with SRI mutual funds and their policies; and a nonprofit dedicated to SRI.

Comment by UriKatz on Open Thread 6 · 2015-01-08T20:11:28.131Z · EA · GW

I have a small amount of money I want to invest. If all goes well, I will eventually donate the appreciated stock, but there is a small chance I might need the money, so I don't want to donate it now. I was wondering which would be the more effective altruism: focusing on socially responsible investing at the possible cost of lower returns, or maximizing returns so I can donate a larger sum to the most effective charities in the end? I stumbled upon an article on the subject, which I find interesting, but wanted to hear more opinions (the TL;DR is that for a $100,000 investment over 30 years, a socially responsible mutual fund will make about $50,000 less for charity).
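To see why even a modest return gap compounds into a large difference for charity, here is a minimal compound-interest sketch. The 7.0% and 6.5% rates are illustrative assumptions of mine, not figures from the article:

```python
# Compound growth of a lump-sum investment under two hypothetical annual returns.
# The 7.0% / 6.5% rates are illustrative assumptions, not taken from the article.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Value of a lump sum compounded annually."""
    return principal * (1 + annual_rate) ** years

conventional = future_value(100_000, 0.070, 30)  # hypothetical conventional fund
sri          = future_value(100_000, 0.065, 30)  # hypothetical SRI fund

# Even a half-point difference in annual returns, compounded over 30 years,
# leaves a substantial gap in what ultimately reaches charity.
gap = conventional - sri
print(f"gap after 30 years: ${gap:,.0f}")
```

Whether that gap outweighs the direct impact of SRI is exactly my question; the sketch only shows that the compounding effect is too large to ignore.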

Comment by UriKatz on Open Thread 6 · 2014-12-13T19:10:33.894Z · EA · GW

If you have a chance within the next 22 hours, you should go to the Project for Awesome website and vote for effective charities. Search for GD, DtW & AMF.

Project for Awesome is an annual YouTube project run by the Vlogbrothers that raises awareness and money for charity. The participants (video creators, viewers, donors, etc.) skew relatively young, so this is a great way of introducing them to EA.

Comment by UriKatz on Open thread 5 · 2014-11-20T17:51:15.814Z · EA · GW

Should we try to make a mark on the Vlogbrothers' "Project for Awesome"? It could expose effective altruism to a wide and, on average, young audience.

I would love to help in any way possible, but video editing is not my thing...

Comment by UriKatz on Certificates of impact · 2014-11-11T14:31:01.771Z · EA · GW

Full disclosure: I fear I do not completely understand your idea. Having said that, I hope my comment is at least a little useful to you.

Consider the following cases: (1) I donate to an organization that distributes bednets in Africa and receive a certificate. I then trade that certificate for a new pair of shoes. My money, which normally could only be used for one of these purposes, is now used for both. (2) I work for a non-profit and receive a salary. I also receive certificates. Am I being paid twice?

The second case is easily solved: just give the employee one or the other. But then what is the benefit of a certificate over a dollar bill? The first case presents a bigger problem, I think, since essentially something is created from nothing. Notice that donations are not investments the donor can expect a return on (even if they are an investment in others).

Comment by UriKatz on Outreaching Effective Altruism Locally – Resources and Guides · 2014-11-01T20:42:35.156Z · EA · GW

Thank you for your offer to help me further, but having reviewed the link posted by Vincent, I am certain I do not have the time to start a local chapter right now.

Comment by UriKatz on Outreaching Effective Altruism Locally – Resources and Guides · 2014-10-29T07:15:29.236Z · EA · GW

Hi Ilya, thanks for your reply. I may have misunderstood you, but your example does not seem to account for the overhead of managing a larger team, or for the diminishing returns of each additional staff member. This goes to the heart of my question: what is the most effective way for each individual to further EA causes? Should they work full time and donate more, or work part time and do other things as well? (This question may only apply to those who are earning to give, and is best answered case by case.) It relates to the current article because I was wondering whether anyone has tried to analyze the potential returns of localized outreach. I could compare such an analysis to the estimates I have of my startup's risks and rewards, numbers I prefer not to mention because they are highly speculative.

Comment by UriKatz on Outreaching Effective Altruism Locally – Resources and Guides · 2014-10-28T10:26:06.033Z · EA · GW

Thank you for this very important post, this is something I have been wanting to do for a very long time.

Do you know of any work comparing the effectiveness of outreach to other activities effective altruism supporters can undertake? I am referring specifically to the limited kind of outreach suggested here, such as opening a local chapter, not the kind of outreach Peter Singer is capable of.

I will give you an example of what I am thinking about.

A year ago I changed my career plan and started a technology startup. If my startup succeeds, it will substantially increase the amount I am able to give throughout my life. I expect work on an outreach program to require significant time and effort, which I do not have to spare; it would slow my startup's progress and decrease its chances of success. Assuming after 1 year I get 10 people to take GWWC's pledge, which I consider phenomenal success, my guesstimates show the expected dollars given to charity will be more or less the same. I am aware of the concept of flow-through effects, and of the tiny probability that I convince the next billionaire to join the cause, but I do not know how to add that to my calculation at this time.

Any reference or help will be much appreciated.