Ideas to improve the Effective Altruism Movement
I bought a GPU some years ago. My belief is that its consequences were negligible or a small evil, so mildly anti-altruistic.
- it was built by exploited labor. It's simply self-serving to suggest that exploiting labor is a necessary step in modernizing another country; the definition of exploitation tells me that the labor is being treated harmfully. However, the actual suffering that the purchase itself caused is negligible in terms of encouraging additional exploitation, given how big the market for GPUs is. Notice that I'm looking at the future from the point at which I buy the GPU, not at the human suffering that went into producing it. I consider that only when deciding whether my purchase encouraged the continuation of that suffering.
- it served no altruistic purpose, as it turned out. I might as well not have bought the thing, for all the need I had for the GPU's RAM. The CPU's ability to render graphics was more than sufficient for all my needs; I don't game much. Furthermore, nothing I did on the computer particularly benefited others. It benefited me, mildly, but not others.
- it was, or soon will be, e-waste, and a large chunk of it. E-waste harms the environment and poisons people because of how it is handled on disposal. I knew that in advance. However, this was only a mild evil, because one GPU is not that polluting or poisonous on its own.
If I were a gamer, my gaming would not contribute to the welfare of others. Again, gaming would be a selfish act or a small evil.
To establish the altruism of the consequences of the GPU purchase (and use), I score its consequences as I see them. I'm not that sophisticated, so I rely on a two-axis analysis: the positive X-axis measures positive altruism (how good), and the negative Y-axis measures negative altruism, or anti-altruism (how evil). X runs from 0 to 100, Y from 0 to -100. Off the top of my head, I'm going with (0, -2) for the GPU purchase: no altruistic consequences, but a few mildly evil ones.
To compare the altruistic value of the consequences of the GPU purchase with those occurring if I do not purchase it, I calculate a distance between the two. I need some sense of what I would do without the GPU; I assume I simply go on with my lifestyle without the purchase. The desktop computer that I purchased anyway becomes e-waste, has a similar origin (and so contributes to similar exploitation), and again my use of it, it turns out, doesn't really benefit anyone else. So I score it (0, -6): it's 3x the e-waste of the GPU, and again my purchase encouraged electronics manufacturing only negligibly, because I bought everything new, but so did millions of others.
To compare apples to apples, I need to compare the altruistic value of the computer purchase with the GPU to that of the computer purchase without the GPU. Relying on simple addition to combine component values, (0, -8) is the score of the computer purchase with the GPU, compared to (0, -6) without it. I can calculate the Euclidean distance between them: it's just 2. The GPU purchase alone didn't change the consequences much between the two actions.
I can also compare the two options, (0, -8) and (0, -6), in terms of scale. Here I feel my math suffers for lack of options, so for now I'm going with a comparison of the distances of the two points from the origin (0, 0) to decide the scale of each action I want to compare. The two actions are: computer purchase with GPU and computer purchase without GPU. I can say that the purchase of a computer with a GPU is 33% (8/6) more evil than the purchase of a computer with just the motherboard, CPU, power supply, keyboard, and mouse.
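The arithmetic above is small enough to sketch in a few lines. The scores are the subjective values from my analysis, not anything objective:

```python
import math

# Subjective altruism scores (x = good, y = evil), from the analysis above.
with_gpu = (0, -8)      # computer purchase including the GPU
without_gpu = (0, -6)   # computer purchase without the GPU

# Euclidean distance between the two options.
distance = math.dist(with_gpu, without_gpu)
print(distance)  # 2.0

# Scale of each option: its distance from the origin (0, 0).
scale_with = math.dist(with_gpu, (0, 0))        # 8.0
scale_without = math.dist(without_gpu, (0, 0))  # 6.0

# Relative evil: the GPU option is about 33% larger in scale.
print(scale_with / scale_without - 1)  # ≈ 0.333
```

`math.dist` is just the Euclidean distance, so the scale of a score is its distance from the origin.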
Explaining this took a lot longer than writing down (0, -6), (0, -8), 2, and 33%. The numbers are relative, subjective, and controversial, and that's why I suggest this analysis for the EA community: the numbers might have more value to collective decision-making as intersubjective values. Remember, this is on a scale of magnitudes from 0 to 100 on each axis. For example, on the EA Forum someone might give me information that a chunk of e-waste independently raises the risk of cancer in 3 people to 1/12. Then I could factor that in: "Hmm, my new computer purchase with a GPU, at least 4 chunks of e-waste, causes cancer in some person later," so now the scores are (0, -60) and (0, -45). It wouldn't matter so much which specific numbers I chose, but rather that the mathematics of my choice decide a very different altruistic value for GPU purchases (and electronics purchases in general) than before. Armed with my new information, I might decide to buy a used computer and start dividing the consequences of its eventual turn to e-waste with its previous owner.
Or if I decided that my computer use was altruistic ("Hmm, I did some research with it that saved some people from unnecessary suffering in their lives"), then the scores might be (15, -8) and (15, -6), for example, with a distance of 2 between the points but a smaller scale difference of about 5% (17/16.16). Now the GPU purchase has less influence on the overall impact of my computer purchase because of how I used the computer. If the GPU purchase had enabled some specific altruistic use of my computer, then that percentage difference in altruistic scale would start going up, and so would the distance in altruistic value between the two purchase options. Interestingly, if I knew that purchasing a new computer effectively gave someone else cancer later, then my altruistic use of the computer would obviously be inadequate to justify the purchase. Food for thought.
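The same calculation with the altruistic-use scores reproduces the figures above: the distance between the options stays at 2, but the scale difference shrinks to about 5%:

```python
import math

# Scores with an altruistic use added along the positive X-axis.
with_gpu = (15, -8)
without_gpu = (15, -6)

distance = math.dist(with_gpu, without_gpu)     # still 2.0
scale_with = math.dist(with_gpu, (0, 0))        # 17.0
scale_without = math.dist(without_gpu, (0, 0))  # ≈ 16.16

print(distance)                        # 2.0
print(scale_with / scale_without - 1)  # ≈ 0.05, about 5%
```

The distance term is unchanged because only the Y-coordinates differ, while the shared positive X-component grows both scales, diluting the GPU's relative contribution.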
Here are a few final thoughts:
- My model of the altruism of electronics hardware purchases considers only two evils (of which only the e-waste problem is impactful in a market of this scale) and one potential good (how I use the hardware).
- I had paid attention to the importance of the GPU purchase over time; that's how I knew the GPU did not contribute to my computer use. Another non-gamer might not be able to judge, because they never gathered the information.
- I applied a hard test, "prove your use was altruistic" and really couldn't prove to myself that anything I did with my computer benefited anyone else. That the possibility exists doesn't bother me, but I can't include it in my calculations.
- I chose a two-dimensional representation for altruistic/anti-altruistic value because positive and negative consequences don't cancel each other out.
- This was some quick and boring computation, but notice how things started to change once I "found out" about the cancer risk caused by the e-waste or when I could identify altruistic consequences of my computer use in a two-dimensional space of altruistic value.
- A modified cosine distance would better represent the difference in closeness of altruistic value to good or evil. There are other possibilities too.
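The text above doesn't specify the modification, so here is just one assumption of what a cosine-style measure could look like here: comparing the directions of two score vectors, ignoring their scale. In this representation, pure good (positive X) and pure evil (negative Y) are orthogonal axes:

```python
import math

def cosine_distance(a, b):
    """1 - cos(angle between two score vectors): near 0 when the options
    point in the same good/evil direction, larger as directions diverge."""
    dot = a[0] * b[0] + a[1] * b[1]
    norms = math.hypot(*a) * math.hypot(*b)
    return 1 - dot / norms if norms else 0.0

# The two computer-purchase options with altruistic use: nearly the same direction.
print(cosine_distance((15, -8), (15, -6)))  # small, ≈ 0.006

# Pure good vs. pure evil: orthogonal directions.
print(cosine_distance((10, 0), (0, -10)))   # 1.0
```

Unlike the Euclidean distance, this measure says the two purchase options are almost identical in *kind* of altruistic value, even when their scales differ.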
So Jackson, thanks for your interest and comments. I analyzed a GPU purchase, mine. I hope you found it interesting.
"Analyzing the ethical impact of everyday decisions (like about where to live, how to commute, what to eat, who to vote for, etc) is essentially a pitch for "microprojects", and would be more suited to a world where there were very many more people interested in EA but much less funding available."
Hmm, yes. Pragmatically, I wouldn't want to insult the ethics of wealthy charitable givers when their contributions can count for so much and they will earn their money however they do. I see that my suggestion is naive and possibly a poor fit to the EA community.
Thank you, Karthik
I don't have much time and don't expect much attention regardless of how much time I put into writing about this topic. It is boring, frankly, and I am a boring writer. The best I can do is keep it short.
Altruistic value is not objectively measurable. If a creature like God existed, then she could judge the altruistic value of actions in terms of their consequences. Everyone else makes do with unreliable mental models that are bound by uncertain future circumstances.
As a brief thought experiment: if you have a sense that an action (for example, a large donation to a reliable, effective charity) is altruistic, then you have made a judgement of the altruistic value of that donation. Other actions, in fact all actions, are vulnerable to the same thought experiment. The only result is to make explicit what you already think.
I could offer my sense of the true failings of the EA community in making better judgements among the specific options available in certain situations, but those would be context-bound, controversial, and with results that I don't think would be worth my time. Besides, I don't care, per se, whether the EA community continues to have blind spots about certain common evil actions and continues to perform them. It's a big world.
I just heard about this contest and thought, hmm, how to summarize a helpful suggestion for improving EA: a little thought experiment of my own.
Sorry I could not put in the effort that I see others do here, but I promise you that my efforts are well-intended and sincere.