[Disclaimer, I have very little context on this & might miss something obvious and important]
In discussions of this post (the content of which I can’t predict or control), I’d ask that you just refer to me as Cathleen, to minimize the googleable footprint. And I would also ask that, as I’ve done here, you refrain from naming others whose identities are not already tied up in all this.
As there is some confusion on this point, it is important to be clear. The central complaint in the Twitter thread is that *5 days* after Cathleen’s post, the poster edited their comment to add the names of Leverage and Paradigm employees back to the comment, including Cathleen’s last name. This violates Cathleen’s request.
AFAICT the disagreement between Kerry and Ben stems from interpreting the second part of Cathleen's ask differently. There seem to be two ways of reading the second part:
1. Asking people to refrain from naming others who are not already tied into this in general.
2. Asking people to refrain from naming others who are not already tied into this in discussions of her post.
To me, it seems pretty clear that she means the latter, given the structure of the two sentences. If she were aiming for the first interpretation, I think she should have used a qualifier like "in general" in the second sentence. As currently formulated, the "And" at the beginning of the second sentence connects it very clearly to the first ask/sentence.
I guess this can be up for debate, and one could interpret it differently, but I would certainly not fault anyone for going with interpretation 2.
If we assume that 2 is the correct reading, Kerry's claim (cited above) no longer seems relevant, and Ben's original remark (cited below) seems correct. The timeline of edits doesn't change things. Ben's original remark (emphasis mine):
The comment in question doesn’t refer to the former staff member’s post at all, and was originally written more than a year before the post. So we do not view this comment as disregarding someone’s request for privacy.
Even if she meant interpretation 1, it is unclear to me that this would be a request I would endorse other people enforcing. Her request under interpretation 2 seems reasonable, in part because it seems like an attempt to avoid people using her post in a way she doesn't endorse. A general "don't associate others with this organisation" would be a much bigger ask. I would not endorse other organisations asking the public not to connect their employees to them (e.g. imagine a GiveWell employee making the generic ask not to name other employees/collaborators in posts about GiveWell), and the Forum team enforcing that.
You can indicate uncertainty in the form, so feel free to fill it out and state your probability :)
Thanks for pointing that out!
I agree that this is an important thing to keep in mind. Introductory events in particular (talks, fellowships, etc.) should be offered in German (or at least with a German option, i.e. one fellowship group which is in German).
Very strong upvote. Thanks for this comment, Simon.
(Meta: I am afraid that I am strawmanning your position because I do not understand it correctly, so please let me know if that is the case.)
Personally, I am a pretty strong believer that the unique thinking style of effective altruism has been essential for its success so far, and that this thinking style is very closely related to certain skills & virtues common in STEM fields. So I am skeptical that there is much substance behind claims #1 or #2 in general.
I agree with you that it seems plausible that the unique thinking style of EA has been essential to a lot of the successes achieved by EA, and that this style is closely related to STEM fields.
- The "core" thinking tools of EA need to be improved by an infusion of humanities-ish thinking. Right now, the thinking style of EA is on the whole too STEM-ish, and this impairment is preventing EA from achieving its fundamental mission of doing the most good.
But it is unclear to me why this should imply that #1 is wrong. EA wants to achieve the massive goal of doing the most good, which makes it very important to have a highly accurate map of the territory we are operating in. Given that, it is a very strong claim to be confident that the "core" thinking tools we have used so far are the best we could be using, and that we do not need to look at the tools other fields are using before deciding that ours are actually the best. This is especially true since EA lacks representation from a number of academic disciplines. Most EA ideas and thinking tools come from Western analytic philosophy and STEM research. That does not mean they are wrong - they could all turn out to be correct - but they encompass only a small portion of all the knowledge out there. I dare you to chat with a philosopher who researches non-Western epistemology - your mind will be blown by how different it is.
More generally: The fact that it is sometimes hard to understand people from very different fields is why it is so incredibly important and valuable to try to get those people into EA. They usually view the world through a very different lens and can check whether they see an aspect of the territory we do not see that we should incorporate into EA.
I am afraid that we are so confident in the tools we have that we do not spend enough time trying to understand how other fields think and therefore miss out on an important part of reality.
To be clear: I think that a big chunk of what makes EA special is related to STEM style reasoning and we should probably try hard to hold onto it.
2. The "core" thinking tools of EA are great and don't need to change, but STEM style is only weakly correlated with those core thinking tools. We're letting great potential EAs slip through the cracks because we're stereotyping too hard on an easily-observed surface variable, thus getting lots of false positives and false negatives when we try to detect who really has the potential to be great at the "core" skills. STEM style is more like an incidental cultural difference than a reliable indicator of "core" EA mindset.
Small thing: it is unclear to me whether we actually get a lot of false positives, and if I understand the post correctly, this was also not its claim.
Thank you very much for creating this list!
Related, see the contributions in this thread. Books recommended there which you did not mention:
RE: "Consider Phlebas, The Player of Games - Iain M. Banks"
I would second your ordering of PoG>CP. To add to the ordering: IMO "Use of Weapons" is also not a really good book from a longtermist point of view, while "Excession" is a great read (so I guess: Excession>PoG>UoW>CP).
I would be interested in what other people thought about the rest of the books in the Culture series - are there some books that are much better than others?
"The mark of a civilised person is the ability to look at a column of numbers and weep."
“Recall the face of the poorest and weakest man you have seen, and ask yourself if this step you contemplate is going to be any use to him.”
“It is possible to believe that all the past is but the beginning of a beginning, and that all that is and has been is but the twilight of the dawn. It is possible to believe that all the human mind has ever accomplished is but the dream before the awakening.”
H. G. Wells
“Work like you were living in the early days of a better nation.”
"Every child saved with my help is the justification of my existence on this Earth, and not a title to glory."
“Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your twenty-first birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?”
A few more wonderful quotes from HPMOR:
"And Harry remembered what Professor Quirrell had said beneath the starlight: Sometimes, when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been… But the stars are so very, very far away… And I wonder what I would dream about, if I slept for a long, long time.
Right now this flawed world seemed unusually hateful. And Harry couldn’t understand Professor Quirrell’s words, it might have been an alien that had spoken, or an Artificial Intelligence, something built along such different lines from Harry that his brain couldn’t be forced to operate in that mode.
You couldn’t leave your home planet while it still contained a place like Azkaban.
You had to stay and fight."
"Every time you spend money in order to save a life with some probability, you establish a lower bound on the monetary value of a life. Every time you refuse to spend money to save a life with some probability, you establish an upper bound on the monetary value of life. If your upper bounds and lower bounds are inconsistent, it means you could move money from one place to another, and save more lives at the same cost. So if you want to use a bounded amount of money to save as many lives as possible, your choices must be consistent with some monetary value assigned to a human life; if not then you could reshuffle the same money and do better. *How very sad, how very hollow the indignation, of those who refuse to say that money and life can ever be compared, when all they’re doing is forbidding the strategy that saves the most people, for the sake of pretentious moral grandstanding...*"
Another one from this post:
"There may well come a day when humanity would tear apart a thousand suns in order to prevent a single untimely death.
That is the value of a life."
Another quote by MacFarquhar on Parfit:
"As for his various eccentricities, I don’t think they add anything to an understanding of his philosophy, but I find him very moving as a person. When I was interviewing him for the first time, for instance, we were in the middle of a conversation and suddenly he burst into tears. It was completely unexpected, because we were not talking about anything emotional or personal, as I would define those things. I was quite startled, and as he cried I sat there rewinding our conversation in my head, trying to figure out what had upset him. Later, I asked him about it. It turned out that what had made him cry was the idea of suffering. We had been talking about suffering in the abstract. I found that very striking.
Now, I don’t think any professional philosopher is going to make this mistake, but nonprofessionals might think that utilitarianism, for instance (Parfit is a utilitarian), or certain other philosophical ways of thinking about morality, are quite unemotional, quite calculating, quite cold; and so because I am writing mostly for nonphilosophers, it seemed like a good corrective to know that for someone like Parfit these issues are extremely emotional, even in the abstract.
The weird thing was that the same thing happened again with a philosophy graduate student whom I was interviewing some months later. Now you’re going to start thinking it’s me, but I was interviewing a philosophy graduate student who, like Parfit, had a very unemotional demeanor; we started talking about suffering in the abstract, and he burst into tears. I don’t quite know what to make of all this but I do think that insofar as one is interested in the relationship of ideas to people who think about them, and not just in the ideas themselves, those small events are moving and important."
I would recommend this short essay on the topic: Human Extinction, Asymmetry, and Option Value
Abstract: "How should we evaluate events that could cause the extinction of the human species? I argue that even if we believe in a moral view according to which human extinction would be a good thing, we still have strong reason to prevent near-term human extinction."
(Just to clarify: this essay was not written by me)
For everyone who is also looking up the books right now, here are the links:
Thanks a lot for writing this.
One thing I started doing recently is collecting instances of people acting altruistically and courageously, and reading about them when I need motivation (their Wikipedia articles, a text I wrote myself, etc.). These examples range from very small acts to big ones. Reading about actual examples of people standing up to the social norms or laws of their time to do the right thing gives me a lot of motivation to keep pursuing an altruistic path even in the face of difficulty. One example I came across recently is a father who supported his daughter when she refused to marry a man who had raped her (a so-called "rehabilitating marriage", which was the custom (and law!) in mid-20th-century Sicily). He did so even though their town ostracised them and burned down their farm.
There is also a good number of great blog posts which I find really motivating. The ones I can think of off the top of my head are:
Something I enjoy doing and that really lifts my mood is editing Wikipedia. One reason I enjoy this is the feeling of contributing to a community and "adding" to this huge collection of knowledge. It is also plausibly at least somewhat impactful: Wikipedia is often one of the first sources people read on a topic if they want to know more about it. There is a lot of low-hanging fruit, especially if you are a non-English speaker, since the articles relevant to EAs are often quite bad in the non-English Wikis. Here and here are overviews of articles that could be improved or created. For the warm glow of adding to humanity's knowledge, you can also contribute to non-EA-related articles (e.g. your favourite not-super-famous band, a topic related to a term paper you recently wrote, etc.). This sometimes feels easier since you don't have to think so much about framing the topic perfectly.