Sorry about the downvotes. (My guess is that they were meant to signal disapproval of 'abuse' as an adequate subject for a Wiki entry rather than express a negative opinion about the article's quality; FWIW, I didn't downvote you.)
I guess I still feel that "physical abuse" is too vague. Do you have examples of physical abuse in mind that would be of special interest to EA not already covered elsewhere in the Wiki?
I also don't think it's an expression of the article's quality; I think EAs (or at least the forum's users) are uncomfortable with abuse. The likes/upvotes of posts and comments that mention it are often polarized, and I think that acknowledging that abuse happened in the EA community probably (and understandably) made people defensive. I wasn't trying to smear EA by including a Kathy Forth post, I was trying to be open and honest.
I think it's fine to have broad tags that encompass other, more specific tags. My concern about "abuse" isn't merely that it is broad, but that it doesn't seem to capture a "natural kind" of special interest to EA: compare with "global health and development", which, despite being broad, singles out a particularly promising focus area (from a neartermist, human-centric perspective).
Things like "domestic violence", "child abuse", etc. are in principle appropriate candidates for a Wiki entry, I think, though whether they are in practice depends on whether these are topics that have attracted sufficient EA attention. There are all sorts of serious issues in the world that the Wiki doesn't cover, because the EA community hasn't devoted enough attention to them.
As for abuse within the EA community, perhaps this would warrant a "community health" entry (which we don't have at the moment). Would you be in favor of creating such an entry?
Don't you think physical abuse captures a natural kind? If I start asking people on the street to picture physical abuse and picture global health and development, I think that basically everyone will have a clearer picture of what physical abuse entails than global health and development.
I have just proven myself to be bad at writing about sensitive topics, so you should probably ask someone who is a native English speaker and who is more integrated in the wider EA community (there are hardly any EA organizations/projects where I live, so the small group I run is pretty isolated from the larger EA community). Otherwise the Community experiences / Diversity and inclusion / Criticism of effective altruist organizations tags might be enough to cover it.
I saw that some tags now have banners (and icons). Since I made card images and banners for a bunch of sequences, shall I make some for the tags too? I can't add them via the edit function, so if you want me to add them I would need some other mechanism.
I think the image option is available only for "core" tags (which have a white rather than a grey background), although I'm not entirely sure since this was done by the tech team. I believe all the core tags already have images associated with them, but if that isn't the case, or if you think you can produce better images, it may be worth exploring this further. Would you mind messaging JP Addison, who is leading this? Thanks.
I created a poll to decide what to do about the 'Abuse' article and it looks like people are not in favor of keeping it. Just wanted to let you know that I'll probably delete it if the vote doesn't substantially change by the end of today.
Thanks again for contributing to the EA Wiki, and I hope this doesn't dissuade you from contributing more in the future!
I don't appreciate the sarcasm. Lots of tags have been deleted in the past. This was the first tag deleted after trying out a democratic process of decision-making, that doesn't rely on my judgment alone.
The conversation had ended with my argument. No one had refuted it, no one provided a counterargument, no one left a request for the deletion of the tag. It's also not democratic if some people get more votes than others. If there was any need for a poll at all, it would be about whether the name should be changed to "physical abuse". Should I just upload the same article but under the name "physical abuse" (but this time without mentioning the EA-community and without adding a post by Kathy Forth)?
If you want to nominate an article, you can do so here.
EDIT: Pablo's response is fair so I will upvote it. I didn't think of it because most of it was already released and none contained personal info, but I should have asked. I do think the context is important, since Pablo's comment is misleading: the objections were addressed and the deletion process started without a request. The fact that my tag proposal stood a day without getting downvoted, but did get a downvote immediately after this comment mentioned the EA community and Kathy Forth again, is evidence that these factors were indeed what caused the downvote of the original tag. I do think some of the posts I linked in the proposal worked better than others, which is why I used the word 'potentially'. Currently 48 tags have only 1 article and 55 tags have only 2 articles. I feared that if I only linked a few, the tag would be dismissed with the response that the EA community doesn't engage with this topic (which is demonstrably not true). Feel free to not include the articles you think are irrelevant, but there are definitely more than two that remain.
The problem of harassment is what the section 'Banning, flagging and blocking' is about. You can substitute the word "malignant actor" with "harasser". As stated, there are some mechanisms to minimize this, but more suggestions are always appreciated. The option to self-accuse was pointed out in 'banning, flagging and blocking' ("people who pretend to be someone they’re not") and 'tagging' ("Let’s say Barry MacBadguy only assaulted one person and now uses this site to pretend to be his own victim"). The self-accuser can see that someone else accused them, but if the victim opts to stay completely anonymous the self-accuser can't see who accused them. The flagging and blocking mechanisms are there to punish pretenders, though without a third party verification system pretenders will still get in. Third party verification is possible, but does hamper anonymity and makes the project less scalable.
EDIT: You can't stay completely anonymous if you use the chatroom to share experiences, but you can stay anonymous if you use it to coordinate actions (e.g. "On January 6th at 13:00 we post our stories to our Facebook pages"). Obviously after you've come forward you can't stay completely anonymous, but you can't do that with any other method either. This project will not completely protect victims, merely improve upon the current situation.
There are some that are focused more on pollution, such as lead exposure:
And air pollution:
GWWC uses bednets, medicine and healthspan to show how much your donation could do. Some people might like concrete numbers on what they can buy. E.g. $1,000 could do:
If you really want to hammer home how much more your donation could do overseas you can show graphs such as:
As much as I think considering new ideas is important, I recommend you don't include things like the value of potential people and geo-engineering in the graphs, since these concepts may be controversial/distracting for people.
I saw that this sequence didn't have a banner image yet, and since I made those for other sequences I decided to make one for this sequence too.
As card image you have a network over the United States:
I'm guessing this is a flight network? In any case, it's a good way to visualize the ideas of sociology and network science. But the color scheme includes red and purple, while the EA logo is purely blue. And more importantly, this image only depicts the USA, while EA is a global movement.
So I decided to look for images of flight networks that were in the creative commons. I found a flightpath visualizer that depicted the whole world, but didn't show the borders, just like in your image (which may or may not be a philosophical statement about borders). I began photoshopping the visualizations to include the exact shade of blue the EA logo uses, but that isn't always aesthetically pleasing. I made a couple of images that you can use as banner and card image, but if you have specific wishes about how you want them to look, or if you have a totally different idea about how you want your card and banner images to look, please let me know and I'll make one to your liking.
The forum compresses the images, so if you like any of these, let me know and I'll send you the high quality version of them.
EDIT: I was looking through all the sequences and messaging everyone who's missing a banner with the offer to make one for them. I saw that your "AI Alignment Literature Review and Charity Comparison" sequence doesn't have one either. You could use your card image as banner, or I can make one for you, if you want.
I too would like to know whether this is resolved and who the winner is. Also, I see that since I submitted my entry you've edited your post to talk about symbols instead of flags and added the phrase: "Avoid mathematical symbols since these are less suitable for broader outreach". But I had already submitted my flag with a mathematical symbol on it. Does this mean that my work is now retroactively made ineligible?
I already worked on a project like this previously:
Yellow stands for happiness, that which utilitarianism pursues
White stands for morality, that which utilitarianism is
The symbol is a sigma, since utilitarians care about the sum of all utility
The symbol is also an hourglass, since utilitarians care about the (long-term) future consequences
If you don't like the rounded design I can also make it more angular:
The size of the symbol, the angles, the proportions of the flag, etc. can all be changed if you have specific preferences. The main idea is the sigma that also functions as an hourglass.
I do however worry whether it's wise to make symbols for philosophical ideas. I like designing these things, but you run the risk that these symbols can be used to make people strongly identify themselves with these ideas, instead of them being things that people can dispassionately examine and perhaps reject. I would advise everyone to say things like: "I like utilitarianism" or "I believe in utilitarianism", instead of "I am a utilitarian". Let's make sure ethical ideas don't become as rigidly polarized as political ideas.
EDIT: If you want to redesign this flag, go right ahead! I'm planning to donate the prize money if I win, so if you improve on my design and also donate the prize money, that would actually make me very happy.
Do you know similar voting methods that worked on a small scale?
I haven't seen the use of categories and columns before, but the voting systems I used have already seen a bunch of analysis and real-world use (the electowiki I linked to is a good starting point if you want to look into it). If by "small scale" you mean "you and a bunch of friends need to find a place to eat", I wouldn't use columns and categories (takes too long), but would instead use a simple Approval Vote. If you have a specific scenario in mind, feel free to message me and maybe I can help you out.
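To make the small-scale case concrete, here is a minimal sketch of an approval-vote tally. The ballots and options are hypothetical; each voter simply approves as many options as they like, and the option with the most approvals wins:

```python
# Minimal approval-voting tally (hypothetical example):
# each ballot is the set of options that voter approves of.
from collections import Counter

def approval_winner(ballots):
    """Return the option approved by the most voters."""
    tally = Counter(option for ballot in ballots for option in ballot)
    return tally.most_common(1)[0][0]

ballots = [
    {"pizza", "sushi"},
    {"sushi"},
    {"pizza", "burgers"},
    {"sushi", "burgers"},
]
print(approval_winner(ballots))  # sushi (3 approvals vs. 2 for each alternative)
```

Unlike plurality voting, no one has to rank anything or strategize much: approving a second option can never hurt your first choice's approval count.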
What are the next steps in terms of research / action?
I'm not a professional voting theorist, so I'm going to wait and see if someone finds a flaw in the idea of using columns, categories or departments. If not, I might be able to publish it in a couple years if my university/a journal is interested. I think from an activism perspective we should first focus on introducing a better voting system. Something like Approval Voting would be easier to explain/get the public on board with than this more complex electoral reform. If I run into some people that are passionate about voting reform I will certainly share this idea with them, but for now I don't really have an audience for it beyond this forum. If you have a project in mind, feel free to message me.
Good question! I asked the mods if they could put Lukas' name underneath the sequence, since he wrote it. The sequence does show up on the sequences-page, so I'm guessing that when they changed my name they simultaneously removed it from my page, but forgot to subsequently add it to Lukas' page. That's just a guess though. Should I message the mods about it?
Vaidehi_agarwalla and I thought it might be a good idea to have sequences within sequences. For example: Vaidehi created sequences for the EA Survey results per year, because sometimes you want to look at only the survey results for that one year. Other times you want to look at all the survey results. If we add a new survey sequence every year it will clutter up the sequence page, but if you put them in one larger sequence it will take up less space and it will allow people to either read everything in one go, or select the "sub-sequence" they want to read and stop there.
1) Creating a stunningly beautiful world that is uninhabited and won’t influence sentient beings in any way or 2) Not creating it.
In addition, both the genie’s and your memories of this event are immediately erased once you make the choice, so no one knows about this world and you cannot derive happiness from the memory.
Would you choose option one or two?
I would choose option one, because I prefer a more beautiful universe to an uglier one (even if no one experiences it). This forms an argument against classic utilitarianism.
Classic Utilitarianism says that I’m wrong. The choice doesn’t create any happiness, only beauty. This means that, according to classic utilitarianism, I should have no preferences between the two options.
There are several moral theories that do allow you to prefer option one. One of these is preference utilitarianism, which states that it's okay to have preferences that don't bottom out in happiness. For this reason, I find preference utilitarianism more persuasive than classic utilitarianism.
A possible counterargument would be that the new world isn't really beautiful since no one experiences it. Here we have a disagreement over whether beauty needs to be experienced to even exist.
A third way of looking at this thought experiment would be through the lens of value uncertainty. Through this lens, it does make sense to pick option one. Even if you have a thousand times more credence in the theory that happiness is the arbiter of value, the fact that no happiness is created either way leaves the door open for your tiny credence that beauty might be the arbiter of value. Value uncertainty suggests that you take the first option, just in case.
You might be interested to know that these things have already been discussed on this platform. I made a post about making a crowdaction website and gabcoh also wrote a post on this topic. On Less Wrong there is even a whole sequence about this idea. I don't have much programming skill, but I might be able to help in other ways (graphic design, mechanism design). I can also put you in contact with other people that are working on this project, like: Ron (co-creator of CollAction), DonyChristie (a programmer who already built a prototype website) and gabcoh.
Hey Aaron, great post. Maybe this isn't the best place to ask this, but should posts like these be tagged with the 80,000 Hours tag? We've discussed the tagging system in my recent post, but I'm still not sure when certain tags should be used. When I look at the 80,000 Hours tag, almost all posts are from the 80,000 Hours account. But the two bottom ones aren't. So should this tag be used exclusively by the 80,000 Hours account, or should it be used when people talk about 80,000 Hours in general? And is mentioning them/their research in the post enough, or should the post be about 80,000 Hours before you can tag it?
Of course, I'm just using 80,000 hours as an example. What I'm really asking is if we should create a tag guideline. Something like the New Tag Guidelines from Less Wrong. (I would be willing to write it if you want)
Also, for a totally unrelated comment. Congratulations on your satisfying karma-score milestone:
Oh wow, that's fantastic! I now feel like the tone of this post seems way too harsh. Seeing that most of my points are already being addressed by the mod-team makes me think I should have reached out to you before posting this. I'll make it up to you by winning that upcoming tagging event :) Thank you mod-team, for your continuing work on this amazing site!
Hey Derek, once again a great post! You might not know this, but the EA Forum allows you to collect a series of posts into an ordered sequence. If you go to https://forum.effectivealtruism.org/sequences you can see all the other sequences. When you click on the button on the right:
you can create your own sequence. Once you click on it you will be asked to write a short introduction and add a banner and card image. I haven't written any sequences, but as you can see I have designed most of them. I'm sure you can write a good introduction, but if you want a snazzy card image and banner, I've made some for you:
Here's how it will look among the other sequence cards:
Hope you like them.
I don't think we need a separate tag for this. Especially since there aren't any posts about J-PAL.
Thank you! The cropping in Photoshop only takes five minutes at most, so it isn't a big deal. All of the images are made with Creative Commons images, except for "moral anti-realism" (which I took from Lukas' own page, so I assume he has the rights) and "rwas library", which I found on a bunch of websites with no indication of its status (if it does get hit with a copyright strike I'll photoshop a similar-looking image).
Btw, could you add an "EA Forum (meta)" tag to this post? I can't add tags at the moment.
I love that sequence, but it's specifically about motivation and how to cultivate it. An "Introduction to EA" sequence would ideally focus on introducing some of the key concepts and organizations. Something like Doing Good Better, but with a little more focus on the movement.
No problem! As a non-native English speaker this was an extremely difficult post to write, which is why I leaned so heavily on images. If you (or anyone) have any suggestions for how I could reword this post to make it clearer, please let me know.
EDIT: I've changed the term "standard utilitarianism" to "moment utilitarianism"; I hope this clears up some of the confusion.
EDIT 2: I've realized that other moral theories (outside of consequentialism) might have other ways of solving this. It has become a big project, which I call "timespan ethics", since some theories reject the possibility of different timelines. I'm planning to make this project my master's thesis.
I think so too, because you can't really talk about ethics without a timeframe. I wasn't trying to argue that people don't use timeframes, but rather that people automatically use total timeline utilitarianism without realizing that other options are even possible. This was what I was trying to get at by saying:
Usually when people talk about different types of utilitarianism they automatically presuppose "total timeline utilitarianism". In fact, the current debate between total and average utilitarianism is actually a debate between "total total utilitarianism" and "total average utilitarianism".
Please, let me know about any source discussing this.
If with "this" you mean timeline utilitarianism, then there isn't one unfortunately (I haven't published this idea anywhere else yet). Once I've finished university I hope some EA institution will hire me to do research into descriptive population ethics. So hopefully I can provide you with some data on our intuitions about timelines in a couple years.
I suspect that people more concerned with the quality of life will tend to favor average timeline utilitarianism, and all the people in this community that are so focused on x-risk and life-extension might be a minority with their stronger preference for the quantity of life (anti-deathism is the natural consequence of being a strong total timeline utilitarian). If you want to read something similar to this then you could always check out the wider literature surrounding population ethics in general.
Yes, (total) total utilitarianism is both across time and space, but you can aggregate across time and space in many different ways. E.g. median total utilitarianism is also both across time and space, but it aggregates very differently.
The reason I find the definition not very useful is that it can be interpreted in so many different ways. The aim of this post was to show the four main ways you could interpret it. When I read the definition my first interpretation was "hinge broadness", while I suspect your interpretation was "hinge reduction". I'm not saying that hinge broadness is the 'correct' definition of hingeyness, because there is no 'correct' definition of hingeyness until a community of language users has made it a convention. There is no convention yet, so I'm purposefully splitting the concept into more quantifiable chunks in the hope that we can avoid the confusion that comes from multiple people using the same terms for different concepts. Since I failed to convey this I will slightly edit this post to clear it up for the next confused reader. I added one sentence, and tweaked another sentence and a subtitle. The old version of the post can be found on LessWrong.
Here the "range of possible utility in endings" for tick 1 (the first 10) is [0-10], and the "range of possible utility in endings" for the first 0 (tick 2) is also [0-10], which is the same. Of course the probability has changed (getting an ending of 1 utility is no longer even an option), but the minimum and maximum stay the same.
But we don't care about just the endings, we care about the rest of the journey too. The width of the "range of the total amount of utility you could potentially experience over all branches (not just the endings)" can shrink or stay the same, but the range itself can shift. For example, the lowest possible total utility for tick 1 is 10->0->0 = 10 utility and the highest is 10->0->10 = 20 utility; the difference between the lowest and highest is 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility.
The probability has changed: Ending with a weird number like 19 is impossible for the '0 on tick 2'. The probability for a good ending has also become much more favorable (50% chance to end with a 10 instead of 25% it was before). Probability is important for the precipiceness.
But while the width of the range stayed the same, the range itself has shifted downwards, from [10-20] to [0-10]. Maybe this is also an important factor in what some people call hingeyness? Maybe call that 'hinge shift'?
This will affect the probability that you end up in certain futures and not others. I used the word precipiceness in my post to refer to high-risk high-reward probability distributions. Maybe it's also important to have a word for a time in which the probability that we will generate low amounts of utility in the future is increasing. We call this "increase in x-risk" now, because going extinct is most of the time a good way to ensure you will generate low amounts of utility. But as I showed in my post, you can have an awesome extinction and a horrible long existence. Maybe I shouldn't be trying to attach words to all the different variants of probability distributions and just draw them instead.
To recap "the range of total amount of utility you can potentially generate" aka "hinge broadness" can:
1) Shrink by a certain amount (aka hinge reduction). This can be because the maximum amount of utility you can potentially generate is decreasing (I'll call this "top-reduction") or because the minimum amount of utility you can potentially generate is increasing (I'll call this "bottom-reduction"). Top-reduction is bad, bottom-reduction is good.
2) Shift upward or downward in utility by a certain amount (aka hinge shift). Upward shift is good, downward shift is bad.
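The broadness/shift distinction above can be sketched in a few lines of code. This is a hypothetical toy model (the tree values match the 10 -> 0 -> {0, 10} example earlier): each node holds the utility gained at that tick, and a node's "hinge broadness" is the min/max total utility over all root-to-leaf paths starting from it.

```python
# Toy model of hinge broadness and hinge shift (hypothetical example).
# A node is (utility_at_this_tick, list_of_children); a leaf has no children.

def utility_range(node):
    """Return (min, max) total utility over all paths starting at this node."""
    value, children = node
    if not children:
        return value, value
    child_ranges = [utility_range(child) for child in children]
    return (value + min(lo for lo, _ in child_ranges),
            value + max(hi for _, hi in child_ranges))

# The '0 on tick 2' from the example: endings of 0 or 10 utility.
tick2 = (0, [(0, []), (10, [])])
# The tick-1 node (10 utility) leading into that subtree.
tick1 = (10, [tick2])

print(utility_range(tick1))  # (10, 20): width 10
print(utility_range(tick2))  # (0, 10): same width, but shifted down by 10
```

Moving from tick 1 to tick 2 leaves the width of the range unchanged (no hinge reduction) while the whole range slides from [10, 20] down to [0, 10]: a pure hinge shift.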
Some other examples of things you could coordinate with this website:
Leaving the current social media giants en masse for a more privacy-conscious/bubble-breaking/fact-checking alternative. Everyone hates Facebook/Twitter/TikTok etc., and yet everyone uses them because everyone uses them. By coordinating the switch you can effectively take away their biggest driving force: that everyone uses them.
Switching to simplified spelling. Many of the current spelling rules are dumb, yet we use them because we are expected to use them. If we all collectively switched to simplified spelling rules, no one would need to keep those unnecessarily complex rules in mind.
Organizing boycotts of unethical companies. Your single action will not affect the supply chain, which makes people unmotivated to act. This website would change that.
Switching from cars to other modes of transportation.
Doing illegal things in very large groups so you can't be arrested (e.g. not wearing a burqa).
Starting a local project (e.g. an exercise group).
Redefining/reclaiming a word.
Organizing a strike against your exploitative employer.
Wearing no/less clothes during hot summer months.
Switching to a different currency.
Having one person pick up groceries for everyone in the local community instead of everyone driving separately.
Organizing/attending an event.
Starting a crowdsourced project (e.g. a wiki).
In short: the list of things I only do because everyone else does them is gigantic, but that list is tiny compared to all the things I would do if more people started doing them. I could keep going, but I hope this gives some idea as to why this site might be useful.
This post is a crosspost from Less Wrong. Below I leave a comment by Lukas Gloor that explained the implications of that post way better than I did:
This type of procedure may look inelegant for folks who expect population ethics to have an objectively correct solution. However, I think it's confused to expect there to be such an objective solution. In my view at least, this makes the procedure described in the original post here look pretty attractive as a way to move forward.
Because it includes some very similar considerations as are presented in the original post here, I'll try to (for those who are curious enough to bear with me) describe the framework I've been using to think about population ethics:
Ethical value is subjective in the sense that if someone's life goal is to strive toward state x, it's no one's business to tell them that they should focus on y instead. (There may be exceptions, e.g., in cases where someone's life goals are the result of brainwashing.)
For decisions that do not involve the creation of new sentient beings, preference utilitarianism or "bare minimum contractualism" seem like satisfying frameworks. Preference utilitarians are ambitiously cooperative/altruistic and scale back any other possible life goals in favor of getting maximal preference satisfaction for everyone, whereas "bare-minimum contractualists" obey principles like do no harm while still mostly focusing on their own life goals. A benevolent AI should follow preference utilitarianism, whereas individual people are free to decide for anything on the spectrum between full preference utilitarianism and bare-minimum contractualism. (Bernard Williams's famous objection to utilitarianism is that it undermines a person's "integrity" by alienating them from their own life goals. By focusing all their actions on doing what's best from everyone's point of view, people don't get to do anything that's good for themselves. This seems okay if one consciously chooses altruism as a way of life, but it seems overly demanding as an all-encompassing morality.)
When it comes to questions that affect the creation of new beings, the principles behind preference utilitarianism or bare-minimum contractualism fail to constrain all of the possibility space. In other words: population ethics is underdetermined.
That said, it's not the case that "anything goes." Just because present populations have all the power doesn't mean that it's morally permissible to ignore any other-regarding considerations about the well-being of possible future people. A bare-minimum version of population ethics could be conceptualized as a set of appeals or principles by which newly created beings can hold accountable their creators. This could include principles such as:
All else equal, it seems objectionable to create minds that lament their existence.
All else equal, it seems objectionable to create minds and place them in situations where their interests are only somewhat fulfilled, if one could have easily provided them with better circumstances.
All else equal, it seems objectionable to create minds destined to constant misery, yet with a strict preference for existence over non-existence.
(While the first principle is about which minds to create, the second two principles apply to how to create new minds.)
Is it ever objectionable to fail to create minds – for instance, in cases where they’d have a strong interest in their existence?
This type of principle would go beyond bare-minimum population ethics. It would be demanding to follow in the sense that it doesn't just tell us what not to do, but also gives us something to optimize (the creation of new happy people) that would take up all our caring capacity.
Just because we care about fulfilling actual people's life goals doesn't mean that we care about creating new people with satisfied life goals. These two things are different. Total utilitarianism is a plausible or defensible version of a "full-scope" population ethical theory, but it's not a theory that everyone will agree with. Alternatives like average utilitarianism or negative utilitarianism are on equal footing. (As are non-utilitarian approaches to population ethics that say that the moral value of future civilization is some complex function that doesn't scale linearly with increased population size.)
So what should we make of moral theories such as total utilitarianism, average utilitarianism or negative utilitarianism? The way I think of them, they are possible morally-inspired personal preferences, rather than personal preferences inspired by the correct all-encompassing morality. In other words, a total/average/negative utilitarian is someone who holds strong moral views related to the creation of new people, views that go beyond the bare-minimum principles discussed above. Those views are defensible in the sense that we can see where such people's inspiration comes from, but they are not objectively true in the sense that those intuitions will appeal in the same way to everyone.
How should people with different population-ethical preferences approach disagreement?
One pretty natural and straightforward approach would be the proposal in the original post here.
Ironically, this would amount to "solving" population ethics in a way that's very similar to how common sense would address it. Here's how I'd imagine non-philosophers would approach population ethics:
Parents are obligated to provide a very high standard of care for their children (bare-minimum principle).
People are free to decide against becoming parents (principle inspired by personal morality).
Parents are free to want to have as many children as possible (principle inspired by personal morality), as long as the children are happy in expectation (bare-minimum principle).
People are free to try to influence other people’s stances and parenting choices (principle inspired by personal morality), as long as they remain within the boundaries of what is acceptable in a civil society (bare-minimum principle).
For decisions that are made collectively, we'll probably want some type of democratic compromise.
I get the impression that a lot of effective altruists have negative associations with moral theories that leave things underspecified. But think about what it would imply if nothing was underspecified: as Bernard Williams has noted, if the true morality left nothing underspecified, then morally-inclined people would have no freedom to choose what to live for. I no longer think it's possible or even desirable to find such an all-encompassing morality.
One may object that the picture I'm painting cheapens the motivation behind some people's strongly held population-ethical convictions. The objection could be summarized this way: "Total utilitarians aren't just people who self-orientedly like there to be a lot of happiness in the future! Instead, they want there to be a lot of happiness in the future because that's what they think makes up the most good."
I think this objection has two components. The first component is inspired by a belief in moral realism, and to that, I'd reply that moral realism is false. The second component of the objection is an important intuition that I sympathize with. I think this intuition can still be accommodated in my framework. This works as follows: What I labelled "principle inspired by personal morality" wasn't a euphemism for "some random thing people do to feel good about themselves." People's personal moral principles can be super serious and inspired by the utmost desire to do what's good for others. It's just important to internalize that there isn't just one single way to do good for others. There are multiple flavors of doing good.