Posts

Victim Coordination Website 2022-03-06T17:21:56.602Z
Department Voting 2021-08-02T09:42:26.618Z
Bob Jacobs's Shortform 2021-02-13T19:35:49.561Z
Reworking the Tagging System 2021-01-24T18:03:15.104Z
Making More Sequences 2020-10-20T12:36:47.388Z
Timeline Utilitarianism 2020-10-09T20:12:28.831Z
Sortition Model of Moral Uncertainty 2020-10-04T20:58:31.119Z
A Toy Model of Hingeyness 2020-09-07T17:34:52.821Z
Making a Crowdaction website 2020-08-19T15:18:14.140Z
Meta-Preference Utilitarianism 2020-02-05T19:14:22.652Z
[Crosspost] Reducing Risks of Astronomical Suffering: A Neglected Priority 2016-09-14T15:21:17.369Z

Comments

Comment by Bob Jacobs (bmjacobs@telenet.be) on Propose and vote on potential EA Wiki articles / tags [2022] · 2022-05-29T11:27:42.353Z · EA · GW

For context, here is the discussion:

Pablo

Hi,

Thanks for your contributions to the EA Wiki. This is to let you know that I left a comment here.

Bob Jacobs

Would physical abuse be better? People downvoted me for writing this entry so feel free to change it, I'm not touching it anymore.

Pablo

Sorry about the downvotes. (My guess is that they were meant to signal disapproval of 'abuse' as an adequate subject for a Wiki entry rather than express a negative opinion about the article's quality; FWIW, I didn't downvote you.)

I guess I still feel that "physical abuse" is too vague. Do you have examples of physical abuse in mind that would be of special interest to EA not already covered elsewhere in the Wiki?

Bob Jacobs

I also don't think it's an expression of the article's quality; I think the EA community (or at least its forum users) is uncomfortable with abuse. The likes/upvotes of posts and comments that mention it are often polarized, and I think that acknowledging that abuse happened in the EA community probably (and understandably) made people defensive. I wasn't trying to smear EA by including a Kathy Forth post; I was trying to be open and honest.

You could split physical abuse up into things like domestic violence, child abuse, workplace aggression, sexual abuse... However, this would make the number of posts for each tag very sparse, and many tags can be seen as subsections of other tags, e.g. Global health and development -> Global health and wellbeing -> Burden of disease -> Medicine -> Malaria -> Mass distribution of long-lasting insecticide-treated nets -> Against Malaria Foundation,
so having a broader tag isn't really out of the ordinary.

Pablo

Hi,

I think it's fine to have broad tags that encompass other, more specific tags. My concern about "abuse" isn't merely that it is broad, but that it doesn't seem to capture a "natural kind" of special interest to EA: compare with "global health and development", which, despite being broad, singles out a particularly promising focus area (from a neartermist, human-centric perspective).

Things like "domestic violence", "child abuse", etc. are in principle appropriate candidates for a Wiki entry, I think, though whether they are in practice depends on whether these are topics that have attracted sufficient EA attention. There are all sorts of serious issues in the world that the Wiki doesn't cover, because the EA community hasn't devoted enough attention to them.

As for abuse within the EA community, perhaps this would warrant a "community health" entry (which we don't have at the moment). Would you be in favor of creating such an entry?
 

Bob Jacobs

Don't you think physical abuse captures a natural kind? If I start asking people on the street to picture physical abuse and to picture global health and development, I think basically everyone will have a clearer picture of what physical abuse entails than of what global health and development entails.

I have just proven myself to be bad at writing about sensitive topics, so you should probably ask someone who speaks English as a native language and who is more integrated into the wider EA community (there are hardly any EA organizations/projects where I live, so the small group I run is pretty isolated from the larger EA community). Otherwise, the Community experiences / Diversity and inclusion / Criticism of effective altruist organizations tags might be enough to cover it.

Pablo

Would you be happy for me to post this conversation here? This would allow others to chime in.

Bob Jacobs

Yeah, go right ahead

Bob Jacobs

Hi Pablo,

I saw that some tags now have banners (and icons). Since I made card images and banners for a bunch of sequences, shall I make some for the tags too? I can't add them via the edit function, so if you want me to add them I would need some other mechanism.

Cheers,
Bob

Pablo

Hi Bob,

I think the image option is available only for "core" tags (which have a white rather than a grey background), although I'm not entirely sure since this was done by the tech team. I believe all the core tags already have images associated with them, but if that isn't the case, or if you think you can produce better images, it may be worth exploring this further. Would you mind messaging JP Addison, who is leading this? Thanks.

Bob Jacobs

Will do!

Pablo

Hi Bob,

I created a poll to decide what to do about the 'Abuse' article and it looks like people are not in favor of keeping it. Just wanted to let you know that I'll probably delete it if the vote doesn't substantially change by the end of today.

Thanks again for contributing to the EA Wiki, and I hope this doesn't dissuade you from contributing more in the future!

Bob Jacobs

The first tag to get nominated for deletion is one which included a criticism of the EA-community? Interesting coincidence.

Pablo

I don't appreciate the sarcasm. Lots of tags have been deleted in the past. This was the first tag deleted after trying out a democratic process of decision-making that doesn't rely on my judgment alone.

Bob Jacobs

The conversation had ended with my argument. No one had refuted it, no one provided a counterargument, no one left a request for the deletion of the tag. It's also not democratic if some people get more votes than others. If there was any need for a poll at all, it would be about whether the name should be changed to "physical abuse". Should I just upload the same article but under the name "physical abuse" (but this time without mentioning the EA-community and without adding a post by Kathy Forth)?

Pablo

If you want to nominate an article, you can do so here.

EDIT: Pablo's response is fair, so I will upvote it. I didn't think of it because most of it was already released and none of it contained personal info, but I should have asked. I do think the context is important, since Pablo's comment is misleading: the objections were addressed and the deletion process was started without a request. The fact that my tag proposal stood a day without getting downvoted, but did get a downvote immediately after this comment mentioned the EA community and Kathy Forth again, is evidence that these factors were indeed what caused the downvote of the original tag. I do think some of the posts I linked in the proposal worked better than others, which is why I used the word 'potentially'. Currently 48 tags have only 1 article and 55 tags have only 2 articles. I feared that if I only linked a few, the tag would be dismissed with the response that the EA community doesn't engage with this topic (which is demonstrably not true). Feel free not to include the articles you think are irrelevant, but there are definitely more than two that remain.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Propose and vote on potential EA Wiki articles / tags [2022] · 2022-05-28T12:50:03.993Z · EA · GW

A physical abuse tag. I've already written something that could be used as the article:

 

The physical abuse tag covers posts that discuss the prevention of physical abuse as a cause area.

For posts about animal abuse see: Animal Welfare
For posts about the abuse of statistics see: Statistical methods
For posts about the effects of abuse see: Pain and suffering and Mental health

Further reading

Macpherson, Michael Colin (1985) The psychology of abuse, R & E Publishers.

McCluskey, Una; Hooper, Carol-Ann (2000) Psychodynamic Perspectives on Abuse: The Cost of Fear, Jessica Kingsley Publishers.

Related entries

Pain and suffering | Mental health

 

Posts it could potentially apply to:

https://forum.effectivealtruism.org/posts/C5diBK7sJmoYWdrCs/is-preventing-child-abuse-a-plausible-cause-x

https://forum.effectivealtruism.org/posts/o4HX48yMGjCrcRqwC/what-helped-the-voiceless-historical-case-studies

https://forum.effectivealtruism.org/posts/qv5dRAGkTjxs9vrv9/small-and-vulnerable

https://forum.effectivealtruism.org/posts/pW7w5mcbKaWGi9vez/victim-coordination-website

https://forum.effectivealtruism.org/posts/Xjo23zhn6CPoijLSo/a-love-letter-to-civilian-osint-and-possibilities-as-a-tool

https://forum.effectivealtruism.org/posts/nqgE6cR72kyyfwZNL/making-discussions-in-ea-groups-inclusive

https://forum.effectivealtruism.org/posts/hYh6jKBsKXH8mWwtc/a-contact-person-for-the-ea-community

https://forum.effectivealtruism.org/posts/oXLoevuopqE3hdeis/link-cutting-through-spiritual-colonialism

https://forum.effectivealtruism.org/posts/j9toMRy2LAsHfwHN5/improving-local-governance-in-fragile-states-practical-1

https://forum.effectivealtruism.org/posts/ME4zE34KBSYnt6hGp/new-cause-proposal-international-supply-chain-accountability

https://forum.effectivealtruism.org/posts/bqvanr9qz6dg2f5Tw/why-is-the-amount-of-child-porn-growing

Comment by Bob Jacobs (bmjacobs@telenet.be) on Victim Coordination Website · 2022-03-06T18:45:59.325Z · EA · GW

The problem of harassment is what the section 'Banning, flagging and blocking' is about. You can replace the phrase "malignant actor" with "harasser". As stated, there are some mechanisms to minimize this, but more suggestions are always appreciated.
The option to self-accuse was pointed out in 'Banning, flagging and blocking' ("people who pretend to be someone they’re not") and 'Tagging' ("Let’s say Barry MacBadguy only assaulted one person and now uses this site to pretend to be his own victim"). The self-accuser can see that someone else accused them, but if the victim opts to stay completely anonymous, the self-accuser can't see who accused them. The flagging and blocking mechanisms are there to punish pretenders, though without a third-party verification system pretenders will still get in. Third-party verification is possible, but it does hamper anonymity and makes the project less scalable.

EDIT: You can't stay completely anonymous if you use the chatroom to share experiences, but you can stay anonymous if you use it to coordinate actions (e.g. "On January 6th at 13:00 we post our stories to our Facebook pages"). Obviously, after you've come forward you can't stay completely anonymous, but you can't do that with any other method either. This project will not completely protect victims, merely improve upon the current situation.

Comment by Bob Jacobs (bmjacobs@telenet.be) on What are the most important charts, graphs, statistics, or other data visualizations related to effective giving? · 2022-03-02T23:30:25.560Z · EA · GW

Hey Julian, great initiative!

There are some that are focussed more on pollution, such as lead exposure:

Polluted water:

And air pollution:

GWWC uses bednets, medicine and healthspan to show how much your donation could do. Some people might like concrete numbers on what they can buy, e.g. what a thousand dollars could do:

If you really want to hammer home how much more your donation could do overseas you can show graphs such as:

and:

 

As much as I think considering new ideas is important, I recommend you don't include things like the value of potential people and geo-engineering in the graphs, since these concepts may be controversial/distracting for people.

The site informationisbeautiful.net/beautifulnews used to upload beautiful, hopeful graphs every day, and while they have stopped, it might be worth checking out their archives.

If you need help with the graphic design of these graphs, feel free to reach out to me.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Ideas from network science about EA community building · 2022-02-18T12:48:12.544Z · EA · GW

Hi Vaidehi,

I saw that this sequence didn't have a banner image yet, and since I made those for other sequences I decided to make one for this sequence too.

As card image you have a network over the United States:

I'm guessing this is a flight network?
In any case, this is a good way to visualize the ideas of sociology and network science. But the colors contain red and purple, while the EA logo is purely blue. And more importantly, this image only depicts the USA, while EA is a global movement.

So I decided to look for images of flight networks that were in the creative commons. I found a flightpath visualizer that depicted the whole world, but didn't show the borders, just like in your image (which may or may not be a philosophical statement about borders). I began photoshopping the visualizations to include the exact shade of blue the EA logo uses, but that isn't always aesthetically pleasing. I made a couple of images that you can use as banner and card image, but if you have specific wishes about how you want them to look, or if you have a totally different idea about how you want your card and banner images to look, please let me know and I'll make one to your liking.

The forum compresses the images, so if you like any of these, let me know and I'll send you the high quality version of them.

EDIT: I was looking through all the sequences and messaging everyone who's missing a banner with the offer to make one for them.  I saw that your "AI Alignment Literature Review and Charity Comparison" sequence doesn't have one either. You could use your card image as banner, or I can make one for you, if you want.

Comment by bmjacobs@telenet.be on [deleted post] 2021-09-23T10:28:58.060Z

I too would like to know whether this is resolved and who the winner is. Also, I see that since I submitted my entry you've edited your post to talk about symbols instead of flags and added the phrase: "Avoid mathematical symbols since these are less suitable for broader outreach". But I had already submitted my flag with a mathematical symbol on it. Does this mean that my work is now retroactively made ineligible?

Comment by Bob Jacobs (bmjacobs@telenet.be) on Maximizing impact during consulting: building career capital, direct work and more. · 2021-08-13T16:34:39.837Z · EA · GW

Nice post and nice sequence, although I think you forgot to put this post in the sequence itself? I don't see it in there: forum.effectivealtruism.org/s/LTT73ENYmZRfjMq25

Comment by bmjacobs@telenet.be on [deleted post] 2021-08-04T10:33:34.475Z

I already worked on a project like this previously:

Flag utilitarianism

Yellow stands for happiness, that which utilitarianism pursues

White stands for morality, that which utilitarianism is

The symbol is a sigma, since utilitarians care about the sum of all utility

The symbol is also an hourglass, since utilitarians care about the (long-term) future consequences

 

If you don't like the rounded design I can also make it more angular:

The size of the symbol, the angles, the proportions of the flags, etc. can all be changed if you have specific preferences. The main idea is the sigma that also functions as an hourglass.
 

I do however worry whether it's wise to make symbols for philosophical ideas. I like designing these things, but you run the risk that these symbols can be used to make people strongly identify themselves with these ideas, instead of them being things that people can dispassionately examine and perhaps reject. I would advise everyone to say things like: "I like utilitarianism" or "I believe in utilitarianism", instead of "I am a utilitarian". Let's make sure ethical ideas don't become as rigidly polarized as political ideas.

EDIT: If you want to redesign this flag, go right ahead! I'm planning to donate the prize money if I win, so if you improve on my design and also donate the prize money, that would actually make me very happy.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Department Voting · 2021-08-02T17:09:29.221Z · EA · GW

Do you know similar voting methods that worked on a small scale?


I haven't seen the use of categories and columns before, but the voting systems I used have already seen a bunch of analysis and real-world use (the electowiki I linked to is a good starting point if you want to look into it). If with "small scale" you mean "you and a bunch of friends need to find a place to eat", I wouldn't use columns and categories (it takes too long), but would instead use a simple Approval Vote. If you have a specific scenario in mind, feel free to message me and maybe I can help you out.
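For concreteness, here is a minimal sketch of how a simple approval vote could be tallied; the ballots and option names are made up for illustration:

```python
from collections import Counter

def approval_winner(ballots):
    """Each ballot is the set of options a voter approves of.
    The option with the most approvals wins (ties broken arbitrarily here)."""
    tally = Counter(option for ballot in ballots for option in ballot)
    return tally.most_common(1)[0]

# Made-up example: four friends choosing a place to eat.
ballots = [
    {"pizza", "ramen"},
    {"ramen"},
    {"pizza", "falafel"},
    {"ramen", "falafel"},
]
print(approval_winner(ballots))  # ('ramen', 3)
```

Each voter simply approves as many options as they like, which is what makes the method so easy to explain compared to a full column-and-category setup.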

What are the next steps in terms of research / action?

I'm not a professional voting theorist, so I'm going to wait and see if someone finds a flaw in the idea of using columns, categories or departments. If not, I might be able to publish it in a couple years if my university/a journal is interested. I think from an activism perspective we should first focus on introducing a better voting system. Something like Approval Voting would be easier to explain/get the public on board with than this more complex electoral reform. If I run into some people that are passionate about voting reform I will certainly share this idea with them, but for now I don't really have an audience for it beyond this forum. If you have a project in mind, feel free to message me.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Making More Sequences · 2021-04-18T15:33:22.653Z · EA · GW

No problem!

Good question! I asked the mods if they could put Lukas' name underneath the sequence, since he wrote it. The sequence does show up on the sequences-page, so I'm guessing that when they changed my name they simultaneously removed it from my page, but forgot to subsequently add it to Lukas' page. That's just a guess though. Should I message the mods about it?

Comment by Bob Jacobs (bmjacobs@telenet.be) on Making More Sequences · 2021-04-18T09:46:05.312Z · EA · GW

Thanks! I actually already made that sequence, you can find it here: https://forum.effectivealtruism.org/s/R8vKwpMtFQ9kDvkJQ

Comment by Bob Jacobs (bmjacobs@telenet.be) on EA Forum feature suggestion thread · 2021-03-13T19:47:49.252Z · EA · GW

Vaidehi_agarwalla and I thought it might be a good idea to have sequences within sequences. For example, Vaidehi created sequences for the EA Survey results per year, because sometimes you only want to look at the survey results for that one year; other times you want to look at all the survey results. If we add a new survey sequence every year it will clutter up the sequence page, but if we put them in one larger sequence it will take up less space and allow people to either read everything in one go, or select the "sub-sequence" they want to read and stop there.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Bob Jacobs's Shortform · 2021-02-13T19:35:49.750Z · EA · GW

Beauty vs Happiness Thought Experiment


Say a genie were to give you the choice between:

1) Creating a stunningly beautiful world that is uninhabited and won’t influence sentient beings in any way or 2) Not creating it.

In addition, both the genie’s and your memories of this event are immediately erased once you make the choice, so no one knows about this world and you cannot derive happiness from the memory.

Would you choose option one or two?
 

I would choose option one, because I prefer a more beautiful universe to an uglier one (even if no one experiences it). This forms an argument against classic utilitarianism.

Classic utilitarianism says that I’m wrong. The choice doesn’t create any happiness, only beauty. This means that, according to classic utilitarianism, I should have no preference between the two options.
 

There are several moral theories that do allow you to prefer option one. One of them is preference utilitarianism, which states that it’s okay to have preferences that don’t bottom out in happiness. For this reason, I find preference utilitarianism more persuasive than classic utilitarianism.


A possible counterargument would be that the new world isn't really beautiful since no one experiences it. Here we have a disagreement over whether beauty needs to be experienced to even exist.


A third way of looking at this thought experiment would be through the lens of value uncertainty. Through this lens, it does make sense to pick option one. Even if you have a thousand times more credence in the theory that happiness is the arbiter of value, the fact that no happiness is created either way leaves the door open for your tiny credence that beauty might be the arbiter of value. Value uncertainty suggests that you take the first option, just in case.
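As a rough illustration of that argument, here is a minimal sketch with made-up credences and values (the numbers are only for the sake of the example):

```python
# Made-up credences over two value theories.
credence = {"happiness is the arbiter of value": 0.999,
            "beauty is the arbiter of value": 0.001}

# Value each theory assigns to each option: neither option creates any
# happiness, and only creating the world adds beauty.
value = {
    "create the beautiful world": {"happiness is the arbiter of value": 0,
                                   "beauty is the arbiter of value": 1},
    "don't create it":            {"happiness is the arbiter of value": 0,
                                   "beauty is the arbiter of value": 0},
}

def expected_value(option):
    return sum(credence[theory] * value[option][theory] for theory in credence)

print(expected_value("create the beautiful world"))  # 0.001
print(expected_value("don't create it"))             # 0.0
```

Because the happiness theory is indifferent between the two options, even a tiny credence in the beauty theory is enough to make creating the world the better bet.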

Comment by Bob Jacobs (bmjacobs@telenet.be) on Creating A Kickstarter for Coordinated Action · 2021-02-03T12:47:10.641Z · EA · GW

Hey Lumpy, good post and I support the project.

You might be interested to know that these things were already discussed on this platform. I made a post about Making a crowdaction website and gabcoh also wrote a post on this topic. On Less Wrong there is even a whole sequence about this idea.
I don't have many programming skills, but I might be able to help in other ways (graphic design, mechanism design). I can also put you in contact with other people who are working on this project, like Ron (co-creator of CollAction), DonyChristie (a programmer who already built a prototype website) and gabcoh.

Comment by Bob Jacobs (bmjacobs@telenet.be) on 80,000 Hours: Where's the best place to volunteer? · 2021-01-25T15:31:42.422Z · EA · GW

Hey Aaron, great post.
Maybe this isn't the best place to ask this, but should posts like these be tagged with the 80,000 Hours tag? We've discussed the tagging system in my recent post, but I'm still not sure when certain tags should be used. When I look at the 80,000 Hours tag, almost all posts are from the 80,000 Hours account, but the two bottom ones aren't. So should this tag be used exclusively by the 80,000 Hours account, or should it be used whenever people talk about 80,000 Hours in general? And is mentioning them/their research in the post enough, or should the post be about 80,000 Hours before you can tag it?

Of course, I'm just using 80,000 Hours as an example. What I'm really asking is whether we should create a tag guideline, something like the New Tag Guidelines from Less Wrong. (I would be willing to write it if you want.)

Also, a totally unrelated comment: congratulations on your satisfying karma-score milestone:

Comment by Bob Jacobs (bmjacobs@telenet.be) on Reworking the Tagging System · 2021-01-24T21:53:01.644Z · EA · GW

Oh wow, that's fantastic! I now feel like the tone of this post was way too harsh. Seeing that most of my points are already being addressed by the mod-team makes me think I should have reached out to you before posting this. I'll make it up to you by winning that upcoming tagging event :)
Thank you mod-team, for your continuing work on this amazing site!

Comment by Bob Jacobs (bmjacobs@telenet.be) on Health and happiness research topics—Part 2: The HALY+: Improving preference-based health metrics · 2020-12-25T21:36:56.223Z · EA · GW

Hey Derek, once again a great post!
You might not know this, but the EA Forum allows you to collect a series of posts into an ordered sequence. If you go to https://forum.effectivealtruism.org/sequences you can see all the other sequences. When you click on the button on the right:

you can create your own sequence. Once you click on it you will be asked to write a short introduction and add a banner and card image. I haven't written any sequences, but as you can see I have designed most of them. I'm sure you can write a good introduction, but if you want a snazzy card image and banner, I've made some for you:

Health and happiness research, image card
Health and happiness research, banner

Here's how it will look among the other sequence cards:

Hope you like them.

Comment by bmjacobs@telenet.be on [deleted post] 2020-12-16T21:17:02.949Z

I don't think we need a separate tag for this, especially since there aren't any posts about J-PAL.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Making More Sequences · 2020-10-21T09:51:19.329Z · EA · GW

Thank you! The cropping in Photoshop only takes five minutes at most, so it isn't a big deal. All of the images are made with Creative Commons images, except for "moral anti-realism" (which I took from Lukas' own page, so I assume he has the rights) and "rwas library", which I found on a bunch of websites with no indication of its status (if it does get a copyright strike I'll photoshop a similar-looking image).

Btw, could you add an "EA Forum (meta)" tag to this post? I can't add tags at the moment.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Making More Sequences · 2020-10-20T20:54:40.201Z · EA · GW

I love that sequence, but it's specifically about motivation and how to cultivate it. An "Introduction to EA" sequence would ideally focus on introducing some of the key concepts and organizations. Something like Doing Good Better, but with a little more focus on the movement.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Timeline Utilitarianism · 2020-10-11T07:16:38.069Z · EA · GW

No problem! As a non-native English speaker I found this an extremely difficult post to write, which is why I leaned so heavily on images. If you (or anyone) have any suggestions for how I could reword this post to make it clearer, please let me know.

EDIT: I've changed the term "standard utilitarianism" to "moment utilitarianism"; I hope this clears up some of the confusion.

EDIT 2: I've realized that other moral theories (outside of consequentialism) might have other ways of solving this. It has become a big project which I call "timespan ethics", since some theories reject the possibility of different timelines. I'm planning to make this project my master's thesis.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Timeline Utilitarianism · 2020-10-10T13:59:27.761Z · EA · GW

I think so too, because you can't really talk about ethics without a timeframe. I wasn't trying to argue that people don't use timeframes, but rather that people automatically use total timeline utilitarianism without realizing that other options are even possible. This was what I was trying to get at by saying:

Usually when people talk about different types of utilitarianism they automatically presuppose "total timeline utilitarianism". In fact, the current debate between total and average utilitarianism is actually a debate between "total total utilitarianism" and "total average utilitarianism".

Comment by Bob Jacobs (bmjacobs@telenet.be) on Timeline Utilitarianism · 2020-10-10T07:48:15.286Z · EA · GW

Please, let me know about any source discussing this.

If with "this" you mean timeline utilitarianism, then there isn't one unfortunately (I haven't published this idea anywhere else yet). Once I've finished university I hope some EA institution will hire me to do research into descriptive population ethics, so hopefully I can provide you with some data on our intuitions about timelines in a couple of years.

I suspect that people more concerned with the quality of life will tend to favor average timeline utilitarianism, and all the people in this community that are so focused on x-risk and life-extension might be a minority with their stronger preference for the quantity of life (anti-deathism is the natural consequence of being a strong total timeline utilitarian).
If you want to read something similar to this then you could always check out the wider literature surrounding population ethics in general.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Timeline Utilitarianism · 2020-10-09T21:22:19.186Z · EA · GW

Yes, (total) total utilitarianism aggregates across both time and space, but you can aggregate across time and space in many different ways. E.g. median total utilitarianism also aggregates across both time and space, but it aggregates very differently.
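To make the difference concrete, here is a minimal sketch with a made-up timeline of per-moment utility totals; the labels follow my reading of the post's naming scheme (first word = how you aggregate over the timeline, second word = how you aggregate over the population at each moment):

```python
from statistics import mean, median

# Made-up timeline: the total utility of the whole population at each moment.
timeline = [4, 7, 1, 9, 3]

total_total   = sum(timeline)     # "total total" utilitarianism: 24
average_total = mean(timeline)    # "average total" utilitarianism: 4.8
median_total  = median(timeline)  # "median total" utilitarianism: 4

print(total_total, average_total, median_total)
```

All three aggregate across both time and space, but they can rank the same pair of timelines very differently.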

Comment by Bob Jacobs (bmjacobs@telenet.be) on Suggestions for Online EA Discussion Norms · 2020-09-28T14:01:50.332Z · EA · GW

I made two visual guides that could be used to improve online discussions. These could be dropped into any conversation to (hopefully) make the discussion more productive.

The first is an update on Graham's hierarchy of disagreement:

I improved the layout of the old image and added a top layer for steelmanning. You can find my reasoning here and a link to the PDF file of the image here.

The second is a hierarchy of evidence:

I added a bottom layer for personal opinion. You can find the full image and the PDF file here.

Lastly, I wanted to share the Toulmin method of argumentation, which is an excellent guide for a general pragmatic approach to arguments.

Comment by Bob Jacobs (bmjacobs@telenet.be) on A Toy Model of Hingeyness · 2020-09-10T22:55:03.168Z · EA · GW

The reason I find the definition not very useful is that it can be interpreted in so many different ways. The aim of this post was to show the four main ways you could interpret it. When I read the definition my first interpretation was “hinge broadness”, while I suspect your interpretation was “hinge reduction”. I’m not saying that hinge broadness is the ‘correct’ definition of hingeyness, because there is no ‘correct’ definition of hingeyness until a community of language users has made it a convention. There is no convention yet, so I’m purposefully splitting the concept into more quantifiable chunks in the hope that we can avoid the confusion that comes from multiple people using the same terms for different concepts.
Since I failed to convey this I will slightly edit this post to clear it up for the next confused reader. I added one sentence, and tweaked another sentence and a subtitle. The old version of the post can be found on LessWrong.

Comment by Bob Jacobs (bmjacobs@telenet.be) on A Toy Model of Hingeyness · 2020-09-09T10:42:34.891Z · EA · GW

That's a very useful link, thank you.

Also mod-team, this comment isn't visible underneath my post in any of my browsers. Is there any way to fix that?

EDIT: Thank you mod-team!

Comment by Bob Jacobs (bmjacobs@telenet.be) on A Toy Model of Hingeyness · 2020-09-08T10:30:14.285Z · EA · GW

It just occurred to me that some people may find the shift in range also important for hingeyness. I'll illustrate what I mean with a new image:

(I can't post images in comments so here is a link to the image I will use to illustrate this point)

Here the "range of possible utility in endings" that tick 1 (the first 10) has is [0-10], and the "range of possible utility in endings" that the first 0 (tick 2) has is [0-10], which is the same. Of course the probability has changed (getting an ending of 1 utility is no longer an option), but the minimum and maximum stay the same.

But we don't just care about the endings; we care about the rest of the journey too. The width of the "range of the total amount of utility you could potentially experience over all branches (not just the endings)" can shrink or stay the same, but the range itself can shift. For example, the lowest possible total utility tick 1 can experience is 10->0->0 = 10 utility and the highest is 10->0->10 = 20 utility, a difference of 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility.

The probability has changed: ending with a weird number like 19 is impossible for the '0 on tick 2'. The probability of a good ending has also become much more favorable (a 50% chance to end with a 10 instead of the 25% it was before). Probability is important for precipiceness.

But while the width of the range stayed the same, the range itself has shifted downwards from [10-20] to [0-10]. Maybe this is also an important factor in what some people call hingeyness? Maybe call that 'hinge shift'?

This will affect the probability that you end up in certain futures and not others. I used the word precipiceness in my post to refer to high-risk, high-reward probability distributions. Maybe it's also important to have a word for a time in which the probability that we will generate low amounts of utility in the future is increasing. We call this an "increase in x-risk" now, because going extinct is most of the time a good way to ensure you will generate low amounts of utility. But as I showed in my post, you can have an awesome extinction and a horrible long existence. Maybe I shouldn't be trying to attach words to all the different variants of probability distributions and should just draw them instead.

To recap, the "range of the total amount of utility you can potentially generate", aka "hinge broadness", can:

1) Shrink by a certain amount (aka hinge reduction). This can happen because the most utility you can potentially generate is decreasing (I'll call this "top-reduction") or because the least utility you can potentially generate is increasing (I'll call this "bottom-reduction"). Top-reduction is bad, bottom-reduction is good.

2) Shift upward or downward in utility by a certain amount (aka hinge shift). Upward shift is good, downward shift is bad. (Both quantities are sketched below.)
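Here is a minimal sketch of those quantities on a tiny made-up utility tree; the tree and numbers are loosely based on the example above, not taken from the original images:

```python
def branch_totals(node):
    """Totals of utility over every path from this node to an ending.
    A node is (utility, children); an ending is a node with no children."""
    utility, children = node
    if not children:
        return [utility]
    return [utility + total for child in children for total in branch_totals(child)]

def hinge_stats(node):
    totals = branch_totals(node)
    low, high = min(totals), max(totals)
    return {"range": (low, high), "broadness": high - low}

# Tick 2 is the 0 whose endings are worth 0 or 10; tick 1 is the 10 before it.
tick2 = (0, [(0, []), (10, [])])
tick1 = (10, [tick2])

print(hinge_stats(tick1))  # {'range': (10, 20), 'broadness': 10}
print(hinge_stats(tick2))  # {'range': (0, 10), 'broadness': 10}
# Same broadness, but the range has shifted down by 10: the "hinge shift".
```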

Comment by Bob Jacobs (bmjacobs@telenet.be) on Making a Crowdaction website · 2020-08-19T16:34:51.782Z · EA · GW

Some other examples of things you could coordinate with this website:

Leaving the current social media giants en masse for a more privacy-conscious/bubble-breaking/fact-checking alternative. Everyone hates Facebook/Twitter/TikTok etc., and yet everyone uses them because everyone uses them. By coordinating the switch you can effectively take away their biggest driving force: that everyone uses them.

Switching to a different language. Many of the spelling rules are dumb, yet we use them because we are expected to use them. If we all collectively switched to simplified spelling rules, no one would need to keep those unnecessarily complex rules in mind.

Organizing boycotts of unethical companies. Your single action will not affect the supply chain, which makes people unmotivated to act. This website would change that.

Switching from cars to other modes of transportation.

Doing illegal things in very large groups so you can't be arrested (e.g. not wearing a burqa)

Starting a local project (e.g. an exercise group)

Redefining/reclaiming a word.

Organizing a strike against your exploitative employer.

Wearing no/less clothes during hot summer months.

Switching to a different currency.

Having one person pick up groceries for everyone in the local community instead of everyone driving separately.

Organizing/attending an event.

Starting a crowdsourced project (e.g. a wiki)

...

In short: the list of things I only do because everyone else does them is gigantic, but that list is tiny compared to all the things I would do if more people started doing them. I could keep going, but I hope this gives some idea as to why this site might be useful.

Comment by Bob Jacobs (bmjacobs@telenet.be) on Meta-Preference Utilitarianism · 2020-02-05T19:47:23.667Z · EA · GW

This post is a crosspost from Less Wrong. Below I leave a comment by Lukas Gloor that explained the implications of that post way better than I did:

This type of procedure may look inelegant for folks who expect population ethics to have an objectively correct solution. However, I think it's confused to expect there to be such an objective solution. In my view at least, this makes the procedure described in the original post here look pretty attractive as a way to move forward.
Because it includes some considerations very similar to those presented in the original post here, I'll try to (for those who are curious enough to bear with me) describe the framework I've been using to think about population ethics:
Ethical value is subjective in the sense that if someone's life goal is to strive toward state x, it's no one's business to tell them that they should focus on y instead. (There may be exceptions, e.g. in cases where someone's life goals are the result of brainwashing.)
For decisions that do not involve the creation of new sentient beings, preference utilitarianism or "bare minimum contractualism" seem like satisfying frameworks. Preference utilitarians are ambitiously cooperative/altruistic and scale back any other possible life goals in favor of getting maximal preference satisfaction for everyone, whereas "bare-minimum contractualists" obey principles like do no harm while still mostly focusing on their own life goals. A benevolent AI should follow preference utilitarianism, whereas individual people are free to decide on anything on the spectrum between full preference utilitarianism and bare-minimum contractualism. (Bernard Williams's famous objection to utilitarianism is that it undermines a person's "integrity" by alienating them from their own life goals. By focusing all their actions on doing what's best from everyone's point of view, people don't get to do anything that's good for themselves. This seems okay if one consciously chooses altruism as a way of life, but it seems overly demanding as an all-encompassing morality.)
When it comes to questions that affect the creation of new beings, the principles behind preference utilitarianism or bare-minimum contractualism fail to constrain all of the possibility space. In other words: population ethics is underdetermined.
That said, it's not the case that "anything goes." Just because present populations have all the power doesn't mean that it's morally permissible to ignore any other-regarding considerations about the well-being of possible future people. A bare-minimum version of population ethics could be conceptualized as a set of appeals or principles by which newly created beings can hold accountable their creators. This could include principles such as:
All else equal, it seems objectionable to create minds that lament their existence.
All else equal, it seems objectionable to create minds and place them in situations where their interests are only somewhat fulfilled, if one could have easily provided them with better circumstances.
All else equal, it seems objectionable to create minds destined to constant misery, yet with a strict preference for existence over non-existence.
(While the first principle is about which minds to create, the second two principles apply to how to create new minds.)
Is it ever objectionable to fail to create minds – for instance, in cases where they’d have a strong interest in their existence?
This type of principle would go beyond bare-minimum population ethics. It would be demanding to follow in the sense that it doesn't just tell us what not to do, but also gives us something to optimize (the creation of new happy people) that would take up all our caring capacity.
Just because we care about fulfilling actual people's life goals doesn't mean that we care about creating new people with satisfied life goals. These two things are different. Total utilitarianism is a plausible or defensible version of a "full-scope" population ethical theory, but it's not a theory that everyone will agree with. Alternatives like average utilitarianism or negative utilitarianism are on equal footing. (As are non-utilitarian approaches to population ethics that say that the moral value of future civilization is some complex function that doesn't scale linearly with increased population size.)
So what should we make of moral theories such as total utilitarianism, average utilitarianism or negative utilitarianism? The way I think of them, they are possible morally-inspired personal preferences, rather than personal preferences inspired by the correct all-encompassing morality. In other words, a total/average/negative utilitarian is someone who holds strong moral views related to the creation of new people, views that go beyond the bare-minimum principles discussed above. Those views are defensible in the sense that we can see where such people's inspiration comes from, but they are not objectively true in the sense that those intuitions will appeal in the same way to everyone.
How should people with different population-ethical preferences approach disagreement?
One pretty natural and straightforward approach would be the proposal in the original post here.
Ironically, this would amount to "solving" population ethics in a way that's very similar to how common sense would address it. Here's how I'd imagine non-philosophers would approach population ethics:
Parents are obligated to provide a very high standard of care for their children (bare-minimum principle).
People are free to decide against becoming parents (principle inspired by personal morality).
Parents are free to want to have as many children as possible (principle inspired by personal morality), as long as the children are happy in expectation (bare-minimum principle).
People are free to try to influence other people’s stances and parenting choices (principle inspired by personal morality), as long as they remain within the boundaries of what is acceptable in a civil society (bare-minimum principle).
For decisions that are made collectively, we'll probably want some type of democratic compromise.
I get the impression that a lot of effective altruists have negative associations with moral theories that leave things underspecified. But think about what it would imply if nothing was underspecified: As Bernard Williams has noted, if the true morality left nothing underspecified, then morally-inclined people would have no freedom to choose what to live for. I no longer think it's possible or even desirable to find such an all-encompassing morality.
One may object that the picture I'm painting cheapens the motivation behind some people's strongly held population-ethical convictions. The objection could be summarized this way: "Total utilitarians aren't just people who self-orientedly like there to be a lot of happiness in the future! Instead, they want there to be a lot of happiness in the future because that's what they think makes up the most good."
I think this objection has two components. The first component is inspired by a belief in moral realism, and to that, I'd reply that moral realism is false. The second component of the objection is an important intuition that I sympathize with. I think this intuition can still be accommodated in my framework. This works as follows: What I labelled "principle inspired by personal morality" wasn't a euphemism for "some random thing people do to feel good about themselves." People's personal moral principles can be super serious and inspired by the utmost desire to do what's good for others. It's just important to internalize that there isn't just one single way to do good for others. There are multiple flavors of doing good.