Posts

Why I'm skeptical of moral circle expansion as a cause area 2022-07-14T20:29:28.452Z
Hacking Weirdness Points 2022-05-24T23:42:20.664Z
Targeting Celebrities to Spread Effective Altruism 2022-05-02T23:49:10.029Z
What are the most underfunded EA organizations? 2021-12-17T23:28:04.122Z
Intactivism as a potential Effective Altruist cause area? 2021-06-26T07:21:26.177Z

Comments

Comment by Question Mark on Why say 'longtermism' and not just 'extinction risk'? · 2022-08-13T05:48:15.400Z · EA · GW

Suffering risks have the potential to be far, far worse than the risk of extinction. Negative utilitarians and EFILists may also argue that human extinction and biosphere destruction would be a good thing, or at least morally neutral, since a world with no life would contain no suffering. Whether to prioritize extinction risk depends on the expected value of the far future. If that expected value is close to zero, it could be argued that improving the quality of the far future in the event we survive matters more than making sure we survive.

Comment by Question Mark on (p-)Zombie Universe: another X-risk · 2022-07-29T03:37:06.281Z · EA · GW

A P-zombie universe could be considered a good thing if one is a negative utilitarian. If a universe lacks any conscious experience, it would not contain any suffering.

Comment by Question Mark on Existential Biorisk vs. GCBR · 2022-07-16T00:23:47.546Z · EA · GW

There's a book called The Revolutionary Phenotype that discusses what you refer to as existential biorisk. It argues that gene editing and related technologies will give rise to what it calls a "phenotypic revolution", which will likely result in human extinction.

Comment by Question Mark on Recommendations for non-technical books on AI? · 2022-07-13T01:57:48.712Z · EA · GW

A lot of people will probably dismiss this due to it being written by a domestic terrorist, but Ted Kaczynski's book Anti-Tech Revolution: Why and How is worth reading. He goes into detail on why he thinks the technological system will destroy itself and why he thinks it's impossible for society to be subject to rational control. He also examines the nature of chaotic systems and self-propagating systems, and he heavily criticizes individuals like Ray Kurzweil. Robin Hanson critiqued Kaczynski's collapse theory a few years ago on Overcoming Bias. It's an interesting read if nothing else, and makes some thought-provoking arguments.

Comment by Question Mark on Explore the new UN demographic projections to 2100 · 2022-07-11T22:35:46.907Z · EA · GW

I suspect there's a good chance that populations in Western nations will be significantly higher than your link predicts. The reason is that we should expect natural selection to favor whatever traits maximize fertility in the modern environment, such as higher religiosity, which will likely lead to fertility rates rebounding over the next several generations. The sorts of people who aren't reproducing in the modern environment are being weeded out of the gene pool, and we are likely undergoing selection pressure for "breeders" with a strong instinctive desire to have as many biological children as possible. Certain religious groups, like the Old Order Amish, Hutterites, and Haredim, are also growing exponentially and will likely be demographically dominant in the future.

Comment by Question Mark on What is the top concept that all EAs should understand? · 2022-07-06T04:48:12.281Z · EA · GW

Suffering risks

Comment by Question Mark on Depression and psychedelics - an anonymous blog proposal · 2022-06-25T23:44:58.099Z · EA · GW

Have you tried any tryptamine research chemicals like 4-HO-MET or 4-HO-MiPT? If so, have they had any noticeable effect on your depression?

Comment by Question Mark on Half-baked ideas thread (EA / AI Safety) · 2022-06-25T23:14:11.156Z · EA · GW

Would you mind posting a link to it?

Comment by Question Mark on More funding is really good · 2022-06-25T23:10:18.227Z · EA · GW

Do you know of any estimates of the impact of more funding for AI safety? For instance, how much would an additional $1,000 increase the odds of the AI control problem being solved?

Comment by Question Mark on Are poultry birds really important? Yes... · 2022-06-21T05:03:36.841Z · EA · GW

Here's a chart Brian Tomasik created of the amount of suffering caused by different animal foods. Farmed fish may have even more negative utility than chicken, since they are smaller and therefore more animals are required per unit of meat. The chart is based on suffering per unit of edible food produced rather than suffering across the total population, and I'm not sure what the population of farmed fish is relative to the population of chickens. Chicken probably has more negative utility than fish if the chicken population is substantially higher than the farmed fish population. Beef is probably the meat with the least negative utility.

Comment by Question Mark on Emphasize Vegetarian Retention · 2022-06-11T05:06:40.005Z · EA · GW

Vegetarians/vegans should consider promoting beef and dairy as the only animal products people consume, as a strategy for getting people to cause less suffering to livestock while keeping retention high. I suspect the average person would be much more willing to give up most animal products while still consuming beef and dairy than to give up meat entirely. Since cows are large, far fewer animals are needed to produce a given amount of meat than with smaller animals. Vitalik Buterin has argued that eating big animals as an animal welfare strategy could be 99% as good as veganism. Brian Tomasik also compiled this list of different animal products ranked by the amount of suffering they cause per kilogram, and beef and milk are at the bottom.
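As a rough back-of-the-envelope illustration (the yield figures below are approximate assumptions, not measured data), the difference in animals used per unit of meat is large:

```latex
% Illustrative, approximate yields (assumptions):
%   one beef steer      ~ 200 kg of meat
%   one broiler chicken ~ 1.5 kg of meat
\[
\text{animals per kg of beef} \approx \tfrac{1}{200} = 0.005,
\qquad
\text{animals per kg of chicken} \approx \tfrac{1}{1.5} \approx 0.67
\]
% i.e. on the order of 100x more individual animals per kilogram of
% chicken than per kilogram of beef, before accounting for differences
% in lifespan or living conditions.
```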

An objection people might make to this is that eating more beef could contribute to climate change, but I'm skeptical that the additional suffering caused by climate change would exceed the suffering reduced by having less factory farming. It could also be argued that habitat loss may reduce wild animal populations, which may reduce wild animal suffering by preventing wild animals from being born.

As a side note, there needs to be some sort of name for the philosophy of eating big animals to reduce livestock suffering described above. Sizeatarianism? Beefatarianism? Big-animal-atarianism? Sufferingatarianism? 

Comment by Question Mark on New cause area: Violence against women and girls · 2022-06-09T18:39:36.488Z · EA · GW

... which arguably gives circumcised males the benefit of longer sex ;-)

Not necessarily. Male circumcision may actually cause premature ejaculation in some men.

More seriously: FGM can cause severe bleeding and problems urinating, and later cysts, infections, as well as complications in childbirth and increased risk of newborn deaths (WHO).

Other than complications in childbirth, male circumcision can also cause all of these complications. According to Ayaan Hirsi Ali, who is herself a victim of FGM, boys being circumcised in Africa have a higher risk of complications than girls subjected to FGM. Circumcisions/mutilations in Africa are often performed in unsanitary conditions, which is true for both boys and girls.

Comment by Question Mark on New cause area: Violence against women and girls · 2022-06-09T01:54:16.079Z · EA · GW

In the same vein, comparing female genital mutilation to forced circumcision is... let's say ignorant of the effects of FGM.

This lecture by Eric Clopper has a decent analysis of the differences between male circumcision and FGM. Male circumcision removes more erogenous tissue and more nerve endings than most forms of FGM.

Comment by Question Mark on New cause area: Violence against women and girls · 2022-06-09T01:49:45.444Z · EA · GW

While it's true that women are more likely to be victims of sexual violence, men are more likely to be victims of non-sexual violence, such as murder and aggravated assault.

Comment by Question Mark on New cause area: Violence against women and girls · 2022-06-08T04:39:14.422Z · EA · GW

How does this compare to violence against men and boys as a cause area? Worldwide, 78.7% of homicide victims are men. Female genital mutilation is also generally recognized as a human rights violation, while forced circumcision of boys is not, and remains extremely prevalent worldwide. For various social reasons, violence against males seems to be a more neglected cause area than violence against females.

Comment by Question Mark on Which possible AI impacts should receive the most additional attention? · 2022-06-02T06:10:56.393Z · EA · GW

How's this argument different from saying, for example, that we can't rule out God's existence so we should take him into consideration? Or that we can't rule out the possibility of the universe being suddenly magically replaced with a utilitarian-optimal one?

If you want to reduce the risk of going to some form of hell as much as possible, you ought to determine what sorts of "hells" have the highest probability of existing, and to what extent avoiding those hells is tractable. As far as I can tell, the "hells" that seem most realistic are those resulting from bad AI alignment and those resulting from living in a simulation. Hells resulting from bad AI alignment can plausibly be avoided by contributing in some way to solving the AI alignment problem. It's not clear how hells resulting from living in a simulation could be avoided, but ways to avoid them might be discovered through further analysis of the different theoretical types of simulations we may be living in, such as in this map. Robin Hanson explored some of the potential utilitarian implications of the simulation hypothesis in his article How To Live In A Simulation. Furthermore, mind enhancement could potentially reduce S-risks: if you manage to improve your general thinking abilities, you may discover new ways to reduce them.

A Christian or a Muslim could argue that you ought to convert to their religion in order to avoid going to hell. But a problem with Pascal's Wager-type arguments is the issue of tradeoffs: it's not clear that practicing a religion is the optimal way to avoid hell/S-risks. The time spent going to church, praying, and otherwise being dedicated to your religion is time not spent thinking about AI safety and strategizing about ways to avoid S-risks. Working on AI safety, strategizing about ways to avoid S-risks, and trying to improve your thinking abilities would probably reduce your risk of ending up in some sort of hell more effectively than, say, converting to Christianity would.

The linked post is basically a definition of what "survival" means, without any argument on how any of it is at all plausible.

It mentions finding ways to travel to other universes, sending information to other universes, creating a superintelligence to figure out ways to avoid heat death, convincing the creators of the simulation not to turn it off, etc. While these hypothetical ways to survive heat death involve a lot of speculative physics, they are more than just "defining survival".

I believe neither is plausible by mistake.

Yet we live in a reality where happiness and suffering exist seemingly by mistake. Your nervous system is the result of millions of years of evolution, not the result of an intelligent designer.

Comment by Question Mark on Which possible AI impacts should receive the most additional attention? · 2022-05-31T07:23:57.370Z · EA · GW

the scope is surely not infinite. The heat death of the universe and the finite number of atoms in it pose a limit.

We can't say for certain that travel to other universes is impossible, so we can't rule it out as a theoretical possibility. As for the heat death of the universe, Alexey Turchin created this chart of theoretical ways our descendants might survive it.

Unless you think unaligned AIs will somehow be inclined to not only ignore what people want, but actually keep them alive and torture them - which sounds implausible to me - how's this not Pascal's mugging?

The entities being subjected to the torture wouldn't necessarily be "people" per se; I am talking about conscious entities in general. Solving the alignment problem from the perspective of hedonistic utilitarianism would involve the superintelligence having consciousness-centric values and the ability to create and preserve conscious states with high levels of valence. If a superintelligence with consciousness-centric values that can create large amounts of bliss is realistically possible, a consciousness-centric superintelligence that creates large amounts of suffering isn't necessarily much less realistic. If you believe that a superintelligence causing torture is implausible, you also have to accept that a superintelligence creating a utopia is implausible.

Comment by Question Mark on Which possible AI impacts should receive the most additional attention? · 2022-05-31T05:00:17.747Z · EA · GW

Suffering risks. S-risks are arguably a far more serious issue than extinction risk, as the scope of the suffering could be infinite. The fact that there is a risk of a misaligned superintelligence creating a hellish dystopia on a cosmic scale, with more intense suffering than has ever existed in history, means that even if the risk of this happening is small, it is balanced by its extreme disutility. S-risks are also highly neglected relative to their potential extreme disutility. It could even be argued that it would be rational to completely dedicate your life to reducing S-risks because of this. The only organizations I'm aware of that are directly working on reducing S-risks are the Center on Long-Term Risk and the Center for Reducing Suffering. One possible way AI could lead to astronomical suffering is if there is a "near miss" in AI alignment, where the AI alignment problem is partially solved, but not entirely. Other potential sources of S-risks include malevolent actors, or an AI that includes religious hells when aligned to reflect the values of humanity.
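To make the scope argument explicit, here is a minimal expected-disutility sketch with purely hypothetical, illustrative numbers:

```latex
% Purely hypothetical, illustrative numbers:
%   p = probability of an astronomical-suffering outcome = 10^{-4}
%   D = disutility of that outcome ~ 10^{12} suffering-years
\[
\mathbb{E}[\text{disutility}] = p \cdot D = 10^{-4} \times 10^{12}
  = 10^{8}\ \text{suffering-years}
\]
% Even a very small probability can dominate the calculation when the
% potential disutility is astronomically large, which is the core of
% the argument for prioritizing S-risk reduction.
```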

Comment by Question Mark on [deleted post] 2022-05-30T14:14:12.796Z

80,000 Hours has this list of what they consider to be the most pressing world problems, and this list ranking different cause areas by importance, tractability, and uncrowdedness. As for lists of specific organizations, Nuño Sempere created this list of longtermist organizations and evaluations of them, and I also found this AI alignment literature review and charity comparison. Brian Tomasik also wrote this list of charities evaluated from a suffering-reduction perspective.

Comment by Question Mark on The case to abolish the biology of suffering as a longtermist action · 2022-05-21T16:39:18.835Z · EA · GW

Brian Tomasik's essay "Why I Don't Focus on the Hedonistic Imperative" is worth reading. Since biological life will almost certainly be phased out in the long run and be replaced with machine intelligence, AI safety probably has far more longtermist impact compared to biotech-related suffering reduction. Still, it could be argued that having a better understanding of valence and consciousness could make future AIs safer.

Comment by Question Mark on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-21T16:26:17.735Z · EA · GW

An argument against advocating human extinction is that cosmic rescue missions might eventually be possible. If the future of posthuman civilization converges toward utilitarianism, and posthumanity becomes capable of expanding throughout and beyond the entire universe, it might be possible to intervene in far-flung regions of the multiverse and put an end to suffering there.

Comment by Question Mark on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-21T16:20:13.604Z · EA · GW

5. Argument from Deep Ecology

    This is similar to the Argument from D-Risks, albeit more down to Earth (pun intended), and is the main stance of groups like the Voluntary Human Extinction Movement. Human civilization has already caused immense harm to the natural environment, and will likely not stop anytime soon. To prevent further damage to the ecosystem, we must allow our problematic species to go extinct.

This seems inconsistent with anti-natalism and negative utilitarianism. If we ought to focus on preventing suffering, why shouldn't anti-natalism also apply to nature? It could be argued that reducing populations of wild animals is a good thing, since it would reduce the amount of suffering in nature, following the same line of reasoning as anti-natalism applied to humans.

Comment by Question Mark on If EA is no longer funding constrained, why should *I* give? · 2022-05-17T16:29:46.892Z · EA · GW

Even if the Symmetry Theory of Valence turns out to be completely wrong, that doesn't mean QRI will fail to gain any useful insight into the inner mechanics of consciousness. Andrew Zuckerman previously sent me this comment on QRI's pathway to impact, in response to Nuño Sempere's criticisms of QRI. The value of QRI's research may therefore have a very high variance: it's possible that their research will amount to almost nothing, but it's also possible that it could turn out to have a large impact. As far as I know, there aren't any other EA-aligned organizations doing the sort of consciousness research that QRI is doing.

Comment by Question Mark on Is Our Universe A Newcomb’s Paradox Simulation? · 2022-05-16T01:57:56.978Z · EA · GW

The way I presented the problem also fails to account for the fact that it seems quite possible there is a strong apocalyptic Fermi filter that will destroy humanity, as this could account for why it seems we are so early in cosmic history (cosmic history is unavoidably about to end). This should skew us more toward hedonism.

Anatoly Karlin's Katechon Hypothesis is one proposed resolution of the Fermi Paradox that is similar to what you are describing. The basic idea is that if we live in a simulation, the simulation may have computational limits; once advanced civilizations use too much computational power or outlive their usefulness, they are deleted from the simulation.

Comment by Question Mark on Is Our Universe A Newcomb’s Paradox Simulation? · 2022-05-16T01:53:39.119Z · EA · GW

If we choose longtermism, then we are almost definitely in a simulation, because that means other people like us would have also chosen longtermism, and then would create countless simulations of beings in special situations like ourselves. This seems exceedingly more likely than that we just happened to be at the crux of the entire universe by sheer dumb luck.

Andrés Gómez Emilsson discusses this sort of thing in this video. The fact that our position in history may be uniquely suited to influencing the far future may be strong evidence that we live in a simulation.

Robin Hanson wrote about the ethical and strategic implications of living in a simulation in his article "How to Live in a Simulation".  According to Hanson, living in a simulation may imply that you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.

If some form of utilitarianism turns out to be the objectively correct system of morality, and post-singularity civilizations converge toward utilitarianism and paradise engineering is tractable, this may be evidence against the simulation hypothesis. Magnus Vinding argues that simulated realities would likely be utopias, and since our reality is not a utopia, the simulation hypothesis is almost certainly false. Thus, if we do live in a simulation, this may imply either that post-singularity civilizations tend not to be utilitarian or that paradise engineering is extremely difficult.

Assuming we do live in a simulation, Alexey Turchin created this map of the different types of simulations we may be living in. Scientific experiments, AI confinement, and education of high-level beings are possible reasons why the simulation may exist in the first place.

Comment by Question Mark on If EA is no longer funding constrained, why should *I* give? · 2022-05-14T23:04:43.032Z · EA · GW

Even though some EA-aligned organizations have plenty of funding, not all EA organizations are that well funded. You should consider donating to the causes within EA that are the most neglected, such as cause prioritization research. The Center for Reducing Suffering, for example, has received only £82,864.99 in total funding as of late 2021. The Qualia Research Institute is another EA-aligned organization that is funding-constrained and believes it could put significantly more funding to good use.

Comment by Question Mark on AI Alignment YouTube Playlists · 2022-05-10T01:47:03.083Z · EA · GW

This isn't specifically AI alignment-related, but I found this playlist on defending utilitarian ethics. It discusses things like utility monsters and the torture vs. dust specks thought experiment, and is still somewhat relevant to effective altruism.

Comment by Question Mark on Why do you care? · 2022-05-08T21:27:08.062Z · EA · GW

My concern for reducing S-risks is based largely on self-interest. There was this LessWrong post on the implications of worse than death scenarios. As long as there is a >0% chance of eternal oblivion being false and there being a risk of experiencing something resembling eternal hell, it seems rational to try to avert this risk, simply because of its extreme disutility. If Open Individualism turns out to be the correct theory of personal identity, there is a convergence between self-interest and altruism, because I am everyone.

The dilemma is that it does not seem possible to continue living as normal when considering the prevention of worse than death scenarios. If it is agreed that anything should be done to prevent them then Pascal's Mugging seems inevitable. Suicide speaks for itself, and even the other two options, if taken seriously, would change your life. What I mean by this is that it would seem rational to completely devote your life to these causes. It would be rational to do anything to obtain money to donate to AI safety for example, and you would be obliged to sleep for exactly nine hours a day to improve your mental condition, increasing the probability that you will find a way to prevent the scenarios. I would be interested in hearing your thoughts on this dilemma and if you think there are better ways of reducing the probability.

Comment by Question Mark on List of lists of EA-related open philosophy research questions · 2022-05-07T22:55:02.519Z · EA · GW

The Center for Reducing Suffering has this list of open research questions related to how to reduce S-risks.

Comment by Question Mark on Big List of Cause Candidates: January 2021–March 2022 update · 2022-05-01T20:37:53.281Z · EA · GW

This partially falls under cognitive enhancement, but what about other forms of consciousness research besides increasing intelligence, such as what QRI is doing? Hedonic set-point enhancement, i.e. making the brain more resistant to suffering and researching how to create David Pearce's idea of "biohappiness", is arguably just as important as intelligence enhancement. Having a better understanding of valence could also potentially make future AIs safer. Magnus Vinding also wrote this post on personality traits that may be desirable from an effective altruist perspective, so research into cognitive enhancement could also include figuring out how to increase those traits in the population.

Comment by Question Mark on Effective Evil · 2022-04-28T01:03:54.842Z · EA · GW

Regarding the risk of Effective Evil, I found this article on ways to reduce the threat of malevolent actors creating these sorts of disasters.

Comment by Question Mark on EA logo and title on Reddit's r/Place · 2022-04-03T21:04:25.446Z · EA · GW

Looks like the old logo got destroyed. Is there a new Effective Altruism logo being put up? If so, what are its coordinates?

Edit: The new EA logo is at approximately (955,1771). Here's the progress so far:

Comment by Question Mark on Do we have any *lists* of 'academics/research groups relevant/adjacent to EA' ... and open science? · 2022-03-31T02:16:17.977Z · EA · GW

There was this post listing EA-related organizations. The org update tag also has a list of EA organizations. Nuño Sempere also wrote this list of evaluations of various longtermist EA organizations. As for specific individuals, Wikipedia has a category for people associated with Effective Altruism.

Comment by Question Mark on Is AI safety still neglected? · 2022-03-31T00:30:42.693Z · EA · GW

This leads to the question of how we can get more people to produce promising work in AI safety. There are plenty of highly intelligent people out there who are capable of doing work in AI safety, yet almost none of them do. Maybe popularizing AI safety would indirectly contribute, by convincing geniuses with the potential to work on AI safety to actually start working on it. It could also be an incentive problem: maybe potential AI safety researchers think they can make more money by working in other fields, or maybe there are barriers that make it extremely difficult to become an AI safety researcher.

If you don't mind me asking, which AI safety researchers do you think are doing the most promising work? Also, are there any AI safety researchers who you think are the least promising, or are doing work that is misguided or harmful?

Comment by Question Mark on Is AI safety still neglected? · 2022-03-30T12:11:44.934Z · EA · GW

It depends on what you mean by "neglected", since neglect is a spectrum. It's a lot less neglected than it was in the past, but it's still neglected compared to, say, cancer research or climate change. In terms of public opinion, the average person probably has little understanding of AI safety. I've encountered plenty of people saying things like "AI will never be a threat because AI can only do what it's programmed to do" and variants thereof.

What is neglected within AI safety is suffering-focused AI safety aimed at preventing S-risks. Most AI safety research, and existential risk research in general, seems to be focused on reducing extinction risks and on colonizing space rather than on reducing the risk of worse than death scenarios. There is also a risk that some AI alignment research could be actively harmful. One scenario where alignment research could be actively harmful is a "near miss" in AI alignment. In other words, risk from AI alignment roughly follows a Laffer curve, with a slightly misaligned AI being riskier than both a perfectly aligned AI and a paperclip maximizer. For example, suppose there is an AI aligned to reflect human values. Yet "human values" could include religious hells. There are plenty of religious people who believe that an omnibenevolent God subjects certain people to eternal damnation, which makes one wonder whether these sorts of individuals would implement a hell if they had the power. Thus, an AI designed to reflect human values in this way could end up subjecting certain individuals to something equivalent to a Biblical Hell.

Regarding specific AI safety organizations, Brian Tomasik wrote an evaluation of various AI/EA/longtermist organizations, in which he estimated that MIRI has a ~38% chance of being actively harmful. Eliezer Yudkowsky has also harshly criticized OpenAI, arguing that open access to their research poses a significant existential risk. Open access to AI research may increase the risk of malevolent actors creating or influencing the first superintelligence to be created, which poses a potential S-risk. 

Comment by Question Mark on Predicting Polygenic Selection for IQ · 2022-03-29T01:22:28.646Z · EA · GW

A major reason why support for eugenically raising IQs through gene editing is low in Western countries could be a backlash against Nazism, since Nazism is associated with eugenics in the mind of the average person. The reason for the low level of support in East Asia is less clear; one possible explanation is that East Asians have a risk-averse culture.

Interestingly, Hindus and Buddhists also have some of the highest rates of support for evolution among any religious groups. There was a poll from 2009 that showed that 80% of Hindus and 81% of Buddhists in the United States accept evolution, while only 48% of the total US population accepts evolution. Another poll showed that 77% of Indians believe that there is significant evidence to support evolution. The high rate of acceptance of gene editing technology among Hindu Indians could therefore be a reflection of greater acceptance of science in general.

Comment by Question Mark on Predicting Polygenic Selection for IQ · 2022-03-28T19:27:58.644Z · EA · GW

As a side note, I found this poll of public opinion of gene editing in different countries. India apparently has the highest rate of social acceptance of using gene editing to increase intelligence of any of the countries surveyed. This could have significant geopolitical implications, since the first country or countries to practice gene editing for higher intelligence could have an enormous first-mover advantage. Whatever countries start practicing gene editing for higher intelligence will have far more geniuses per capita, which will greatly increase levels of innovation, soft power, effective governance, and economic efficiency in general. The countries that increase their intelligence through gene editing will likely end up having a massive advantage over countries that don't.

Comment by Question Mark on $1 to extend an infant's life by one day? · 2022-03-28T19:02:53.889Z · EA · GW

What's the point of extending an infant's life by a single day? If the infant in question has some sort of terminal illness that will inevitably cause them to die in infancy, prolonging their life by a single day seems extremely cruel. It would do nothing but prolong the infant's suffering.

Comment by Question Mark on What are longtermist arguments for and against psychedelics/drug reform as an EA cause area? · 2022-03-26T07:42:34.705Z · EA · GW

There's also the psychedelics in problem-solving experiment. The experiment involved having groups of engineers solve engineering problems while on psychedelics, in order to see if the psychedelics would enhance their performance.

Comment by Question Mark on What EAG sessions would you like on Global Catastrophic Risks? · 2022-03-22T06:25:53.475Z · EA · GW

I already posted this in the post about EAG sessions about AI, but I'm reposting it since I think it's extremely important.

What is the topic of the session?

Suffering risks, also known as S-risks

Who would you like to give the session?

Possible speakers could be Brian Tomasik, Tobias Baumann, Magnus Vinding, Daniel Kokotajlo, or Jesse Clifton, among others.

What is the format of the talk?

The speaker would discuss some of the different scenarios in which astronomical suffering on a cosmic scale could emerge, such as risks from malevolent actors, a near-miss in AI alignment, and suffering-spreading space colonization. They would then discuss possible strategies for reducing S-risks, and some of the open questions related to S-risks and how to prevent them.

Why is it important?

So that worse than death scenarios can be avoided if possible.

Comment by Question Mark on What EAG sessions would you like on AI? · 2022-03-22T06:21:38.246Z · EA · GW

What is the topic of the talk?

Suffering risks, also known as S-risks

Who would you like to give the talk?

Possible speakers could be Brian Tomasik, Tobias Baumann, Magnus Vinding, Daniel Kokotajlo, or Jesse Clifton, among others.

What is the format of the talk?

The speaker would discuss some of the different scenarios in which astronomical suffering on a cosmic scale could emerge, such as risks from malevolent actors, a near-miss in AI alignment, and suffering-spreading space colonization. They would then discuss possible strategies for reducing S-risks, and some of the open questions related to S-risks and how to prevent them.

Why is it important?

So that worse than death scenarios can be avoided if possible.

Comment by Question Mark on Mediocre AI safety as existential risk · 2022-03-16T23:43:35.633Z · EA · GW

Brian Tomasik wrote something similar about the risks of slightly misaligned artificial intelligence, although it is focused on suffering risks specifically rather than on existential risks in general.

Comment by Question Mark on Get Russians out of Russia · 2022-03-07T06:29:32.052Z · EA · GW

Two Russians I know of who are affiliated with Effective Altruism are Alexey Turchin and Anatoly Karlin. You may want to try to contact them to see if you can convince them to emigrate. Alexey Turchin's email is available on his website, and he can be messaged on Reddit; Anatoly Karlin can be contacted via email, Reddit, Twitter, Discord, and Substack.

Comment by Question Mark on EA Topic Suggestions for Research Mapping? · 2022-03-05T23:43:50.656Z · EA · GW

Your epistemic maps seem like a useful idea, since they would make it easier to visualize the most important cause areas to push on. Alexey Turchin created a number of roadmaps related to existential risks and AI safety, which seem similar to what you're talking about creating. You should consider making an epistemic map of S-risks, or risks of astronomical suffering. Tobias Baumann and Brian Tomasik have written a number of articles on S-risks, which might help you get started. I also found this LessWrong article on worse than death scenarios, which breaks down some of the possible sources of worse than death scenarios and possible ways to prevent them. S-risks are a highly neglected cause area, since longtermist/AI safety research is generally about reducing extinction risks and preserving human values rather than averting worse than death scenarios. The Center on Long-Term Risk and the Center for Reducing Suffering have done significant research on S-risk prevention, which might be useful if you want to know the most promising research areas for reducing S-risks.

Comment by Question Mark on Shortening & enlightening dark ages as a sub-area of catastrophic risk reduction · 2022-03-05T23:07:49.638Z · EA · GW

This article series on the Age of Malthusian Industrialism may provide some insight on what the next dark age might realistically look like. One possible way an upcoming dark age could be averted is through radical IQ augmentation via gene editing/embryo selection.

Comment by Question Mark on What psychological traits predict interest in effective altruism? · 2022-02-26T00:14:15.353Z · EA · GW

Since there's a significant overlap between Effective Altruism and the rationality community, you might be interested in Anatoly Karlin's article on Coffee Salon Demographics. It goes into detail with the national/racial/gender breakdown of the rationality community and various groups he considers to be high in intellectual curiosity. He also wrote this article on the demographics of LessWrong specifically.

Comment by Question Mark on Re: Some thoughts on vegetarianism and veganism · 2022-02-25T23:47:15.501Z · EA · GW

One animal welfare strategy EAs should consider promoting in the short term is getting meat eaters to eat meat from larger animals instead of smaller ones, e.g. beef instead of chicken and fish. With larger animals, fewer individual animals are needed to produce a unit of meat. Vitalik Buterin has argued that doing this may be 99% as good as veganism. Brian Tomasik compiled this chart of the amount of direct suffering caused by consuming various animal products, and beef and dairy are at the bottom. Lacto-ovo vegetarians should also be encouraged to consume more dairy and fewer eggs, since battery-cage eggs involve significant suffering.

Some may argue that convincing people to replace other animal products with beef will contribute to climate change, but I'm skeptical that the additional suffering caused by the marginal increase in climate change will outweigh the suffering prevented by the drastic decrease in the number of animals subjected to factory farming. In the long term, cultured meat and/or genetically engineering farm animals not to have nociceptors are better solutions.

Comment by Question Mark on Retrospective on Shall the Religious Inherit The Earth · 2022-02-25T04:03:05.728Z · EA · GW

I did a reverse image search on it, and I found a map that seems to have the same data for France and Germany that was posted in early 2014.

Comment by Question Mark on Retrospective on Shall the Religious Inherit The Earth · 2022-02-23T23:21:17.694Z · EA · GW

On the topic of the Amish, I found this article, "Assortative Mating, Class, and Caste". In it, Henry Harpending and Gregory Cochran argue that the Amish are undergoing selection pressure for increased "Amishness", which is essentially truncation selection. The Amish have a practice known as "Rumspringa" in which Amish young adults get to experience the outside world, and some fraction of Amish youths choose to leave the Amish community and join the outside world every generation. The defection rate among the Amish has been decreasing over time: in recent years it has been around 10-15%, but it was around 18-24% in the past. Because of this, your assertion that decreasing religiosity will outpace high fundamentalist population growth seems questionable.

From the article: 

The Amish marry within their faith. Although they accept converts, there are very few, so there is almost no inward gene flow. They descend almost entirely from about 200 18th century founders. On the other hand, there is considerable outward gene flow, since a significant fraction of Amish youth do not choose to adopt the Amish way of life. In recent years, something like 10-15% of young Amish leave the community. In the past, the defection rate seems to have been higher, more like 18-24%. Defection is up to the individual – there are no exterior barriers against Amish who want to participate in modern society.
Since the Amish have very high birth rates ( > 6 children per family), their numbers have increased very rapidly, even though there is a substantial defection rate. There were about 5,000 descendants of the original 200 by 1920, and today [2013] there are about 280,000 Amish. 
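As a rough arithmetic check on the quoted figures (my own illustrative calculation, not from the article), the implied growth rate is:

```latex
% ~5,000 Amish in 1920 -> ~280,000 in 2013, i.e. 93 years of growth.
\[
\left(\frac{280{,}000}{5{,}000}\right)^{1/93} = 56^{1/93} \approx 1.044
\]
% i.e. roughly 4.4% growth per year, a doubling time of about 16 years,
% sustained despite a 10-24% per-generation defection rate.
```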

Regarding the implications of future demographics for Effective Altruism/Longtermism, Robin Hanson wrote this article, "The Insular Fertile Future", in which he discusses ways modern values could be preserved in light of these demographic shifts. One possible strategy for preserving modern values could be to encourage the creation of new subcultures that inherit most of their cultural elements from the dominant culture, but that also have high fertility and the adaptive characteristics of insular, high-fertility religious subcultures.

Also worth reading is Anatoly Karlin's article series on the Age of Malthusian Industrialism, particularly the article "Breeders' Revenge". Karlin argues that a reversal of the demographic transition, a "breeder transition" in which high fertility resurges due to selection for "breeders", is a mathematical inevitability. He also notes that France has a fertility rate roughly 1.5x that of Germany, which he argues may be the result of France having undergone its demographic transition earlier than other countries, and therefore having had more time for selection for "breeders" to take place.

Comment by Question Mark on EA Memes Feb 2022 · 2022-02-16T22:37:10.856Z · EA · GW

I found this Facebook group "Effective Altruism Memes with Post-Darwinian Themes".

These aren't entirely EA-related, but I also found this subreddit with memes related to transhumanism.