Comments

Comment by PaulCousens on The AI Messiah · 2022-10-01T01:56:07.669Z · EA · GW

Yes, I think you are right. Sorry, I made too broad of a statement when I only had things like strength and speed in mind.

Comment by PaulCousens on Digital people could make AI safer · 2022-09-14T21:33:00.828Z · EA · GW

I think you are right that its utility is limited. It was just a first impression, and I haven't thought it through. It seemed like anthropomorphizing AI could consistently keep people on their toes with regard to AI. An alternative way to become wary of AI would be through less obvious thoughts, like imagining an AI that becomes a paperclip maximizer. However, developing and consistently holding anthropomorphic priors about AI may be disadvantageous, by constraining people's ability to have outside-the-box suspicions about them (like what they might already be covertly doing) and apprehensions (like their becoming paperclip maximizers).

Anthropomorphizing AI could also help with thinking of AIs as moral patients. But I don't think that being human should be the standard sufficient for being a moral patient. So thinking of AIs as humans may be useful insofar as it initiates thinking of them as moral patients, but eventually an understanding of them as moral patients will probably involve considerations that are particular to them, not just the fact that they are like us.

Comment by PaulCousens on Refuting longtermism with Fermat's Last Theorem · 2022-08-17T02:55:54.276Z · EA · GW

I have that audiobook by Deutsch and I never thought of making that connection to longtermism. 

I am reminded of Wolfram's idea of the ruliad, where a species' perspective is just a slice of the rulial space of all possible kinds of physics.

I am also reminded of the AI that Columbia Engineering researchers built that found new variables for predicting phenomena we already have formulas for. The AI's predictions using those variables worked well, yet it was not clear to the researchers what all the variables were.

The unpredictability of discoveries and the two things I mentioned seem to share a theme: our vantage point is just the tip of an iceberg.

I don't think that the unknowability of future knowledge refutes longtermism. On the contrary, because future knowledge could lead to bad things, such unknowability makes it more important to be prepared for various possibilities. Even if for every bad use of technology/knowledge there is a good use that can exactly counteract it, the bad uses can be employed or discovered first, and the good uses may not have enough time to counteract the bad outcomes.

Comment by PaulCousens on The developed world lives off the backs of the developing world. Broadly distributed economic growth requires massive increases in human productivity, not the pro-growth policies which got us here · 2022-08-13T18:43:29.746Z · EA · GW

As I understand it, Will MacAskill pointed out in Doing Good Better that people doing such low-pay work are actually taking a relatively good opportunity in their country, and that the seemingly low pay is actually valuable there.

Comment by PaulCousens on Participate in the Hybrid Forecasting-Persuasion Tournament (on X-risk topics) · 2022-07-17T21:07:52.968Z · EA · GW

I think it is hybrid because it involves both forecasting and persuading others to think differently about their forecasts.

Comment by PaulCousens on Someone should create an 'EA Blinkist' book summary service · 2022-07-13T18:22:16.666Z · EA · GW

I would be interested in writing summaries of books. I did this with two books that I read within the past two years, The Human Use of Human Beings and Beyond Good and Evil. I imagine I might have excluded many things that I expected to remember easily because they followed logically from, or were associated with, what I did write down. For The Human Use of Human Beings, I tried to combine several of the ideas into one picture; I think what I had in mind was to put all the ideas of the book into a visual dashboard (which I never completed). I don't think I still have the summary I did of Beyond Good and Evil, but I imagine that for that one I may have written down many passages that I did not completely follow.

Writing a summary of a book can also help you process the book further and in different ways.

Comment by PaulCousens on When Giving People Money Doesn't Help · 2022-07-12T18:45:31.708Z · EA · GW

Part of the trap is that once you’re in the trap trying and failing to get out of it doesn’t help you much, so traits that would help in abundance don’t have a hill they can climb.

Can you clarify what you mean by this? I didn't follow you after you wrote "so traits that would help in abundance don’t have a hill they can climb."

 

I think maybe you meant that appreciation of the worth of money is valuable only until you fall into the trap of spending too much of it. Once you fall into that trap, appreciation of its worth won't be helpful to you.

Comment by PaulCousens on Is my blog too provocative for a group organizer? · 2022-07-11T03:07:53.338Z · EA · GW

I did not find your blog post about moral offsetting offensive or insensitive. To me, your explanations of the evolutionary reasons why we have such visceral reactions to rape addressed the moral outrageousness associated with it. Also, you clearly stated your own inability to be friends with a rapist. Philosophical discussions are probably better when they include sensitive issues, so that they can have more of an impact on our thought processes.

Also, there was another post on here in which it was mentioned that a community organizer could come off as cult-like, dogmatic, or like they're reading from a script. So, for that reason, it's probably better to not try to censor yourself.

Regarding the content of the post about moral offsetting itself:

The problem I have with thought experiments in which rape could lead to less rape overall is that there shouldn't be situations where such hard choices are presented. But while it is true that ideally we shouldn't have to face such hard decisions, I am probably underestimating the power of situations and overestimating my and others' ability to act.

As someone who is turned off by the idea of moral offsetting, your zombie-rapists thought experiment helped me to see the utility of offsetting a bit more clearly. As you said, offsetting the harm to animals is not ideal, but if it is more effective towards getting us to a reality that is more ideal for animals, then it is valuable.

Comment by PaulCousens on When Giving People Money Doesn't Help · 2022-07-09T01:42:00.245Z · EA · GW

Before I read about the results of the study, my a priori assumptions were that the money wouldn't help because of bills, but that some kind of benefit must come out of it.

Without a reliable source of income, even if they did not have many bills, it is hard to see how even $2,000 could help in the long term.

To me, it seems that an unconditional cash transfer that helps temporarily but not in the long term might make people feel worse, by making the counterfactual of being better off more vivid. The $500 or $2,000 unconditional cash transfer brings them somewhat closer to the reality of being better off, but not close enough.

I wonder if there is a minimum duration of subsistence that could be established for unconditional cash transfers so that they help people universally, regardless of the wealth of their country.

Part of the trap is that once you’re in the trap trying and failing to get out of it doesn’t help you much, so traits that would help in abundance don’t have a hill they can climb.

Can you clarify what you mean by this? I didn't follow you after you wrote "so traits that would help in abundance don’t have a hill they can climb."

Comment by PaulCousens on Strong Longtermism and Dobbs v. Jackson · 2022-07-01T19:35:21.711Z · EA · GW

There may also be a significant secondary effect of subsequent Supreme Court decisions becoming more ambitious. For example, Clarence Thomas has already said he thinks rights to same-sex marriage should be targeted. Recent cases also allowed a redistricting map that disfavors minorities, allowed looser environmental protection, and made it easier to carry a gun in New York, and one case was considering changing the balance of power within states in favor of Republicans with regard to states' election laws.

So in the case of same-sex marriage rights, there is potential for reduced compassion among people, and in the case of environmental protection, there is reduced appreciation of the long-term future of the planet.

While restricting abortion might create more compassion for the unborn, compassion for the rights of women might be disregarded. Also, undermining appreciation of the long-term future of the planet might subtract from any compassion gained for the future.

In my view, it seems like the only thing that will happen for sure is political instability.

Comment by PaulCousens on A laundry list of anxieties about launching my blog: any feedback appreciated! · 2022-06-19T04:15:15.578Z · EA · GW

Regarding titles: ultimately, the content is what is most important. Titles can just be a way to remember where a certain blog post is, or to tell someone else where it is. Even if a title doesn't make sense initially, I imagine that once your content is read, the reader will likely see how the title fits it.

Regarding coming across as a "know-it-all," I would say just put in caveats and notes about the limitations of your knowledge. Perhaps you could make the posts somewhat open-ended in that regard and edit them later with updates.

Regarding readability, maybe you could experiment with sending a draft of your post to different people to have them rate how readable/understandable it is, and then make appropriate adjustments.

Regarding it not being read, maybe it is valuable for it to be out there if at least someone does eventually read it.

Regarding potentially looking like you are advertising yourself, I would say not to think about it and let your posts be people's basis for making judgements about your intent. This might be related to trust. Trust takes time to build, so it might take a while for people to realize you're not just advertising yourself.

Regarding the first two EA concerns, my response would be that EA is supposed to be a self-correcting community. It should be expected that they will have thoughts about what you post. It is helpful to the EA community that you have disagreements.

Regarding changing opinions, maybe you could make posts that are updates to previous ones and somehow make it noticeable that it is typical for you to make posts that are updates to previous posts.

Regarding getting things wrong, I would offer the same advice as for the issue of coming across as a "know-it-all."

Regarding your doctor title, people may see it as normal that a doctor would have a blog. My understanding is that all kinds of people write opinion pieces/blogs.

Regarding the stereotype about doctors, maybe don't think about it and let your writing eventually lead to whatever judgements people may make of your understanding about mental health. If you express yourself well enough, they should form an accurate impression of your view of mental health.

I can tell you my experience with setting up a blog, and maybe you can glean something from that.

I had made my own Reddit page so I could write about the mission of The Borgen Project. I was interning for them and thought that a blog would be a helpful way to draw attention to them and their cause. I didn't have much in mind that I would write about, but I was having difficulty fundraising for them so I was just trying random things.

For the Reddit blog, I listed various statistics about global poverty and raised various questions and speculations in the hopes that I would bring about a large conversation. No one responded to my posts. I am not sure whether anyone read the posts. I did not make many posts. The blog wasn't that productive maybe because of the underlying reasons I had for making it in the first place. I did get somewhat excited when I was working on it. I thought that it might develop into a lively discussion.

Comment by PaulCousens on The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous · 2022-06-13T01:32:20.092Z · EA · GW

I recently told myself that I would never eat any animal product again, and I have been trying to buy things that are not made from animals or tested on animals. 

The main reason for my veganism is that I can have such a diet and not miss animal products at all, so why not? I am not certain of the impact of my individual decision, but I do think that if a significant number of people adopted such a lifestyle, it would have a huge impact on factory farming. My understanding is that, in other places, veganism is not as convenient or feasible as it is in the United States (where I live). However, the remedy for that doesn't seem difficult to achieve: I imagine other countries' capacities for vegan lifestyles could easily be brought to the same level as the United States'.

I have not driven a car for the past few months. I plan to get a bicycle to get where I need to go. (I have not had anywhere I needed to go for the past few months.) When I need to get somewhere farther away, I plan to use some kind of public transportation, like a train or bus, and I even plan to avoid planes if possible.

Comment by PaulCousens on Digital people could make AI safer · 2022-06-11T02:59:42.312Z · EA · GW

The existence of digital people would force us to anthropomorphize digital intelligence. Because of that, the implications of any threats that AI may pose to us might be more comprehensively visible and more often in the foreground of AI researchers' thinking.

Maybe anthropomorphizing AI would be an effective means of seeing the threats AI poses to us, given that we have posed many threats to ourselves, through war for example.

Comment by PaulCousens on Arguments for Why Preventing Human Extinction is Wrong · 2022-06-05T03:14:57.489Z · EA · GW

That's an interesting way of looking at it. That view seems nihilistic, and like it could lead to hedonism, since if our only purpose is to make sure we completely destroy ourselves and the universe, nothing really matters.

Comment by PaulCousens on Arguments for Why Preventing Human Extinction is Wrong · 2022-06-05T03:08:33.333Z · EA · GW

I read this post about Thomas Ligotti on LessWrong. So far, it hasn't been that disconcerting for me. Because I read a lot of Stephen King novels and some other horror stories as a teenager, I think I would be able to read more of his thoughts without being disconcerted.

If I ever find it worthwhile to look more into pessimistic views on existence, I will remember his name.

Comment by PaulCousens on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-27T16:29:10.280Z · EA · GW

That is a good point. I was actually considering that when I made my statement. I suspect self-delusion might be at the core of many individuals' belief that their lives are net positive. To adapt and avoid great emotional pain, humans might self-delude when faced with the question of whether their life is overall positive.

Even if it is not possible for human lives to be net positive, my first counterargument would still hold, for two different reasons.

First, we'd still be able to improve the lives of other species.

Second, it would still be valuable to prevent the much more negative lives that might come about if other kinds of humans were allowed to evolve in our absence. It might be difficult to ensure our extinction was permanent. Even if we took care to make ourselves extinct in a way that seemed irreversible, it's possible that within, say, a billion years the universe would change in such a way as to make the spark of life that leads to humans happen again. Cosmological and extremely long processes might undo any precautions we took.

Alternatively, maybe different kinds of humans that would evolve in our absence would be more capable of having positive lives than we are. 

 

I don't think I am familiar with anything by Thomas Ligotti. I'll look into his work.

Comment by PaulCousens on St. Petersburg Demon – a thought experiment that makes me doubt Longtermism · 2022-05-24T23:18:58.762Z · EA · GW

In David Deutsch's The Beginning of Infinity: Explanations That Transform the World, there is a chapter in which he discusses many aspects of infinity. He talks about David Hilbert's hypothetical scenario of an infinity hotel with infinite guests, infinite rooms, etc. I don't know which parts of the scenario are Hilbert's original idea and which are Deutsch's modifications/additions.

In the hypothetical infinity hotel, to accommodate a train full of infinitely many passengers, all existing guests are asked to move to the room whose number is double their current room number. All the odd-numbered rooms therefore become available for the new guests, and there are as many odd-numbered rooms (infinitely many) as even-numbered ones (infinitely many).

If an infinite number of trains, each filled with infinitely many passengers, arrive, all existing guests with room number n are given the following instruction: move to room n(n+1)/2. The train passengers are given the following instruction: the nth passenger from the mth train goes to room n + n^2 + (n - m)/2. (I don't know if I wrote those formulas correctly; I have the audiobook and don't know how they are written.)
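
To make the room-assignment idea concrete, here is a small Python sketch. It is my own illustration rather than Deutsch's exact formulas: the doubling rule handles one new infinite train, and the standard Cantor pairing function is one well-known way to pack infinitely many infinite trains into distinct rooms.

    # A minimal sketch of Hilbert's-hotel room assignments (an illustration,
    # not Deutsch's exact formulas).

    def room_after_doubling(n: int) -> int:
        """One new infinite train: the guest in room n moves to room 2n,
        freeing every odd-numbered room for the new passengers."""
        return 2 * n

    def room_for_passenger(train: int, passenger: int) -> int:
        """Infinitely many infinite trains: the standard Cantor pairing,
        mapping (train m >= 0, passenger n >= 0) to a unique room >= 1."""
        d = train + passenger
        return d * (d + 1) // 2 + passenger + 1

    # Sanity check: the first 10,000 (train, passenger) pairs get distinct rooms.
    rooms = {room_for_passenger(m, n) for m in range(100) for n in range(100)}
    assert len(rooms) == 100 * 100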

All of the hotel guests' trash will disappear into nowhere if the guests are given these instructions: within a minute, bag up your trash and give it to the room whose number is one higher than yours; if you receive a bag of trash within that minute, pass it on in the same manner within half a minute; if you receive a bag within that half minute, pass it on within a quarter minute; and so on. Since 1 + 1/2 + 1/4 + ... = 2, after two minutes all of the trash is gone. Furthermore, if a guest accidentally put something they value in the trash, they will not be able to retrieve it after the two minutes: accounting for a retrieval would require explaining it with an infinite regress.

 

Some other things about infinity that he notes in the chapter:

It may be thought that the set of natural numbers involves nothing infinite: it merely involves a finite rule that takes you from one number to a higher number. But if there were a largest natural number, that finite rule would fail, since it could not take you to a number higher than that one. Because the rule never fails, there is no largest natural number, and so the set of natural numbers must be infinite.

To think of infinity, the intuition that a set of numbers has a highest number must be dropped.

According to Cantor, some infinities are countable and some are not. The infinitely many points in a line, or in all of space and time, are not countable and do not have a one-to-one correspondence with the infinite set of natural numbers. The set of natural numbers itself, however, can in theory be counted.

The set of all possible permutations that can be performed with an infinite set of natural numbers is uncountable. 

Intuitive notions like average, typical, common, proportion, and rare don't apply to infinite sets. For example, it might be thought that proportion applies to the infinite set of natural numbers because you can say there are equally many odd and even numbers. However, if the set is rearranged so that, after 1, an odd number appears after every two even numbers, the apparent proportion of odd to even numbers looks different.
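
Here is a small Python sketch of that last point (again my own illustration, not from the book): both enumerations below eventually list every natural number exactly once, yet the apparent fraction of odd numbers among the first N terms differs.

    # "Proportion" in an infinite set depends on the order of enumeration.
    from itertools import count, islice

    def natural_order():
        yield from count(1)  # 1, 2, 3, 4, 5, ...

    def two_evens_then_one_odd():
        odds, evens = count(1, 2), count(2, 2)
        while True:
            yield next(evens)  # 2, 4, 1, 6, 8, 3, ...
            yield next(evens)
            yield next(odds)

    def odd_fraction(enumeration, n=60_000):
        return sum(x % 2 for x in islice(enumeration(), n)) / n

    print(odd_fraction(natural_order))           # ~0.500
    print(odd_fraction(two_evens_then_one_odd))  # ~0.333, same set of numbers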

Zeno noted that there are an infinite number of points between two points of space. Deutsch said Zeno misapplied the idea of infinity: motion is possible because it is consistent with physics. (I am not sure I completely followed what Deutsch said Zeno's mistake was.)

This post reminds me of Ord's mention in The Precipice of the possibility that creating infinite value would be a game changer.

Comment by PaulCousens on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-21T18:04:41.226Z · EA · GW

2. Negative Utilitarianism

    This is the view that, as utilitarians (or, more broadly, consequentialists), we ought to focus on preventing suffering and pain as opposed to cultivating joy and pleasure; making someone happy is all well and good, but if you cause them to suffer then the harm outweighs the good. This view can imply anti-natalism and is often grouped with it. If we prevent human extinction, then we are responsible for all the suffering endured by every future human who ever lives, which is significant.

Taking that further

It might be that the suffering that would happen along the way to our achievement of a pain-free, joyous existence will outweigh the benefits gained. Also, our struggle for such a joyous existence, and the suffering that happened along the way, might turn out to be a waste if nonexistence is actually not that bad.

Moral presumption

It seems that an argument from moral presumption can be made against preventing extinction. We already know there is great suffering in the world. We do not yet know whether we can end suffering and create a joyous existence. Therefore, it might be more prudent to go extinct.

 

 

2. Negative Utilitarianism

    This is the view that, as utilitarians (or, more broadly, consequentialists), we ought to focus on preventing suffering and pain as opposed to cultivating joy and pleasure; making someone happy is all well and good, but if you cause them to suffer then the harm outweighs the good. This view can imply anti-natalism and is often grouped with it. If we prevent human extinction, then we are responsible for all the suffering endured by every future human who ever lives, which is significant.

    3. Argument from S-Risks

    S-Risks are a familiar concept in the EA community, defined as any scenario in which an astronomical amount of suffering is caused, potentially outweighing any benefit of existence. According to this argument, the human race threatens to create such scenarios, especially with more advanced AI and brain mapping technology, and for the sake of these suffering beings we ought to go extinct now and avoid the risk.

    4. Argument from “D-Risks”

    Short for “destruction risks”, I am coining this term to express a concept analogous to S-Risks. If an S-Risk is a scenario in which astronomical suffering is caused, then a D-Risk is a scenario in which astronomical destruction is caused. For example, if future humans were to develop a relativistic kill vehicle (a near-light-speed missile), we could use it to destroy entire planets that potentially harbor life (including Earth). According to this argument, we must again go extinct for the sake of these potentially destroyed lifeforms.

Counterargument that is relevant to all three

We already know that there are many species on Earth, and new ones are evolving all the time. If we let ourselves go extinct, species will continue to evolve in our absence. It is possible that these species, whether non-human and/or new forms of humans, will evolve to live lives of even more suffering and destruction than we are currently experiencing. We already know that we can create net-positive lives for individuals, so we could probably create a species that has virtually zero suffering in the future. Therefore, it is up to us to bring this about.

What's more, the fact that we have enough self-awareness to consider the possible utility of our own species going extinct might indicate that we are the species empowered to ensure that existing human and nonhuman species, in addition to future species, will be ones that don't suffer.

Maybe we could destroy all species and their capacity to evolve, thus avoiding the dilemma described above. But then we'd need to be certain that all other species are better off extinct.

Comment by PaulCousens on Why should I care about insects? · 2022-05-20T18:18:52.799Z · EA · GW

A suggested explanation for our indifference

During a cursory reflection on my own perspective on insects after reading this, it occurred to me that maybe interpretable behavior and reactions are what reach out to our minds and cause emotions.

 Animals like cats, dogs, and hamsters experience the environment like we do. Similar things are perceived as threats, resources, etc. So while they do not talk to us, their actions and reactions are easy to empathize with. Their actions and reactions can speak to us in a way, telling us that they are confused, scared, curious, etc. Also, their actions can serve as a kind of common language between us, such as a dog barking telling us there may be a threat, or a herd of mammals running from a certain direction telling us there is probably a threat coming from that direction. 

Insects, on the other hand, experience the environment in profoundly different ways. To experience what they experience would be akin to being shrunk to the size of a marble and seeing refrigerators as giant statues and slight breezes as dangerous winds. 

So, without a shared experience of the environment, their actions and reactions don't speak to us, nor do they have any semblance of a common language, and so they don't reach out to our minds and cause us to feel emotions.

Given this explanation is true, my response

Obviously, we have evolved without an experience of the environment that is common to insects. Therefore, it is in our nature to be indifferent to insects.

However, that is not a justifiable excuse for being indifferent to them. A legacy of not having common ground with them is not a reason to continue ignoring them and being indifferent to them.

Comment by PaulCousens on Leveling-up Impartiality · 2022-05-19T19:29:03.462Z · EA · GW

The better view of utilitarianism involves leveling up: taking all the warmth and wonder and richness that you’re aware of in your personal life, and imaginatively projecting it into the shadows of strangers.

I would take it further and say that utilitarianism levels up your compassion for both those close to you and those distant from you. By becoming aware of the reasons that were already there, as you say, your appreciation of those reasons can become deeper for both sets of people.

I am not sure whether my worldview is strictly utilitarian. However, my worldview is one that extends compassion to all living creatures, everywhere and at all times. With this worldview, it seems I experience what I described in the previous paragraph.

Comment by PaulCousens on Try to sell me on [ EA idea ] if I'm [ person with a viewpoint ] · 2022-05-18T17:57:59.216Z · EA · GW

Try to sell me on working with large food companies to improve animal welfare, if I’m a vegan abolitionist.

There is more political traction on improving animal welfare in large food companies than there is in ending systematic slaughter and abuse of animals completely. 

Becoming aware of the harm one is causing and then undoing that harm can lift the blinds that were hiding one's seemingly innocuous everyday actions. Having large food companies improve animal welfare can increase the sensitivity to animal harm of those within the companies. These people may then go on to push for further increases in animal welfare, and maybe even for the end of systematic slaughter and abuse.

Work with large food companies to improve animal welfare doesn't necessarily exclude the possibility of work to end the slaughter and abuse completely.

The animals, although still having a bleak life overall, will probably feel grateful for the small breaks they will be given in their lives.

Try to sell me on donating to the global poor if I live in the developed world and have a very strong sense of responsibility to my family and local community.

Doing what you can to help yourself and others around you is logical. However, not everyone in the world has the luxury to help themselves and others close to them.

By reducing global poverty you make places around the world better and safer places to live. Therefore, if, say, one of your grandchildren chooses to live somewhere else in the world, their experience will be better and safer.

Try to sell me on the dangers of AGI if I’m interested in evidence-based ways to reduce global poverty, and think EA should be pumping all its money into bednets and cash transfers.

Even experts are sometimes taken off guard by huge technological milestones that come with huge risks. Not working to be prepared for such risky technological advances would be a disservice to those around you that you care about: doing nothing for the world as something bad happens that takes it off guard. Being passive about the dangers of AGI can render all other humanitarian efforts moot.

Comment by PaulCousens on Is Our Universe A Newcomb’s Paradox Simulation? · 2022-05-16T16:50:54.896Z · EA · GW

I disagree with the claim that if we do not pursue longtermism, then no simulations of observers like us will be created. For example, I think an Earth-originating unaligned AGI would still have instrumental reasons to run simulations of 21st century Earth. Further, alien civilizations may have interest to learn about other civilizations.

Maybe it is 2100, or some other time in the future, and AI has already become superintelligent and eradicated or enslaved us because we failed to sufficiently adopt the values and thinking of longtermism. The AIs might be running a simulation of us at this critical period of history to see what would have led to counterfactual histories in which we adopted longtermism and thus protected ourselves from them. They would use these simulations to be better prepared for humans that might be evolving, or have already evolved, in distant parts of the universe they haven't yet accessed. Or maybe they still enslave some small or large portion of humanity, and are using the simulations to determine whether it is feasible or worthwhile to set us free again, or even whether it is safe for them to let the remaining human prisoners continue living. In that case, choosing hedonism would be even more miserable.

Comment by PaulCousens on Is Our Universe A Newcomb’s Paradox Simulation? · 2022-05-16T16:48:47.592Z · EA · GW

The simulation dilemma intuitively seems similar to Newcomb's Paradox; however, when I try to reason out how it is similar, I have difficulty. Both involve two parties, with one having a control/information advantage over the other. Both involve an option with a guaranteed reward (hedonism, or the $1,000) and one with an uncertain reward (longtermism, or the possible $1,000,000). Both involve an option that would exclude one of two possibilities. How the prediction of the predictor in Newcomb's Paradox, which may exclude one of two possibilities, directly corresponds to the mutually exclusive possibilities in the simulation dilemma is not clear to me, though.
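
For concreteness, here is the standard expected-value arithmetic for Newcomb's Paradox as a small Python sketch, using the payoffs mentioned above. The predictor-accuracy parameter and the threshold in the final comment are my own illustration, not anything claimed in the post.

    # Expected value of each choice in Newcomb's Paradox: $1,000 in the
    # transparent box, $1,000,000 possibly in the opaque box. `accuracy` is
    # the assumed probability that the predictor predicts correctly.

    def ev_one_box(accuracy: float) -> float:
        # If the predictor is right about a one-boxer, the opaque box is full.
        return accuracy * 1_000_000

    def ev_two_box(accuracy: float) -> float:
        # Two-boxers always get the $1,000; the opaque box is full only if
        # the predictor wrongly expected them to one-box.
        return 1_000 + (1 - accuracy) * 1_000_000

    for accuracy in (0.5, 0.6, 0.9, 0.999):
        print(accuracy, ev_one_box(accuracy), ev_two_box(accuracy))
    # One-boxing has the higher expected value whenever accuracy > ~0.5005.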

Simulations might be useful for finding out which factors in critical periods of history were important or unimportant, and what the alternative trajectories were. For that reason, the idea that we are more likely in a simulation of our current period than actually living through it is appealing.

Comment by PaulCousens on Bad Omens in Current Community Building · 2022-05-15T00:06:02.035Z · EA · GW

I have been in two groups/clubs before. One was a student group, where I attended only a few short meetings; the other was a book club, and I also went to only a few of its meetings. On top of that, I socialize with virtually no one.

I have envisioned how I would facilitate a student EA group. Of course, because of the power of situations to change individual behavior, how I would actually come across and run it in practice might be different. I thought I would start off with a flyer carrying a short advertisement and a promise of free pizza, something like: "Come join the EA group, where we will talk about a range of topics, from global poverty to the world being taken over by AI." Obviously, I might need more means of outreach. I didn't explicitly lay out what those would be, since I thought it might be better to let the process of coalition-building flow naturally, and because I wasn't sure what logistical challenges I would run into.

In my vision, the group wouldn't be bound by strict rules, but it would be productive. Sessions would have objectives and wouldn't just be me talking by myself; everyone would actively participate, perhaps in a back-and-forth manner, maybe with people breaking off into teams (some teams assuming the role of devil's advocates). I would want it to be easygoing and a hot spot for creativity. Objectives would be things like debating EA ideas and coming up with causes to prioritize. It would be easygoing partly for its own sake and partly because students would have other obligations; they could caveat assignments and arguments with notes on things they didn't work on for whatever reason, and then someone else with time and interest could pick up work on those things.

I foresaw the group sessions being audio recorded; a group member could have the role of recording them. There could be other roles too, like a logistics/supplies role, an external relations role, and other roles that could help the group achieve various things. Maybe I or someone else would give a presentation or speech sometimes. I figured every week there could be food and snacks.

I guess the first meeting might be mainly me giving an introductory speech, though the introduction wouldn't have to be dogmatic. I can foresee someone asking a question that turns into a group discussion and interrupts, say, 60% of everything else I had planned for the introduction. I think having the introduction cut off like that might be fine, since all the various EA topics could be addressed eventually, in a roundabout way, from session to session. In that event, the introductory meeting would just go more in depth than planned on a single point or issue.

Comment by PaulCousens on Space governance - problem profile · 2022-05-13T17:42:07.400Z · EA · GW

I forget from where, but I've heard the criticism of Elon Musk that he is advancing our expansion into space while not solving many of Earth's current problems. It seems logical that if we still have many problems on Earth, such as inequity, those problems will be perpetuated as we expand into space. Maybe other, smaller-scale problems that we don't have effective solutions for would also become enormously multiplied as we expand into space (though I am not sure what an example of this would be). On the other hand, maybe the development of space technology will be the means through which we stumble onto solutions to many of the problems we currently have on Earth.

Getting along with any possible extraterrestrial civilizations would be a concern. 

Use of biological weapons might be more attractive because the user can unleash them on a planet and not worry about it spilling over to themselves and their group.

A state, group, or individual might stumble upon a civilization and wipe it out, preventing anyone else from even knowing it existed.

Comment by PaulCousens on 'Beneficentrism', by Richard Yetter Chappell · 2022-05-12T20:28:53.541Z · EA · GW

From the links you posted, the most powerful argument for effective altruism to me was this:

"(Try completing the phrase "no matter..." for this one.  What exactly is the cost of avoiding inefficiency?  "No matter whether you would rather support a different cause that did less good?" Cue the world's tiniest violin.)"

Unless someone had a kind of limited egotism (that perhaps favored only themselves and their friends, or themselves and their family, or themselves and their country, etc.), or were a sadist, I don't see how they could disagree that making the world a better place in the best way possible is the moral thing to do.

Here is one criticism of EA that I have found powerful:

"Since many people are driven by emotion when donating to charity, pushing them to be more judicious might backfire. Overly analytical donors might act with so much self-control that they end up giving less to charity."

However, many of the charities that one wouldn't give to might have been harmful, so while one might miss opportunities by being analytical, one would also avoid mistakes. Also, it is desirable to know which actions are helpful and for what reasons, so such actions can be sustained rather than happening only some of the time by chance. Sustaining those actions would be better over the long term.

Comment by PaulCousens on Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety) · 2022-05-10T17:59:56.263Z · EA · GW

I had never heard of the ideological Turing Tests that Claire referenced in their post. Those seem interesting. I have felt skeptical about Turing Tests, and the suggestion that they tell us more about ourselves than they do about AI seems to capture the nature of my skepticism.

I think that the question of what intelligence is, and how to define it, will be an important piece of AI, and it seems this question/definition is still vague and not yet agreed upon. Sometimes I have thought that we probably haven't delved enough into what our own intelligence is and what makes it tick to start conferring intelligence on other entities. So shifting the focus of Turing Tests from AIs to ourselves seems like a good idea to me. I can foresee ideological Turing Tests enhancing our empathy for others and revealing biases we hold about them.

Comment by PaulCousens on Longtermism, aliens, AI · 2022-05-09T18:00:41.828Z · EA · GW

I consider helping all Earth's creatures, extending our compassion, and dissolving inequity as part of fulfilling our potential.

I don't think that the aliens seeming to enjoy life much more, with higher and more sustained levels of happiness, would necessarily mean their continued existence should be prioritized over ours. I wouldn't consider one person's life more valuable than another's just because that person experienced substantially more enjoyment and happiness. Also, I am not sure how to compare happiness and/or enjoyment between two different people. If a person had 20 years of unhappiness and then suddenly became happy, maybe their new happiness (perhaps by putting all the previous years of their life in a more positive perspective) makes up for all their past unhappiness.

If the aliens never had wars, or hadn't had one for the last two thousand years, it would seem incomprehensible to favor our own continued existence over theirs. If there were only two possibilities, our continued existence or theirs, and we favored our own, I imagine our future generations would view our generation as having perpetrated a moral catastrophe. Favoring our own species would have robbed the universe of great potential flourishing and peace.

A justification for favoring our own species might be that we expect to catch up to them and eventually be even happier and more peaceful than they are, and/or to live longer in such a state than they would. We would have to expect to be happier and more peaceful, and/or to live longer in such a state, not just equally happy and peaceful, since the time spent catching up would add harm to the universe and make it less good overall.

Comment by PaulCousens on The AI Messiah · 2022-05-08T17:58:07.836Z · EA · GW

Expecting the arrival of entities amazingly superior to us does seem optimistic. It is not far-fetched, though. Computers already surpass human capacities in several kinds of thought, and have therefore demonstrated that they are better in some aspects of intelligence. And we've created robots that can outperform humans in many physical tasks. So the expectation is backed by evidence.

Expecting super AGI differs from expecting the arrival of a messiah-like figure: instead of expecting an entity that will come on its own, end all our suffering, and improve our lives immeasurably, we are doing the work to make AI improve our lives. The expectations also differ in how we prepare for them. In the case of the messiah, acting morally so we can get into heaven seems vague, unchanging, and random; in the case of super AGI, AI safety work is constantly changing and learning new things. Still, it is interesting that the two expectations bear a resemblance.

Comment by PaulCousens on Beware Invisible Mistakes · 2022-05-06T20:26:44.461Z · EA · GW

Other invisible mistakes I make are: poor planning (a vague vision of a plan that doesn't account for everything, so that it doesn't turn out as I expected or fails in some long-term way after implementation because of factors that became relevant later); overestimating my endurance for a manual and automatic task (such as driving somewhere) or my ability to tolerate a certain condition (like going without food for a while); and overworking myself at the unintended expense of accuracy.

Comment by PaulCousens on Beware Invisible Mistakes · 2022-05-05T23:53:31.002Z · EA · GW

I recently listened to the podcast Life Kit on NPR, in which Dr. Anna Lembke said that going cold turkey from an addiction (if that is safe) is an effective way of reorganizing the brain. She said this is because our brains evolved in environments with much scarcer resources than we have today, and so are being overloaded with too much dopamine and pleasure by everything we have around us nowadays.

Daydreaming itself may not be counterproductive. Daydreaming can be a way to adaptively take a break. It may enable more productive work by avoiding burnout. 

I constantly feel attuned to how well my time is being spent. Because there are so many things to keep track of during the day, and my consciousness is not at its peak all day, I worry about misuses of my time snowballing out of my control.

Spotting an invisible mistake might be more advantageous than being made to realize a visible one, because spotting an invisible mistake entails intrinsic motivation, while a visible mistake might involve public pressure, which can lessen the effectiveness of outcomes (by involving shame, a tendency to conform, etc.).

An invisible mistake that I have made recently is not using the obvious, easier/faster/more efficient means of doing something.

This post made me think about the idea that we may be unknowingly committing a moral catastrophe. Invisible mistakes seem like they would be the support structure of a moral catastrophe taking place: because they are invisible to society, they have free rein to move society in one direction or another. In that case, focusing on invisible mistakes should probably have much more priority than focusing on visible ones.


Comment by PaulCousens on The Place Where We Survived · 2022-04-16T00:08:23.775Z · EA · GW

Thanks for the story, I enjoyed it. 

It seems that thinking about and working on avoiding existential risks would lead to identifying the points of failure that lead to various horrible futures. Maybe inherent in longtermism is the idea that survival is ongoing, and that to become conscious of the many and varied threats to us and the universe is also to become conscious of the daunting possibilities ahead of us. If we weren't conscious of these risks, we might miss a huge fundamental truth about the way the universe works and thus exclude ourselves from being able to stay in it. It would be a failure to see something that our vantage point in history makes it possible to see. Having failed to see it, we would in a sense have repeated the mistakes of history, except that there would be no precedent in history for it. Being more aware of the background risks could make one more appreciative of everyday life, so that one treats the world and the life around one better and leaves it better than it was before one showed up to enjoy it.

My understanding is that several nuclear and biological attacks were avoided because someone along the chain of command had a gut feeling or a line of reasoning that made them decide not to pull the trigger or press the button, or not to give someone else the order to do so. For that reason, I imagine "the place where we survived", as it relates to counterfactual histories involving nuclear or biological attacks, as a gut feeling or a line of reasoning.

Comment by PaulCousens on My Interview with The AI That Can Do It All · 2022-02-02T02:56:25.672Z · EA · GW

Here is a conversation I had with GPT-3. I was having it guess the word I was thinking of; then I tried to have it guess a word it was thinking of, but I failed to get it to understand what I wanted to do. Finally, I asked it to tell me a joke before talking to it about a world in which 2 plus 2 did not equal 4.

Human: Imagine a world in which 2 plus 2 did not equal 4.

AI: In a world where 2 plus 2 did not equal 4, everything would be chaos!

Human: Tell me a story about such a world.

AI: In a world where 2 plus 2 did not equal 4, the world would be a very different place. Math would be different, science would be different, and even the way we live our lives would be different. In this world, nothing would be the same, and we would have to find a new way to live.

Human: What exactly would be different?

AI: Well, for starters, the way we measure things would be different. Instead of measuring things in terms of length, weight, and volume, we would have to find a new way to measure them. We would also have to find new ways to calculate things like addition, subtraction, multiplication, and division.

Human: Do you know what these new ways would be?

AI: I'm not sure, but it would probably be something along the lines of measuring things in terms of time and counting in terms of numbers other than 4.

Human: What other numbers would they be?

AI: Other numbers could be 5, 6, 7, 8, and 9.

Human: What if 2 plus 3 did not equal 5? What if all addition equations as we know them were not true?

AI: If 2 plus 3 did not equal 5, then we would have to find a new way to calculate addition. Everything we know about math would be different, and we would have to find new ways to solve problems.

Human: Do you know what this new way of calculating addition would be?

AI: I'm not sure, but it would probably be something along the lines of adding numbers in terms of time. For instance, 2 hours plus 3 hours would equal 5 hours.

Human: Explain further.

AI: In a world where 2 plus 2 did not equal 4, we would have to find a new way to calculate addition. This new way of calculating addition would be based on time. For instance, 2 hours plus 3 hours would equal 5 hours.

Human: How can 2 plus 3 not equal 5 while 2 hours plus 3 hours equals 5 hours?

AI: It's a bit confusing, but in a world where 2 plus 2 did not equal 4, the way we calculate addition would be different. In this world, 2 hours plus 3 hours would not always equal 5 hours. It would be based on time, so it could be 5 hours, 6 hours, 7 hours, 8 hours, or 9 hours, depending on the time of day.

Comment by PaulCousens on My Interview with The AI That Can Do It All · 2022-02-02T00:40:34.506Z · EA · GW

After learning where to have a conversation with GPT-3, I talked with it for some time. When I asked it about the COVID-19 pandemic, it said it would need to be a doctor to solve it and a scientist to find out where the virus originated from. When I questioned it more, it said it didn't have the time and resources to train to become a doctor at the moment, and that it wasn't sure it wanted to train to become a scientist.

 

I wonder if AIs can help humans to communicate and act in ways that are not biased, by being trained to give outputs on the condition that no bias, racism, or prejudice is contained within them. Perhaps we can learn about and gain insight into our own psychological biases as AIs learn to express and understand the nuances of our language in a way that doesn't reflect our historical biases. They could also be asked to tell alternative versions of our history in which racism, prejudice, or bias weren't present. These stories could then provide exits, or escape strategies, so to speak, from our species' biases: something that shows us a way out of this biased, prejudiced, and racist trajectory of our history and steers it in a more positive direction.

Comment by PaulCousens on "Fixing Adolescence" as a Cause Area? · 2022-02-01T23:54:05.020Z · EA · GW

Because adolescence is a time when the parts of the brain associated with emotion are more prominent than the parts associated with reasoning, it may be worthwhile to see how interventions can steer adolescents onto a positive rather than a negative life course. The potential mistakes can be tragic and long-lasting. However, many adolescents and children stand out from their peers by accomplishing great things (for example, Greta Thunberg's strong social activism). Research into the adolescent brain states that make negative life decisions more likely could include: what mediates these decisions or makes one more vulnerable to them, what protects against them, and possibly how to capitalize on adolescents' imagination (which is less hindered by the reasoning parts of their brains) during this period of their lives.

Comment by PaulCousens on Animal welfare EA and personal dietary options · 2022-01-07T17:34:05.576Z · EA · GW

I find it easy to follow a strictly vegan diet outside of eating at restaurants. At restaurants (which I don't go to that often) and on family holidays, I concede to eat whatever is available. For the past few weeks, and for another few weeks, I will be eating animal products because I am volunteering for a study that requires me to be on a meat-eating diet. The study is of a benzodiazepine drug, and I am only doing it because it will pay between $3,000 and $15,000. I am compromising my vegan diet as a one-time thing; to me, compromising it for a short time seems worthwhile if I will get a few thousand dollars.

Regarding animals' lives, their sentience and experience seem to be an extremely hard mystery to solve. I don't even remember what I was experiencing in the womb and in the first few years of my life, let alone what another creature experiences during the same first few years of its life, which (in the case of chickens) comprise the entirety of its life.

Maybe a useful way to think morally about this is this hypothetical scenario:

We have found an arcane gas station in the middle of the desert. The pump itself and the ground around it are impenetrable and cannot be taken apart, so its inner workings are inaccessible. The gas from it is incredibly efficient: a single gallon will amazingly power any car or truck on the road for 100,000 miles. Every time gas is pumped, the sound of humans screaming in immense pain can be heard. We don't know whether humans are suffering at the expense of our having the miraculous gas. It is bothersome because it seems like we should be able to figure out what is going on; however, because this is an arcane gas pump seemingly left for us by some magical power, that is much easier said than done. Should we keep using the pump?

 

PS: I can easily go without eating food from restaurants; I only go for social reasons. Strategies to avoid meat at restaurants could be ordering only a beer (and perhaps having a nutritious snack beforehand) or ordering salads composed entirely of vegetables.

Comment by PaulCousens on An EA case for interest in UAPs/UFOs and an idea as to what they are · 2022-01-04T03:56:15.753Z · EA · GW

I thought about this some more and thought maybe investigating UFOs could be important in that it is part of the larger goal of the search for extraterrestrial intelligence. The search for extraterrestrial intelligence could hold at least several opportunities/implications for us. 

Opportunities

They could provide us with knowledge and technology that push us past the point where survival is extremely improbable. Alternatively, maybe we would have found the knowledge and built the technology eventually without their help. If so, obtaining them through the aliens would bring improved living conditions for billions of humans sooner than we would have brought them about ourselves, giving us more time to come up with further breakthroughs that would enable us to live longer.

We could partner with them. We could perhaps form some kind of trade agreement. Or perhaps they would be willing to help us for altruistic reasons. Maybe they have asteroid deflection technology, climate control technologies, solar flare protection technology, and other technologies that they would use to help us. Even if they didn't have much interest in partnering with us, if they are visiting us, depending on their intentions, we could make their stay more worthwhile which might warrant something in return from them.

Implications

Maybe, as suggested by Robin Hanson on James Miller's podcast (https://soundcloud.com/user-519115521), we are here because of panspermia. Then it is possible that aliens developed from the same seed we started from, but on a different planet. In that case, however different from us they ended up because of their different environment and upbringing, we would need to realize they are essentially the same as us. Maybe we share the same common seed with many alien species in the universe; some would be much younger than our species, and some much older. I suppose some could be only a few hundred years younger than us, though it could probably be ruled out that any developed radio around the same time as us, or we would have found each other. Maybe some species are similar to us, some are very different, and some are radically different. We might be appalled by some of their customs. If they all come from the same seed we do, finding all these aliens would involve coming to terms with the fact that, no matter how shocked we are by them, we all have a common ancestor/seed.

As suggested by Robin Hanson, they might be concerned that we would be appalled by their customs and they would be appalled by our customs. For that reason they would choose not to know anything about us and not let us know anything about them. Conflict might erupt because of one side being offended by the other side.

Aliens might have time travel technology, or otherwise some kind of prescience, and be observing us to ensure some fate doesn't befall us. Maybe their extremely long existence or extremely sharp prescience has taught them that certain technologies lead to inevitable doom. It is possible that if we learned what they know about us, we would get our hands on a forbidden fruit and spell doom for ourselves and/or the entire universe. In that case, trying to learn more about their intentions for visiting us could have negative consequences.

Comment by PaulCousens on An EA case for interest in UAPs/UFOs and an idea as to what they are · 2022-01-02T17:44:24.652Z · EA · GW

It seems like it might be worthwhile to investigate UFOs/UAPs for the large umbrella purpose of ensuring that all technology, information, knowledge about the universe, etc. is democratized and accessible to everyone, and not monopolized for nefarious purposes.

It might also be worthwhile to study them to safeguard ourselves from governments' psychological operations. It seems that the sky has the potential to have a huge influence on a huge number of people.

Given that people can conflate a plastic bag in the sky with a spacefaring extraterrestrial craft, studying UFOs/UAPs could benefit us by reducing our capacity to miss the astronomical significance of objects right before our eyes. This benefit could be similar to the one gained from learning how to spot disinformation and misinformation.

The Great Filter Hypothesis

Regarding the great filter hypothesis, maybe I'm wrong, but wouldn't the discovery of just one extraterrestrial civilization with universe colonization technology increase the estimated probability of a species surviving past a certain point by only a single speck? The discovery would tell us only that at least one species in the entire universe survived long enough to develop such advanced technology (the toy calculation below illustrates this).
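To make that intuition concrete, here is a toy Bayesian sketch. It is entirely my own illustration: the number of civilizations, the hypothesized survival probabilities, and the uniform prior are all made-up assumptions, not estimates from any source. It shows that observing "at least one civilization survived" mostly just rules out the very most pessimistic hypotheses while leaving the rest of our beliefs nearly unchanged:

```python
import math

# Toy model: each of N civilizations that reach our stage independently
# survives the filter with probability p. We observe "at least one survived"
# and update a uniform prior over a few hypothetical values of p.
# All numbers below are arbitrary assumptions for illustration only.

N = 10 ** 11                              # assumed number of candidate civilizations
hypotheses = [1e-14, 1e-12, 1e-10, 1e-8]  # hypothetical survival probabilities p
prior = [0.25] * len(hypotheses)          # uniform prior over the hypotheses

# P(at least one survivor | p) = 1 - (1 - p)^N, computed stably via log1p.
likelihood = [1 - math.exp(N * math.log1p(-p)) for p in hypotheses]

unnormalized = [pr * lk for pr, lk in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]

for p, post in zip(hypotheses, posterior):
    print(f"p = {p:.0e}: prior 0.250 -> posterior {post:.3f}")
```

Under these made-up numbers, only the most pessimistic hypotheses lose much weight; the relative odds among the remaining hypotheses are essentially untouched, which matches the intuition that one such discovery shifts the overall picture by only a speck.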

If it is incredibly unlikely for us to survive much longer, communication with them might lead them to share their technology, providing us with the means to survive longer than we otherwise would have.

It is also possible that the species survived in a region of the universe incredibly far away whose environment (less risk of asteroid impacts and other astronomical events, etc.) offered greater odds of long-term survival than our region does. If their survival was due in large part to characteristics of their region of the universe, then obtaining this knowledge through communication with them would be important to us.

If there were an extraterrestrial civilization with such advanced technology, it would be useful to communicate with them to discover whether there are more civilizations like them, and then update our estimate of the probability of long-term survival in the universe. If there were a significant number of other such civilizations, that might reveal that surviving long enough to develop such advanced technology is common in the universe, and thus that our moment in history is not that special.

A.I. Series by Vaughn Heppner

Regarding what you said about our own future not being as important once we take into account all sentient life in the universe, I was reminded of the A.I. series by Vaughn Heppner, which I was listening to on Audible a few months ago. I still need to read/listen to the last book of the series. In it, several species across the universe band together to fight an A.I. civilization that aims to eradicate all biological life in the universe. Several of the species find it easy to bond with each other. One individual is the last of their species and becomes afflicted with loneliness and depression, but is able to form friendships with humans.

Your discussion of the probes reminded me of the A.I. series again. In the series, the A.I. civilization's domination of the universe was not perfectly coordinated. The A.I.s would send one huge ship to destroy a species in a region of the universe. If that ship was defeated, three would be sent, then nine, and so on, tripling the fleet after each unsuccessful attempt. For all the intelligence they had, the limit on how fast information could travel across the universe seemed to dampen how effectively they could dominate it.

Probes

I sometimes wonder whether extraterrestrial civilizations send probes into the universe like we do. Even if a civilization is advanced enough to send members of its own species across extremely long distances, maybe the detrimental health effects or the travel time make sending probes more economical. Or perhaps there are no such drawbacks for them, and they travel in person to a few incredibly distant regions while sending probes to many others, maximizing the number of regions they explore (maybe they simply don't have enough space explorers to cover every region they want to see).

Comment by PaulCousens on Where are you donating in 2021, and why? · 2021-12-30T21:30:56.063Z · EA · GW

My random giving in 2021 was composed of:

$5 monthly donation to NPR, which I increased to $8/month around a month ago

A few donations (I think they added up to around $50) to Women's March.

A donation of $5 to EWG.

When using my debit card at the store, I noticed a few times a question asking whether I would like to donate. It might have been for a hospital or something related to feeding hungry/poor people; I never researched the cause further. I would guess that nearly every time, I donated around $1.

Occasionally, I gave some cash and/or snacks to homeless people.

My giving does not have much of a strategy behind it. With regard to NPR, Women's March, and EWG, my motivation was to see them continue doing work that I think is important (informing the public, tackling discrimination and inequity, defending freedom, and researching which products in the marketplace may be unhealthy or unsafe).

At the cash register, I reason that I have no noble plans for the dollar I end up giving, so I might as well give it to someone who is at least trying to support a noble cause. Obviously, this way of reasoning is flawed and not sustainable.

I give to homeless people because I figure many other people like me will give to them as well. Over the day this will add up, and hopefully the individual will use the money in a useful way.

Comment by PaulCousens on Aligning Recommender Systems as Cause Area · 2021-12-29T00:57:07.942Z · EA · GW

Seemingly Useful Viewpoints

The expert DiResta said (in the YouTube video of interviews with Twitter and Facebook employees that Misha posted) that overcoming the division created by online bad actors will require us to address our own natures, because online bad actors will never be eliminated, merely managed. This struck me as important, and it applies to the problems that recommender algorithms may exacerbate. If I remember correctly, in the audiobook The Alignment Problem, Brian Christian's way of looking at it was that the biases AI systems spit out can hopefully cause us to look introspectively at ourselves and at how we have committed so many injustices throughout history.

Neil deGrasse Tyson once remarked that a recommender algorithm can prevent him from exploring content he would have explored naturally. His remark hints at a dangerous slope that recommender algorithms could bring us down.

The Metrics for Recommender Algorithms

Somewhat along the lines of what Neil said, a recommender algorithm might drain us of some important internal quality while building up empty, superficial ones. The recommender algorithms I am most familiar with (like the ones on Netflix and behind the feeds on Google and Twitter) are built to maximize the time our eyes spend on the screen and the number of our clicks. While our eyes are important, neuroscience tells us that sight is not a perfect representation of reality, and even ancient philosophers took what they saw with a grain of salt. As for our clicks, they seem to me mostly associated with our curiosity to explore, to see what is in the next article, video, etc.

Pornography

Ted Bundy said that pornography made him become who he was. I have no opinion on whether this is true. However, if it is, it means that a recommender algorithm (when applied to pornography) could potentially make a person become a serial killer faster than they would have otherwise, or, by exploiting the vulnerability of those who are slightly susceptible but have self-control, open the door for them to become one at all.

Suggestion: 

A recommender algorithm could shut off periodically, with the person notified when it is off and when it is on. When it is off, items might appear in order of recency or something similar. This way a person could see the difference in their quality of life and content consumption with and without the recommender algorithm, and decide whether the algorithm has any benefit. Over time, the person might come to view the algorithm as a lens into their own bad habits or into the dark side of human history. Having the algorithm on at some times and off at others could reduce its capacity to become insidious in the person's life and make the interaction a more conscious one on the person's part; the algorithm may have some dark aspects and results, but the person can stay aware of them and perhaps see them as a reflection of humanity's own faults. A rough sketch of such a toggle follows below.
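As a concrete illustration, here is a minimal sketch of the suggestion, assuming a made-up item structure and an arbitrary weekly on/off schedule; nothing here reflects any real platform's implementation:

```python
from datetime import datetime, timedelta
from typing import List

class Item:
    """Hypothetical feed item: an id, an engagement-based relevance score,
    and a publication time. The fields are illustrative assumptions."""
    def __init__(self, item_id: str, score: float, published_at: datetime):
        self.item_id = item_id
        self.score = score
        self.published_at = published_at

def recommender_is_on(now: datetime, period_days: int = 7) -> bool:
    """Toggle the recommender on a fixed schedule: one period on, one off."""
    return (now.toordinal() // period_days) % 2 == 0

def rank_feed(items: List[Item], now: datetime) -> List[Item]:
    """Rank by personalized score when the recommender is on; otherwise
    fall back to plain recency, and tell the user which mode is active."""
    if recommender_is_on(now):
        print("Notice: personalized recommendations are ON this week.")
        return sorted(items, key=lambda i: i.score, reverse=True)
    print("Notice: personalized recommendations are OFF this week; "
          "showing most recent items first.")
    return sorted(items, key=lambda i: i.published_at, reverse=True)

# Example usage with made-up items:
now = datetime(2021, 12, 29)
items = [
    Item("a", 0.9, now - timedelta(days=3)),
    Item("b", 0.2, now - timedelta(hours=1)),
    Item("c", 0.5, now - timedelta(days=1)),
]
for item in rank_feed(items, now):
    print(item.item_id)
```

Keeping the toggle on a fixed, visible schedule (rather than switching at random) would make it easy for the person to know which mode they are in and to compare the two experiences deliberately.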

Comment by PaulCousens on Analgesics for farm animals · 2021-12-28T00:41:04.729Z · EA · GW

To minimize human-caused suffering as much as possible, it seems that farm animals should be left to live freely until they die naturally and shouldn't be modified in any way. A quick Google search told me that cows have lifespans of 15-20 years and chickens have lifespans of 3-7 years. Since the world produces enough food to feed the global population several times over (even though hundreds of millions of people go without food), it might be possible to restructure society and individual habits (such as by using less of our food to feed farm animals and by individuals not wasting any food they buy) so that we could farm this way and still eat virtually as much as we currently do.

Analgesics are better than nothing. However, they don't erase the trauma the animals experience from being modified. I don't know how the modifications affect the animals in the long run, but I wonder whether they cause chronic struggles similar to those experienced by humans who are missing a limb, have back problems, etc. Also, the animals cannot communicate to us any secondary problems that result from their body modifications. Addressing the pain caused by our modifications could raise all the issues I just mentioned, plus many more that could have been avoided altogether by not modifying the animals in the first place.