Posts

"Aletheism" and "Omnism" as alternatives to "Altruism" 2022-11-23T04:52:18.062Z
Light read: Free Trade, by Judy Eby 2021-02-13T21:39:56.620Z
How should we invest in "long-term short-termism" given the likelihood of transformative AI? 2021-01-12T23:54:28.866Z
James_Banks's Shortform 2021-01-05T06:42:52.531Z
Working to save a life gives it value to you 2020-09-01T04:16:21.730Z
What do we do if AI doesn't take over the world, but still causes a significant global problem? 2020-08-02T03:35:19.473Z
What values would EA want to promote? 2020-07-09T06:35:10.156Z

Comments

Comment by James_Banks on The religion problem in AI alignment · 2022-09-16T05:23:50.228Z · EA · GW

You didn't mention the Long Reflection, which is another point of contact between EA and religion.  The Long Reflection is about figuring out what values are actually right, and I think it would be odd to not do deep study of all the cultures available to us to inform that, including religious ones.  Presumably, EA is all about acting on the best values (when it does good, it does what is really good), so maybe it needs input from the Long Reflection to make big decisions.

Comment by James_Banks on The religion problem in AI alignment · 2022-09-16T05:18:27.937Z · EA · GW

I've wondered if it's easier to align AI to something simple rather than complex (or if it's more like "aligning things at all is really hard, but adding complexity is relatively easy once you get there").  If simplicity is more practical, then training an AI to do something libertarian might be simpler than training it to pursue any other value.  The AI could protect "agency" (one version of that being "the ability of each human to move their body as they wish, and the ability to secure their own decision-making ability").  Or, it might turn out to be easier to program AI to listen to humans, so that AI end up under the rule of human political and economic structures, or some other way of aggregating human decision-making.  Under either a libertarian or a human-obeying AI, humans could pursue their religions mostly as they always have.

Comment by James_Banks on Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter · 2022-05-15T20:04:02.509Z · EA · GW

This is sort of a loose reply to your essay.  (The things I say about "EA" are just my impressions of the movement as a whole.)

I think that EA has aesthetics; it's just that the (probably not totally conscious) aesthetic value behind them is "lowkeyness" or "minimalism".  The Forum and logo seem simple and minimalistically warm, classy, and functional to me.

Your mention of Christianity focuses more on medieval-derived / Catholic elements.  Those lean more "thick" and "nationalistic".  ("Nationalistic" as in "building up a people group that has a deeper emotional identity and shared history", maybe one which can motivate the strongest interpersonal and communitarian bonds.)  But there are other versions of Christianity: more modern / Protestant / Puritan / desert.  Sometimes people are put off by the poor aesthetics of Protestant Christianity, but at some times and in some contexts people have preferred Protestantism over Catholicism, despite its relative aesthetic poverty.  I think one set of things that Puritan (and to an extent Protestant) and desert Christianities have in common is self-discipline, work, and frugality.  Self-discipline, work, and frugality seem to be a big part of being an EA, or at least of EA as it has been up to now.  So maybe in that sense, EA (consciously or not) has exactly the aesthetic it should have.

I think an aesthetic lack helps a movement be less "thick" and "nationalistic", and avoiding politics is an EA goal.  (EA might like to affect politics, but avoid political identity at the same time.)  If you have a "nice looking flag" you might "kill and die" for it.  The more developed your identity, the more you feel like you have to engage in "wars" (at least flame wars) over it.  I think EA is conflict-averse and wants to avoid politics (maybe it sometimes wants to change politics but not be politically committed? or to change politics in the least "stereotypically political", least "politicized", way possible?).  EA favors normative uncertainty and being agnostic about what the good is.  So EAs might not want to have more-developed aesthetics, if those aesthetics come with commitments.

I think the EA movement as it currently exists is doing (more or less) the right thing aesthetically.  But the foundational ideas of EA (the things that change people's lives so that they are altruistic in orientation and have a sense that there is work for them to do and that they have to do it "effectively", or maybe that cause them to try to expand their moral circles) are ones that perhaps ought to be exported to other cultures: perhaps to a secular culture that is the "thick" version of EA, or to existing more-"thick" cultures, like the various Christian, Muslim, Buddhist, Hindu, etc. cultures.  A "thick EA" might innovate aesthetically and create a unique (secular, I assume) utopian vision in addition to the numerous other aesthetic/futuristic visions that exist.  But "thick EA" would be a different thing than the existing "thin EA".

Comment by James_Banks on 13 ideas for new Existential Risk Movies & TV Shows – what are your ideas? · 2022-04-13T07:42:14.272Z · EA · GW

I hadn't heard of When the Wind Blows before.  From the trailer, I would say Testament may be darker, although a lot of that has to do with me not responding to animation (or When the Wind Blows' animation) as strongly as to live-action.  (And then from the Wikipedia summary, it sounds pretty similar.)

Comment by James_Banks on 13 ideas for new Existential Risk Movies & TV Shows – what are your ideas? · 2022-04-12T18:44:05.558Z · EA · GW

I would recommend Testament  as a reference for people making X-risk movies.  It's about people dying out from radiation after a nuclear war, from the perspective of a mom with kids.  I would describe it as emotionally serious, and also it presents a woman's and "ordinary person's" perspective.  I guess it could be remade if someone wanted to, or it could just be a good influence on other movies.

Comment by James_Banks on Why should we care about existential risk? · 2022-04-09T00:24:43.690Z · EA · GW

Existential risk might be worth talking about because of normative uncertainty.  Not all EAs are necessarily hedonists, and perhaps the ones who are shouldn't be, for reasons to be discovered later.  So, if we don't know what "value" is, or, as a movement, EA doesn't "know" what "value" is, a priori, we might want to keep our options open, and if everyone is dead, then we can't figure out what "value" really is or ought to be.

Comment by James_Banks on James_Banks's Shortform · 2022-04-07T21:27:45.974Z · EA · GW

If EA has a lot of extra money, could that be spent on incentivizing AI safety research?  Maybe offer a really big bounty for solving some subproblem that's really worth solving.  (Like if somehow we could read  and understand neural networks directly instead of them being black boxes.)

Could EA (and fellow travelers) become the market for an AI safety industry?

Comment by James_Banks on Liars · 2022-04-05T21:52:42.149Z · EA · GW

I wonder if there are other situations where a person has a "main job" (being a scientist, for instance) and is then presented with a "morally urgent situation" that comes up (realizing your colleague is probably a fraud and you should do something about it).  The traditional example is being on your way to your established job and seeing someone beaten up on the side of the road whom you could take care of.  This "side problem" can be left to someone else (who might take responsibility, or not) and, if taken on, may well be an open-ended and energy-draining project that has unpredictable outcomes for the person deciding whether to take it on.  Are there other kinds of "morally urgent side problems that come up", and are there any better or worse ways to deal with the decision whether to engage?

Comment by James_Banks on A Primer on God, Liberalism and the End of History · 2022-04-01T05:19:47.337Z · EA · GW

The plausibility of this depends on exactly what the culture of the elite is.  (In general, I would be interested in knowing what all the different elite cultures in the world actually are.)  I can imagine there being some tendency toward thinking of the poor / "low-merit" as being superfluous, but I can also imagine superrich people not being that extremely elitist and thinking "why not? The world is big, let the undeserving live," or even things which are more humane than that.

But also, despite whatever humaneness there might be in the elite, I can see there being Molochian pressures to discard humans.  Can Moloch be stopped?  (This seems like it would be a very important thing to accomplish, if tractable.)   If we could solve international competition (competition between elite cultures who are in charge of things), then nations could choose to not have the most advanced economies they possibly could, and thus could have a more "pro-slack" mentality.  

Maybe AGI will solve international competition?  I think a relatively simple, safe alignment for an AGI would be one where it was the servant of humans -- but which ones?  Each individual? Or the elites who currently represent them?  If the elites, then it wouldn't automatically stop Moloch.  But otherwise it might.

(Or the AGI could respect the autonomy of humans and let them have whatever values they want, including international competition, which may plausibly be humanity's "revealed preference".)

Comment by James_Banks on Why the expected numbers of farmed animals in the far future might be huge · 2022-03-24T06:09:00.353Z · EA · GW

This is kind of like my comment at the other post, but it's what I could think of as feedback here.

--

I liked your point IV, that inefficiency might not go away.  One reason it might not is that humans (even digital ones) would have something like free will, or caprice, or random preferences, in the same way that they do now.  Human values may not behave according to our concept of "reasonable rational values" over time, as they evolve.  In human history, there have been impulses toward the rational and the irrational.  So future humans might for some reason prefer something like "authentic" beef from a real / biological cow (rather than digital-world simulated beef), or wish to make some kind of sacrifice of "atoms" for some weird far-future religion or quasi-religion that evolves.

--

I don't know if my view is a mainstream one in longtermism, but I tend to think that civilization is inherently prone to fragility, and that it is uncertain that we will ever have faster-than-light travel or communications.  (I haven't thought a lot about these things, so maybe someone can show me a better way to see this.)  If we don't have FTL, then the different planets we colonize will be far apart enough to develop divergent cultures, and generally be unable to be helped by others in case of trouble.  Maybe the trouble would be something like an asteroid strike.  Or maybe it would be an endogenous cultural problem, like a power struggle among digital humans rippling out into the operation of the colony.

If this "trouble" caused a breakdown in civilization on some remote planet, it might impair their ability to do high tech things (like produce cultured meat).  If there is some risk of this happening, they would probably try to have some kind of backup system.  The backup system could be flesh-and-blood humans (more resilient in a physical environment than digital beings, even ones wedded to advanced robotics), along with a natural ecosystem and some kind of agriculture.  They would have to keep the backup ecosystem and humans going throughout their history, and then if "trouble" came, the backup ecosystem and society might take over.  Maybe for a while, hoping to return to high-tech digital human society, or maybe permanently, if they feel like it.

At that point, whether factory farming gets redeveloped depends entirely on the culture of the backup society staying true to "no factory farming".  If they do redevelop factory farming, then that would be part of the far future's "burden of suffering" (or whatever term is better than that).

I guess one way to prevent this kind of thing from happening (maybe what longtermists already suggest), is to simply assume that some planets will break down, and try to re-colonize them if that happens, instead of expecting them to be able to deal with their own problems.

I guess if there isn't such a thing as FTL, our ability to colonize space will be greatly limited, and so the sheer quantity of suffering possible will be a lot lower (as well as whatever good sentience gets out of existence).  But if, say, we only colonize 100 planets over the remainder of our existence (under no-FTL), and 5% of them re-develop factory farming, that's still five planets with factory farming, compared with the one (Earth) we have today.

Comment by James_Banks on Who is protecting animals in the long-term future? · 2022-03-21T19:30:46.881Z · EA · GW

This isn't a very direct response to your questions, but is relevant, and is a case for why there might be a risk of factory farming in the long-term future.  (This doesn't address the scenarios from your second question.) [Edit: it does have an attempt at answering your third question at the end.]

--

It may be possible that if plant-based meat substitutes are cheap enough and taste like (smell like, have mouth feel of, etc.) animal-derived meat, then it won't make economic sense to keep animals for that purpose.

That's the hopeful take, and I'm guessing maybe a more mainstream take.

If life is always cheaper in the long run for producing meat substitutes (if the best genetic engineering can always produce life that out-competes the best non-life lab techniques), would it have to be sentient life, or could it be some kind of bacteria or something like that?  It doesn't seem to me that sentience is helpful in making animal protein, and it probably just imposes some cost.

(Another hopeful take.)

A less hopeful take:  One advantage that life has over non-life, and where sentience might be an advantage, is that it can be let loose in an environment unsupervised and then rounded up for slaughter.  So we could imagine "pioneers" on a lifeless planet letting loose some kind of future animal as part of terraforming, then rounding them up and slaughtering them.   This is not the same as factory farming, but if the slaughtering process (or rounding-up process) is excessively painful, that is something to be concerned about.

My guess is that one obstacle to humans being kind to animals (or being generous in any other way) has to do with whether they are in "personal survival mode".  Utilitarian altruists might be in a "global survival mode" and care about X-risk.  But when times get hard for people personally, they tend to become more "personal survival mode" people.  Maybe being a pioneer on a lifeless planet is a hard thing that can go wrong (for the pioneers), and the cultures that are formed by that founding experience will have a hard time being fully generous.

Global survival mode might be compatible with caring about animal welfare.  But personal survival mode is probably more effective at solving personal problems than global survival mode (or there is a decent reason to think it could be), even though global survival mode also implies that you should care about your own well-being as part of the whole: personal survival mode is more desperate and efficient, and so more focused and driven toward the outcome of personal survival.  Maybe global survival mode is sufficient for human survival, but it would make sense that personal survival mode could outcompete it and seem attractive when times get hard.

Basically, we can imagine space colonization as a furtherance of our highest levels of civilization, all the colonists selected for their civilized values before being sent out, but maybe each colony would be somewhat fragile and isolated, and could restart at, or devolve to, a lower level of civilization, bringing back to life in it whatever less-civilized values we feel we have grown past.  Maybe from that, factory farming could re-emerge.

If we can't break the speed of light, it seems likely to me that space colonies (at least, if made of humans), will undergo their own cultural evolution and become somewhat estranged from us and each other (because it will be too hard to stay in touch), and that will risk the re-emergence of values we don't like from human history. 

How much of cultural evolution is more or less an automatic response to economic development, and how much is path-dependent?  If there is path-dependency, we would want to seed each new space colony with colonists who 1) think globally (or maybe "cosmically" is a better term at this scale), with an expanded moral circle or, more importantly, a tendency to expand their moral circles; 2) are not intimidated by their own deaths; 3) maybe have other safeguards against personal survival mode; 4) but still are effective enough at surviving.  And try to institutionalize those tendencies into an ongoing colonial culture.  (So that they can survive, but without going into personal survival mode.)  For references for that seeded culture, maybe we would look to past human civilizations which produced people who were more global than they had to be given their economic circumstances, or notably global even in a relatively "disestablished" (chaotic, undeveloped, dysfunctional, insecure) or stressed state or environment.

(That's a guess at an answer to your third question.)

Comment by James_Banks on Book Review: Deontology by Jeremy Bentham · 2022-03-04T05:25:40.298Z · EA · GW

I don't think your dialogue seems creepy, but I would put it in the childish/childlike category.   The more mature way to love is to value someone in who they are (so you are loving them, a unique personal being, the wholeness of who they are rather than the fact that they offer you something else) and to be willing to pay a real cost for them.  

I use the terms "mature" and "childish/childlike" because (while children are sometimes more genuinely loving than adults) I think there is a natural tendency, as you grow older, to lose some of your taste for the flavors, sounds, feelings of excitement, and so on that you tend to like as a child, and to be forced to pay for people, and to come to love them more deeply (more genuinely) because of it.

"Person X gives me great pleasure, a good thing" and "Person X is happy, another good thing" -- Is Person X substitutable for an even greater pleasure?  Like, would you vaporize Person X (even without causing them pain), so that you could get high/experience tranquility if that gave you greater pleasure? Or from a more altruistic or all-things-considered perspective, if that would cause there to be more pleasure in the world as a whole? If you wouldn't, then I think there's something other than extreme  hedonism going on.

I do think that you can love people in the very act of enjoying them (something I hadn't realized when I wrote the comment you replied to).  I am not sure if that is always the case when someone enjoys someone else, though.  The case I would now make for loving someone just because you enjoy them would be something like this: 

  1. "love" of a person is "valuing a person in a personal way, as what they are, a person"; 
  2. you can value consciously and by a choice of will; 
  3. or, you can value unconsciously/involuntarily by being receptive to enhancement from them.  Your body (or something like your body) is in an attitude of receiving good from them.  ("Receptivity to enhancement" is Joseph Godfrey's definition of trust from Trust of  People, Words, and God.)
  4. being receptive to enhancement (trusting) is (or could be) your body saying "I ask you to benefit me with real benefit, there is value in you with which to bring me value, you help me with a real need I have, a real need that I have is when there's something I really lack (when there's a lack of value in my  eyes), you are valuable in bringing me value, you are valuable".
  5. if the receptivity that is a valuing is receptive to a "you" that to it is a person (unique, personal, unsubstitutable), then you value that person in who they are, and you love them

It's possible that creepy people enjoy other people in a way that denies that they are persons, denying the other persons' unique personhood.  Or, they only enjoy without trusting (or only trusting in a minimal way).  Fungibility implies a control over your situation and a certain level of indifference about how to dispose of things.  (Vulnerability (deeper trust) inhibits fungibility.)  The person who is enjoyed has become a fungible "hedonic unit" to the creepy person.

(Creepy hedonic love: a spider with a fly wrapped in silk, a fly which is now a meal.  Non-creepy hedonic love: a calf nursing from a cow, a mutuality.)

A person could be consciously or officially a thorough-going hedonist, but subconsciously enjoy people in a non-creepy way.  

I think maturity is like a medicine that helps protect against the tendency of the childish/childlike to sometimes become creepy.

Comment by James_Banks on The Cost of Rejection · 2021-10-12T05:09:22.901Z · EA · GW

Would it be possible for some kind of third party to give feedback on applications?  That way people can get feedback even if hiring organizations find it too costly.  Someone who was familiar with how EA organizations think / with hiring processes specifically, or who was some kind of career coach, could say "You are in the nth percentile of EAs I counsel.  It's likely/unlikely that if you are rejected it's because you're unqualified overall." or "Here are your general strengths and weaknesses as someone applying to this position, or your strengths and weaknesses as someone seeking a career in EA overall."  Maybe hiring organizations could cooperate with such third parties to educate them on what the organization's hiring criteria / philosophy are, so that they have something like an inside view.

Comment by James_Banks on Blameworthiness for Avoidable Psychological Harms · 2021-02-09T05:06:10.480Z · EA · GW

Suppose there is some kind of new moral truth, but only one person knows it.  (Arguably, there will always be a first person.  New moral truth might be the adoption of a moral realism, the more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what "harm" means.)

This person may well adopt an affectively expensive point of view, which won't make any sense to their peers (or may make all too much sense).  Their peers may have their feelings hurt by this new moral truth, and retaliate against them.  The person with the new moral truth may endure an almost self-destructive life pattern due to the moral truth's dissonance with the status quo, which will be objected to by other peers, who will pressure  that person to give up their moral truth and wear away at them to try to "save" them.  In the process of resisting the "caring peer", the new-moral-truth person does things that hurt the "caring peer"'s feelings.

There are at least two ideologies at play here.  (The new one and the old one, or the old ones if there are more than one.)  So we're looking at a battle between ideologies, played out on the field of accounting for personal harm.  Which ideology does a norm of honoring the least-cost principle favor?  Wouldn't all the harm that gets traded back and forth simply not happen if the new-moral-truth person just hadn't adopted their new ideology in the first place?  So the "court" (popular opinion? an actual court?) that enforces the least-cost principle would probably interpret things according to the status quo's point of view and enforce adherence to the status quo.  But if there is such a thing as moral truth, then we are better off hearing it, even if it's unpopular.

Perhaps the least-cost principle is good, but there should be some provision in a "court" for considering whether ideologies are true and thus inherently require a certain set of emotional reactions.

Comment by James_Banks on Would you buy from an altruistic shop? · 2021-02-08T22:45:49.492Z · EA · GW

The $100 an item market sounds like fair trade.  So you might compete with fair trade and try to explain why your approach is better.

The $50,000 an item market sounds harder but more interesting.  I'm not sure I would ever buy a $50,000 hoodie or mug, no matter how much money I had or how nice the designs on them were.  But I could see myself (if I was rolling in money and cared about my personal appearance) buying a tailored suit for $50,000, and explaining that it only cost $200 to make (or whatever it really does) and the rest went to charity.  You might have to establish your brand in a conventional way (tailored suits, fancy dresses, runway shows, etc.) and be compelling artistically, as well as have the ethical angle.  You would probably need both to compete at that level, is my guess.

Comment by James_Banks on Religious Texts and EA: What Can We Learn and What Can We Inform? · 2021-01-31T00:57:02.599Z · EA · GW

This kind of pursuit is something I am interested in, and I'm glad to see you pursue it.

One thing you could look for, if you want, is the "psychological constitution" being written by a text.  People are psychological beings, and the ideas they hold or try to practice shape their overall psychological makeup, affecting how they feel about things and act.  So, in the Bhagavad-Gita, we are told that it is good to be detached from the fruits of action, but to act anyway.  What effect would that idea have if EAs took it up (to the extent that they haven't already)?  Or a whole population? (Similarly with its advice to meditate.)  EAs already relate psychologically, in some way, to the fruits of their actions.  The theistic religions can blend relationship with ideals or truth itself with relationship with a person.  What difference would that blending make to EAs or the population at large?  I would guess it would produce a different kind of knowing -- maybe not changing object-level beliefs (although it could), but changing the psychology of believing (holding an ideal as a relationship to a person or a loyalty to a person rather than an impersonal law, for instance).

Comment by James_Banks on Some thoughts on risks from narrow, non-agentic AI · 2021-01-19T07:26:54.825Z · EA · GW

One possibility that maybe you didn't close off (unless I missed it) is "death by feature creep" (more likely "decline by feature creep").  It's somewhat related to the slow-rolling catastrophe, but with the assumption that AI (or systems of agents including AI,  also involving humans) might be trying to optimize for stability and thus regulate each other, as well as trying to maximize some growth variable (innovation, profit).

Our inter-agent (social, regulatory, economic, political) systems were built by the application of human intelligence, to the point that human intelligence can no longer comprehend the whole, making it hard to solve systemic problems.  So in one possible scenario, humans plus narrow AI might simplify the system at first, but then keep adding features to the system of civilization until it is unwieldy again.  (Maybe a superintelligent AGI could figure it out?  But if it started adding its own features, then maybe not even it would understand what had evolved.)  Complexity can come from competitive pressures, but also from technological innovations.  Each innovation stresses the system until the system can assimilate it more or less safely, by means of new regulation (social media messes up politics unless / until we can break or manage some of its power).

Then, if some kind of feedback loop leading toward civilizational decline begins, general intelligences (humans, if humans are the only general intelligences) might be even less capable of figuring out how to reverse course than they currently are.  In a way, this could be narrow AI as just another important technology, marginally complicating the world.  But also,  we might use narrow AI as tools in AI/AI+humans governance (or perhaps in understanding innovation), and they might be capable of understanding things that we cannot (often things that AI themselves made up), creating a dependency that could contribute in a unique way to a decline.  

(Maybe "understand" is the wrong word to apply to narrow AI but "process in a way sufficiently opaque to humans" works and is as bad.)

Comment by James_Banks on Being Inclusive · 2021-01-18T04:00:13.686Z · EA · GW

One thought that re-occurs to me is that there could be two related EA movements, which draw from each other.  There would be no official barrier to participating in both (like being on LessWrong and the EA Forum at the same time), and it would be possible to be a leader in both at the same time (if you have the time/energy for it).  One of them would emphasize the "effective" in "effective altruists", the other the "altruists".  The first would be more like current EA; the second would be more focused on increasing the (lasting) altruism of the greatest number of people (human-resource focused).

Just about anyone could contribute to the second one, I would think.  It could be a pool of people from which to recruit for the first one, and both movements would share ideas and culture (to an appropriate degree).

Comment by James_Banks on James_Banks's Shortform · 2021-01-05T06:42:52.887Z · EA · GW

"King Emeric's gift has thus played an important role in enabling us to live the monastic life, and it is a fitting sign of gratitude that we have been offering the Holy Sacrifice for him annually for the past 815 years."

(source: https://sancrucensis.wordpress.com/2019/07/10/king-emeric-of-hungary/ )

It seems to me like longtermists could learn something from people like this.  (Maintaining a point of view for 800 years, both keeping the values aligned enough to do this and being around to be able to.)

(Also a short blog post by me occasioned by these monks about "being orthogonal to history" https://formulalessness.blogspot.com/2019/07/orthogonal-to-history.html )

Comment by James_Banks on The despair of normative realism bot · 2021-01-05T01:11:08.637Z · EA · GW

Moral realism can be useful in letting us know what kind of things should be considered moral.

For instance, if you ground morality in God, you might say: Which God? Well, if we know which one, we might know his/her/its preferences, and that inflects our morality.  Also, if God partially cashes out to "the foundation of trustworthiness, through love", then we will approach knowing and obligation themselves (as psychological realities) in a different way (less obsessive? less militant? or, perhaps, less rigorously responsible?).

Sharon Hewitt Rawlette (in The Feeling of Value) grounds her moral realism in "normative qualia", which for her is something like "the component of pain that feels unacceptable" (or its opposite in pleasure), which leads her to hedonic utilitarianism.  Not to preference satisfaction or anything else, but specifically to hedonism.

I think both of the above are best grounded in a "naturalism" (a "one-ontological-world-ism" from my other comment), rather than in anything Enochian or Parfitian.  

Comment by James_Banks on The despair of normative realism bot · 2021-01-05T00:28:51.493Z · EA · GW

I can see the appeal in having one ontological world.  What is that world, exactly?  Is it that which can be proven scientifically (in the sense of, through the scientific method used in natural science)?  I think what can be proven scientifically is perhaps what we are most sure is real or true.  But things that we are less certain of being real can still exist, as part of the same ontological world.  The uncertainty is in us, not in the world.  One simplistic definition of natural science is that it is simply rigorous empiricism.  The rigor isn't what metaphysically connects us with things; rather, the empirical does that -- the experiences contacting or occurring to observers.  The rigor simply helps us interpret our experiences.

We can have random experiences that don't add up to anything.  But the experiences that give rise to our concept of "morality" -- a concept we do seem to be able to discuss with some success with other people, and have done so in different time periods -- may be rooted in a natural reality (one which is not part of the deliverances of "natural science" as "natural" is commonly understood, but which is part of "natural science" if by "natural" we mean "part of the one ontological world").  Morality is something we try hard to make a science of (hence the field of ethics), but which to some extent eludes us.  That doesn't mean that there isn't something natural there, only that it's something we have so far not figured out.

Comment by James_Banks on What types of charity will be the most effective for creating a more equal society? · 2020-10-12T19:25:46.545Z · EA · GW

Here are some ideas:

The rich have too much money relative to the poor:

Taking money versus eliciting money.

Taking via

  • revolution
  • taxation

Eliciting via

  • shame, pressure, guilt
  • persuasion, psychological skill
  • friendship

Change of culture

  • culture in general
  • elite culture

Targeting elite money

  • used to be stewards of investments
  • used for personal spending

--

Revolutions are risky and can lead to worse governments.

Taxation might work better. (Closing tax haven loopholes.) Building political will for higher taxes on the wealthy. There are people in the US who don't want there to be higher taxes on the wealthy even though it would materially benefit them (a culture change opportunity).

Eliciting could be more effective. Social justice culture (OK with shame, pressure, guilt) has philanthropic charities. (Not exactly aligned with EA.) Guerrilla Foundation, Resource Generation. (Already established. You could donate or join now.)

Eliciting via persuasion or psychological tactics sounds like it would appeal to some people to try to do.

Eliciting via friendship: what if a person, or movement, was very good friends with both rich and poor people? Then they could represent the legitimate interests of both to each other in a trustworthy way. I'm not sure anyone is trying this route. Maybe the Giving Pledge counts?

Change of culture. What are the roots of the altruistic mindset? What would help people have, or prepare people to have, values of altruists (a list of such for EA or EA-compatible people; there could be other lists)? Can this be something that gets "in the water" of culture at large? Can culture at large reach into elite culture, or does there have to be a special intervention to get values into elite culture? This sounds more like a project for a movement or set of movements than for a discrete charity.

Elite people have money that they spend on themselves personally -- it's easy to imagine they could just spend $30,000 a year on themselves and no more, and give the balance to charity. But they also have money tied up in investments. It's not so easy to ask them to liquidate those investments. If they are still in charge of those investments, then there is an inequality of power, since they can make decisions that affect many people without really understanding the situation of those people. Maybe nationalize industries? But then there can still be an inequality of power between governments and citizens.

If there can be a good flow between citizens and governments, whereby the citizens' voices are heard by the government, then could there be a similar thing between citizens and unelected elite? Probably somebody needs to be in charge of complex and powerful infrastructure, inevitably leading to potential for inequalities of power. Do the elite have an effective norm of listening to non-elite?

--

You might also consider the effect of AI and genetic engineering, or other technologies, on the problem of creating a more equal society. AI will either be basically under human control, or not. If it is, the humans who control it will be yet another elite. If it isn't, then we have to live with whatever society it comes up with. We can hope that maybe AI will enforce norms that we all really want deep down but couldn't enforce ourselves, like equality.

On the other hand, maybe, given the ability to change our own nature using genetic engineering, we (perhaps with the help of the elite) will choose to no longer care about inequality, only a basic sense of happiness which will be attainable by the emerging status quo.

Comment by James_Banks on Expected value theory is fanatical, but that's a good thing · 2020-09-21T19:44:32.778Z · EA · GW

1. I don't know much about probability and statistics, so forgive me if this sounds completely naive (I'd be interested in reading more on this problem, if it's as simple for you as saying "go read X").

Having said that, though, I may have an objection to fanaticism, or something in the neighborhood of it:

  • Let's say there is a suite of short-term payoff, high-certainty bets for making things better.
  • And also a suite of long-term payoff, low-certainty bets for making things better. (Things that promise "super-great futures".)

You could throw a lot of resources at the low certainty bets, and if the certainty is low enough, you could get to the end of time and say "we got nothing for all that". If the individual bets are low-certainty enough, even if you had a lot of them in your suite you would still have a very high probability of getting nothing for your troubles. (The state of coming up empty-handed.)
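
As a rough illustration of the "empty-handed" worry (the numbers below are hypothetical, not from the paper or this comment): with n independent long-shot bets that each succeed with probability p, the chance that every one of them fails is (1 - p)^n, which can stay close to 1 even for fairly large n when p is small enough.

```python
# Illustrative sketch only -- hypothetical numbers, not from the paper.
# Chance of ending up "empty-handed" when backing n independent long-shot bets,
# each with success probability p.
def p_all_fail(n: int, p: float) -> float:
    return (1 - p) ** n

print(p_all_fail(50, 0.001))  # ~0.951: getting nothing is the likely outcome
print(p_all_fail(50, 0.05))   # ~0.077: here the whole suite failing is unlikely
```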

That investment could have come at the cost of pursuing the short-term, high certainty suite.

So you might feel regret at the end of time for not having pursued the safer bets, and with that in mind, it might be intuitively rational to pursue safe bets, even with less expected value. You could say "I should pursue high EV things just because they're high EV", and this "avoid coming up empty-handed" consideration might be a defeater for that.

You can defeat that defeater with "no, actually the likelihood of all these high-EV bets failing is low enough that the high-EV suite is worth pursuing."

2. It might be equally rational to pursue safety as it is to pursue high EV; it's just that the safety person and the high-EV person have different values.

3. I think in the real world, people do something like have a mixed portfolio, like Taleb's advice of "expose yourself to high-risk, high-reward investments/experiences/etc., and also low-risk, low-reward." And how they do that shows, practically speaking, how much they value super-great futures versus not coming up empty-handed. Do you think your paper, if it got its full audience, would do something like "get some people to shift their resources a little more toward high-risk, high-reward investments"? Or do you think it would have a more radical effect? (A big shift toward high-risk, high-reward? A real bullet-biting, where people do the bare minimum to survive and invest all other resources into pursuing super-high-reward futures?)

Comment by James_Banks on Are social media algorithms an existential risk? · 2020-09-15T20:46:10.199Z · EA · GW

(The following is long, sorry about that. Maybe I should have written it up already as a normal post. A one-sentence abstract could be: "Social media algorithms could be dangerous as a part of the overall process of leading people to 'consent' to being lesser forms of themselves to further elite/AI/state goals, perhaps threatening the destruction of humanity's longterm potential.")

It seems plausible to me that something like algorithmic behavior modification (social media algorithms are algorithms designed to modify human behavior, to some extent; could be early examples of the phenomenon) could bend human preferences so that future humans freely (or "freely"?) choose things that we (the readers of this comment? reflective humans of 2020?) would consider non-optimal. If you combine that with the possibility of algorithms recommending changes in human genes, it's possible to rewrite human nature (with the consent of humans) into a form that AI (or the elite who control AI) find more convenient. For instance, humans could be simplified so that they consume fewer resources or present less of a political threat. The simplest humans are blobs of pleasure (easily satisfying hedonism) and/or "yes machines" (people who prefer cheap and easy things and thus whose preferences are trivial to satisfy). Whether this technically counts as existential risk, I'm not sure. It might be considered a "destruction of humanity's longterm potential". Part of human potential is the potential of humans to be something.

I suggest "freely" might ought to be in quotes for two reasons. One is the "scam phenomenon". A scammer can get a mark into a mindset in which they do things they wouldn't ordinarily do. (Withdraw a large sum of money from their bank account and give it to the scammer, just because the scammer asks for it.) The scammer never puts a gun to the mark's head. They just give them a plausible-enough story, and perhaps build a simple relationship, skillfully but not forcefully suggesting that the mark has something to gain from giving, or some obligation compelling it. If after "giving" the money, the mark wises up and feels regret, they might appeal to the police. Surely they were psychologically manipulated. And they were, they were in a kind of dream world woven by the scammer, who never forced anything but who drew the mark into an alternate reality. In some sense what happened was criminal, a form of theft. But the police will say "But it was of your own free will." The police are somewhat correct in what they say. The mark was "free" in some sense. But in another sense, the mark was not. We might fear that an algorithm (or AI) could be like a sophisticated scammer, and scam the human race, much like some humans have scammed large numbers of humans before.

The second reason is that adoption of changes (notably technology, but also social changes), of which changing human genes would be an example, and of which accepting algorithmic behavior modification could be another, is something that is only in a limited sense a satisfaction of the preferences of humans, or the result of their conscious decision. In the S-shaped curve of adoption, there are early adopters, late/non-adopters, and people in the middle. Early adopters probably really do affirm the innovations they adopt. Late or non-adopters probably really do have some kind of aversion to them. These people have true opinions about innovations. But most people, in the middle of the graph, are incentivized to a large extent by "doing whatever looks popular, is becoming popular, or pretty clearly has become and will remain popular". So technological adoption, or the adoption of any other innovation, is not necessarily something we as a whole species truly prefer or decide for, but there's enough momentum that we find ourselves falling in line.

I think more likely than the extreme of "blobs of pleasure / yes machines" are people who lack depth, are useless, and live in a VR dream world. On some, deeper, level they would be analogous to blobs/yes machines, but their subjective experience, on a surface level, would be more recognizably human. Their lives would be positive on some level and thus would be such that altruistic/paternalistic AI or AI-controlling elite could feel like they were doing the right thing by them. But their lives would be lacking in dimensions that perhaps AI or AI-controlling elite wouldn't think of including in their (the people's, or even the elite's/AI's own) experience. The people might not have to pay a significant price for anything and thus never value things (or other people) in a deeper way. They might be incapable of desiring anything other than "this life", such as a "spiritual world" (or something like a "spiritual world", a place of greater meaning) (something the author of Brave New World or Christians or Nietzscheans would all object to). In some objective sense, perhaps capability -- toward securing your own well-being, capability in general, behaving in a significant way, being able to behave in a way that really matters -- is something that is part of human well-being (and so civilization is both progress and regress as we make people who are less and less capable of, say, growing their own food, because of all the conveniences and safety we build up). We could further open up the thought that there is some objective state of affairs, something other than human perceptions of well-being or preference-satisfaction, which constitutes part of human well-being. Perhaps to be rightly related to reality (properly believing in God, or properly not believing in God, as the case may be).

So we might need to figure out exactly what human well-being is, or if we can't figure it out in advance for the whole human species (after all, each person has a claim to knowing what human well-being is), then try to keep technology and policy from doing things that hamper the ability of each person to come to discover and to pursue true human well-being. One could see in hedonism and preferentialism a kind of attempt at value agnosticism: we no longer say that God (a particular understanding of God), or the state, or some sacred site is the Real Value, we instead say "well, we as the state will support you or at least not hinder you in your preference for God, the state, or the sacred site, whatever you want, as long as it doesn't get in the way of someone else's preference -- whatever makes you happy". But preferentialism and hedonism aren't value-agnostic if they start to imply through their shaping of a person's experience "none of your sacred things are worth anything, we're just going to make you into a blob of pleasure who says yes, on most levels, with a veneer of human experience on the surface level of your consciousness." I think that a truly value-agnostic state/elite/AI perhaps ought to try to maximize "the ability for each person to secure their own decision-making ability and basic physical movement", which could be taken as a proxy for the maximization of each person's agency and thus their ability to discover and pursue true human well-being. And to make fewer and fewer decisions for the populace, to try to make itself less and less necessary from a paternalistic point of view. Rather than paternalism, adopt a parental view -- parents tend to want their children to be capable, and to become, in a sense, their equals. All these are things that altruists who might influence the AI-controlling elite in the coming decades or centuries, or those who might want to align AI, could take into account.

We might be concerned with AI alignment, but we should also be concerned with the alignment of human civilization. Or the non-alignment, the drift of it. Fast take-off AI can give us stark stories where someone accidentally misaligns an AI to a fake utility function and it messes up human experience and/or existence irrevocably and suddenly -- and we consider that a fate to worry about and try to avoid. But slow take-off AI (I think) would/will involve the emergence of a bunch of powerful Tool AIs, each of which (I would expect) would be designed to be basically controllable by some human and to not obviously kill anyone or cause comparably clear harm (analogous to design of airplanes, bridges, etc.) -- that's what "alignment" means in that context [correct me if I'm wrong]; none of which are explicitly defined to take care of human well-being as a whole (something a fast-takeoff aligner might consciously worry about and decide about); no one of which rules decisively; all of which would be in some kind of equilibrium reminiscent of democracy, capitalism, and the geopolitical world. They would be more a continuation of human civilization than a break with it. Because the fake utility function imposition in a slow takeoff civilizational evolution is slow and "consensual", it is not stark and we can "sleep through it". The fact that Nietzsche and Huxley raised their complaints against this drift long ago shows that it's a slow and relatively steady one, a gradual iteration of versions of the status quo, easy for us to discount or adapt to. Social media algorithms are just a more recent expression of it.

Comment by James_Banks on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-13T20:12:31.922Z · EA · GW

OK, this person on the EA subreddit uses a kind of meditation to reduce irrational/ineffective guilt.

Comment by James_Banks on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-13T19:31:13.218Z · EA · GW

I like the idea of coming up with some kind of practice to retrain yourself to be more altruistic. There should be some version of that idea that works, and maybe exposing yourself to stories / imagery / etc. about people / animals who can be helped would be part of that.

One possibility is that such images could become naturally compelling for people (and thus would tend to be addictive or obsession-producing, because of their awful compellingness) -- for such people, this practice is probably bad, sometimes (often?) a net bad. But for other people, the images would lose their natural compellingness, and would have to be consumed deliberately.

In our culture we don't train ourselves to deliberately meditate on things, so it feels "culturally unrealistic", like something you can't expect of yourself and the people around you. (Or perhaps some subtle interplay of environmental influences on how we develop as "processors of reality" when we're growing up is to blame.) I feel like that part of me is more or less irrevocably closed over (maybe not an accurate sentiment, but a strong one). But in other cultures (not so much in the contemporary West), deliberate meditation was / is a thing. For instance people used to (maybe still do) meditate on the death of Jesus to motivate their love of God.

Comment by James_Banks on [deleted post] 2020-09-12T18:20:22.084Z

Also, this makes me curious: have things changed any since 2007? Does the promotion of 1 still seem as necessary? What role has the letter (or similar ideas/sentiments) played in whatever has happened with charities and funders over the last 13 years?

Comment by James_Banks on [deleted post] 2020-09-12T18:02:02.067Z

I think there's a split between 1) "I personally will listen to brutal advice because I'm not going to let my feelings get in the way of things being better" and 2) "I will give brutal advice because other people's feelings shouldn't get in the way of things being better". Maybe Holden wanted people to internalize 1 at the risk of engaging in 2. 2 may have been his way of promoting 1, a way of invalidating the feelings of his readers, who would go on to then be 1 people.

I'm pretty sure that there's a way to be kind and honest, both in the object-level discussion ("your charity is doing X wrong") and in the meta discussion of 1. (My possibly uninformed opinion:) Probably there needs to be a meeting in the middle: charities adopting 1 more and more, and funders finding a way to be honest without 2. It takes effort for both to go against what is emotionally satisfying (the thinking nice things about yourself of anti-1, and the lashing out at frustrating immature people of 2). It takes effort to make that kind of change in both funder and charity culture (maybe something to work on for someone who's appropriately talented?).

Comment by James_Banks on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-10T17:24:43.466Z · EA · GW

It looks like some people downvoted you, and my guess is that it may have to do with the title of the post. It's a strong claim, but also not as informative as it could be; it doesn't mention anything to do with climate change or GHGs, for instance.

Comment by James_Banks on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-10T16:59:29.162Z · EA · GW

Similarly, one could be concerned that the rapid economic growth that AI are expected to bring about could cause a lot of GHG emissions unless somehow we (or they) figure out how to use clean energy instead.

Comment by James_Banks on When can Writing Fiction Change the World? · 2020-08-25T18:28:24.219Z · EA · GW

Here's a related quote from Eccentrics by David Weeks and Jamie James (pp. 67-68) (I think it's Weeks speaking in the following quote):

My own concept of creativity is that it is effective, empathic problem-solving. The part that empathy plays in this formulation is that it represents a transaction between the individual and the problem. (I am using the word "problem" loosely, as did Ghiselin: for an artist, the problem might be how to depict an apple.) The creative person displaces his point of view into the problem, investing it with something of his own intellect and personality, and even draws insights from it. He identifies himself with all the depths of the problem. Georges Braque expounded a version of this concept succinctly: "One must not just depict the objects, one must penetrate into them, and one must oneself become the object."
This total immersion in the problem means that there is a great commitment to understand it at all costs, a deep commitment that recognizes no limits. In some cases the behavior that results can appear extreme by everyday standards. For example, when the brilliant architect Kiyo Izumi was designing a hospital for schizophrenics, he took LSD, which mimics some of the effects of schizophrenia, in order to understand the perceptual distortions of the people who would be living in the building. This phenomenon of total immersion is typical of eccentricity: overboard is the only way most eccentrics know how to go.

This makes me think: "You become the problem, and then at high stakes are forced to solve yourself, because now it's a life or death situation for you."

Comment by James_Banks on When can Writing Fiction Change the World? · 2020-08-25T18:08:43.039Z · EA · GW

Thinking back on books that have made a big effect on me, I think they were things which spoke to something already in me, maybe something genetic, to a large extent. It's like I was programmed from birth to have certain life movements, and so I could immediately recognize what I read as the truth when it came to me -- "that's what I was always wanting to say, but didn't know how!" I think that probably explains HP:MOR to a large extent (but I haven't read HP:MOR).

My guess is that a large part of Yudkowsky's motivation in writing the inspiring texts of the rationalist community was his big huge personality -- him expressing himself. It happens that by doing that, he expressed a lot of other people's personalities. I'm reminded of quotes (which unfortunately I can't source at the moment) that I remember from David Bowie and John Lennon. David Bowie was accused of being powerful but he said "I'm not powerful. I'm an observer." (which is actually a really powerful role). John Lennon said something like "Our power was in mainly just talking about our own lives" (vis a vis psychedelics, them getting into Eastern thinking, maybe other things) "and that's a powerful thing." Maybe Yudkowsky was really just talking about his life being mad at how the world isn't an actually good place and how he personally was going to do something about it, and just seeing things that he personally found stupid about how other people thought about things (OK, that's maybe a strawman of him ;-) ). I think whatever art you do will be potentially more powerful (if you're lucky enough to get an audience) the deeper it comes from who you are, the more you take it personally.

Comment by James_Banks on Book Review: Deontology by Jeremy Bentham · 2020-08-17T22:47:23.553Z · EA · GW

Interesting. A point I could get out of this is: "don't take your own ideology too seriously, especially when the whole point of your ideology is to make yourself happy."

An extreme hedonism (a really faithful one) is likely to produce outcomes like:

"I love you."

"You mean, I give you pleasure?"

"Well, yeah! Duh!"

Which is a funny thing to say, kind of childish or childlike. (Or one could make the exchange be creepy: "Yeah, you mean nothing more to me than the pleasure you give me.")

Do people really exist to each other?

I see a person X:

1. X has a body. --Okay, on that level they're real.

2. I can form a mental model of X's mind. --Good, I consider them a person.

3. X exists for me only in relation to the pleasure or pain they give me. --No, on that level, all that exists to me is my own pleasure or pain.

If I'm rigorously hedonistic, then at that deepest level (level 3 above), I am alone with my feelings and points of view. But Bentham maybe doesn't want me to be rigorously hedonistic anyway.

Comment by James_Banks on A New X-Risk Factor: Brain-Computer Interfaces · 2020-08-10T19:41:27.767Z · EA · GW

I can see a scenario where BCI totalitarianism sounds like a pretty good thing from a hedonic utilitarian point of view:

People are usually more effective workers when they're happy. So a pragmatic totalitarian government (like the one in Brave New World), rather than a sadistic or sadistic/pragmatic one (1984, maybe), would want its people to be happy all the time, and would stimulate whatever in the brain makes them happy. To suppress dissent, it would just delete thoughts and feelings in that direction as painlessly as possible. Competing governments would have an incentive to be pragmatic rather than sadistic.

Then the risk comes from the possibility that humans aren't worth keeping around as workers, due to automation.

Comment by James_Banks on What do we do if AI doesn't take over the world, but still causes a significant global problem? · 2020-08-06T03:18:31.346Z · EA · GW
In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim "if we don't currently know how to align/control AIs, it's inevitable there'll eventually be significantly non-aligned AIs someday"?

Yes, I agree that there's a difference.

I wrote up a longer reply to your first comment (the one marked "Answer"), but then I looked up your AI safety doc and realized that I might do better to read through the readings in that first.

Comment by James_Banks on What do we do if AI doesn't take over the world, but still causes a significant global problem? · 2020-08-05T23:49:21.214Z · EA · GW

Yeah, I wasn't being totally clear about what I was really thinking in that context. I was thinking, "from the point of view of people who have just been devastated by some not-exactly-superintelligent but still pretty smart AI that wasn't adequately controlled, and who want to make sure that never happens again, what would they assume is the prudent approach to whether there will be more non-aligned AI someday?" I figured they would think, "Assume that if there are more AIs, it is inevitable that some will be significantly non-aligned at some point." The logic being that if we don't know how to control alignment, there's no reason to think there won't someday be significantly non-aligned AIs, and we should plan for that contingency.

Comment by James_Banks on Objections to Value-Alignment between Effective Altruists · 2020-07-15T20:39:25.452Z · EA · GW

A few things this makes me think of:

explore vs. exploit: For the first part of your life (the first 37%?), you gather information; then, for the last part, you use that information, maximizing and optimizing according to it. Humans have definite lifespans, but movements don't. Perhaps a movement's lifespan depends somewhat on how much exploration it continues to do.
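
If the 37% figure is an allusion to the classic "secretary problem" (optimal stopping) result, a small simulation sketches where it comes from. This is only an illustration under that assumption; the function names and parameters below are mine, not anything from the original discussion:

```python
# Minimal sketch of the 1/e ("37%") stopping rule from the secretary problem:
# observe the first ~37% of candidates without committing, then take the first
# candidate better than everything seen so far.
import math
import random

def secretary_trial(n: int) -> bool:
    """Return True if the 1/e stopping rule picks the best of n candidates."""
    candidates = [random.random() for _ in range(n)]
    cutoff = int(n / math.e)           # explore: skip the first ~37%
    best_seen = max(candidates[:cutoff]) if cutoff > 0 else float("-inf")
    for value in candidates[cutoff:]:  # exploit: commit to the first improvement
        if value > best_seen:
            return value == max(candidates)
    return candidates[-1] == max(candidates)  # forced to take the last one

if __name__ == "__main__":
    n, trials = 100, 20_000
    wins = sum(secretary_trial(n) for _ in range(trials))
    print(f"Picked the best candidate in {wins / trials:.2%} of trials")
```

With enough trials the success rate settles around 1/e ≈ 37%, the same fraction as the exploration phase, which is presumably where the "first 37%" heuristic comes from.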

Christianity: I think maybe the only thing all professed Christians have in common is attraction to Jesus, whether vaguely or definitely understood. You could think of Christianity as a movement of submovements (denominations). The results are these nicely homogeneous groups. There's a Catholic personality or personality-space, a Methodist one, Church of Christ, Baptist, etc. Within them are more or less autonomous congregations. Congregations die all the time. Denominations wax and wane. Over time, what used to divide people into denominations (doctrinal differences) has become less relevant (people don't care about doctrine as much anymore), and new classification criteria connect and divide people along new lines (conservative vs. evangelical vs. mainline vs. progressive). An evangelical Christian family who attend a Baptist church might see only a little problem in switching to a Reformed church that was also evangelical. A Church of Christ member, at a church that would have considered all Baptists not really Christians 50 or 100 years ago, listens to some generic non-denominational, nominally Baptist preacher who says things he likes to hear, while also hearing the more traditional Church of Christ sermons on Sunday morning.

The application of that example to EA could be something like: Altruism with a capital-A is something like Jesus, a resonant image. Any Altruist ought to be on the same side as any other Altruist, just like any Christian ought to be on the same side as any other Christian, because they share Altruism, or Jesus. Just as there is an ecosystem of Christian movements, submovements, and semiautonomous assemblies, there could be an ecosystem of Altruistic movements, submovements, and semiautonomous groups. It could be encouraged or expected of Altruists that they each be part of multiple Altruistic movements, and thus be exposed to all kinds of outside assumptions, all within some umbrella of Altruism. In this way, within each smaller group, there can be homogeneity. The little groups that exploit can run their course and die while being effective tools in the short- or medium-term, but the overall movement or megamovement does not, because overall it keeps exploring. And, as you point out, continuing to explore improves the effectiveness of altruism. Individual movements can be enriched and corrected by their members' memberships in other movements.

A Christian who no longer likes being Baptist can find a different Christianity. So it could be the same with Altruists. EAs who "value drift" might do better in a different Altruism, and EA could recruit from people in other Altruisms who felt like moving on from those.

Capital-A Altruism should be defined in a minimalist way in order to include many altruistic people from different perspectives. EAs might think of the elements of their altruism that are not EA-specific as a first approximation of Altruism. Once Altruism is defined, it may turn out that there are already a number of existing groups that are basically Altruistic, though with different cultures and different perspectives than EA.

Little-a altruism might be too broad for compatibility with EA. I would think that groups involved in politicizing go against EA's ways. But then, maybe having connection even with them is good for Altruists.

In parallel to Christianity, once Altruism is at least somewhat defined, people will want to take its name, and might not even be really compliant with the N Points of Altruism, whatever value of N one could come up with -- this can be a good and a bad thing: better for diversity, worse for brand strength. But also in parallel to Christianity, there is generally a similarity among professed Christians which is at least a little bit meaningful. Experienced Christians have some idea of how to sort each other out, and so it could be with Altruists. Effective Altruism can continue to be as rigorously defined as it might want to be, allowing other Altruisms to be different.

Comment by James_Banks on What values would EA want to promote? · 2020-07-10T18:16:54.113Z · EA · GW

A few free ideas occasioned by this:

1. The fact that this is a government paper makes me think of "people coming together to write a mission statement." To an extent, values are agreed upon by society, and it's good to bear that in mind: working with widespread values instead of against them, accepting that to an extent values are socially constructed (or aren't, but the crowd could be objectively right and you wrong), and adjusting to what's popular instead of using a lot of energy to try to change things.

2. My first reaction when reading the "Champion democracy,..." list is "everybody knows about those things... boring", but if you want to do good, you shouldn't be dissuaded by the "unsexiness" of a value or pursuit. That could be a supporting value to the practice of altruism.

Comment by James_Banks on What values would EA want to promote? · 2020-07-09T21:37:11.404Z · EA · GW

I'm basically an outsider to EA, but "from afar", I would guess that some of the values of EA are 1) against politicization, 2) for working and building rather than fighting and exposing ("exposing" being "saying the unhealthy truth for truth's sake", I guess), 3) for knowing and self-improvement (your point), 4) concern for effectiveness (Gordon's point). And of course, the value of altruism.

These seem like they are relatively safe to promote (unless I'm missing something).

Altruism is composed of 1) other-orientation / a relative lack of self-focus (curiosity is an intellectual version of this), 2) something like optimism, 3) openness to evidence (you could define "hope" as a certain combination of 2 and 3), 4) personal connection with reality (maybe a sense of moral obligation, a connection with other beings' subjective states, or a taste for a better world), 5) inclination to work, 6...) probably others. So if you value altruism, you have to value whatever subvalues it has.

These also seem fairly safe to promote.

Altruism is supported by 1) "some kind of ambition is good", 2) "humility is good but trying to maximize humility is bad" (being so humble you don't have any confidence in your knowledge prevents action), 3) "courage is good but not foolhardiness", 4) "will is good, if it stays in touch with reality", 5) "being 'real' is good" (following through on promises, really having intentions), 6) "personal sufficiency is good" (you have enough or are enough to dare reach into someone else's reality), 7...) probably others.

These are riskier. I think one thing to remember is that ideas are things in people's minds; culture is really embodied in people, not in words. A lot of culture is in interpersonal contact, which forms the context for ideas. So ideally, if you promote values, you shouldn't just say things, but should instruct people (or be in relationship with people) such that they really understand what you're saying. (Advice I've seen on this forum.) Genes become phenotype through epigenetics, and concepts become emotions, attitudes, and behaviors through the "epiconceptual." The epiconceptual could be the cultural background that informs how people hear a message (like "yes, this is the moral truth, but we don't actually expect people to live up to the moral truth"), or it could be the subcultural background from a relationship or community that makes it make sense: the practices and expectations of a culture or subculture. So values are not promoted just by communicators, but also by community-builders, and good communities help make risky but productive words safe to spread.