Posts

Light read: Free Trade, by Judy Eby 2021-02-13T21:39:56.620Z
How should we invest in "long-term short-termism" given the likelihood of transformative AI? 2021-01-12T23:54:28.866Z
James_Banks's Shortform 2021-01-05T06:42:52.531Z
Working to save a life gives it value to you 2020-09-01T04:16:21.730Z
What do we do if AI doesn't take over the world, but still causes a significant global problem? 2020-08-02T03:35:19.473Z
What values would EA want to promote? 2020-07-09T06:35:10.156Z

Comments

Comment by James_Banks on The Cost of Rejection · 2021-10-12T05:09:22.901Z · EA · GW

Would it be possible for some kind of third party to give feedback on applications? That way people can get feedback even if hiring organizations find it too costly. Someone who was familiar with how EA organizations think / with hiring processes specifically, or who was some kind of career coach, could say "You are in the nth percentile of EAs I counsel. It's likely/unlikely that if you are rejected it's because you're unqualified overall," or "Here are your general strengths and weaknesses as someone applying to this position, or your strengths and weaknesses as someone seeking a career in EA overall." Maybe hiring organizations could cooperate with such third parties to educate them on what the organization's hiring criteria / philosophy are, so that they have something like an inside view.

Comment by James_Banks on Blameworthiness for Avoidable Psychological Harms · 2021-02-09T05:06:10.480Z · EA · GW

Suppose there is some kind of new moral truth, but only one person knows it.  (Arguably, there will always be a first person.  New moral truth might be the adoption of a moral realism, the more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what "harm" means. ) 

This person may well adopt an affectively expensive point of view, which won't make any sense to their peers (or may make all too much sense). Their peers may have their feelings hurt by this new moral truth, and retaliate against them. The person with the new moral truth may endure an almost self-destructive life pattern due to the moral truth's dissonance with the status quo; other peers will object to this pattern and pressure that person to give up their moral truth, wearing away at them to try to "save" them. In the process of resisting the "caring peer", the new-moral-truth person does things that hurt the "caring peer"'s feelings.

There are at least two ideologies at play here. (The new one and the old one, or the old ones if there are more than one.) So we're looking at a battle between ideologies, played out on the field of accounting for personal harm. Which ideology does a norm of honoring the least-cost principle favor? Wouldn't all the harm that gets traded back and forth simply not happen if the new-moral-truth person just hadn't adopted their new ideology in the first place? So the "court" (popular opinion? an actual court?) that enforces the least-cost principle would probably interpret things according to the status quo's point of view and enforce adherence to the status quo. But if there is such a thing as moral truth, then we are better off hearing it, even if it's unpopular.

Perhaps the least-cost principle is good, but there should be some provision in a "court" for considering whether ideologies are true and thus inherently require a certain set of emotional reactions.

Comment by James_Banks on Would you buy from an altruistic shop? · 2021-02-08T22:45:49.492Z · EA · GW

The $100 an item market sounds like fair trade.  So you might compete with fair trade and try to explain why your approach is better.

The $50,000 an item market sounds harder but more interesting.  I'm not sure I would ever buy a $50,000 hoodie or mug, no matter how much money I had or how nice the designs on them were.  But I could see myself (if I was rolling in money and cared about my personal appearance) buying a tailored suit for $50,000, and explaining that it only cost $200 to make (or whatever it really does) and the rest went to charity.  You might have to establish your brand in a conventional way (tailored suits, fancy dresses, runway shows, etc.) and be compelling artistically, as well as have the ethical angle.  You would probably need both to compete at that level, is my guess.

Comment by James_Banks on Religious Texts and EA: What Can We Learn and What Can We Inform? · 2021-01-31T00:57:02.599Z · EA · GW

This kind of pursuit is something I am interested in, and I'm glad to see you pursue it.

One thing you could look for, if you want, is the "psychological constitution" being written by a text. People are psychological beings, and the ideas they hold or try to practice shape their overall psychological makeup, affecting how they feel about things and act. So, in the Bhagavad-Gita, we are told that it is good to be detached from the fruits of action, but to act anyway. What effect would that idea have if EAs took it (to the extent that they haven't already)? Or a whole population? (Similarly with its advice to meditate.) EAs psychologically relate to the fruits of their action, in some way, already. The theistic religions can blend relationship with ideals or truth itself with relationship with a person. What difference would that blending make to EAs or the population at large? I would guess it would produce a different kind of knowing -- maybe not changing object-level beliefs (although it could), but changing the psychology of believing (holding an ideal as a relationship to a person or a loyalty to a person rather than an impersonal law, for instance).

Comment by James_Banks on Some thoughts on risks from narrow, non-agentic AI · 2021-01-19T07:26:54.825Z · EA · GW

One possibility that maybe you didn't close off (unless I missed it) is "death by feature creep" (more likely "decline by feature creep"). It's somewhat related to the slow-rolling catastrophe, but with the assumption that AIs (or systems of agents that include both AIs and humans) might be trying to optimize for stability and thus regulate each other, as well as trying to maximize some growth variable (innovation, profit).

Our inter-agent (social, regulatory, economic, political) systems were built by the application of human intelligence, to the point that human intelligence can't comprehend the whole, making it hard to solve systemic problems. So in one possible scenario, humans plus narrow AI might simplify the system at first, but then keep adding features to the system of civilization until it is unwieldy again. (Maybe a superintelligent AGI could figure it out? But if it started adding its own features, then maybe not even it would understand what had evolved.) Complexity can come from competitive pressures, but also from technological innovations. Each innovation stresses the system until the system can assimilate it more or less safely, by means of new regulation (as with social media, which messes up politics unless / until we can break or manage some of its power).

Then, if some kind of feedback loop leading toward civilizational decline begins, general intelligences (humans, if humans are the only general intelligences) might be even less capable of figuring out how to reverse course than they currently are. In a way, this could be narrow AI as just another important technology, marginally complicating the world. But also, we might use narrow AIs as tools in AI/AI+humans governance (or perhaps in understanding innovation), and they might be capable of understanding things that we cannot (often things that the AIs themselves made up), creating a dependency that could contribute in a unique way to a decline.

(Maybe "understand" is the wrong word to apply to narrow AI but "process in a way sufficiently opaque to humans" works and is as bad.)

Comment by James_Banks on Being Inclusive · 2021-01-18T04:00:13.686Z · EA · GW

One thought that recurs to me is that there could be two related EA movements which draw from each other. There would be no official barrier to participating in both (like being on LessWrong and the EA Forum at the same time), and it would be possible to be a leader in both at the same time (if you have the time/energy for it). One of them would emphasize the "effective" in "effective altruists", the other the "altruists". The first would be more like current EA; the second would be more focused on increasing the (lasting) altruism of the greatest number of people -- human-resource focused.

Just about anyone could contribute to the second one, I would think.  It could be a pool of people from which to recruit for the first one, and both movements would share ideas and culture (to an appropriate degree).

Comment by James_Banks on James_Banks's Shortform · 2021-01-05T06:42:52.887Z · EA · GW

"King Emeric's gift has thus played an important role in enabling us to live the monastic life, and it is a fitting sign of gratitude that we have been offering the Holy Sacrifice for him annually for the past 815 years."

(source: https://sancrucensis.wordpress.com/2019/07/10/king-emeric-of-hungary/ )

It seems to me like longtermists could learn something from people like this.  (Maintaining a point of view for 800 years, both keeping the values aligned enough to do this and being around to be able to.)

(Also a short blog post by me occasioned by these monks about "being orthogonal to history" https://formulalessness.blogspot.com/2019/07/orthogonal-to-history.html )

Comment by James_Banks on The despair of normative realism bot · 2021-01-05T01:11:08.637Z · EA · GW

Moral realism can be useful in letting us know what kind of things should be considered moral.

For instance, if you ground morality in God, you might ask: Which God? Well, if we know which one, we might know his/her/its preferences, and that inflects our morality. Also, if God partially cashes out to "the foundation of trustworthiness, through love", then we will approach knowing and obligation themselves (as psychological realities) in a different way (less obsessive? less militant? or, perhaps, less rigorously responsible?).

Sharon Hewitt Rawlette (in The Feeling of Value) grounds her moral realism in "normative qualia", which for her is something like "the component of pain that feels unacceptable" (or its opposite in pleasure), which leads her to hedonic utilitarianism. Not to preference satisfaction or anything else, but specifically to hedonism.

I think both of the above are best grounded in a "naturalism" (a "one-ontological-world-ism" from my other comment), rather than in anything Enochian or Parfitian.  

Comment by James_Banks on The despair of normative realism bot · 2021-01-05T00:28:51.493Z · EA · GW

I can see the appeal in having one ontological world. What is that world, exactly? Is it that which can be proven scientifically (in the sense of, through the scientific method used in natural science)? I think what can be proven scientifically is perhaps what we are most sure is real or true. But things that we are less certain of being real can still exist, as part of the same ontological world. The uncertainty is in us, not in the world. One simplistic definition of natural science is that it is simply rigorous empiricism. The rigor isn't what connects us metaphysically with things; rather, the empirical does that -- the experiences contacting or occurring to observers. The rigor simply helps us interpret our experiences.

We can have random experiences that don't add up to anything. But whatever experiences give rise to our concept "morality" -- which we do seem to be able to discuss with some success with other people, and have done so in different time periods -- may be rooted in a natural reality (which is not part of the deliverances of "natural science" as "natural" is commonly understood, but which is part of "natural science" if by "natural" we mean "part of the one ontological world"). Morality is something we try hard to make a science of (hence the field of ethics), but which to some extent eludes us. That doesn't mean there isn't something natural there -- only that it's something we have so far not figured out.

Comment by James_Banks on What types of charity will be the most effective for creating a more equal society? · 2020-10-12T19:25:46.545Z · EA · GW

Here are some ideas:

The rich have too much money relative to the poor:

Taking money versus eliciting money.

Taking via

  • revolution
  • taxation

Eliciting via

  • shame, pressure, guilt
  • persuasion, psychological skill
  • friendship

Change of culture

  • culture in general
  • elite culture

Targeting elite money

  • money used in their role as stewards of investments
  • money used for personal spending

--

Revolutions are risky and can lead to worse governments.

Taxation might work better. (Closing tax haven loopholes.) Building political will for higher taxes on the wealthy. There are people in the US who don't want there to be higher taxes on the wealthy even though it would materially benefit them (a culture change opportunity).

Eliciting could be more effective. Social justice culture (OK with shame, pressure, guilt) has philanthropic charities. (Not exactly aligned with EA.) Guerrilla Foundation, Resource Generation. (Already established. You could donate or join now.)

Eliciting via persuasion or psychological tactics sounds like it would appeal to some people to try to do.

Eliciting via friendship: what if a person, or movement, was very good friends with both rich and poor people? Then they could represent the legitimate interests of both to each other in a trustworthy way. I'm not sure anyone is trying this route. Maybe the Giving Pledge counts?

Change of culture. What are the roots of the altruistic mindset? What would help people have, or prepare people to have, values of altruists (a list of such for EA or EA-compatible people; there could be other lists)? Can this be something that gets "in the water" of culture at large? Can culture at large reach into elite culture, or does there have to be a special intervention to get values into elite culture? This sounds more like a project for a movement or set of movements than for a discrete charity.

Elite people have money that they spend on themselves personally -- easy to imagine they could just spend $30,000 a year on themselves and no more, give the balance to charity. But they also have money tied up in investments. Not so easy to ask them to liquidate those investments. If they are still in charge of those investments, then there is an inequality of power, since they can make decisions that affect many people without really understanding the situation of those people. Maybe nationalize industries? But then there can still be an inequality of power between governments and citizens.

If there can be a good flow between citizens and governments, whereby the citizens' voices are heard by the government, then could there be a similar thing between citizens and unelected elite? Probably somebody needs to be in charge of complex and powerful infrastructure, inevitably leading to potential for inequalities of power. Do the elite have an effective norm of listening to non-elite?

--

You might also consider the effect of AI and genetic engineering, or other technologies, on the problem of creating a more equal society. AI will either be basically under human control, or not. If it is, the humans who control it will be yet another elite. If it isn't, then we have to live with whatever society it comes up with. We can hope that maybe AI will enforce norms that we all really want deep down but couldn't enforce ourselves, like equality.

On the other hand, maybe, given the ability to change our own nature using genetic engineering, we (perhaps with the help of the elite) will choose to no longer care about inequality, only a basic sense of happiness which will be attainable by the emerging status quo.

Comment by James_Banks on Expected value theory is fanatical, but that's a good thing · 2020-09-21T19:44:32.778Z · EA · GW

1. I don't know much about probability and statistics, so forgive me if this sounds completely naive (I'd be interested in reading more on this problem, if it's as simple for you as saying "go read X").

Having said that, though, I may have an objection to fanaticism, or something in the neighborhood of it:

  • Let's say there is a suite of short-term payoff, high certainty bets for making things better.
  • And also a suite of long-term payoff, low certainty bets for making things better. (Things that promise "super-great futures".)

You could throw a lot of resources at the low certainty bets, and if the certainty is low enough, you could get to the end of time and say "we got nothing for all that". If the individual bets are low-certainty enough, even if you had a lot of them in your suite you would still have a very high probability of getting nothing for your troubles. (The state of coming up empty-handed.)

That investment could have come at the cost of pursuing the short-term, high certainty suite.

So you might feel regret at the end of time for not having pursued the safer bets, and with that in mind, it might be intuitively rational to pursue safe bets, even with less expected value. You could say "I should pursue high EV things just because they're high EV", and this "avoid coming up empty-handed" consideration might be a defeater for that.

You can defeat that defeater with "no, actually the likelihood of all these high-EV bets failing is low enough that the high-EV suite is worth pursuing."
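
A minimal numerical sketch of this trade-off (all the numbers, payoffs, and bet counts below are made up purely for illustration; nothing here comes from the paper):

```python
# Illustrative only: all numbers are hypothetical.
# "Safe" suite: many short-term bets that almost always pay off a little.
# "Long-shot" suite: many low-probability bets that pay off enormously.

safe_p, safe_value, n_safe = 0.9, 1.0, 100            # per-bet success chance, payoff, count
risky_p, risky_value, n_risky = 0.001, 10_000.0, 100

ev_safe = n_safe * safe_p * safe_value                # expected value of the safe suite
ev_risky = n_risky * risky_p * risky_value            # expected value of the long-shot suite

# Probability the long-shot suite leaves you with nothing at all
# (every single bet fails), assuming the bets are independent.
p_empty_handed = (1 - risky_p) ** n_risky

print(f"EV of safe suite:      {ev_safe:.0f}")        # 90
print(f"EV of long-shot suite: {ev_risky:.0f}")       # 1000
print(f"P(long-shot suite returns nothing): {p_empty_handed:.1%}")  # ~90.5%
```

With these hypothetical numbers the long-shot suite has over ten times the expected value, yet leaves you with roughly a 90% chance of ending up with nothing -- the regret scenario above. Make risky_p or n_risky large enough and that probability collapses, which is the "defeat that defeater" move.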

2. It might be equally rational to pursue safety as it is to pursue high EV; it's just that the safety person and the high-EV person have different values.

3. I think in the real world, people do something like have a mixed portfolio, like Taleb's advice of "expose yourself to high-risk, high-reward investments/experiences/etc., and also low-risk, low-reward." And how they do that shows, practically speaking, how much they value super-great futures versus not coming up empty-handed. Do you think your paper, if it got its full audience, would do something like "get some people to shift their resources a little more toward high-risk, high-reward investments"? Or do you think it would have a more radical effect? (A big shift toward high-risk, high-reward? A real bullet-biting, where people do the bare minimum to survive and invest all other resources into pursuing super-high-reward futures?)

Comment by James_Banks on Are social media algorithms an existential risk? · 2020-09-15T20:46:10.199Z · EA · GW

(The following is long, sorry about that. Maybe I should have written it up already as a normal post. A one sentence abstract could be: "Social media algorithms could be dangerous as a part of the overall process of leading people to 'consent' to being lesser forms of themselves to further elite/AI/state goals, perhaps threatening the destruction of humanity's longterm potential.")

It seems plausible to me that something like algorithmic behavior modification (social media algorithms are algorithms designed to modify human behavior, to some extent; could be early examples of the phenomenon) could bend human preferences so that future humans freely (or "freely"?) choose things that we (the readers of this comment? reflective humans of 2020?) would consider non-optimal. If you combine that with the possibility of algorithms recommending changes in human genes, it's possible to rewrite human nature (with the consent of humans) into a form that AI (or the elite who control AI) find more convenient. For instance, humans could be simplified so that they consume fewer resources or present less of a political threat. The simplest humans are blobs of pleasure (easily satisfying hedonism) and/or "yes machines" (people who prefer cheap and easy things and thus whose preferences are trivial to satisfy). Whether this technically counts as existential risk, I'm not sure. It might be considered a "destruction of humanity's longterm potential". Part of human potential is the potential of humans to be something.

I suggest "freely" might ought to be in quotes for two reasons. One is the "scam phenomenon". A scammer can get a mark into a mindset in which they do things they wouldn't ordinarily do. (Withdraw a large sum of money from their bank account and give it to the scammer, just because the scammer asks for it.) The scammer never puts a gun to the mark's head. They just give them a plausible-enough story, and perhaps build a simple relationship, skillfully but not forcefully suggesting that the mark has something to gain from giving, or some obligation compelling it. If after "giving" the money, the mark wises up and feels regret, they might appeal to the police. Surely they were psychologically manipulated. And they were, they were in a kind of dream world woven by the scammer, who never forced anything but who drew the mark into an alternate reality. In some sense what happened was criminal, a form of theft. But the police will say "But it was of your own free will." The police are somewhat correct in what they say. The mark was "free" in some sense. But in another sense, the mark was not. We might fear that an algorithm (or AI) could be like a sophisticated scammer, and scam the human race, much like some humans have scammed large numbers of humans before.

The second reason is that adoption of changes (notably technology, but also social changes), of which changing human genes would be an example, and of which accepting algorithmic behavior modification could be another, is something that is only in a limited sense a satisfaction of the preferences of humans, or the result of their conscious decision. In the S-shaped curve of adoption, there are early adopters, late/non-adopters, and people in the middle. Early adopters probably really do affirm the innovations they adopt. Late or non-adopters probably really do have some kind of aversion to them. These people have true opinions about innovations. But most people, in the middle of the graph, are incentivized to a large extent by "doing whatever it is looks like is popular, is becoming popular, is something that looks pretty clear has become and will be popular". So technological adoption, or the adoption of any other innovation, is not necessarily something we as a whole species truly prefer or decide for, but there's enough momentum that we find ourselves falling in line.

I think more likely than the extreme of "blobs of pleasure / yes machines" are people who lack depth, are useless, and live in a VR dream world. On some, deeper, level they would be analogous to blobs/yes machines, but their subjective experience, on a surface level, would be more recognizably human. Their lives would be positive on some level and thus would be such that altruistic/paternalistic AI or AI-controlling elite could feel like they were doing the right thing by them. But their lives would be lacking in dimensions that perhaps AI or AI-controlling elite wouldn't think of including in their (the people's, or even the elite's/AI's own) experience. The people might not have to pay a significant price for anything and thus never value things (or other people) in a deeper way. They might be incapable of desiring anything other than "this life", such as a "spiritual world" (or something like a "spiritual world", a place of greater meaning) (something the author of Brave New World or Christians or Nietzscheans would all object to). In some objective sense, perhaps capability -- toward securing your own well-being, capability in general, behaving in a significant way, being able to behave in a way that really matters -- is something that is part of human well-being (and so civilization is both progress and regress as we make people who are less and less capable of, say, growing their own food, because of all the conveniences and safety we build up). We could further open up the thought that there is some objective state of affairs, something other than human perceptions of well-being or preference-satisfaction, which constitutes part of human well-being. Perhaps to be rightly related to reality (properly believing in God, or properly not believing in God, as the case may be).

So we might need to figure out exactly what human well-being is, or if we can't figure it out in advance for the whole human species (after all, each person has a claim to knowing what human well-being is), then try to keep technology and policy from doing things that hamper the ability of each person to come to discover and to pursue true human well-being. One could see in hedonism and preferentialism a kind of attempt at value agnosticism: we no longer say that God (a particular understanding of God), or the state, or some sacred site is the Real Value; we instead say "well, we as the state will support you or at least not hinder you in your preference for God, the state, or the sacred site, whatever you want, as long as it doesn't get in the way of someone else's preference -- whatever makes you happy". But preferentialism and hedonism aren't value-agnostic if they start to imply through their shaping of a person's experience "none of your sacred things are worth anything, we're just going to make you into a blob of pleasure who says yes, on most levels, with a veneer of human experience on the surface level of your consciousness." I think that a truly value-agnostic state/elite/AI ought perhaps to try to maximize "the ability for each person to secure their own decision-making ability and basic physical movement", which could be taken as a proxy for the maximization of each person's agency and thus their ability to discover and pursue true human well-being. And to make fewer and fewer decisions for the populace, to try to make itself less and less necessary from a paternalistic point of view. Rather than paternalism, adopt a parental view -- parents tend to want their children to be capable, and to become, in a sense, their equals. All these are things that altruists who might influence the AI-controlling elite in the coming decades or centuries, or those who might want to align AI, could take into account.

We might be concerned with AI alignment, but we should also be concerned with the alignment of human civilization. Or the non-alignment, the drift of it. Fast take-off AI can give us stark stories where someone accidentally misaligns an AI to a fake utility function and it messes up human experience and/or existence irrevocably and suddenly -- and we consider that a fate to worry about and try to avoid. But slow take-off AI (I think) would/will involve the emergence of a bunch of powerful Tool AIs, each of which (I would expect) would be designed to be basically controllable by some human and to not obviously kill anyone or cause comparably clear harm (analogous to design of airplanes, bridges, etc.) -- that's what "alignment" means in that context [correct me if I'm wrong]; none of which are explicitly defined to take care of human well-being as a whole (something a fast-takeoff aligner might consciously worry about and decide about); no one of which rules decisively; all of which would be in some kind of equilibrium reminiscent of democracy, capitalism, and the geopolitical world. They would be more a continuation of human civilization than a break with it. Because the fake utility function imposition in a slow takeoff civilizational evolution is slow and "consensual", it is not stark and we can "sleep through it". The fact that Nietzsche and Huxley raised their complaints against this drift long ago shows that it's a slow and relatively steady one, a gradual iteration of versions of the status quo, easy for us to discount or adapt to. Social media algorithms are just a more recent expression of it.

Comment by James_Banks on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-13T20:12:31.922Z · EA · GW

OK, this person on the EA subreddit uses a kind of meditation to reduce irrational/ineffective guilt.

Comment by James_Banks on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-13T19:31:13.218Z · EA · GW

I like the idea of coming up with some kind of practice to retrain yourself to be more altruistic. There should be some version of that idea that works, and maybe exposing yourself to stories / imagery / etc. about people / animals who can be helped would be part of that.

One possibility is that such images could become naturally compelling for people (and thus would tend to be addictive or obsession-producing, because of their awful compellingness) -- for such people, this practice is probably bad, sometimes (often?) a net bad. But for other people, the images would lose their natural compellingness, and would have to be consumed deliberately.

In our culture we don't train ourselves to deliberately meditate on things, so it feels "culturally unrealistic", like something you can't expect of yourself and the people around you. (Or perhaps some subtle interplay of environmental influences on how we develop as "processors of reality" when we're growing up is to blame.) I feel like that part of me is more or less irrevocably closed over (maybe not an accurate sentiment, but a strong one). But in other cultures (not so much in the contemporary West), deliberate meditation was / is a thing. For instance people used to (maybe still do) meditate on the death of Jesus to motivate their love of God.

Comment by James_Banks on [deleted post] 2020-09-12T18:20:22.084Z

Also, this makes me curious: have things changed any since 2007? Does the promotion of 1 still seem as necessary? What role has the letter (or similar ideas/sentiments) played in whatever has happened with charities and funders over the last 13 years?

Comment by James_Banks on [deleted post] 2020-09-12T18:02:02.067Z

I think there's a split between 1) "I personally will listen to brutal advice because I'm not going to let my feelings get in the way of things being better" and 2) "I will give brutal advice because other people's feelings shouldn't get in the way of things being better". Maybe Holden wanted people to internalize 1 at the risk of engaging in 2. 2 may have been his way of promoting 1, a way of invalidating the feelings of his readers, who would then go on to be 1 people.

I'm pretty sure that there's a way to be kind and honest, both in object-level discussion ("your charity is doing X wrong") and in the meta discussion, of 1. (My possibly uninformed opinion:) Probably there needs to be a meeting in the middle: charities adopting 1 more and more, and funders finding a way to be honest without 2. It takes effort for both to go against what is emotionally satisfying (thinking nice things about yourself, in the case of anti-1, and lashing out at frustratingly immature people, in the case of 2). It takes effort to make that kind of change in both funder and charity culture (maybe something to work on for someone who's appropriately talented?).

Comment by James_Banks on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-10T17:24:43.466Z · EA · GW

It looks like some people downvoted you, and my guess is that it may have to do with the title of the post. It's a strong claim, but also not as informative as it could be -- it doesn't mention anything to do with climate change or GHGs, for instance.

Comment by James_Banks on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-10T16:59:29.162Z · EA · GW

Similarly, one could be concerned that the rapid economic growth that AI is expected to bring about could cause a lot of GHG emissions unless somehow we (or they) figure out how to use clean energy instead.

Comment by James_Banks on When can Writing Fiction Change the World? · 2020-08-25T18:28:24.219Z · EA · GW

Here's a related quote from Eccentrics by David Weeks and Jamie James (pp. 67-68) (I think it's Weeks speaking in the following quote):

My own concept of creativity is that it is effective, empathic problem-solving. The part that empathy plays in this formulation is that it represents a transaction between the individual and the problem. (I am using the word "problem" loosely, as did Ghiselin: for an artist, the problem might be how to depict an apple.) The creative person displaces his point of view into the problem, investing it with something of his own intellect and personality, and even draws insights from it. He identifies himself with all the depths of the problem. Georges Braque expounded a version of this concept succinctly: "One must not just depict the objects, one must penetrate into them, and one must oneself become the object."
This total immersion in the problem means that there is a great commitment to understand it at all costs, a deep commitment that recognizes no limits. In some cases the behavior that results can appear extreme by everyday standards. For example, when the brilliant architect Kiyo Izumi was designing a hospital for schizophrenics, he took LSD, which mimics some of the effects of schizophrenia, in order to understand the perceptual distortions of the people who would be living in the building. This phenomenon of total immersion is typical of eccentricity: overboard is the only way most eccentrics know how to go.

This makes me think: "You become the problem, and then at high stakes are forced to solve yourself, because now it's a life or death situation for you."

Comment by James_Banks on When can Writing Fiction Change the World? · 2020-08-25T18:08:43.039Z · EA · GW

Thinking back on books that have made a big effect on me, I think they were things which spoke to something already in me, maybe something genetic, to a large extent. It's like I was programmed from birth to have certain life movements, and so I could immediately recognize what I read as the truth when it came to me -- "that's what I was always wanting to say, but didn't know how!" I think that probably explains HP:MOR to a large extent (but I haven't read HP:MOR).

My guess is that a large part of Yudkowsky's motivation in writing the inspiring texts of the rationalist community was his big huge personality -- him expressing himself. It happens that by doing that, he expressed a lot of other people's personalities. I'm reminded of quotes (which unfortunately I can't source at the moment) that I remember from David Bowie and John Lennon. David Bowie was accused of being powerful but he said "I'm not powerful. I'm an observer." (which is actually a really powerful role). John Lennon said something like "Our power was in mainly just talking about our own lives" (vis-à-vis psychedelics, them getting into Eastern thinking, maybe other things) "and that's a powerful thing." Maybe Yudkowsky was really just talking about his life -- being mad at how the world isn't an actually good place, how he personally was going to do something about it, and just seeing things that he personally found stupid about how other people thought about things (OK, that's maybe a strawman of him ;-) ). I think whatever art you do will be potentially more powerful (if you're lucky enough to get an audience) the deeper it comes from who you are, the more you take it personally.

Comment by James_Banks on Book Review: Deontology by Jeremy Bentham · 2020-08-17T22:47:23.553Z · EA · GW

Interesting. A point I could get out of this is: "don't take your own ideology too seriously, especially when the whole point of your ideology is to make yourself happy."

An extreme hedonism (a really faithful one) is likely to produce outcomes like:

"I love you."

"You mean, I give you pleasure?"

"Well, yeah! Duh!"

Which is a funny thing to say, kind of childish or childlike. (Or one could make the exchange be creepy: "Yeah, you mean nothing more to me than the pleasure you give me.")

Do people really exist to each other?

I see a person X:

1. X has a body. --Okay, on that level they're real.

2. I can form a mental model of X's mind. --Good, I consider them a person.

3. X exists for me only in relation to the pleasure or pain they give to me. --No, on that level, all that exists to me is my pleasure or pain.

If I'm rigorously hedonistic, then at that deepest level (level 3 above), I am alone with my feelings and points of view. But Bentham maybe doesn't want me to be rigorously hedonistic anyway.

Comment by James_Banks on A New X-Risk Factor: Brain-Computer Interfaces · 2020-08-10T19:41:27.767Z · EA · GW

I can see a scenario where BCI totalitarianism sounds like a pretty good thing from a hedonic utilitarian point of view:

People are usually more effective workers when they're happy. So a pragmatic totalitarian government (like Brave New World), rather than a sadistic or sadistic/pragmatic one (1984, maybe), would want its people to be happy all the time, and would stimulate whatever in the brain makes them happy. To suppress dissent it would just delete thoughts and feelings in that direction as painlessly as possible. Competing governments would have an incentive to be pragmatic rather than sadistic.

Then the risk comes from the possibility that humans aren't worth keeping around as workers, due to automation.

Comment by James_Banks on What do we do if AI doesn't take over the world, but still causes a significant global problem? · 2020-08-06T03:18:31.346Z · EA · GW
In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim "if we don't currently know how to align/control AIs, it's inevitable there'll eventually be significantly non-aligned AIs someday"?

Yes, I agree that there's a difference.

I wrote up a longer reply to your first comment (the one marked "Answer"), but then I looked up your AI safety doc and realized that I might do better to read through the readings in that first.

Comment by James_Banks on What do we do if AI doesn't take over the world, but still causes a significant global problem? · 2020-08-05T23:49:21.214Z · EA · GW

Yeah, I wasn't being totally clear with respect to what I was really thinking in that context. I was thinking "from the point of view of people who have just been devastated by some not-exactly superintelligent but still pretty smart AI that wasn't adequately controlled, people who want to make that never happen again, what would they assume is the prudent approach to whether there will be more non-aligned AI someday?", figuring that they would think "Assume that if there are more, it is inevitable that there will be some non-aligned ones at some point". The logic being that if we don't know how to control alignment, there's no reason to think there won't someday be significantly non-aligned ones, and we should plan for that contingency.

Comment by James_Banks on Objections to Value-Alignment between Effective Altruists · 2020-07-15T20:39:25.452Z · EA · GW

A few things this makes me think of:

explore vs. exploit: For the first part of your life (the first 37%?), you gather information; then for the last part, you use that information, maximizing and optimizing according to it. Humans have definite lifespans, but movements don't. Perhaps a movement's life depends somewhat on how much exploration it continues to do.
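
(As an aside, the "37%" presumably alludes to the optimal-stopping / secretary-problem rule: observe the first ~1/e of the options without committing, then take the next one that beats everything seen so far. A minimal simulation sketch of that rule, with arbitrary illustrative parameters:)

```python
import random

def picked_the_best(n: int, cutoff_fraction: float) -> bool:
    """One run of the secretary problem with a given observe-only cutoff."""
    candidates = list(range(n))
    random.shuffle(candidates)                          # options arrive in random order
    cutoff = int(n * cutoff_fraction)
    best_seen = max(candidates[:cutoff], default=-1)    # explore: observe, don't commit
    for value in candidates[cutoff:]:
        if value > best_seen:                           # exploit: first option beating all observed
            return value == n - 1                       # did we end up with the overall best?
    return False                                        # never committed; counts as a miss

def success_rate(cutoff_fraction: float, n: int = 100, trials: int = 20_000) -> float:
    return sum(picked_the_best(n, cutoff_fraction) for _ in range(trials)) / trials

for frac in (0.10, 0.25, 0.37, 0.50, 0.75):
    print(f"explore the first {frac:.0%}: best option found ~{success_rate(frac):.0%} of the time")
# The ~37% (1/e) exploration phase maximizes the chance of ending up with the best option.
```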

Christianity: I think maybe the only thing all professed Christians have in common is attraction to Jesus, who is vaguely or definitely understood. You could think of Christianity as a movement of submovements (denominations). The results are these nicely homogenous groups. There's a Catholic personality or personality-space, a Methodist, Church of Christ, Baptist, etc. Within them are more, or less, autonomous congregations. Congregations die all the time. Denominations wax and wane. Over time, what used to divide people into denominations (doctrinal differences) has become less relevant (people don't care about doctrine as much anymore), and new classification criteria connect and divide people along new lines (conservative vs. evangelical vs. mainline vs. progressive). An evangelical Christian family who attend a Baptist church might see only a little problem in switching to a Reformed church that was also evangelical. A Church of Christ member, at a church that would have considered all Baptists to not really be Christians 50 or 100 years ago, listens to some generic non-denominational nominally Baptist preacher who says things he likes to hear, while also hearing the more traditional Church of Christ sermons on Sunday morning.

The application of that example to EA could be something like: Altruism with a capital-A is something like Jesus, a resonant image. Any Altruist ought to be on the same side as any other Altruist, just like any Christian ought to be on the same side as any other Christian, because they share Altruism, or Jesus. Just as there is an ecosystem of Christian movements, submovements, and semiautonomous assemblies, there could be an ecosystem of Altruistic movements, submovements, and semiautonomous groups. It could be encouraged or expected of Altruists that they each be part of multiple Altruistic movements, and thus be exposed to all kinds of outside assumptions, all within some umbrella of Altruism. In this way, within each smaller group, there can be homogeneity. The little groups that exploit can run their course and die while being effective tools in the short- or medium-term, but the overall movement or megamovement does not, because overall it keeps exploring. And, as you point out, continuing to explore improves the effectiveness of altruism. Individual movements can be enriched and corrected by their members' memberships in other movements.

A Christian who no longer likes being Baptist can find a different Christianity. So it could be the same with Altruists. EAs who "value drift" might do better in a different Altruism, and EA could recruit from people in other Altruisms who felt like moving on from those.

Capital-A Altruism should be defined in a minimalist way in order to include many altruistic people from different perspectives. EAs might think of whatever elements of their altruism that are not EA-specific as a first approximation of Altruism. Once Altruism is defined, it may turn out that there are already a number of existing groups that are basically Altruistic, though having different cultures and different perspectives than EA.

Little-a altruism might be too broad for compatibility with EA. I would think that groups involved in politicizing go against EA's ways. But then, maybe having connection even with them is good for Altruists.

In parallel to Christianity, when Altruism is at least somewhat defined, then people will want to take the name of it, and might not even be really compliant with the N Points of Altruism, whatever value of N one could come up with -- this can be a good and a bad thing, better for diversity, worse for brand strength. But also in parallel to Christianity, there is generally a similarity within professed Christians which is at least a little bit meaningful. Experienced Christians have some idea of how to sort each other out, and so it could be with Altruists. Effective Altruism can continue to be as rigorously defined as it might want to be, allowing other Altruisms to be different.

Comment by James_Banks on What values would EA want to promote? · 2020-07-10T18:16:54.113Z · EA · GW

A few free ideas occasioned by this:

1. The fact that this is a government paper makes me think of "people coming together to write a mission statement." To an extent, values are agreed-upon by society, and it's good to bear that in mind. (Working with widespread values instead of against them, accepting that to an extent values are socially-constructed (or aren't, but the crowd could be objectively right and you wrong) and adjusting to what's popular instead of using a lot of energy to try to change things.)

2. My first reaction when reading the "Champion democracy,..." list is "everybody knows about those things... boring", but if you want to do good, you shouldn't be dissuaded by the "unsexiness" of a value or pursuit. That could be a supporting value to the practice of altruism.

Comment by James_Banks on What values would EA want to promote? · 2020-07-09T21:37:11.404Z · EA · GW

I'm basically an outsider to EA, but "from afar", I would guess that some of the values of EA are 1) against politicization, 2) for working and building rather than fighting and exposing ("exposing" being "saying the unhealthy truth for truth's sake", I guess), 3) for knowing and self-improvement (your point), 4) concern for effectiveness (Gordon's point). And of course, the value of altruism.

These seem like they are relatively safe to promote (unless I'm missing something).

Altruism is composed of 1) other-orientation / a relative lack of self-focus (curiosity is an intellectual version of this), 2) something like optimism, 3) openness to evidence (you could define "hope" as a certain combination of 2 and 3), 4) personal connection with reality (maybe a sense of moral obligation, a connection with other being's subjective states, or a taste for a better world), 5) inclination to work, 6...) probably others. So if you value altruism, you have to value whatever subvalues it has.

These also seem fairly safe to promote.

Altruism is supported by 1) "some kind of ambition is good", 2) "humility is good but trying to maximize humility is bad" (being so humble you don't have any confidence in your knowledge prevents action), 3) "courage is good but not foolhardiness", 4) "will is good, if it stays in touch with reality", 5) "being 'real' is good" (following through on promises, really having intentions), 6) "personal sufficiency is good" (you have enough or are enough to dare reach into someone else's reality), 7...) probably others.

These are riskier. I think one thing to remember is that ideas are things in people's minds, that culture is really embodied in people, not in words. A lot of culture is in interpersonal contact, which forms the context for ideas. So ideally, if you promote values, you shouldn't just say things, but should instruct people (or be in relationship with people) such that they really understand what you're saying. (Advice I've seen on this forum.) Genes become phenotype through epigenetics, and concepts become emotions, attitudes, and behaviors through the "epiconceptual". The epiconceptual could be the cultural background that informs how people hear a message (like "yes, this is the moral truth, but we don't actually expect people to live up to the moral truth"), or it could be the subcultural background from a relationship or community that makes it make sense. The practices and expectations of culture / subculture. So values are a thing which are not promoted just by communicators, but also by community-builders, and good communities help make risky but productive words safe to spread.