Posts

Likelihood of an anti-AI backlash: Results from a preliminary Twitter poll 2022-09-27T22:01:30.671Z
Ask Me Anything about parenting as an Effective Altruist 2022-09-26T23:35:24.548Z
Crypto 'oracle protocols' for AI alignment with real-world data? 2022-09-22T23:05:26.584Z
EA’s brain-over-body bias, and the embodied value problem in AI alignment 2022-09-21T18:55:52.403Z
The heterogeneity of human value types: Implications for AI alignment 2022-09-16T21:21:17.211Z
The religion problem in AI alignment 2022-09-16T01:24:39.737Z
AI alignment with humans... but with which humans? 2022-09-08T23:43:49.753Z
'Psychology of Effective Altruism' course syllabus 2022-09-07T17:31:10.660Z
Could a 'permanent global totalitarian state' ever be permanent? 2022-08-23T17:15:49.557Z
What are the best articles/blogs on the psychology of existential risk? 2020-12-16T18:05:06.142Z
Seeking EA experts interested in the evolutionary psychology of existential risks 2019-10-23T18:19:57.378Z
X-risks of SETI and METI? 2019-07-02T22:41:05.760Z
Suggestions for EA wedding vows? 2019-03-22T02:38:14.577Z
Cognitive and emotional barriers to EA's growth 2018-03-09T18:59:02.990Z
Global catastrophic financial risks? 2018-02-05T23:26:57.041Z
New Effective Altruism course syllabus 2018-01-25T19:25:43.864Z
Ideological engineering and social control: A neglected topic in AI safety research? 2017-09-01T18:52:47.012Z

Comments

Comment by Geoffrey Miller (geoffreymiller) on I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him? · 2022-09-30T02:30:26.171Z · EA · GW

Some AI applications may involve AI systems that need to get aligned with the interests, values, and preferences of non-human animals (e.g. pets, livestock, zoo animals, lab animals, endangered wild animals, etc.) -- in addition to being aligned with the humans involved in their care-taking.

Are AI alignment researchers considering how this kind of alignment could happen? 

Which existing alignment strategies might work best for aligning with non-human animals?

Comment by Geoffrey Miller (geoffreymiller) on High-Impact Psychology (HIPsy): Piloting a Global Network · 2022-09-30T01:57:10.788Z · EA · GW

Inga - this sounds exciting and useful, and I'm happy to help with it however I can.

I'm a psychology professor at U. New Mexico (USA) who's taught classes on 'The psychology of Effective Altruism', human emotions, human sexuality, intelligence, evolutionary psychology, etc., and I've written 5 books and lots of papers on diverse psych topics, the most relevant of which might be the ones on mental disorders (schizophrenia, depression, autism), psych research methods, individual differences (intelligence, personality traits, behavior genetics), consumer psychology, and moral psychology (e.g. evolutionary origins of altruism and virtue-signaling). I also did a bunch of machine learning research (neural networks, genetic algorithms, autonomous agents) back in the 90s, and I've been catching up on AI alignment research.

Here's my google scholar page: https://scholar.google.com/citations?user=vEqE_rUAAAAJ&hl=en&oi=ao

And my web site: https://www.primalpoly.com/ 

Comment by Geoffrey Miller (geoffreymiller) on Likelihood of an anti-AI backlash: Results from a preliminary Twitter poll · 2022-09-29T20:51:11.604Z · EA · GW

Tricky thing is, everything we can imagine tends to become a partisan, polarized issue, if it's even slightly associated with any existing partisan, polarized positions, and if any political groups can gain any benefit from polarizing it. 

I have trouble imagining a future in which AI and AI safety issues don't become partisanized and polarized. The political incentives for doing so -- in one direction or another -- would just be too strong.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-29T16:26:24.095Z · EA · GW

Jeff -- I strongly endorse these suggestions. 

The 'sleeping in separate rooms' suggestion can be extremely useful. My wife and I have very different circadian rhythms, so we find it really helpful to sleep in different bedrooms (in the context of an otherwise happy, loving, and delightful marriage.) We put our baby's bassinet in a separate walk-in closet near one of our bedrooms, which can be made nice, dark, and cozy for daytime naps and nighttime sleep even when it's not yet dark outside. So, baby being awake for short periods in the night doesn't need to disrupt our adult sleep, and baby can get scheduled breastfeeding a couple of times a night.

By contrast, many parents of babies try to co-sleep all together in the same bedroom and even in the same bed -- I did this with my first baby long ago, and it was extremely disruptive to sleep. 

I understand the evolutionary background that co-sleeping with babies was pretty typical for hunter-gatherers, and might be more 'natural' in some ways, but I think this might be one of those cases where the original reasons for co-sleeping -- protection from predators and parasites and infanticide, keeping baby warm enough during cold nights, etc -- might not be as relevant in modern life.

Comment by Geoffrey Miller (geoffreymiller) on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-29T16:17:12.265Z · EA · GW

This seems a case where there are deep partisan disagreements about what counts as a 'conspiracy theory'. 

When people on the Left say that 'America has shifting racial demographics such that the previous majority group is losing power and influence relative to other groups', mainstream media considers that a good thing and celebrates it as progress. When people on the Right say exactly the same thing, based on exactly the same data, mainstream media calls that 'the Replacement Conspiracy'. The double standard is striking. 

As it happens, I've written and tweeted very little about demographic shifts in the US, relative to other issues, so I'm surprised that you think this is something people would associate with me.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-29T16:11:28.262Z · EA · GW

Kat - thanks very much for this detailed and helpful comment.

I think you exemplify the kind of decision strategy I'm urging EAs to use in figuring out whether to have kids: treat it as a serious, high-stakes research project, gather a lot of diverse data, insights, and experiences, consider the fit to one's own life-goals and personality traits, consult with others who have done it (and not done it), consider best-case, worst-case, and likely median outcomes, etc. 

Often, the answer will be 'I should have kids', but often the answer will be 'nope'. 

Also, thanks for linking to your full post and to Jeff Kaufman's reply; I largely agree with his comments, and I'll expand on them here. IMHO, three big ways that 'babysitting as a parenting trial' doesn't quite work are:

(1) it feels qualitatively different to care for one's own biological children than other people's kids, partly because your own kids will be more genetically and phenotypically similar to you (not just in appearance, but in personality and cognitive traits, quirks, preferences, values, etc) than unrelated kids are, partly because your kids will also resemble whatever lover/spouse/partner you scrambled your genes with, and partly because the process of becoming a parent (being pregnant, giving birth, bonding with baby) activates a whole suite of evolved adaptations for parenting that depend on complex hormonal, epigenetic, and maturation pathways that basically rewire one's brain from non-parent mode into parent mode.

(2) babysitters don't have nearly as much authority and autonomy as parents in determining a kid's daily routine, schedule, feeding, clothing, discipline strategies, training strategies, household setup, etc., so parenting offers much broader scope for deciding, over the longer term, how to arrange one's life to optimize child care.

(3) child care has a difficult and frustrating learning curve, so the first few hours (or first few hundred hours) of child care as a beginner aren't representative of how one can do child care as an expert (which most parents become by the time their kids are toddlers). Think of all the things you hated at first as a newbie, but learned to love. For example, the first two days of learning to ski or snowboard absolutely suck. You fall over a lot, it's awkward and scary, your muscles get exhausted, you get cold and wet, you can't pay attention to anything fun or scenic about the experience. But then skiing gets awesome from about day three onwards, and you suddenly understand why so many people enjoy it. Same with the first few experiences of public speaking, or the first few dates, or dances, or posts on EA Forum. It can be quite hard to predict how expert-level performance will feel from a few hours of beginner-level experience. (Not that this applies to Kat, who has a lot of babysitting experience; it's more of a cautionary point for EAs who think 'I'll just try babysitting my niece for a couple of hours and use that experience to update my probability of having kids.')

Another factor I haven't mentioned elsewhere is the issue of giving one's parents grandkids. I grew up in a very pronatalist family; my grandparents had 12 kids, and I have 30 cousins. I always felt a very strong traditionalist, almost deontological imperative to give my own parents grandkids that they could enjoy, and not to let their bloodline die out with me. I figured they'd made huge sacrifices to raise me, and I had a moral duty to them to have some kids of my own.

That might sound weird to some EAs, but think of it as analogous to an AI alignment problem. My parents invested a lot in creating and training me as a little AGI, partly (from an evolutionary perspective) so that I could create and train my own little AGIs in turn. They tried to train me as a good future parent, who had pronatalist values. If I'd decided not to have kids, that would represent a catastrophic alignment failure, from their point of view. And I felt, as a good AGI who felt some moral obligation to my creators, that I shouldn't just drift away from their values -- including the crucial value of becoming grandparents. Of course, in modern societies there's often a strong taboo against parents of adult kids putting much pressure on their kids to produce grandkids. But very few parents of adult kids will be delighted if their kids say 'Sorry, mom and dad, maximizing total future sentient utility in the cosmic light-cone is more important to me than continuing your bloodlines or letting you ever enjoy playing with grandkids'.

This doesn't mean that one's parents' reproductive priorities should always override one's own rational goals. But it does suggest that talking with one's parents (and siblings, and other family stakeholders) might be wise when deciding about the issue of having kids. Some parents might truly not care about grandkids (although this is probably quite rare); some might care a lot, and might suffer bitter, permanent disappointment if they don't get grandkids. This is just something that's worth weighing, in terms of aggregate family utility rather than one's individual utility.

Comment by Geoffrey Miller (geoffreymiller) on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-29T14:45:23.583Z · EA · GW

I'm not trolling. I'm actually curious why you think it's 'very risky' for me to promote EA more actively given my centrist heterodox libertarian political views, as opposed to whatever political views other EAs might have?  Or are EAs only permitted to have soft-Left political views?

Comment by Geoffrey Miller (geoffreymiller) on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-28T21:18:59.112Z · EA · GW

Why? Say more please.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T20:42:53.139Z · EA · GW

purplefern -- regarding your question (2) about parental investment, and your follow-up comment above:

As an evolutionary psychologist, I tend to take a rather biological view of sex differences, and I think this can be quite helpful in thinking about sexual divisions of labor in parenting. This is not to say that people should pursue a 1950s-style male breadwinner/female housewife model in the 2020s.

Rather, it's to say that people should learn about the deep evolutionary history of sexual selection and parental investment, the sexual divisions of labor typically found in hunter-gatherer, pastoralist, and agricultural societies, the recent historical changes in parental care patterns, etc. This helps put any negotiations between modern moms and dads in a much more realistic, grounded context.

One key insight I got from evolutionary biology is that female mammals have evolved for about 70 million years to be very high-investing parents, in terms of gestation (pregnancy), lactation (breast-feeding), foraging (finding and preparing food for offspring), and general maternal care. Whereas, male mammals are typically focused on mating rather than parenting, and typically do either zero parental care, or very minimal protection against infanticide by other males. Human males are extremely unusual in having evolved much more intensive parental care, but this happened only in the last 2-3 million years or so, and it mostly involved increased effort in hunting, protecting the kids and family from rivals within the tribe, protecting the tribe from other tribes, and doing some care-taking and teaching of kids, especially in middle childhood (ages 6-12, roughly) and adolescence (ages 12-18). 

So, from the viewpoint of a modern woman who doesn't appreciate the evolutionary history, it might be frustrating that a man is doing only 40% of the child care instead of 50%. Whereas any other female mammal might feel incredibly envious that a human male is doing 40% rather than 0% as in her own species. This is not to say that a mom shouldn't try to negotiate with the dad to do 50%. It's just to offer some context for why these imbalances often emerge.

In general, a frequent failure mode for busy couples with kids is that the mom and dad each feel like they're doing much more than their partner, because their own contributions are more salient to them. I think it's important for couples to switch duties and roles enough that they can cover for each other in emergencies, and so they have a full and salient appreciation of what each of them is doing day-to-day for their kids.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T20:24:54.784Z · EA · GW

Thanks for a very valuable, thoughtful, and insightful comment. I agree with almost all of it, and I appreciate your effort in turning a painful personal disappointment into some specific and useful advice for others.

I especially appreciated your points about the strong cultural forces (e.g. in US, UK, etc) that make the single-house nuclear family arrangement very hard to escape over the long term -- no matter how expert one is at living in EA group houses, polycules, or other coliving arrangements. 

Ideally, it would be possible for EAs (or people in any like-minded subculture) to set up their own neighborhoods or streets, with a dozen or so houses, restricted to people who share their values and life-goals. But that kind of 'freedom of association' is not actually legal in most countries (it would violate various anti-discrimination laws). And trying to do coliving on a smaller scale within a single property raises very thorny problems in terms of home ownership, shared equity, and what happens if couples get divorced or inhabitants get into too much conflict.

Like it or not, the single-family nuclear house seems a pretty strong 'focal point' in the space of possible living arrangements, especially for parents with kids (and maybe elderly parents), and especially given the current economic, legal, and cultural context.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T20:12:07.092Z · EA · GW

purplefern -- I'll write separate replies for each of your questions, so any further comments by others on my comments can be fairly well-focused.

Regarding optimal age and timing, and career/health tradeoffs:

For a woman, the main age concerns regarding health, I think, are: (1) the likelihood of being able to get pregnant declines fairly strongly during the 30s (but there are very big individual differences in this rate); (2) the likelihood of the baby having genetic defects (e.g. Down syndrome) increases fairly quickly in the late 30s and 40s (but the implications of this depend heavily on whether a woman is willing to use genetic screening and abortion as quality control.)

For issue (1), I think it makes sense for women to get their AMH (anti-Mullerian hormone) levels checked regularly from their late 20s onwards. This is absolutely crucial in predicting how much longer one's likely to remain fertile -- it's a test of 'ovarian reserve'. AMH seems to be a stronger predictor of a woman's remaining fertility than her chronological age is. (In many countries, you can get an over-the-counter or in-lab AMH test for about $100-200 that just requires a finger prick or blood draw.) It's also very helpful to know when one's mother, older sisters, or female relatives reached menopause. To risk-hedge, it can be helpful for women to freeze some eggs and/or embryos by age 30-35, which could be used later by the woman herself or by a surrogate.

Likewise, for men, I think it can make sense to get sperm checked with a lab semen analysis (typically less than $200), to assess semen volume and sperm count, vitality, motility, and morphology.

It baffles me that many smart young men and women who invest hundreds of hours into planning their work career won't spend a few hours getting the crucial tests that would allow much more accurate planning of their reproductive career.

Overall, I think the optimal time to start having kids depends much more on one's romantic partnership situation than on one's education/career. If you've found an excellent mate who's compatible, committed, reliable, pro-child, and likely to be at least moderately successful and financially stable, then the best time might be right after marriage, whenever that is. (Having a kid with someone, without the many legal protections of formal marriage -- protections which seem silly and outdated until the point when you really, really need them -- is foolish and often regretted, IMHO.)

I'm most familiar with the academic situation where smart young women are trying to decide whether to have kids in grad school, or after they get tenure -- they assume that the 6 years of intense tenure-track work as an assistant professor will make pregnancy impossible then. I think that is a very misguided way to think about it, for a few reasons: (1) most universities are actually very generous with parental leave for faculty, and you can pause the 'tenure clock' multiple times before going up for tenure; (2) professors aren't actually that much less busy after tenure than before; (3) realistically, in the current job market, most people who get PhDs in most fields will never get a tenure-track job, so there's a huge danger that women get stuck in 'post-doc limbo' for several years in their late 20s and early 30s, then don't get a tenure-track job until their early/mid 30s, then don't get tenure until their early 40s... and then it might be too late to have kids. There might be analogous issues in other fields such as medicine, law, finance, etc.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T19:03:29.951Z · EA · GW

Cornelis -- from my evolutionary psychologist perspective, a big difference between becoming a parent and becoming a super-generous donor is that we've evolved for 70 million years to be good mammalian mothers, and for about 3 million years to be good, high-investing, hominid fathers. So there are many evolved adaptations for parenting just waiting to get switched on after kids arrive, which make parenting feel generally rewarding. (Likewise, kids evolved to be cute, charming, and interesting to their parents, so it's a coevolutionary interaction.)

The basic problem is that with contraception, we're not in a situation where kids just start popping out after we start falling in love and having sex, so many young people don't have the experience of feeling their parental adaptations get activated automatically by kids arriving. So there were quite limited selection pressures to 'want kids' before kids arrived.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T18:55:59.005Z · EA · GW

Jeff -- I agree. I think there are lots of design features of these traditional holidays that look irrational, outdated, and silly from an adult's point of view, but that suddenly make sense when you have kids enjoying them. 

Kids seem to have a deep hunger for 'special times', holidays, and celebrations, when the normal routines are set aside, and parents make special efforts to interact with extended family, neighbors, and friends, and when there are special foods, feasts, activities, and gift-giving. My speculation is that in hunter-gatherer times, collective feasts and holidays sent kids reliable cues that 'things are going well with our tribe', and kids like that. If kids are deprived of these special times, they might implicitly get cues that 'our tribe is poor, failing, under threat, and not likely to last very long', which could make them anxious and sad.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T18:51:34.495Z · EA · GW

Jeff -- yes! I think that effect is actually more important than the concerns that people often have about whether they'll be too tired to be good parents in their 40s or 50s.  If people stay in good physical shape, it's honestly not that hard to have the energy for parenting in middle age (speaking as a 57-year-old with a baby). 

However, I'll be 75 when my baby graduates high school, and maybe 83 by the time she has kids of her own.

Hopefully longevity interventions and regenerative medicine will help us all live long enough to meet our great-great-great-grandkids. But until then, having kids younger means you'll get to spend a much higher proportion of life enjoying their company, and being around for future grandkids.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T18:47:31.341Z · EA · GW

Wei_Dai: I worried somewhat about how to influence my kids' peer group and exposure to ideas. 

But I was reassured by a lot of behavior genetics research that parents and peers don't matter nearly as much as we think they do, in the long run. Kids' personality traits, cognitive traits, values, and interests often drive their choices of friends, peer groups, books, and media.

For example, independent of my influence, my older daughter, when she was 14, got curious about global poverty, started reading Peter Singer books, learned about animal ethics, spontaneously turned vegan, and has stayed vegan in the decade since then. If I'd tried to nudge her into veganism, she might have rebelled.

Having said that, I do think choice of school can be important for a kid's peer group -- the better and more selective the school, the more likely a smart, curious kid will find like-minded kids, and feel happy and socially fulfilled. This can be extremely important in avoiding teenage depression and alienation.

I'll try to read your posts soon; thanks for the links!

Comment by Geoffrey Miller (geoffreymiller) on 7 traps that (we think) new alignment researchers often fall into · 2022-09-28T18:38:22.547Z · EA · GW

Akash - very nice post, and helpful for (relative) AI alignment newbies like me.

I would add that many AI alignment experts (and many EAs, actually) seem to assume that everyone getting interested in alignment is in their early 20s, doesn't know much about anything, and has no expertise in any other domain.

This might often be true, but there are also some people who get interested in alignment who have already had successful careers in other fields, and who can bring new interdisciplinary perspectives that alignment research might lack. Such people might be a lot less likely to fall into traps 3, 4, 5, and 6 that you mention. But they might fall into other kinds of traps that you don't mention, such as thinking 'If only these alignment kids understood my pet field X as well as I do, most of their misconceptions would evaporate and alignment research would progress 5x faster....' (I've been guilty of this on occasion).

Comment by Geoffrey Miller (geoffreymiller) on Likelihood of an anti-AI backlash: Results from a preliminary Twitter poll · 2022-09-28T18:31:51.840Z · EA · GW

Lauren - I was thinking of how BLM stigmatized certain policing methods such as chokeholds, rough restraint tactics, etc (that had been previously accepted).

The analogy would be, an anti-AI movement could stigmatize previously accepted behavior such as doing AI research without any significant public buy-in or oversight, which would be re-framed as 'summoning the demon', 'recklessly endangering all of humanity', 'soulless necromancy', 'playing Dr. Frankenstein', etc.

Comment by Geoffrey Miller (geoffreymiller) on Likelihood of an anti-AI backlash: Results from a preliminary Twitter poll · 2022-09-28T18:28:33.468Z · EA · GW

Yes; it might also happen that AGI attitudes get politically polarized and become a highly partisan issue, just as crypto almost became (with Republicans generally pro-crypto, and Democrats generally anti-crypto). Hard to predict which direction this could go -- Leftist economic populists like AOC might be anti-AI for the unemployment and inequality effects; religious conservatives might be anti-AI based more on moral disgust at simulated souls.

Comment by Geoffrey Miller (geoffreymiller) on Likelihood of an anti-AI backlash: Results from a preliminary Twitter poll · 2022-09-28T18:26:10.057Z · EA · GW

Good points. Collapsing birth rates and changing demographics might slightly soften the technological unemployment problem for younger people. But older people who have been doing the same job for 20-30 years will not be keen to 'retrain', start their careers over, and make an entry-level income in a job that, in turn, might be automated out of existence within another few years.

Comment by Geoffrey Miller (geoffreymiller) on Likelihood of an anti-AI backlash: Results from a preliminary Twitter poll · 2022-09-28T18:23:47.874Z · EA · GW

Yes, I think that once AI systems start communicating with ordinary people through ordinary language, simulated facial expressions, and robot bodies, there will be a lot of 'uncanny valley' effects, spookiness, unease, and moral disgust in response. 

And once technological unemployment from AI really starts to bite into blue collar and white collar jobs, people will not just say 'Oh well! Life is meaningless now, and I have no status or self-respect, and my wife/husband thinks I'm a loser, but universal basic income makes everything OK!'

Comment by Geoffrey Miller (geoffreymiller) on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-28T16:03:28.781Z · EA · GW

Julian Hazell, aka @HuelHater, posts some pretty funny EA jokes and memes on Twitter.  His substack is here: https://hazell.substack.com/

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T22:06:49.205Z · EA · GW

Yes. I also strongly recommend the book Expecting Better; Emily Oster is an economist who takes an unusually skeptical, evidence-based approach to parenting research and advice.

Comment by Geoffrey Miller (geoffreymiller) on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T20:53:49.661Z · EA · GW

LOL, I'd get 10x more grief and flak, and it would make me sad.

Although I could pivot from writing hot and ornery political takes (my main activity now) to doing more serious EA promotion. But then I'd lose most of the new followers....

Comment by Geoffrey Miller (geoffreymiller) on Which EAs should be bigger on Twitter? Upvote those who are top priorities. · 2022-09-27T18:58:39.943Z · EA · GW

Nathan -- If you compile a list of EAs worth promoting on Twitter, and DM me the list of their Twitter handles, I'm happy to mention them to my 124k followers. 

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T17:15:32.476Z · EA · GW

Frank -- thanks for your reply. 

It's true that sleep training is quite controversial. If you look at Reddit parenting forums, it's one of the most viciously debated topics. 

There's a strong taboo against explicitly training humans of any age using behaviorist reinforcement methods (which my wife Diana Fleischman is writing about in her forthcoming book). And there's a naturalistic bias in favor of kids co-sleeping with parents, frequent night-time nursing, etc. -- some of which may have an evolutionary rationale, but some of which may be parents virtue-signaling their dedication, empathy, etc. 

Maybe sleep training too early can be traumatic, but it's not clear what 'too early' means, and I haven't seen good data either way. I'm open to updating on this issue -- with the caveat that a lot of parents throw around the term 'traumatic' in a rather alarmist way, without a very clear idea of what that actually means, or how it could be measured in a randomized controlled trial.

(There's an analogy to dog training here -- a lot of dog owners do very little training, very badly, on the view that training is manipulative, oppressive, and mean, and doesn't allow their dogs to 'be themselves'. Whereas owners of well-trained dogs understand that the short-term frustrations of training can have big long-term benefits.)

Regarding what prehistoric, hunter-gatherer, and traditional humans do in terms of parenting, it's useful and fascinating to look at the book 'Mothers and others' (2011) by anthropologist Sarah Blaffer Hrdy. 

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T17:03:57.611Z · EA · GW

This is a fair point. My older daughter (now 26) was very smart, and easily bored in normal public school.  We worked very hard to be able to send her to the best private schools we could find, from age 8 onwards (she ended up at Westminster School in London, then Oxford). She might have also flourished if homeschooled, if we'd had the time to do that.

So, Caplan's data might not apply so clearly if you and your partner are above about IQ 130 or 140, which means your kids are likely to be close to that (there is regression to the mean, but it's fairly limited for IQ, which has a heritability in adults of about 70-80%). However, Caplan does address this point in the education book.
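
As a rough way to see how limited that regression is -- a hedged, purely illustrative sketch using the standard breeder's equation from quantitative genetics (assuming h^2 ≈ 0.75 from the 70-80% range above, a midparent IQ of 135, a population mean of 100, and ignoring assortative mating and shared-environment effects):

$$\mathbb{E}[\text{child IQ}] \approx 100 + h^2 \,(\text{midparent IQ} - 100) = 100 + 0.75 \times (135 - 100) \approx 126$$

Under those assumptions, a couple averaging IQ 135 should expect kids around 126 on average: pulled back only about a quarter of the way toward the population mean, since the regression fraction is (1 - h^2).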

I would argue that if you have smart kids, you should try to find the most selective schools you can that embrace standardized testing and streaming, and that have gifted programs, honors classes, etc. Smart kids love having peers who are smart -- and even if it doesn't make all that much difference to their eventual career success, it can be a huge benefit to their day-to-day life quality and sentient experience.

I agree that EAs should support a lot more experimentation in parenting and education, especially in nurturing exceptional talent! I think we are nowhere near optimal in our current educational approaches.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T01:28:29.213Z · EA · GW

PS for EAs considering having kids, I would strongly recommend two books by economist Bryan Caplan: 

  1. 'Selfish reasons to have more kids' (2012), which explains that the high heritability of most psychological traits means you can relax and 'trust your genes' as a parent, without trying to hothouse and overschedule and push your kids the way that most American parents do
  2. 'The case against education' (2019), which explains why the exact types of schooling your kids get don't actually have much long-term impact on how they turn out, once you control for their cognitive and emotional traits -- e.g. it won't matter that much whether you do public school, private school, home school, or unschooling

Also, for people curious about nature vs. nurture issues in how kids turn out, I'd recommend popular behavior genetics books such as 'Blueprint' (2019) by Robert Plomin, or the classic 'The blank slate' (2002) by Steven Pinker.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T01:21:26.216Z · EA · GW

Cornelis - very thoughtful questions. 

I would strongly recommend doing some serious research and thinking about this issue now, while you're 30. Partly to plan ahead and prioritize, partly to get clarity before getting seriously involved with a partner (who will probably want that clarity up front!), partly to be able to empathize more effectively with both parents and non-parents, in terms of the tradeoffs they've faced. Being male does buy you a bit more elbow room in terms of reproductive timing; you could potentially wait until your 50s. Mutation load in sperm does slightly increase with age, but it's not a very big effect. Energy for parenting does slightly decrease with age, but not that quickly if you stay in shape.

In my experience, and among male friends and colleagues, it's fairly rare for guys to have a strong, specific desire to have kids, at least until they meet a woman who seems exciting to have kids with. Evolution seems to have figured that if we have a sex drive and good mate choice, we don't need a specific desire for kids. Contraception makes that heuristic less effective now.

Regarding sperm donation, I think it's a very sensible thing to do, if you qualify; I think it's ethical to allow any resulting kids to contact you when they're a teen if they want to. I think raising one's own kids is often significantly more rewarding than raising adopted kids, just because one's own kids will share so much more of one's cognitive traits, personality traits, quirks, etc, that you can empathize better with them.

The pronatalism argument is something I should write about in more detail later. I don't think that reproducing oneself just in order to maximize total number of geniuses is that compelling an argument -- one could 'offset' genius-reproduction by encouraging other smart people to have kids, promoting pronatalism, etc. 

However, I do think there are some specific benefits of becoming a parent, especially for someone working on AI alignment: (1) you get a LOT of insights into how learning works, if you view babies and kids as little 'machine learning systems', and if you read some developmental psychology, (2) you become much more longtermist and future-oriented, personally concerned about the fate of your kids and future grandkids, and more strongly motivated to minimize X risks, (3) you get a lot more credibility with parents when discussing X risks, longtermism, alignment, etc -- they don't want to be reassured that 'AGI will be safe, trust us!' or 'AGI is a big danger that deserves more attention, trust us!' by childless people with no skin in the game.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T01:05:58.834Z · EA · GW

Pete -- I'd say with babies, most of the frustration comes from trying to multitask when it's not feasible, being anxious about why baby's crying (which gets reduced a lot with subsequent babies), and feeling like one 'should' be doing work stuff when it's not possible. If you don't sleep train a baby, you also end up sleep deprived, which makes you irritable and frustrated. On the other hand, the happiness is very frequent and strong (assuming baby is healthy and well). I'm naturally quite introverted, dysthymic, and irritable, but being around a happy baby makes me playful, delighted, and grounded; often I've been smiling so much at the end of the day that my face muscles hurt (which rarely happens around adults.)

Regarding thinking clearly, the only real cognitive deficits from parenting come from sleep deprivation (which is mostly avoidable), and from having a bit less time for self-care in terms of exercise, nutrition, nootropics, and cognitively stimulating socializing.

Comment by Geoffrey Miller (geoffreymiller) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T01:00:30.813Z · EA · GW

Good questions. In reply:

  1. The time for parenting, in my experience, comes mostly from spending less time watching TV, playing computer games, and reading; doing less traveling and socializing with friends; and working in a different way -- cutting out wasted time on low-priority things, learning to say no to irrelevant distractions, and learning how to collaborate, outsource, and delegate more efficiently. It's very important to have a partner/spouse/co-parent who's smart, efficient, and pragmatic at organizing life, and figuring out good, sustainable divisions of labor.
  2. My older daughter (age 26) was fully grown when I had my younger daughter (age 6 months), so I haven't had the experience of raising two young kids at the same time. However, I was raising my older daughter at the same time as I was helping to care for my teenage step-son, and task-switching between them could be challenging (i.e. not treating a toddler as if they're a teen, or vice-versa).

Regarding sleep: it's absolutely crucial to sleep-train a baby starting around 3-4 months old, using behaviorist learning principles that can be emotionally challenging to implement at first (e.g. ignoring baby crying for certain lengths of time), but that are hugely beneficial in the long run (e.g. having to wake up with them only twice a night, rather than six times a night.)  Once a kid is about 2-3 years old, they'll typically sleep through the night. And remember, young kids sleep MUCH more than adults -- our baby typically goes to sleep around 6:30 pm and wakes around 6:30 am -- plus has four 40-minute naps during the day. So there's quite a bit of time when they're just sleeping in their crib.

Regarding the dangers of working less: I was very worried about this as a post-doc (age 30) having a kid, and being concerned about getting an academic job and tenure. However, I found that having a baby was enormously motivating. The book I'd been procrastinating about writing for 3 years suddenly got written within a fairly short period, because I really needed the advance money to buy a bigger house for the family. My career strategizing, which had been rather self-indulgent and haphazard, got laser-focused on getting a good stable tenure-track job with decent pay and good colleagues -- and it worked. All because being a parent forces one to get very realistic about money, time, job stability, and career goals, very quickly. 

Regarding job and fulfillment: every parent I know says there's a qualitatively new kind of fulfillment that comes from having kids. When my first daughter was born, I immediately thought, 'Why did I waste so much of my life before this, in things that now seem meaningless?'  This might be a trick that evolution plays on our brains, but it works! Also, competent and effective parents can still find plenty of time to socialize, enjoy Game of Thrones, read, relax, etc. It's not nearly as easy to travel or go to Burning Man, but it's possible, especially with older kids.

Comment by Geoffrey Miller (geoffreymiller) on Announcing the Future Fund's AI Worldview Prize · 2022-09-25T17:17:26.741Z · EA · GW

Question about how judges would handle multiple versions of essays for this competition. (I think this contest is a great idea; I'm just trying to anticipate some practical issues that might arise.)

EA Forum has an ethos of people offering ideas, getting feedback and criticism, and updating their ideas iteratively. For purposes of this contest, how would the judges treat essays that are developed in multiple versions?

For example, suppose a researcher posts version 1.0 of an essay on EA Forum with the "Future Fund worldview prize" tag. They get a bunch of useful feedback from other EA Forum members, refine their arguments, revise their essay, and post version 2.0 on EA Forum a couple weeks later (also with the tag). And so forth... through version 5.0 (or whatever). 

1. Which version of the essay would judges be evaluating -- version 1.0, or version 5.0, or would they partly also be judging the extent to which the researcher really strengthened their argument through the feedback?

2. If version 1.0 had been published before this contest was announced (on Sept 23), and version 2.0 was published after that date, would version 2.0 be eligible?

3. This versioning issue might create adverse incentives on forums -- e.g. anyone developing their own competition entry might be incentivized to withhold praise or constructive feedback on someone else's essay, to downvote it, and/or to attack it with unusually incisive or detailed criticism. Or, friends and allies of an essay's author might be incentivized to lavish it with praise, upvote it, and counter-attack against any criticism.

4. This versioning issue might raise issues with credit assignment, prize money distribution, and potential resentments. For example, suppose a researcher's version 1.0 gets really helpful feedback on a couple of key points from other forum members, and incorporates their ideas into version 2.0 (possibly crediting them in some way, but not adding them as co-authors). What happens if version 2.0 then wins a big prize? It seems like the author would keep the prize money, but the people who helped them strengthen it might not get any reward, and might feel aggrieved. (And, perhaps anticipating this effect, they might not offer the helpful feedback in the first place.)

I have no great suggestions for how to solve these issues, but I suspect other people might be wondering about them. 

I guess the simplest solution would be to say: judges will only consider version 1.0 of any essay, so writers better make it as good as they can, before they get any feedback.

Comment by Geoffrey Miller (geoffreymiller) on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-24T23:12:22.711Z · EA · GW

I'm not convinced quality has been declining, but I'm open to the possibility, and it's hard to judge.

Might be useful to ask EA Forum moderators if they can offer any data on metrics across time (e.g. the last few years), such as:

  1. overall number of EA Forum members
  2. number who participate at least once a month
  3. average ratio of upvotes to downvotes
  4. average number of upvoted comments per long post

We could also just run a short poll of EA Forum users to ask about perceptions of quality.

Comment by Geoffrey Miller (geoffreymiller) on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T22:12:13.722Z · EA · GW

tldr: Another way to signal-boost this competition might be through prestige and not just money, by including some well-known people as judges, such as Elon Musk, Vitalik Buterin, or Steven Pinker.

One premise here is that big money prizes can be highly motivating, and can provoke a lot of attention, including from researchers/critics who might not normally take AI alignment very seriously. I agree.

But, if Future Fund really wants maximum excitement, appeal, and publicity (so that the maximum number of smart people work hard to write great stuff), then apart from the monetary prize, it might be helpful to maximize the prestige of the competition, e.g. by including a few 'STEM celebrities' as judges. 

For example, this could entail recruiting a few judges like tech billionaires Elon Musk, Jeff Bezos, Sergey Brin, Tim Cook, Ma Huateng, Ding Lei, or Jack Ma; crypto leaders such as Vitalik Buterin or Charles Hoskinson; and/or well-known popular science writers, science fiction writers/directors, science-savvy political leaders, etc. And maybe, for an adversarial perspective, some well-known AI X-risk skeptics such as Steven Pinker, Gary Marcus, etc.

Since these folks are mostly not EAs or AI alignment experts, they shouldn't have a strong influence over who wins, but their perspectives might be valuable, and their involvement would create a lot of buzz around the competition. 

I guess the ideal 'STEM celebrity' judge would be very smart, rational, open-minded, and highly respected among the kinds of people who could write good essays, but not necessarily super famous among the general public (so the competition doesn't get flooded by low-quality entries.)

We should also try to maximize international appeal by including people well-known in China, India, Japan, etc. -- not just the usual EA centers in US, UK, EU, etc. 

(This could also be a good tactic for getting these 'STEM celebrity' judges more involved in EA, whether as donors, influencers, or engineers.)

This might be a very silly idea, but I just thought I'd throw it out there...

Comment by Geoffrey Miller (geoffreymiller) on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T21:27:24.097Z · EA · GW

Strongly endorse this comment.

If we really take infohazards seriously, we shouldn't just be imagining EAs casually reading draft essays, sharing them, and the ideas gradually percolating out to potential bad actors. 

Instead, we should take a fully adversarial, red-team mind-set, and ask, if a large, highly capable geopolitical power wanted to mine EA insights for potential applications of AI technology that could give them an advantage (even at some risk to humanity in general), how would we keep that from happening?

We would be naive to think that intelligence agencies of various major countries that are interested in AI don't have at least a few intelligence analysts reading EA Forum, LessWrong, & Alignment Forum, looking for tips that might be useful -- but that we might consider infohazards.

Comment by Geoffrey Miller (geoffreymiller) on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T21:18:42.529Z · EA · GW

This is a pretty deep and important point. There may be psychological and cultural biases that make it pretty hard to shift the expected likelihoods of worst-case AI scenarios much higher than they already are -- which might bias the essay contest against such arguments winning, even when they make a logically compelling case that catastrophe is more likely.

Maybe one way to reframe this is to consider the prediction “P(misalignment x-risk|AGI)” to also be contingent on us muddling along at the current level of AI alignment effort, without significant increases in funding, talent, insights, or breakthroughs. In other words, probability of very bad things happening, given AGI happening, but also given the status-quo level of effort on AI safety.
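
One way to write that reframing (my own shorthand, not notation from the prize announcement; E stands for the aggregate level of AI alignment effort):

$$P(\text{misalignment x-risk} \mid \text{AGI}) \;\longrightarrow\; P(\text{misalignment x-risk} \mid \text{AGI},\; E = E_{\text{status quo}})$$

Conditioning explicitly on E makes it clear that the headline probability could drop a lot if effort rises well above the status quo, without anyone's underlying model of AGI itself changing.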

Comment by Geoffrey Miller (geoffreymiller) on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T21:09:45.006Z · EA · GW

I'm partly sympathetic to the idea of allowing submissions in other forums or formats.

However, I think it's likely to be very valuable to the Future Fund and the prize judges, when sorting through potentially hundreds or thousands of submissions, to be able to see upvotes, comments, and criticisms from EA Forum, Less Wrong, and Alignment Forum, which is where many of the subject matter experts hang out. This will make it easier to identify essays that seem to get a lot of people excited, and that don't contain obvious flaws or oversights.

Comment by Geoffrey Miller (geoffreymiller) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-24T00:38:28.418Z · EA · GW

Yep, fair enough. I was trying to dramatize the most vehement anti-censorship sentiments in a US political context, from one side of the partisan spectrum. But you're right that there are plenty of other anti-censorship concerns from many sides, on many issues, in many countries.

Comment by Geoffrey Miller (geoffreymiller) on (My suggestions) On Beginner Steps in AI Alignment · 2022-09-23T21:30:27.898Z · EA · GW

These are helpful suggestions; thanks. 

They seem aimed mostly at young adults starting their careers -- which is fine, but limited to that age-bracket.

It might also be helpful for someone who's an AI alignment expert to suggest some ways for mid-career or late-career researchers from other fields to learn more. That can be easier in some ways, harder in others -- we come to AI safety with our own 'insider view' of our field, and those views may entail very different foundational assumptions about human nature, human values, cognition, safety, likely X risks, etc. So, rather than learning from scratch, we may have to 'unlearn what we have learned' to some degree first.

For example, apart from young adults often starting with the same few bad ideas about AI alignment, established researchers from particular fields might often start with their own distinctive bad ideas about AI alignment -- but those might be quite field-dependent. Psych professors like me, for instance, might have different failure modes in learning about AI safety than economics professors, or moral philosophy professors.

Comment by Geoffrey Miller (geoffreymiller) on Summary: the Global Catastrophic Risk Management Act of 2022 · 2022-09-23T21:15:49.504Z · EA · GW

It's hard to judge whether this bill will go anywhere (I hope it does!); it seems to have gotten very little press coverage.

If we can't get a strong bipartisan consensus on reducing GCRs, then our governance system is broken.

Comment by Geoffrey Miller (geoffreymiller) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-23T21:10:32.982Z · EA · GW

This is a very good post that identifies a big PR problem for AI safety research. 

Your key takeaway might be somewhat buried in the last half of the essay, so let me see if I can draw out the point more vividly (and maybe hyperbolically):

Tens (hundreds?) of millions of centrist, conservative, and libertarian people around the world don't trust Big Tech censorship, because it's politically biased in favor of the Left, and it exemplifies a 'coddling culture' that treats everyone as neurotic snowflakes, and that treats offensive language as a form of 'literal violence'. Such people see that a lot of these lefty, coddling Big Tech values have soaked into AI research, e.g. the moral panic about 'algorithmic bias', and the increased emphasis on 'diversity, equity, and inclusion' rhetoric in AI conferences.

This has created a potentially dangerous mismatch in public perception between what the more serious AI safety researchers think they're doing (e.g. reducing X risk from AGI), and what the public thinks AI safety is doing (e.g. developing methods to automate partisan censorship, to embed woke values into AI systems, and to create new methods for mass-customized propaganda).

I agree that AI alignment research that is focused on global, longtermist issues such as X risk should be careful to distance itself from 'AI safety' research that focuses on more transient, culture-bound, politically partisan issues, such as censoring 'offensive' images and ideas. 

And, if we want to make benevolent AI censorship a new cause area for EA to pursue, we should be extremely careful about the political PR problems that would raise for our movement.

Comment by Geoffrey Miller (geoffreymiller) on Crypto 'oracle protocols' for AI alignment with real-world data? · 2022-09-23T18:49:55.796Z · EA · GW

This is a helpful comment; thanks. 

I'm also somewhat skeptical about whether Chainlink & other oracle protocols can really maximize reliability of data through their economic incentive models, but at least they seem to be taking the game theory issues somewhat seriously. 

But then, I'm also very skeptical about the reliability of a lot of real-world data from institutions that also have incentives to misrepresent, overlook, or censor certain kinds of information (with Google search results being a prime example).

I take your point about the difficulty of scaling any kind of data reliability checks that rely on a human judgment bottleneck, and the important role that AIs might play in helping with that.

Thanks for the suggestion about looking at data poisoning attacks!

Comment by Geoffrey Miller (geoffreymiller) on The religion problem in AI alignment · 2022-09-23T16:46:24.806Z · EA · GW

Zach -- I may be an AI alignment newbie, but I don't understand how 'alignment' could be 'mostly not sensitive to what humans want'. I thought alignment with what humans want was the whole point of alignment. But now you're making it sound like 'AI alignment' means 'alignment with what Bay Area AI researchers think should be everyone's secular priorities'.

Even CEV seems to depend on an assumption that there is a high degree of common ground among all humans regarding core existential values -- Yudkowsky explicitly says that CEV could only work 'to whatever extent most existing humans, thus extrapolated, would predictably want the same things'. If some humans are antinatalists, or Earth First eco-activists, or religious fundamentalists yearning for the Rapture, or bitter nihilists, who want us to go extinct, then CEV won't work to prevent AI from killing everyone. CEV and most 'alignment' methods only seem to work if they sweep the true religious, political, and ideological diversity of humans under the rug.

I also see no a priori reason why getting from (1) AI killing everyone to AI not killing everyone would be easier than getting from (2) AI not killing everyone to AI doing stuff everyone thinks is great. The first issue (1) seems to require explicitly prioritizing some human corporeal/body interests over the brain's stated preferences, as I discussed here.

Comment by Geoffrey Miller (geoffreymiller) on The religion problem in AI alignment · 2022-09-23T16:35:50.478Z · EA · GW

zdgroff -- that link re. specific preferences to the 80k Hours interview with Stuart Russell is a fascinating example of what I'm concerned about. Russell seems to be arguing that either we align an AI system with one person's individual stated preferences at a time, or we'd have to discover the ultimate moral truth of the universe, and get the AI aligned to that. 

But where's the middle ground of trying to align with multiple people who have diverse values? That's where most of the near-term X risk lurks, IMHO -- i.e. in runaway geopolitical or religious wars, or other human conflicts, amplified by AI capabilities. Even if we're talking fairly narrow AI rather than AGI. 

Comment by Geoffrey Miller (geoffreymiller) on Crypto 'oracle protocols' for AI alignment with real-world data? · 2022-09-23T16:27:40.734Z · EA · GW

Thanks for this suggestion about fetch.ai; I'd vaguely heard of them, but wasn't sure what they were up to. 

I know that SingularityNET (by Ben Goertzel) is building some kind of AI blockchainy thing on the Cardano protocol, but I haven't ever understood quite how it works.

Comment by Geoffrey Miller (geoffreymiller) on What Do AI Safety Pitches Not Get About Your Field? · 2022-09-21T21:36:20.893Z · EA · GW

Well, human brains are about three times the mass of chimp brains, diverged from our most recent common ancestor with chimps about 6 million years ago, and have evolved a lot of distinctive new adaptations such as language, pedagogy, virtue signaling, art, music, humor, etc. So we might not want to put too much emphasis on cumulative cultural change as the key explanation for human/chimp differences.

Comment by Geoffrey Miller (geoffreymiller) on What Do AI Safety Pitches Not Get About Your Field? · 2022-09-21T21:32:32.687Z · EA · GW

Aris -- great question. 

I'm also in psychology research, and I echo your frustrations about a lot of AI research having a very vague, misguided, and outdated notion of what human intelligence is. 

Specifically, psychologists use 'intelligence' in at least two ways: (1) it can refer (e.g. in cognitive psychology or evolutionary psychology) to universal cognitive abilities shared across humans, but (2) it can also refer (in IQ research and psychometrics) to individual differences in cognitive abilities. Notably 'general intelligence' (aka the g factor, as indexed by IQ scores) is a psychometric concept, not a description of a cognitive ability. 

The idea that humans have a 'general intelligence' as a distinctive mental faculty is a serious misunderstanding of the last 120 years of intelligence research, and makes it pretty confusing when AI researchers talk about 'Artificial General Intelligence'.

(I've written about these issues in my books 'The Mating Mind' and 'Mating Intelligence', and in lots of papers available here, under the headings 'Cognitive evolution' and 'Intelligence': 

Comment by Geoffrey Miller (geoffreymiller) on The religion problem in AI alignment · 2022-09-21T19:38:38.275Z · EA · GW

Danielle -- good points. 

  1. For net increase or decrease in religiosity in the next decades, you're right that we'd want a more precise demographic model of births, deaths, rates of vertical vs. horizontal cultural transmission for specific religions, etc.
  2. re. Hinduism, I resonate with your sense that lots of Hindus are less inclined to think they're in the 'one true religion' than people in other religions. But I have low confidence in that -- I've only spent 2 weeks in India, have interacted mostly with highly educated Indians, and don't know much about Hindu vs. Muslim conflicts over history, or what they reveal about degree of religious exclusivity.
  3. The issue of 'earning' one's way into heaven has been a source of much contention over the centuries, e.g. the Catholic emphasis on good works vs. the Protestant emphasis on faith. Certainly for religious people who emphasize moral behavior in this life, there might be minimal conflict between religious values and EA values. However, many religious people (perhaps especially outside the US/UK/Europe) might put a heavier emphasis on the afterlife (e.g. in cases of religious martyrdom.)

Comment by Geoffrey Miller (geoffreymiller) on Aligning AI with Humans by Leveraging Legal Informatics · 2022-09-21T19:31:40.081Z · EA · GW

Thanks for this reply; it all makes sense.

Regarding cross-cultural universals, I think there's some empirical research on cross-cultural universals in which kinds of violent or non-violent crime are considered worst, most harmful, and most deserving of punishment. I couldn't find a great reference for that in a cursory lit search, but there is related work on the evolutionary psychology of crime and criminal law that might be useful, e.g. work by Owen Jones: https://onlinelibrary.wiley.com/doi/abs/10.1002/9780470939376.ch34

Also David Buss (UT Austin) has written a lot about violent crime, esp. murder, e.g. https://labs.la.utexas.edu/buss/files/2015/09/Evolutionary-psychology-and-crime.pdf

Comment by Geoffrey Miller (geoffreymiller) on The religion problem in AI alignment · 2022-09-21T19:23:32.871Z · EA · GW

Oh cool, thanks for the link to the EA for Jews facebook group. Sorry I missed it!