Posts

Love and AI: Relational Brain/Mind Dynamics in AI Development 2022-06-21T07:09:18.041Z

Comments

Comment by JeffreyK on My notes on: A Very Rational End of the World | Thomas Moynihan · 2022-06-26T18:19:00.059Z · EA · GW

Thanks for this, very helpful for a post-religious person like myself.

Comment by JeffreyK on Should we produce more EA-related documentaries? · 2022-06-26T00:23:49.423Z · EA · GW

I think the world would be very interested in both EA and Longtermism. AI continues to grow in public awareness. A few thoughts from other comments: If a film is well funded, so the interview subjects feel they will look good in a good film, there's a high chance of interviewing some big names. There are plenty of documentary crew people always looking for work; what you need most of all is an experienced documentary producer who will know how to line up cast and crew. Often the producing partnership would be an EA person co-producing with a veteran producer; that way you have an EA insider at the top. I think a lot of the concepts in EA, AI, and Longtermism mix really well, and you could easily cover a lot of it with big names being interviewed, quotes from good books, and clips of interesting material, much of which could be borrowed with permission from other films and news reports.

And finally, on the question of how EA/AI/Longtermist should it be? Well, that's an easy one: this is EA, and if an EA grantor funds it, it should be about what the grantor is meant to give money for. If we're not the ones to do it, who is? You go with what you know. EA makes EA films; really, what this question is asking is how EA can make a really interesting film. Talented writers and filmmakers can do it if you give them the money.

Comment by JeffreyK on EA Creatives and Communicators Slack · 2022-06-24T21:04:37.970Z · EA · GW

Hello, I'm Jeffrey. I was an artist, then became a social activist/community organizer who gathered, organized, and coached/mentored a lot of artists in Manhattan over 13 years. I'd love to join the Slack. Thanks.

Comment by JeffreyK on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-24T05:51:15.466Z · EA · GW

Thanks!

Comment by JeffreyK on The Maker of MIND · 2022-06-24T01:53:36.378Z · EA · GW

Thank you for this story, very revealing. I'll need some time to digest it. Let's hear more. :)

Comment by JeffreyK on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-23T00:55:49.671Z · EA · GW

Can you point me to more writing on this and tell me the history of it?

Comment by JeffreyK on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-23T00:49:16.143Z · EA · GW

..."Even now there is more art created then I can consume by many orders of magnitude, and it is embarrassing for the creators as well as for me".   ...besides the typo, I don't actually agree with this sentence. For me it would be like saying, "There are too many people I don't know talking to each other"...I was never meant to know everyone, or to hear every conversation. Some art is global, much art is local. Your child's drawing on the refrigerator is what I call "familial art" it's value is mainly to the parents, to them it is precious, to everyone else it just looks like millions of other kid drawings, cute but not remarkable in any way. It's a hyper-local art. Much art is cultural only understood by the people of that culture. Just as that special form of humor you and your best friend have, it's only for the two of you. There can never be enough art to satisfy the human need for beauty, just as there can never be enough human conversations. 

Comment by JeffreyK on EA-break, EA-slow · 2022-06-22T18:35:51.918Z · EA · GW

Fantastic!  It's well known that saving the world is tiring. 

Comment by JeffreyK on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-22T05:47:12.747Z · EA · GW

turchin! You're just killing me with these ideas; I'm absolutely blown away and excited by what I'm reading here. I commented above on Harrison Durland's comment questioning resurrecting mammoths and already-dead human brains. Is it that if you happened to die before EA/Longtermists develop the tech to preserve your brain for future re-animation, then you're just screwed out of being in on the future? Or, from an idea I developed in one of my stories for a different purpose: what if every single movement on the earth created waves which emanated outward into space...every motion, every sound, every word uttered, and even every brain wave...an indelible set of waves forever expanding outward...and we develop a reader...and then a player...we could watch scenes of everything that ever happened. Eventually we could reconstruct every brain wave, so we could rebuild and then re-animate all humans who ever lived. Wow. Is this the resurrection? Is this the merging of science and religion, atheism and spirituality?

Comment by JeffreyK on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-22T05:37:02.837Z · EA · GW

What if we can develop future technology to read all the vibrations emanated from the earth across all of human history? The earliest ones will be farther out, the most recent ones near. Then we can filter through them and recreate everything that ever happened on earth, effectively watching the past...and maybe even down to the level of the brain waves of each human. Thus we could resurrect all previously dead humans by gathering their brain waves and everything they ever said...presumably, once re-animated, they could regain memory of things missed and reconstruct themselves further. Of course we could do this with all extinct animals too.

This really becomes a new version of heaven. For the religious: what if this was G-d's plan, not to give us a heaven but for us to create one with the minds we have (or have been given), this being the resurrection? Maybe G-d's not egoistic and doesn't care if we acknowledge the originating gift, meaning atheism is just fine. We do know love doesn't seek self-benefit, so that would fit well, since "G-d is love". I like being both religious and atheist at the same time, which I am.

I would like to thank the author, turchin, for inspiring this idea in me, for it is truly blowing my mind. Please let me know of other writings on this.

Comment by JeffreyK on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-22T05:15:20.252Z · EA · GW

True altruism vs. ulterior motive for social gain, as you mention here, as well as legible vs. illegible above...I am less cynical than some people...I often receive from people who I imagine seek only my good...and I do for others, truly only seeking their good...usually...the side benefits that accrue occasionally in a community are an echo of goodness coming back to you...of course people have a spectrum of motivations; some seek the good, some the echo...but both are beneficial, so who cares? Good-doing shouldn't get hung up on motivations; they are trivial...I think they are mostly a personal internal transaction...you may be happier inside if you are less self-seeking at your core...but we all have our needs and are evolving.

Comment by JeffreyK on Effective altruism is similar to the AI alignment problem and suffers from the same difficulties [Criticism and Red Teaming Contest entry] · 2022-06-22T04:54:18.425Z · EA · GW

Typo: "Even now there is more art created when I can consume by many orders of magnitude" (should be "than"?)

Comment by JeffreyK on Love and AI: Relational Brain/Mind Dynamics in AI Development · 2022-06-22T04:29:03.299Z · EA · GW

Also, let me add from a private message exchange: in my previous comment on love seeking to benefit another rather than use them to benefit yourself, let's get a little meta. This whole EA movement seeks to benefit humanity...and especially if seeking to benefit humans far in the future, that is really, really selfless...EA people today will work hard to benefit someone they don't know 400 years from now. Wow, that is about as lovingly unselfish as it gets. In other words, this is a massively love-based movement, even though it doesn't see itself in the mirror that way. When I searched the topic tags you can add to your post, I searched all 740 of them, and the word love is not found there. From a meta perspective it says a lot when love is your core motivation yet you never even speak of love. Psychologists might have some interesting comments on that. I have a more urgent one: process begets process...if I want to model being open about your shit to people, I should be open about my shit in front of them. You don't see many beer-bellied personal trainers. If my whole huge goal is to align AI so it doesn't destroy humanity, and I'm motivated by love to do that, and I agree that pretty much all humans just really want to love and be loved, and that's just how our brains are wired...wouldn't it kinda seem obvious that trying to print/copy a human brain into a digital version might include the very thing at the core driving actual human brains? Namely, love. And essentially, as the last line in my post: if AI had love for humans as its motivation, if it was the new "man's best friend"...all would be well. EA/Longtermism would be a triumphant success.

Comment by JeffreyK on Love and AI: Relational Brain/Mind Dynamics in AI Development · 2022-06-22T04:17:39.374Z · EA · GW

Commenting on my post to add to it: In Ajeya Cotra's post on Holden's Cold Takes, "Why AI alignment could be hard with modern deep learning," she speaks of the three paths deep learning could go down (Saint, Schemer, Sycophant), with humans guiding the process as well as, possibly, other AIs keeping the AIs "in check". But they relate to each other at odds and with an assumption of manipulativeness...why not place them in an orientation of seeking the other's benefit? This is an aspect of my definition of love: love is seeking to benefit another. The opposite of love is seeking to benefit yourself at the expense of another.

Comment by JeffreyK on Ways money can make things worse · 2022-06-21T17:47:09.665Z · EA · GW

Often, if there's no money, we are forced to do things the hard way, which has the effect of slow and strong building...having money opens faster channels, which sometimes are too easy and don't build much. But frequently money opens doors you didn't have access to. This is a classic problem. I find leadership character is the biggest factor: knowing when to wait and build slower, and knowing when to go ahead and pay to move faster or at higher quality.

For example, you can get people to do things because they are passionate to volunteer...now you have a solid compatriot who does it from the heart...maybe the quality is better...for a while...later, because they're not getting paid, the quality may go down as their life situation forces them to prioritize earning income. A movement like this with many young people will have a natural evolution: when everyone is young and in school, all will work with passion for free...later many get married...have a child...work/life balance gets harder...now you probably should pay them...but now you also have more funding, so it kind of works out.

The character issue is biggest right at the juncture between being a poor student and a well-funded org...you can finally go out to eat and maybe buy a car...with many decisions, just because you can doesn't mean you should...but sometimes you should. Since this is EA, I'm sure character won't be a major factor. Coaching recent windfall recipients is very helpful. It could be a grant-funder technique: give a grant and include a coach for free.

Comment by JeffreyK on What is the overhead of grantmaking? · 2022-06-16T03:08:20.157Z · EA · GW

This kind of reminds me of the music biz: A&R reps would scour clubs for bands and sign them to a record deal. Most of the bands would lose money, but a few would hit it big, paying for the rest. It's similar to VC funds funding startups. It's the nature of the beast. In this way you can also understand that there will always be less and more effective charities, and while seeking to be more efficient is good, the lesser still play their role by populating the range and hopefully evolving forward...for every starting player there always needs to be a bench.

Comment by JeffreyK on Community builders should focus more on supporting friendships within their group · 2022-06-16T02:54:00.253Z · EA · GW

Wonderful post...I was in a community similar to EA, with lots of local chapters meeting all over and online forums...and we found this friendship thing happening that was both personally fulfilling and also led to new things being created, exactly as you've posted here, and we coined the term "Generative Friendship" to describe it. It's a real, genuine friendship, but it also generates something good in the world. I've always loved this term.