Posts

Sharmake's Shortform 2022-05-07T00:33:19.110Z

Comments

Comment by Sharmake on All moral decisions in life are on a heavy-tailed distribution · 2022-07-05T17:33:07.622Z · EA · GW

A note: this is my subjective perspective on morality, not an objective one.

Basically, if you're concerned about everyday moral decisions, you are almost certainly too worried and should stop worrying.

Focus on the big decisions for a moral life, like career choices.

You are ultimately defined morally by a few big actions, not everyday choices.

This means doing quite a bit of research to find opportunities, and mostly disregarding your intuition.

On downside risk, a few interventions are almost certainly very bad, and cutting out the most toxic incentives in your life is enough. Don't fret about everyday evils.

Comment by Sharmake on Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies · 2022-06-30T22:40:35.034Z · EA · GW

The biggest reason the Easterlin Paradox isn't really a paradox comes from looking at what has changed the least: human biology and nature, especially the brain's reward system, have changed very little over centuries of progress. So there is no real paradox in the fact that having more stuff doesn't cause the brain to update its sense of well-being nearly as much.

So progress studies needs to push for more research into genetic engineering, nanotechnology, whole brain emulation, and more.

Comment by Sharmake on What success looks like · 2022-06-28T21:20:16.601Z · EA · GW

Because it's possible that even in unstable, diverse futures, catastrophe can be avoided. As for the long-term future after the Singularity, that's a question we will deal with when we get there.

Comment by Sharmake on Kurzgesagt - The Last Human (Longtermist video) · 2022-06-28T21:04:44.593Z · EA · GW

Basically, yes. Assuming civilization survives the Singularity, existential risk drops to effectively zero, since it's almost impossible to destroy an interstellar civilization.

Comment by Sharmake on What success looks like · 2022-06-28T20:36:08.778Z · EA · GW

I'd argue it's even less stable than nukes, but one reassuring point: there will ultimately be a very weird future with thousands, millions, or billions of AIs, posthumans, and genetically engineered beings, and the borders between them will be very porous and dissolvable, which is important to keep in mind. Also, we don't need arbitrarily long-lasting alignment; aligning AI for 50-100 years is enough. Ultimately nothing needs to be stable over the long term; we just need to get through the short-term chaos to stability.

Comment by Sharmake on What success looks like · 2022-06-28T16:59:49.891Z · EA · GW

My mainline best-case or median-optimistic scenario is basically a partial version of number 1, where aligning AI turns out somewhat easier than it looks today, plus an acceleration of transhumanism and a multipolar world that together dissolve the boundaries between species and the human-AI divide. Thus, by the end of the Singularity, things are extremely weird and deaths are in the millions or tens of millions due to wars.

Comment by Sharmake on How accurate are Open Phil's predictions? · 2022-06-16T16:03:20.049Z · EA · GW

On criticism number 3, that's just using approximated Solomonoff induction, which is indeed a valid method. Of course, we do have biases that lead us astray, which is the problem with approximating Solomonoff induction in practice.

Comment by Sharmake on Lifeguards · 2022-06-11T13:53:44.833Z · EA · GW

The real problem is that in large-scale problems like AI safety, progress is usually continuous, not discrete. Thus we can talk about partial alignment, which realistically is the best EA/LessWrong can do. I don't expect them to ever get AI to be particularly moral or to avoid destabilizing society, but existential catastrophe is likely to be avoided.

Also, I'm going to steal part of Vaidehi Agarwalla's comment and improve upon it here:

Your post links to 2 articles from Eliezer Yudkowsky's/MIRI's perspective on AI alignment, which is one (but importantly, not the only) perspective in alignment research, and an outlier in its direness. We have good reason to believe this is caused by an unnecessarily discrete framing of the AI alignment problem.

Comment by Sharmake on Responsible/fair AI vs. beneficial/safe AI? · 2022-06-03T11:30:23.498Z · EA · GW

The big differences arise in two areas: Politics and the question of AI timelines/takeoff speed.

The Responsible/Fair AI faction is political to the hilt, and on the leftist side of politics to boot. The beneficial/safe AI faction is non-political and focuses more on the abstract side of AI.

The other difference is AI timelines/takeoff speed. The Responsible/Fair AI faction views takeoff as not happening and AGI as more than 50-100 years away. The beneficial/safe AI faction views a hard takeoff as fairly likely and AGI as only 30-50 years away.

Comment by Sharmake on What should we actually do in response to moral uncertainty? · 2022-05-31T17:45:34.503Z · EA · GW

The real issue is unrealistic levels of coordination and an assumption that moral objectivism is true. While it is an operating assumption in order to do anything in EA, that doesn't mean it's true.

Comment by Sharmake on Introducing Asterisk · 2022-05-27T15:45:49.320Z · EA · GW

Is there a link to your website? I didn't see it in this post.

Comment by Sharmake on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-05-22T01:28:31.194Z · EA · GW

The surrender was really the Emperor having a way out, giving the "most cruel bomb" statement in response to a discontinuous jump in destructive power. Even so, a group of 20-year-olds tried to continue the war, and the reason it failed was that the Emperor chose surrender; to Japan, the Emperor was basically as important as the God-Emperor of Mankind is to the Imperium of Man in 40k. Up to that point, Japan still couldn't surrender despite things steadily getting worse, and I think that's because everything got worse continuously: there was no single moment sharp enough to act as a rupture and force surrender into their heads.

I'll grant you this, though: this scenario isn't inevitable. Obviously, without hindsight or nukes it's really hard to deal with, but it may not have happened at all.

Comment by Sharmake on 10 non-EA books you might find interesting · 2022-05-21T00:30:22.709Z · EA · GW

I must say, I wince at one book here, and I'll explain why.

On The Emperor's New Mind, I see a wealth of wrongness in the book. The misuse of Gödel's incompleteness theorems is astounding: the problem is that "human understanding" is really just a way of saying there are hidden inconsistencies in your proof system, or that it isn't complete (the claim would require humans to solve uncountably infinitely many proofs in finite time, which is a very exceptional claim).

The issue with the Chinese Room is that a lookup table that understands Chinese is only physically impossible, due to storage limits, information limits, and thermodynamic issues with heat dissipation, not logically impossible. If you're willing to accept that thermodynamics is utterly broken, then you can arbitrarily add more energy to get more cases into the lookup table until it covers Chinese, or arbitrarily push efficiency beyond 100% until you get every rule down with a minimum of computing. The Chinese Room is a philosophical toy, nothing more.

We know what microtubules do, and they're not quantum. Actually, the quantum brain hypothesis has severe problems, and is basically an attempt to claim there's a soul in a physical sense.

Re determinism and quantum mechanics, there is a variant called superdeterminism which says there is no free will and all actions are pre-staged. It is just as computational as, if not more so than, the free-will quantum version.

This book is the perfect example of how expertise in one area doesn't equal expertise in all areas.

Comment by Sharmake on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-05-20T17:27:46.526Z · EA · GW

The big difference is that Japan wouldn't even exist as a nation or culture, due to Operation Downfall, starvation, and insanity. The reason is that without nukes, the invasion of Japan would have begun, and two of Japan's most important characteristics were an entire generation raised under propaganda, which is enough to change cultural values, and a near fanaticism about honorable death. Death and battle were frankly over-glorified in Imperial Japan, and soldiers would virtually never surrender. The result would have been the non-existence of Japan within several years.

Comment by Sharmake on A hypothesis for why some people mistake EA for a cult · 2022-05-13T23:22:44.616Z · EA · GW

The real issue is that AI tends to be the public-facing side of EA, and one where there are a lot of existential claims that sound similar to cultish claims, like "If AGI happens, we'll go extinct." We really need specific cause areas for new EAs, to make EA less of a personal identity.

Comment by Sharmake on Sharmake's Shortform · 2022-05-07T14:40:13.738Z · EA · GW

My guess is that the tech for nukes isn't dual-use or easily hidden, unlike other existential risks, because weapons require enrichment levels so high that it's easy to distinguish them from peaceful uses, and it probably won't be so easy that every state can make a nuke. That said, I agree with the other parts of your comment.

Comment by Sharmake on Sharmake's Shortform · 2022-05-07T00:33:19.214Z · EA · GW

The war in Ukraine that started on February 24th has some important consequences for EA. Specifically, the likelihood of nuclear war within 2 years is still very low, despite Russian threats of nukes. On the other hand, the long-term existential risk from nuclear warfare has increased. This is because Russia invaded Ukraine and used its nuclear arsenal as a shield, which deals a significant blow to arms control, creates a more unstable international order, and will incentivize more states to acquire nukes. What this means could be expanded on, though.

Comment by Sharmake on Virtue signaling is sometimes the best or the only metric we have · 2022-05-06T13:41:49.888Z · EA · GW

I agree somewhat, but I think this represents a real difference between rationalist communities like LessWrong and the EA community. Rationalists like LessWrong focus on truth; Effective Altruism focuses on goodness. Quite different goals when we get down to it.

While Effective Altruism uses a lot more facts than most moral communities, it is a community focused on morality, and its lens is essentially "weak utilitarianism": they don't accept the strongest conclusions of utilitarianism, but unlike deontologists, there are no absolute dos or don'ts.

The best example is: what if P=NP were proven true? It probably isn't, but I will use it as an example of the difference between rationalists and EAs. Rationalists would publish the result for the world, focusing on the truth. EAs would not, because one of the things we'd then be able to do efficiently is break encryption. Essentially, this deals a death blow to any sort of security on computers; it's a hacker's paradise. EAs would focus on how bad an information hazard it would be, and thus, for the good of the world, they wouldn't publish it.

So what are all those words for? To illustrate the difference in point of view between rationalists like LessWrong and EAs on the prioritization of truth vs. goodness.