Why the Orthogonality Thesis's veracity is not the point

post by Etoile de Scauchy · 2020-07-23T15:40:28.015Z · score: 1 (1 votes)

Contents

  It's not THAT important to be right
  The communication about it
  Conclusion

When the topic of the possibility of AGI changing the world comes up, three opinions usually appear:

1. AGI is not a credible prospect, so there is nothing to worry about.
2. AGI is credible, but a sufficiently intelligent system would naturally converge toward beneficial goals (the Orthogonality Thesis is false).
3. AGI is credible, and its level of intelligence is independent of its goals (the Orthogonality Thesis is true).

In the EA environment, the classic response to the first opinion is twofold: on the one hand, to emphasize that a growing body of evidence suggests AGI is not so unrealistic; on the other hand, to debunk the implicit biases underlying such dismissals. (From a Bayesian perspective, this debate approach is like trying to improve both the likelihood and the prior of the interlocutor.)

The key next argument is that if you accept AGI as credible, you should worry about its consequences: by definition, AGI could have a major impact on humanity, so even a mere 5% probability, multiplied by that enormous impact, yields a large expected effect.

This major impact (also known as the technological Singularity) thus forces a choice between the second and third views, that is, to deny or embrace the Orthogonality Thesis. This is why I am writing this post: we don't have to, and we shouldn't, pick a side.

It's not THAT important to be right

Let's imagine the futures under each of these two hypotheses, if AGI became real:

- If the Orthogonality Thesis is false, a sufficiently intelligent AGI converges toward beneficial goals, and the main task is to survive the transition by preventing existential risks along the way.
- If the Orthogonality Thesis is true, an AGI can combine any level of intelligence with any goal, so on top of preventing existential risks we must also ensure that its goals are actually beneficial.

It is very important to note that in both cases the question of existential risks remains relevant (just as important and just as likely), so this part of the fight is identical.

The difference is whether preventing existential risks is sufficient to ensure the emergence of a truly beneficial AGI. So, unless we are very confident that the Orthogonality Thesis is false, we cannot ignore the scenario of a dystopian AGI.

As you may have noticed, this is the same argument that tells us not to ignore the scenario of AGI emerging: it likewise tells us not to ignore the plausibility of the Orthogonality Thesis, even without granting it much credence.

The communication about it

It seems to me that the Orthogonality Thesis is popular among EA people, and its popularity tends to grow. That's good news: the more the issue is recognized, the more likely it is to be addressed. One explaining factor could be that, from a computer-science or mathematical point of view, the Orthogonality Thesis looks like a triviality, because we can plug in any objective function we want.
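To illustrate why it looks trivial, here is a minimal sketch of my own (not from the post; the function names and toy objectives are hypothetical): a generic optimizer whose search competence is entirely independent of the objective function it is handed.

```python
import random

def hill_climb(objective, start, steps=1000, step_size=0.1):
    """Generic optimizer: its competence does not depend on its goal."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        # Keep the candidate only if it scores better on the given objective.
        if objective(candidate) > objective(x):
            x = candidate
    return x

# Toy stand-ins for two entirely unrelated goals:
def paperclip_objective(x):
    return -(x - 3.0) ** 2   # peak at x = 3

def wellbeing_objective(x):
    return -(x + 7.0) ** 2   # peak at x = -7

# The very same search procedure pursues either goal equally well:
print(hill_climb(paperclip_objective, start=0.0))  # ~ 3.0
print(hill_climb(wellbeing_objective, start=0.0))  # ~ -7.0
```

Nothing in the search loop constrains what it "wants": swap the objective and the same intelligence serves a different goal, which is exactly the intuition behind the thesis.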

The real problem is not being confident about the Orthogonality Thesis, since that confidence leads to the right mindset about the risks; the problem is displaying too much confidence about it. I personally don't put a very high, or very low, probability on the Orthogonality Thesis. Many people could hold a similar opinion and still agree that preventing the issue is highly important.

For such people, considering the Orthogonality Thesis unlikely could lead them to refuse to think about it at all, which is a loss for EA, especially given its need to recruit. And this loss is not even justified, because there is no need to be fully convinced by the Orthogonality Thesis to see why it matters.

Without a habit of prudence, such people could form a negative image of the defenders of the Orthogonality Thesis and of the EA movement in general. I think this is particularly true for the non-scientific public.

Conclusion

True or not, the Orthogonality Thesis is a useful mindset and tool for discussing AGI. However, presenting it only as a literal orthogonality, with high confidence, could lose the support of newcomers and harm EA unity.

Besides, the question "Is the Orthogonality Thesis true?" does not necessarily have a 0-or-1 answer. Maybe there is truth on both sides, and high intelligence could be positively correlated with beneficial goals, but only from an asymptotic point of view. Maybe not. The point is: it's not the point.
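One hedged way to make that asymptotic reading precise (my own formalization, not the author's; here B denotes "the agent's goals are beneficial" and C its capability level):

```latex
% Hypothetical formalization of "truth on both sides":
% orthogonality holds at every finite capability level,
% yet benevolence dominates only in the limit.
\forall c < \infty:\ \Pr(B \mid C = c) < 1,
\qquad \text{while} \qquad
\lim_{c \to \infty} \Pr(B \mid C = c) = 1.
```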
