The Superintelligence Dilemma: Can We Control AI or Are We Doomed?

Controlling Superintelligence: How Likely Is It?

The idea of artificial intelligence overthrowing humanity has been discussed for decades, and programs like ChatGPT have renewed those fears.

So, how likely is it that we could control a high-level computer superintelligence? Scientists ran the numbers in 2021, and the answer was: almost certainly not. The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence that we can analyze. But if we are unable to comprehend it, we cannot create such a simulation in the first place.

The authors suggest that rules such as “don’t harm people” cannot be set if we do not understand the kinds of scenarios an AI is going to come up with. Once a computer system is working on a level beyond the scope of our programmers, we can no longer set limits.

“A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers wrote in 2021. “This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem, put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one.

As Turing proved with some clever math, while we can know the answer for some specific programs, it is logically impossible to find a method that tells us the answer for every possible program that could ever be written.
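To make that logic concrete, here is a minimal Python sketch of Turing’s argument. The function names halts and paradox are illustrative, not code from the study, and halts is precisely the thing that cannot exist:

```python
# A sketch of Turing's diagonalization argument (illustrative only).
# Suppose, for contradiction, that a perfect halting checker existed:

def halts(program, argument) -> bool:
    """Hypothetical: True iff program(argument) eventually stops."""
    raise NotImplementedError("Turing proved no general version can exist")

# With such a checker, we could write a program that does the opposite
# of whatever halts() predicts about it:

def paradox(program):
    if halts(program, program):  # predicted to stop?
        while True:              # ...then loop forever instead
            pass
    # predicted to loop forever? ...then stop immediately

# Running paradox on itself is contradictory: if halts(paradox, paradox)
# returned True, paradox would loop forever; if it returned False,
# paradox would halt. Either answer is wrong, so no general halts()
# can be written.
```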

And this brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once. Any program written to stop AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not; it is mathematically impossible for us to be absolutely sure either way, which means the AI is not containable.
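The study’s core move can be sketched the same way: any perfect containment check could be repurposed to solve the halting problem. Below is a minimal sketch under that assumption; is_harmful, wrap, and harm are hypothetical illustrative names, not the paper’s actual construction:

```python
# A sketch of the containment reduction (illustrative names only).
# Suppose a perfect safety checker existed:

def is_harmful(program) -> bool:
    """Hypothetical: True iff running program() ever causes harm."""
    raise NotImplementedError("the study argues this cannot exist in general")

def harm():
    """Stand-in for any action that harms humans."""
    pass

def wrap(program, argument):
    def wrapped():
        program(argument)  # finishes only if program(argument) halts...
        harm()             # ...so harm occurs exactly when it halts
    return wrapped

# is_harmful(wrap(program, argument)) would equal halts(program, argument):
# a perfect harm checker would decide the halting problem, which Turing
# proved impossible. Hence no perfect containment check can exist.
```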

“In effect, this makes the containment algorithm unusable,” said computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching AI some ethics and telling it not to destroy the world, something no algorithm can be absolutely certain of doing, the researchers say, is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.

The 2021 study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument goes that if we are not going to use it to solve problems beyond the scope of humans, why create it at all?

And if we are going to push ahead with artificial intelligence, we may not even know when a superintelligence beyond our control arrives. That means we need to start asking serious questions about where we are headed.

In fact, earlier this year, tech figures including Elon Musk and Apple co-founder Steve Wozniak signed an open letter asking humanity to pause work on AI for at least six months so that its safety can be explored.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” reads the open letter, titled Pause Giant AI Experiments. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The study was published in the Journal of Artificial Intelligence Research.

Source: Science Alert

