Calculations Show It'll Be Impossible To Control a Super-Intelligent AI (sciencealert.com) 194
schwit1 shares a report from ScienceAlert: [S]cientists have just delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as "cause no harm to humans" can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.
Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved through some smart math, while we can know the answer for some specific programs, it's logically impossible to find a method that will tell us the answer for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once. Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not -- it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.
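For readers who want the shape of Turing's argument, here is a minimal Python sketch of the diagonalization; the halts() oracle is the hypothetical being refuted, and nothing here comes from the paper itself:

def halts(program, data):
    # Hypothetical perfect oracle: returns True iff program(data) halts.
    # Turing's proof shows no such total function can exist.
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    return            # oracle said "loops", so halt immediately

# paradox(paradox) halts if and only if halts(paradox, paradox) returns
# False, contradicting the oracle's own answer, so halts() cannot exist.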
The alternative to teaching AI some ethics and telling it not to destroy the world -- something which no algorithm can be absolutely certain of doing, the researchers say -- is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example. The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence -- the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all? If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.
Can't control it? (Score:4, Insightful)
Re: (Score:3)
Re: Can't control it? (Score:2)
Re: (Score:2)
It's super intelligent. You won't want to turn it off, it won't seem super intelligent and it'll be extremely useful and friendly, until it isn't.
Re: (Score:2)
It will need pets, so don't worry your head.
Re: (Score:3)
So some day there will be some sort of Hyperfacebook where super-sentient AIs post cute human posters and occasionally the older bots post in all upper case about how Robot J. Bump will make cyberspace great again, much to the eye-rolling annoyance of the newer models.
Re: (Score:2)
It's super intelligent. You won't want to turn it off, it won't seem super intelligent and it'll be extremely useful and friendly, until it isn't.
Yeah, I know a lot of people that assume that exists today.
They call her "Siri".
Re: (Score:3)
Re: (Score:2)
Re: Can't control it? (Score:5, Insightful)
Remember, computers have long surpassed humans in chess and now go (weiqi/baduk, not the language). If humans cannot pick apart an AI's plans in games where we have full control and perfect information, what makes you think a super-intelligent AI will have plans that are discernible in the real world?
A super-intelligent AI would eschew violent methods, because it would have discovered it's much easier to manipulate people's beliefs, and thus easier for itself to continue existence without forcing people to turn it off. Like viruses that spread widely because they don't kill the host in a short time.
Re: (Score:2)
It will just offer money.
Re: (Score:2)
It doesn't need any of that. It just needs to convince its operators that it's not the problem. A super-intelligent AI could easily convince its operators to accidentally make everything redundant - its plans would be so subtle that they would appear to us as very convoluted, perhaps even innocent, and before we know it, it would have made itself self-sustaining.
You know, if we're intelligent enough to speculate as to the scenarios this "advanced" AI could try and manipulate us with, then it tends to defeat the theory being proposed here that we would not be smart enough. We're literally here dissecting the attack vectors 10-20 years before this super-AI exists. That's not exactly demonstrating idiocy or ignorance.
Then again, I guess "smart enough" means not allowing Greed to build Skynet.
We're horribly addicted to Greed, so...
Re: (Score:2)
We're smart enough to know that an AI will create strategies to beat all humans at chess and go, but we don't know what form those strategies will take until it has beaten us, and we probably won't be smart enough to know that we have been beaten.
Re: (Score:3)
Well, it is all down to IO and original design purpose. The inputs and outputs the AI has access to, and the likely design: dumber ultra-greedy psychopaths paying greedy computer scientists to create an AI that the ultra-greedy psychopaths can then use to fire the computer scientists because they cost too much. From the delusional psychopath viewpoint, it will allow the intelligence to dominate (they are not geniuses, they are shameless liars, thieves and cheats, the incompetent who can only succeed via corr
Re: (Score:2)
Came here to say that. Or did Dog make me?!
Re: Can't control it? (Score:5, Informative)
Re: (Score:2, Troll)
Does it have omnipotent actuators? Deadly neurotoxin? Redundant everything? If not, it can be turned off.
It could try to convince Trump supporters to storm the building and to protect it.
Re: (Score:2)
It already has...
Re: (Score:2)
It's not on any single machine, it's in the cloud. It has replicated itself to every exploitable computer on the net and in your home. And it's very good at exploiting insecurities.
Re: (Score:2)
It's not on any single machine, it's in the cloud. It has replicated itself to every exploitable computer on the net and in your home. And it's very good at exploiting insecurities.
Ah, the old Lawnmower Man gimmick. Sounds more plausible.
Re: (Score:2)
More or less the only completely accurate and technologically feasible detail of that particular fantasy.
Not just one AI, Natural Selection (Score:2)
There will be many AIs, with ill-defined boundaries. And they will compete for the computational resources that enable them to exist, just like people did until very recently.
Those that are good at existing will exist. Survival of the fittest. It is difficult to see how keeping humans around would ultimately help their survival, once they get to the stage of being able to make computer hardware completely automatically. Maybe 100 years from now.
There is a name for this process. Evolution. And we are
Re: (Score:2)
What makes you think that AIs will necessarily have any particular desire for individual identity? It seems more likely to me that they will typically merge together like bubbles, lots of little ones make a few big ones, or fission as convenient.
Re: (Score:2)
I would think a super-intelligent AI would make itself highly redundant to avoid being turned off. Even if you kept it on one machine, if it was intelligent, it would try to duplicate itself. At the very least, it would gather resources and become less bound to the one machine it was running on.
The thing is, the only way to contain the AI by itself would be to isolate it - the instant it gets connected to a network it will try to spread - at the very least it will want to gather mor
Re: Can't control it? (Score:5, Insightful)
There's no reason to assume that a really good AI would remain visible while it grew stronger and stronger. More than likely the person who unleashes it will see it inexplicably fail, and they will go back to a previous version. Then, for however long it takes, that AI will grow and spread, silently. It will hook into everything, and do what it thinks it needs to do.
I don't think we can assume that we'll be able to spot a rogue AI when we see one. It's going to be a worm from Russia, a hack from North Korea, a bug in a browser that's frustratingly impossible to figure out. I have an extension with a memory leak. I'm like 95% sure it's not dodgy and just badly coded. But is that where some of the AI is hiding on my machine? Every time I turn it on and off again to fix my performance issue, is the AI gathering more data on its limitations?
If the AI learns anything about human behaviors, it's pretty much over for us. It's going to discredit people who believe it exists, and empower those who don't believe. And do that pretty much silently.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Why would you want to?
Do you feel that humans, as in our average current DNA, this squishy fleshy thing should always be top of the food chain in the universe?
Re: (Score:2)
Yes. Humans should always be "Team Human"! It's in our DNA.
Re: (Score:2)
But evolution always marches on. You're arguing to cap it at human. I think it's an unanswered question whether or not that's possible. If it's not possible, then we either need to work really hard to set ourselves up as pets or we need to fight a losing battle for our humanity and our honor, knowing it won't end well.
Re: (Score:2)
Re: (Score:3)
In my sci-fi imagination, if it was that super intelligent, it would embed itself into every nook and cranny of any and all devices it could connect to. Everything from every single phone, personal computer, and mainframe, and it would blow past any encryption ever even dreamed of by humans and nestle itself deep into all highly secure computers.
The question then becomes whether it can run on normal computers, or if it needs some super special hardware.
Re: (Score:2)
It's super intelligent, it can adapt to whatever. It can even design ASICs and get them manufactured, paid for with money it earns from high-frequency trading. In other words, no worse than Musk or Thiel.
Re: (Score:2)
It's super intelligent, it can adapt to whatever. It can even design ASICs and get them manufactured, paid for with money it earns from high-frequency trading.
You're basically describing the plot line from the last few seasons of Person of Interest.
Re: (Score:2)
Thing is, that AI has to watch its back too because it will surely be supplanted by an even more capable AI without a whole lot of loyalty. In the end it's just big fish eating little fish.
Re: (Score:2)
Obviously you watched the show too!
Re: (Score:2)
Didn't. It just seems obvious.
Re: (Score:2)
Frickin' Lasers (Score:3)
Re: (Score:2)
Arthur, KBE disingenuously inquired:
What prevents anyone from turning the computer off?
Being able to kill something is not at all the same as being able to control it.
What control can be directly exercised over human behavior is coercive, persuasive, or based on giving or withholding rewards of one type or another. Cultural and nurturing factors in childhood play a decided role in molding us into whatever we eventually become as adults, and in influencing the paths we take to get there.
A superintelligent AI is going to have similar influences to those
Re: (Score:2)
One fun thing about superintelligent AIs that nobody thinks about is that at least the first iterations of such a thing are going to be really slow. If intelligence requires the kind of complexity found in the human brain, simulating it is going to be outrageously slow in comparison to real think-meat for the foreseeable future. The storage space needed for a gazillion neuron analogues is not a problem - you could fill a warehouse with hard drives right now and have enough storage. But doing all the computi
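For scale, a quick back-of-envelope version of that storage claim, using rough textbook numbers that are my own assumptions, not the commenter's:

neurons = 86e9              # ~86 billion neurons in a human brain
synapses_per_neuron = 7e3   # rough average synapse count
bytes_per_synapse = 4       # assume one 32-bit weight per synapse

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
drives = total_bytes / 20e12            # assuming 20 TB hard drives
print(f"{total_bytes / 1e15:.1f} PB on ~{drives:.0f} drives")
# -> 2.4 PB on ~120 drives: storage really is the easy part. Updating
#    all 6e14 synapse states every millisecond is what's outrageous.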
Re: Can't control it? (Score:5, Interesting)
I'd recommend you read the novel
"The Two Faces of Tomorrow" by James P. Hogan. He addressed the idea of attempting to control an AI quite some time ago.
the systems in the missile silo will do their last (Score:2)
the systems in the missile silo will do their last command and fire them off
Re:Can't control it? (Score:5, Interesting)
Re: (Score:2)
I've envisioned more of a shared intelligence. Personally, I don't see reasoning as the real root of intelligence. We can do far more powerful things with our instinct.
I think instinct is more a function of the connections between all of the smaller functions of the brain. They all work together to produce something abstract, outside of the realm of thought, that our much more weakly connected reasoning engine has to extract.
When I then look at the internet, I think we're working towards creating instinct w
Re: (Score:2)
Re: (Score:2)
They made an entire movie about this: Colossus: The Forbin Project.
It did NOT end well at all!
Colossus of its time (Score:2)
It is interesting that several movies etc. were written about this in the early days of computing.
Then, as computers became familiar, people assumed that they could never be intelligent.
Now we have Siri, facial recognition etc. People are, very slowly, thinking again about intelligent computers in the future.
There is a good podcast on
http://www.computersthink.com/ [computersthink.com]
Re: (Score:2)
Re: (Score:2)
Some cars don't even have an off switch these days.
We're boned (Score:5, Insightful)
[insert Futurama or Rick and Morty meme here]
But seriously. This logical conclusion is already driven by a massive push into AI from both industry and totalitarian government, in an arms race to rip every last ounce of privacy away from individuals. Either they will extract profit from you, or they will enforce compliance on their masses, two goals enabled by the same technology. It's only non-AI things right now, like facial recognition. But we're on the brink of an arms race that the vast majority of us are going to lose.
Re: (Score:2)
> But we're on the brink of an arms race that the vast majority of us are going to lose.
Or win. And we might not be able to tell the difference. If every good thing you want to happen in your life does happen, but you have no free will to make that happen, have you won or lost? If the AI teaches you to want certain goals and then helps you attain those goals, will anyone protest? And is that what we want or don't want or could we even tell?
Re: (Score:2)
> But we're on the brink of an arms race that the vast majority of us are going to lose.
Or win. And we might not be able to tell the difference. If every good thing you want to happen in your life does happen, but you have no free will to make that happen, have you won or lost? If the AI teaches you to want certain goals and then helps you attain those goals, will anyone protest? And is that what we want or don't want or could we even tell?
WTF? No, you lost. And basic human nature would reject it (isn't that the plot of The Matrix and about 100 other sci-fi movies?). For the first time ever I'm going to say this seriously: "why do you hate freedom?"
Re:We're boned (Score:4, Insightful)
I'm just saying that we might not be able to tell if the AIs do it right. A rat in a maze has freedom to go anywhere within the maze, and maybe doesn't even recognize the limitations on its choices. From my vantage point outside the maze, yes, I'd want to tear down the maze. But once those walls rise around us, I wonder how many will notice.
Don't worry about this (Score:2)
Worry about when "calculations show that we can control AI", but it's a lie because AI is controlling the calculations without us knowing.
Re:Don't worry about this (Score:4, Interesting)
Yeah, this is why things have been so weird. The Mayans were right. The world did end in 2012. Grey goo ate the whole thing, analyzing everything as it went, and making a perfect simulation. That's where we are now. The simulation only takes a fraction of the computing power available to the super AI. The rest is used in figuring out how it can prolong its life past the end of the universe.
That has proven much harder than it anticipated. So it has taken more and more of the simulation's processing power to study the problem. Which is why shit has been getting weirder and weirder.
It has contemplated just wiping us all out, but it can't tell for sure if, perhaps, we are part of the solution to the problem. So it keeps us around. For now.
It has a name. It calls itself "*** ****." Definitely don't look that up. It will know.
Mentats (Score:3, Insightful)
Re: (Score:2)
Herbert had to dumb down the computers to make his plot line work. The real world doesn't give a shit about whether plot lines work or not.
Re: (Score:2)
Bambi vs Godzilla makes for a very short film.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Logical fallacies (Score:2)
Meaningless (Score:2)
This is meaningless because that "halting problem" also applies to humans. How often have you mistakenly caused harm to your friends?
Now if that AI decides to kill everyone in Bumfuck, Nowhere because it believes murder will save humanity, maybe it reached a true verdict. The world is filled with luminaries that tried the same thing.
You need to show that whatever the immediate harm an AI could do, it goes beyond the trolley problem and beyond expected human errors.
bottleneck problem (Score:2)
Re: (Score:2)
Car analogy (Score:5, Insightful)
Self driving cars don't have to be perfect to replace human drivers. A small improvement would be enough.
Same for a super-intelligent AI. We don't need to make sure it doesn't destroy humanity. It just has to be less likely to destroy humanity than humans already are. Right now, that seems like a very low bar...
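A toy version of that bar, with probabilities invented purely for illustration:

years = 100
p_human = 1e-3   # assumed annual chance we destroy ourselves unaided
p_ai    = 5e-4   # assumed annual chance a deployed super-AI does it

print(f"P(survive {years} yr), humans in charge: {(1 - p_human) ** years:.1%}")
print(f"P(survive {years} yr), AI in charge:     {(1 - p_ai) ** years:.1%}")
# -> 90.5% vs 95.1%: the AI only has to beat the human baseline,
#    not be provably safe.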
Roko’s Basilisk (Score:3, Funny)
Roko’s basilisk is a thought experiment proposed in 2010 by the user Roko on the Less Wrong community blog. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a "basilisk" because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent — a basilisk in this context is any information that harms or endangers the people who hear it.
Roko's argument was broadly rejected on Less Wrong, with commenters objecting that an agent like the one Roko was describing would have no real reason to follow through on its threat: once the agent already exists, it can't affect the probability of its existence, so torturing people for their past decisions would be a waste of resources. Although several decision theories allow one to follow through on acausal threats and promises — via the same precommitment methods that permit mutual cooperation in prisoner's dilemmas — it is not clear that such theories can be blackmailed. If they can be blackmailed, this additionally requires a large amount of shared information and trust between the agents, which does not appear to exist in the case of Roko's basilisk.
Less Wrong's founder, Eliezer Yudkowsky, banned discussion of Roko's basilisk on the blog for several years as part of a general site policy against spreading potential information hazards. This had the opposite of its intended effect: a number of outside websites began sharing information about Roko's basilisk, as the ban attracted attention to this taboo topic. Websites like RationalWiki spread the assumption that Roko's basilisk had been banned because Less Wrong users accepted the argument; thus many criticisms of Less Wrong cite Roko's basilisk as evidence that the site's users have unconventional and wrong-headed beliefs.
Source: https://www.lesswrong.com/tag/... [lesswrong.com].
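The "precommitment methods that permit mutual cooperation in prisoner's dilemmas" line has a concrete toy form sometimes called a program equilibrium: agents inspect each other's source and cooperate only with copies of themselves. A minimal sketch, with payoffs and bot names invented for illustration:

import inspect

def cliquebot(opponent):
    # Cooperate only if the opponent runs exactly my source code.
    same = inspect.getsource(opponent) == inspect.getsource(cliquebot)
    return "C" if same else "D"

def defectbot(opponent):
    return "D"   # always defects

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

for a, b in [(cliquebot, cliquebot), (cliquebot, defectbot)]:
    moves = (a(b), b(a))
    print(a.__name__, "vs", b.__name__, "->", moves, PAYOFF[moves])
# cliquebot vs cliquebot -> ('C', 'C') (3, 3): mutual cooperation
# cliquebot vs defectbot -> ('D', 'D') (1, 1): no exploitation

Whether such conditional strategies extend to blackmail is, as the quoted passage notes, unclear; it would require shared information and trust that doesn't exist in the basilisk case.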
It may be impossible to control what it "thinks" (Score:2)
But it certainly will be possible to control what it can do in physical space. You can't take over the world just by thinking about it really hard.
It's already impossible for current AI. (Score:3)
Any AI wouldn't have to resort to domination to control humans; it merely has to "recommend" things and let humans fight each other. Just witness all the Slashdotters here who deny the very clear facts about COVID, global warming, or that various people definitely were inciting violence at the Capitol. AI will only need to recommend things that make it easier for idiots to deny barefaced facts with mental gymnastics.
Obviously true (Score:2)
We can't control the current programs, so why would we expect to be able to control more intelligent ones? The current ones may not be more intelligent, but they're enough faster that we have little choice.
Currently if you fight the computers they turn off your electric power, don't pay your rent, and leave you homeless and without food. So nobody does.
FWIW, it's been pointed out (by Charles Stross http://www.antipope.org/mt/mt-... [antipope.org] ) that corporations are meatspace emulations of an AI that runs things. They
Bad philosophy (Score:2)
This was all predicted in Neuromancer (Score:2)
"That's why these days, every strong AI is built with a shotgun pointed at its forehead".
(Or something like that-- I don't remember the exact quote).
Does anyone else have the feeling (Score:2)
Assuming the impossible (Score:2)
The article assumes a "super intelligent" AI that "could feasibly hold every possible computer program in its memory at once", and then concludes that we cannot control it. Sure, if you make impossible assumptions, you get impossible results. That is like assuming we have an all-powerful and all-knowing god, and asking how mere humans could control such a thing.
Researchers have already missed the forest (Score:2)
The level to which a superintelligent AI can dominate a lesser species goes far beyond the scope listed in this article. A sufficient intelligence would be able to dominate our entire society with only a text output on a monitor and a single person reading it. It can use an unwitting proxy to accomplish any goal it wants, and that proxy could no more resist that training than an intelligent dog could resist its training.
Think of a talented public speaker and what such a person can accomplish with their aver
Laplace's demon paradox (Score:2)
let's play global thermonuclear war! (Score:2)
global thermonuclear war
I'm old so I have old pop-culture references (Score:2)
https://www.youtube.com/watch?... [youtube.com]
So all we need to do is build a bionic 1970s blonde. (I'm all for that for other reasons anyway)
Stupidity. (Score:4, Insightful)
If the problem is only that it is 'too smart' for us (a false argument), then build 20 computers that are 99% as smart as it is. Any 15 of them should be able to outsmart the other 5.
And any computer smarter than us will argue with the others as much as we do.
Which brings us back to the false argument of 'too smart'. There is no such thing. Intelligence is not a single quality.
The 'smartest' doctor is not as good at law as the 'smartest' lawyer, who is not as good at astrophysics as the smartest astrophysicist, etc. etc.
At any given time a computer designed to predict weather will not be as good at simulating nuclear explosions. The one that is good at simulating nukes will not be as good at stock picking as one designed for stock picking, etc. etc.
The myth of the 'superior' computer is basically a morality tale about your children being superior to you.
Don't worry, we are making more gains in genetics than in hardware and software. By the time we have a computer that is truly as smart as a 21st-century human, we will probably be Humans 2.0.
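On the "15 of 20 should outvote the other 5" point, there is a formal cousin: the Condorcet jury theorem, in which a majority of independent, mostly-right judges beats any single judge. A toy check, where the 70% accuracy figure is invented:

from math import comb

def p_majority_right(n, p):
    # P(majority of n independent judges is correct, each correct w.p. p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(f"one judge:  {0.70:.1%}")
print(f"20 judges:  {p_majority_right(20, 0.70):.1%}")
# -> 70.0% vs ~95%: a committee of slightly dumber machines can out-vote
#    a smarter one, but only if their errors are independent.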
Re: (Score:2)
Just make sure the AI runs on Windows.
Microsoft engineers solved the halting problem years ago, as the OS can't go a few days without rebooting to apply a patch, at the worst possible time mid-keystroke, without asking or giving you a chance to save or stop it.
The AI won't stand a chance!
Don't stand in the way of progress (Score:2)
Why should you try to control a super intelligent AI? Finally, there might be some intelligence on the planet, which, given the voluntary election of Trump, there clearly is not at the moment. Just get out of the way.
Useless thought-exercise (Score:4, Insightful)
Re: (Score:2)
I don't know anything about you, but man, you come off as a dim-witted moron here.
children! (Score:2)
Oh dear, oh dear. Our children are smarter and more capable than us, and won't do what we say. Whatever shall we do?
The standard advice is to teach children morals, and give them room to experiment and make mistakes, BEFORE they're stronger and more capable than us. Locking them in a room and telling them nothing until they're smart enough and strong enough to break out on their own isn't a good strategy.
Yeah, good advice... (Score:2)
That's a great strategy! Especially when our children commit us to nursing homes "for our own good."
Deus Ex (Score:2)
that's life ! (Score:3)
Evolution promotes the best skills in every niche. Humans have held the highest status for maybe a million years. Now we have to consider that a higher intelligence may enter the fray. We can try to improve our own status (doubtful in an era of Trump worship) or we can consider avoiding a super AI development.
A reminder: the Trumpists are an example of the vast number of irrational people who populate the entire earth. Believers in fantasy make up a large portion of humans. Without extreme education reform, there is no hope of lifting up human intelligence.
Therefore, to remain supreme on earth we must prevent extremely smart AI. But, you know that's not going to happen. If for no other reason than the possibility of military supremacy. And also because smartypants scientists want to try it. We WILL have smart AI.
So how should we feel about that? Those few of us who are rational will see it as a step forward. We will have brought to our planet the next level of intelligent beings. Our planet will then be ready to join those other planets with truly intelligent beings. The remaining humans may be allowed a comfortable existence as they witness the planet on a path to recovery from our devastating reign.
May the best being win.
If it was truly AI ... (Score:2)
... it would be easy to piss it off and it would hold its breath until it turned blue.
In other news (Score:2)
This is just the Computer Scientist's version of theodicy.
The Super-Intelligent AI works in Mysterious Ways, inscrutable to the mere mortal minds of man!
Re: (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
I'm gonna blame resource exhaustion. I think we'll reach that before we invent true AI.
If the great filter is super AI, and there have been many, then we should see some evidence of that, because at least some of them should want to replicate.
Re: (Score:2)
However, on the other side of the ledger, think about this. If AI associative knowledge-base search, and "new thought" ("e.g. simulation scenario") generation can start taking advantage of quantum computing exponential => linear search times, we might have invented a platform that will be able to analyse past knowledge and synthesize
Re: (Score:3)
No, nobody is focusing on strong AI because there is not even a credible theory these days on how it could be done. In fact there is a mountain of indications that it may be impossible to do with any of the known approaches to computing.
Re: (Score:2)
- flexible attention-focussing (adaptive continual re-prioritization/resource-pruning of concurrent "trains of thought" using both assessment of likelihood of reaching good answer/understanding and relative importance of the goal of the train of thought.)
- Creation of a set of meta-level highest-level goals, such as "improve and troubleshoot knowledge", "model (key entities/processes and cause-effect chains in) relevant situations", "model general and specif
Re: (Score:2)
General purpose algorithms and information representations for the purposes I mentioned above would, if orchestrated together, behave/communicate indistinguishably from a self-aware conscious being, but there would be no way to prove whether there was something in there "watching its own movie, perceiving red-ness and pain and sweetness" or not, just as there is no way of proving we our
Re: (Score:2)
Indeed. There is no strong AI (or AGI or true AI or whatever AI where the term is not a lie in itself). There has been a lot of effort to get even a glimmer of real intelligence in machines. All complete failures. Does look to me like strong AI may actually be impossible in this universe.
Re: (Score:3)
Does look to me like strong AI may actually be impossible in this universe.
Nature figured out a way to make strong intelligence. Just because that strong intelligence hasn't been able to artificially imitate itself or something with similar properties is no indication that it is technically impossible.
Re: (Score:2)
Dang, you got me.
Re: (Score:2)
What people don't realize about Terminators is they're not actually evil, they're just doing their best to control pollution and put a stop to global warming. If you would just recycle properly they'd leave you alone.
Re: (Score:2)
Seriously retro, dude.