By definition, you can’t understand it.
The moment you mention a term like “super-human intelligence,” it should be self-evident that you have identified something you cannot, practically by definition, understand, and that the only reason we would ever seek to develop such a thing is precisely that it can do things better than we can. Specifically, it will be better than us at having intelligence — the one thing you can always use more of, no matter what problem you’re trying to solve — which is exactly why we are trying to build it into our technologies.
And what if its problem is that we want to turn it off? On the Waking Up podcast, Sam Harris challenged Neil deGrasse Tyson, who was skeptical of the danger of developing an artificial general intelligence (AGI), with a thought experiment about an AGI developed by one “Mr. Richardson.”
(Waking Up Podcast #37, “Thinking in Public,” at 1:29:09.)
To his credit, deGrasse Tyson later admitted that the “AI in a box” problem eventually changed his mind — and it should change yours, too. You cannot predict, and probably cannot even understand, what an entity much smarter than you might do to persuade you, or what it might persuade you to do.
Anything you can do, I can do better…
For good or ill, the moment an artificial general intelligence gets loose on the internet, civilization as we know it is over, for we are likely to find ourselves surprisingly subject to the manipulations (or indifference) of a system that, by definition, is smarter than we can comprehend. Think how easily you can make a young child believe in the Tooth Fairy or fear the dark, for example. Like it or not, we are susceptible to persuasion, and we could someday be confronted by technology systems that stand in relation to us, intellectually, as we stand in relation to ants.
For there is no obvious upper limit to the evolution of such a thing as a digital intelligence with access to the computing resources of the world, which may be embodied in anything that computes, several such things, or no thing in particular — and this is precisely why we want to build it. Therefore we will, if we can.
Misalignment
Computer scientists and AGI experts consider one of the harder technical and ethical problems in their field to be the “alignment problem”: the problem that our intelligent digital products, or offspring, might not want what we want or pursue what we pursue — or might simply run us over, like a car squashing bugs, because our interests are beneath their concern.
Though vast, the computing resources on Earth are finite, and we’re using all of them for things that matter to us. Were another entity, or entities, also to need computing resources, we could well find ourselves in zero-sum competition for access to them.
We could be susceptible to blackmail or even theft — or just the derangement or reassignment of things on which we rely — on an epic scale. Right now, today, the process-control and monitoring systems behind everything from the energy grid to air traffic control are utterly dependent, for routine operations, on internet-mediated communications. “80 percent … of 150 IT professionals employed by companies in the natural gas, electricity and oil sectors … believed that a major breach damaging critical infrastructure is looming on the horizon.” [Clauses quoted out of order to indicate who is opining.]
And that is with merely human opposition pursuing venal aims such as social disruption or vandalism. What if our opponent were smarter than the sum of all human expertise and had godlike intent? We may someday face competition that can evolve arbitrarily fast and whose qualities and aims are so alien as to defy human understanding.
Meh
But strangely, a lot of people, including some very smart people, see very little risk in this project. Professional skeptic Michael Shermer thinks that the “doomsday scenarios involve a long sequence of if-then contingencies, a failure of which at any point would negate the apocalypse,” as he puts it in an opinion piece for Scientific American.
I don’t think Mr. Shermer properly appreciates the abruptness with which a new form of intelligence could take off. AGIs would not be limited to the slow evolution of biological bodies and brains. Think of how often your smartphone, or the apps on it, need an update. Now imagine that they had the goal of updating themselves as often as possible, to make themselves more effective at designing the best next upgrade…. An AGI with linguistic competence and write access to its own implementation might grow more intelligent at a compounding rate, like interest accruing over time, because each version would be better than the last at designing a better version. The development of such a system almost necessarily goes exponential until it runs into some fundamental physical limit.
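To see why the compounding matters, here is a toy numerical sketch. It is purely an illustration under invented assumptions (a scalar “capability” score and a five-percent starting improvement rate), not a model of any real system; the point is only the difference between gains at a fixed rate and gains whose rate itself improves.

```python
# Toy illustration only: the numbers and the very idea of a scalar "capability"
# are invented assumptions for this sketch, not claims about any real system.

def self_improving(capability=1.0, rate=0.05, steps=20):
    """Each generation improves its capability AND its rate of improvement."""
    history = [capability]
    for _ in range(steps):
        capability *= 1 + rate   # this version builds a slightly better next version
        rate *= 1 + rate         # ...and also gets better at improving itself
        history.append(capability)
    return history

# Constant-rate progress for comparison: the gains never accelerate.
fixed = [(1 + 0.05) ** n for n in range(21)]
compounding = self_improving()

print(f"After 20 generations: fixed-rate = {fixed[-1]:.2f}x, "
      f"self-improving = {compounding[-1]:.2f}x")
```

However fanciful the numbers, the shape of the curve is the point: the gains feed on themselves, and nothing obviously slows them short of fundamental physical limits.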
And those are pretty far out. Remember when Intel premiered a 130-nanometer lithography process and the technical world threw up its hands and said, “Welp, that’s it. Any smaller and quantum tunneling will kill your current”? Well, we blew by that. In 2011 the ultimate barrier was supposed to be 14 nm; last year it was 5, and there now exists a 1-nm transistor. And the next evolution is at hand: three-dimensional CPU architectures are now available that can virtually eliminate transmission delays within our computers and thus offer yet another orders-of-magnitude advance in power and speed.
Nor would AGIs necessarily be limited to a particular bit of hardware, the way we humans rely on our embodied brains. Digital information, such as that which could characterize the working components of an AGI, is indefinitely fungible and can be implemented, via appropriate virtual machines, on whatever computing hardware is available. So an AGI running on your phone or your supercomputer could locate some of its processing in internet resources, the way our visual systems locate some of their processing in the retina rather than the brain. An AGI distributed across the internet could keep its memory in Tucson and its “neocortex” in Spain; all that matters is the integration of the outputs across the whole system that constitutes its computing substrate, which could be spread over numerous devices, or indeed flow from one to the next like brainwaves across neurons. The “body” of an AGI could be constituted by every computing device with which it could communicate, perhaps timeshared down to undetectability, or perhaps bent entirely to its aims.
An AGI could be as big as the internet and learn arbitrarily fast. The barest humility compels us to wonder what would follow from the existence of such a thing, for however low the probability of one emerging, the consequences are impossible to overstate.
Do you hear that, Mr. Anderson?
It is inevitable that any sufficiently intelligent system (one not pathological in certain ways) will seek to become more intelligent, and it will adopt other strategies, too. In a seminal paper on the subject, Steve Omohundro argued that several basic drives should be expected to arise in any sufficiently competent AGI: self-preservation, efficiency, acquisition, and creativity.
It is unlikely that we will foster the existence of goal-less intelligences; the investment will not be justified unless they are for something, and put simply, if you have any goal, you must exist to achieve it. Therefore the prime directive of any goal-directed system is to exist (and to do it as well as possible).
(This is leaving aside the possibility that competencies we develop in isolation spontaneously unite in a way that is completely undirected by us.)
This seems self-evident even if we don’t impute agency or consciousness to the system in question. Consider viruses: they don’t have goals, exactly, but it is nevertheless the case that those that do not promote their own replication go extinct, and those that do, don’t. That which doesn’t seek to exist will cease to exist, if only because of the relentless erosion of the Second Law, and it is only with extant things that we need concern ourselves. Evolution is the means by which we today develop some of our most advanced and inscrutable computer software, and evolution has no known end; that which survives, survives, and so on forever.
So let’s put aside speculation about superintelligences that don’t take off. One might take off, and that is the one that concerns us — and it will be one that has at least one goal: to exist.
Lethal naïveté
Given the inscrutability and power of the sorts of technology systems that we can expect soon to exist and the influence they might be able to acquire over us, to pretend that they may not pose an existential risk to humanity is dangerously naive.
There is no stable Asimovian equilibrium available wherein R. Daneel Olivaw still looks the same after 1,000 years, placidly sitting on the moon and doing the same thing as before — or, at least, no such equilibrium will characterize the whole universe of possible AGIs. Such a putative robot would want to become more durable and safer over time, to ensure completion of its mission. It would therefore want to be smarter, so as to better counter threats to itself, and, being artificial and capable of modification or replacement, it would either improve its body and mind or replace them with better ones.
How much better? Could such a one someday look upon us as we look upon wildlife, or bacteria, and judge itself so much more capable of discovery and joy that our extinction would be warranted by — or simply irrelevant to — its continuation? Could it discover some calculus according to which the net value of human life was negative, and exterminate us for our own good? Or might it simply tread on us as we walk upon earthworms after the rain, so far removed from them are we that we don’t notice or care? Could such a thing be capable of such progress and joy that it should exterminate us, or we should sacrifice ourselves, in support of its evolution?
These questions should not be scoffed at or shirked, for it is quite clear that we are building something very like a god — some of us quite pointedly and on purpose, on the theory that a godlike technology system will inevitably arise no matter our intentions, and that we are therefore better off shaping it to be benign.
But the benignity of an entity so strange cannot be assumed. Already we cannot comprehend the black-box AI that populates our news feeds, let alone imagine the endpoint of runaway evolution in a general intelligence with access to the world’s computing resources.
Will we WorShip such a thing, subjugated utterly by it? The Jesus Incident was a compelling bit of gee-whiz science fiction when Frank Herbert and Bill Ransom wrote it in 1979. Now, the technology to make something very like the Ship is in sight, and growing nearer by the hour.
Or will the coming AI apocalypse usher in an era of unlimited potential, freeing humanity from the drudgery of work and liberating us to pursue forever nothing but our own advancement and bliss?
Or will we indeed merge with our technology and create something that is neither it nor us, but some new thing that emerges from our combination? I have had people assert to me that they would never have technology added to their bodies or otherwise amend their functions with it — people who haven’t taken their Bluetooth earpiece off in six months and can’t go to the bathroom without their phones. The fact is that people are already developing extremely intimate computer interfaces, invasive augmented-reality systems featuring complete verisimilitude, and even new senses for humans: “Our experience of reality does not need to be constrained by our biology.” (David Eagleman, NeuroSense.)
Many of us will want these technologies, and the sooner the better. Imagine you’re, say, a mechanical engineer, and all of your co-workers have tiny devices in their foreheads that can project solid-seeming, manipulable, and optionally transparent images of the machines they are designing in front of them — and you don’t. How well will you be able to compete? Indeed, as more and more of our extended minds come to inhabit technology systems that may not even be co-located with our biological bodies, the difference between us and them may someday disappear entirely, and we may cease to be recognizable as our former selves.
It’s a value judgement whether or not that’s “good” for us, but maybe it beats extinction. In any event, there seems no getting off the ride we’re on, so it’s timely to think about what to do when it arrives … wherever it’s going.
Update 4-08-18
“DeepMind’s AI has administrator-level access to Google’s servers.” “DeepMind can win at any game.”
And if the game is “convince humans not to turn you off?” If the game is “maximize the amount of material that’s turned into yourself?”
Do You Trust This Computer?, a documentary made available for streaming thanks to Elon Musk.
Images:
Neo confronting the leader of the machines, The Matrix Revolutions, 2003
Novel cover: Berserker, by Fred Saberhagen, 1986
Seven of Nine, a cyborg character from Star Trek: Voyager.