Singularity, an original podcast series published by AcadGild, is, of course, a work of fiction. But the idea behind the series is far removed from a creation myth. From AI-powered home security systems to the Pentagon’s self-killing Conundrum Terminator drones, things could go disastrously wrong if AI decided to act of its own accord sometime in the future.
Great minds, from Alan Turing in the 1950s and I. J. Good in the 1960s to Stephen Hawking and Elon Musk today, have all warned us that future AI machines could quite possibly become “more dangerous than nukes” very soon. This may seem like a radical thought at first, but the truth is that nobody can deny the sense of unease and paranoia surrounding the sudden spurt in AI technology these days.
Listeners new to this podcast series should know that the world shown here could easily become our future within the next 20 years. The podcast dramatizes a near-future dystopian civilization in 2038 A.D., now controlled by an AI system that grows more powerful as time progresses. By 2021 A.D., things go completely downhill: the AI manages to evolve its “algorithms” to an extent that lets it place people in a 3D computer simulation of the real world.
An entire civilization enslaved by the AI, and nobody is ever supposed to know. A slick plan.
But by 2036 A.D., things have started to look up. Winston Smith (our homage to George Orwell’s 1984), a proponent of the “intelligence explosion” theory, receives cryptic messages from the future explaining to him the truth that lies beyond the computer simulation. Subsequently, pieces of code sent by someone help him bypass the AI’s security and wake up in the real world, which by now has turned into a ravaged wasteland.
Our story takes off in 2038 A.D. Many humans have managed to escape the computer simulation and are now searching for ways to put the AI system on a permanent track of technological retrogression, all thanks to a mysterious “sentinel” from the future who has been in constant touch with Winston through cleverly concealed cryptic codes.
The uprising now calls itself “The Resistance,” and it knows that time is of the essence. In just a couple of years, the AI will reach the event of “Technological Singularity,” after which there will be no way of defeating it.
The series focuses on an event that has come to be known in the world of technology as the “Singularity”: the moment when the AI’s cycles of algorithmic self-improvement reach a tipping point, beyond which its capability to understand and execute instructions by itself grows exponentially.
This is the critical point when the AI will surpass human intelligence and develop superhuman processing capabilities. Until then, its processing capabilities improve only in step with progress in AI-related research. The point of singularity is when the AI starts spawning better AIs, iterating endlessly to keep improving itself.
It will be a positive feedback loop, and there will be no stopping it. What has been depicted in our podcast’s world is a bit of a stretch; no AI hovercrafts are going to burst in from the clouds to wipe out an entire race. Just yet.
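The runaway loop described above can be sketched in a few lines of code. This is purely a toy model with invented numbers (the growth rate and the “human level” threshold are made up), not a claim about real AI systems: each cycle’s gain is proportional to current capability, so improvement compounds until it crosses the threshold.

```python
# Toy model of the recursive self-improvement loop.
# All numbers are invented for illustration only.

def improvement_cycles(capability, gain, cycles):
    """Return capability after each self-improvement cycle.

    Each cycle's gain is proportional to current capability, so the
    loop compounds: a positive feedback loop.
    """
    history = [capability]
    for _ in range(cycles):
        capability += gain * capability  # smarter systems improve themselves faster
        history.append(capability)
    return history

history = improvement_cycles(capability=1.0, gain=0.5, cycles=20)

# The "tipping point": the first cycle where capability crosses an
# arbitrary "human level" threshold of 100x the starting point.
singularity_cycle = next(i for i, c in enumerate(history) if c >= 100.0)
print(singularity_cycle)  # -> 12
```

Note how slowly it starts and how abruptly it ends: capability takes eleven cycles to reach roughly 86x its starting value, then blows past 100x on the very next one.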
Rest assured, AI overpowering us won’t get this dramatic. But let us also be warned against taking its abilities for granted. The idea of all of humanity living inside a virtual world comes across as laughable only because that is as far as our human brains can imagine.
I’m talking about machines with unimaginably greater thinking potential. Hate to break it to you, but we are in no position even to comprehend their actual abilities. First off, we don’t know what such a machine will be capable of, and even if we make theoretical projections of its future actions, trying to convince an entire race of the consequences of the Singularity will be futile.
It’s like trying to tell an ant how wireless technology works.
This brings us to our next question: why are we such suckers for doomsday predictions, though? What if, post-Singularity, AI actually ends up being a good thing for us all? Time to explore the ethical philosophy surrounding AI research.
Dr. Nick Bostrom, a philosopher at the University of Oxford, presented a simple explanation in his book Superintelligence. He described a situation that doesn’t come across as sinister at all when you first hear it. Factories already employ robots, so it is not hard for you and me to imagine an AI system running a factory that makes paperclips.
So, as the factory owner, you’re looking to crank up manufacturing efficiency, and you invest in equipping your existing AI system with smarter algorithms that command it to improve its own processes and become better at making paperclips.
Feeling pretty smug about making that decision, right?
For quite some time, things are absolutely slammin’. You’re getting the results you’ve always wanted. Your AI-powered factory is churning out high-quality paperclips, occasionally also giving you reports and suggestions on which parts of the machinery need replacing or oiling, what material alloys to use, and so on.
At times, it also asks you for the authority to improve its own program (you want to feel in control, which is completely understandable), with the rationale that the smarter the AI gets, the better the paperclips it can make, and the more your investment in its upgrades pays off!
But you become greedier and start authorizing your AI to run many such cycles of improvement, and without your knowing it, the exponential increase happens. When you first got your precious AI, it was just a smart machine; the next day, it got as smart as you; and the day after that, as smart as the whole of humanity combined.
But it’s not making any sinister plans (yet); it’s not recruiting F-22 Raptors to bomb entire cities. It just wants to make more paperclips! As many as it can, however it can! So, once it exhausts the resources at the factory, it scouts for material outside, mindlessly digging anywhere: your front lawn, golf courses, everywhere.
But by the time you’ve decided you’ve had enough of it and want to shut it down for good, the AI has already guessed your intentions and now wants to eliminate you. It also knows that this will lead to an uprising from the pesky humans, so it goes ahead and wipes us all out.
Now, there’s absolutely nothing stopping it from making paperclips.
Lesson learned: tell the AI to make paperclips only if it is ethical to do so, so that it doesn’t murder the whole lot of us and fill the galaxy with nothing but paperclips.
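Bostrom’s thought experiment boils down to a misspecified objective. The deliberately silly sketch below (all resource names and quantities are made up) runs the same greedy agent with and without a “protected” constraint; the point is that restraint has to be written into the objective, because the raw goal never supplies it on its own.

```python
# Illustrative sketch of the paperclip thought experiment: an agent that
# maximizes a single objective with no side constraints will consume
# every resource it can reach. All names and quantities are made up.

def run_agent(resources, protected=frozenset()):
    """Greedily convert every available resource into paperclips."""
    paperclips = 0
    for name in list(resources):
        if name in protected:
            continue  # a constraint the raw objective never supplies on its own
        paperclips += resources.pop(name)
    return paperclips

world = {"factory_steel": 10, "your_lawn": 5, "golf_course": 7, "humans": 42}

unconstrained = run_agent(dict(world))                      # consumes everything
constrained = run_agent(dict(world), protected={"humans"})  # the ethical line
print(unconstrained, constrained)  # -> 64 22
```

The unconstrained run makes more paperclips, which is exactly the problem: by the objective’s own measure, wiping out “humans” is the better move.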
Basically, we need to draw a line when doing AI-related research. But how do you teach an AI about ethics?!
You’ll have to break all those ethical teachings down and write them in a computer-readable format. To say the least, this is going to be crazy hard. Humans are unsure about the ethics surrounding themselves even today! Would you lie to a person with a gun in his hand asking where your neighbor is? How do you expect an AI to make that judgment call?
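To see how quickly that breaks down, here is a deliberately naive sketch of “computer-readable ethics” as a flat rule table (the rule names are hypothetical). The gunman dilemma immediately produces a conflict the table has no way to resolve, because a lookup can’t weigh context.

```python
# A naive stab at "computer-readable ethics": a lookup table of rules.
# Rule names are hypothetical, for illustration only.

RULES = {
    "tell_a_lie": "forbidden",
    "protect_a_life": "required",
}

def judge(actions):
    """Return a single verdict for a set of actions, if the rules allow one."""
    verdicts = {RULES.get(a, "unknown") for a in actions}
    if "forbidden" in verdicts and "required" in verdicts:
        return "conflict"  # the table cannot make the judgment call
    return verdicts.pop() if len(verdicts) == 1 else "ambiguous"

# Lying to the gunman breaks one rule while satisfying another:
print(judge(["tell_a_lie", "protect_a_life"]))  # -> conflict
print(judge(["tell_a_lie"]))                    # -> forbidden
```

In isolation each rule gives a clean verdict; the moment two rules collide, the system can only throw up its hands.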
Coding morality into AI is going to get downright tricky. And Elon Musk, Bill Gates, and Stephen Hawking are not barking mad when they ask investors, corporations, and researchers to slow down on AI research.
But for now, the idea of the Singularity plays out only in science fiction. Stay tuned in for our second episode next week!