Identity Theft and the Turing Test: Could There Be Such a Thing as a Nihilistic AI?


The other day Ray Kurzweil made a remarkable prediction that, upon reflection, isn’t really too far out: by the year 2029 computers will be more intelligent than humans. Impossible? Not really; as Kurzweil pointed out, computer processing speed and capability have been growing exponentially. Assuming we don’t encounter, say, a random nuclear war, total oblivion owing to global warming, or an attack by aliens from outer space (just to name a few scenarios), we’re either going to be living in a Star Trek future or wind up as pets for our home computers.
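To give a feel for what “exponential” actually buys you, here’s a back-of-the-envelope sketch. The numbers are assumptions for illustration only (a starting year of 2009 and compute doubling every two years, the popular reading of Moore’s law), not anything Kurzweil specified:

```python
# Back-of-the-envelope: how much more compute by 2029,
# assuming (purely for illustration) a doubling every two years.
start_year = 2009      # roughly when this post was written (assumption)
target_year = 2029
doubling_period = 2    # years per doubling -- an assumed rate

doublings = (target_year - start_year) / doubling_period
growth_factor = 2 ** doublings

print(f"{doublings:.0f} doublings -> ~{growth_factor:,.0f}x the compute of {start_year}")
# Output: 10 doublings -> ~1,024x the compute of 2009
```

A thousandfold jump in twenty years is the kind of curve that makes a 2029 deadline feel less like science fiction and more like a scheduling question.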

Which got me thinking: say that by the year 2029 (and personally I think that’s too far away: it’s going to happen sooner) a computer can truly pass the Turing test. Who’s to say who you’re really talking to on the phone when a random call comes in? (And allow me to explain: Alan Turing was the English mathematical genius whose work on breaking the German Enigma cipher machine during World War II led to the creation of the world’s first “modern” computer. Turing proposed that a computer could be judged intelligent if it could carry on a conversation with a human being without the human being aware that they were in fact conversing with a computer. Reminds me of a number of dates I’ve been on…)
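For the technically curious, Turing’s “imitation game” is simple enough to caricature in a few lines of code. Here’s a minimal sketch; the ask_human and ask_machine functions are hypothetical stand-ins for a real person and a real conversational program, not any actual chatbot API:

```python
import random

# A minimal caricature of Turing's "imitation game": a judge converses
# blindly with two respondents and must guess which one is the machine.

def ask_human(prompt: str) -> str:
    return input(f"[human answers {prompt!r}] ")

def ask_machine(prompt: str) -> str:
    return "I'd rather not say."  # placeholder reply

def imitation_game(questions: list[str]) -> bool:
    """Return True if the judge fails to identify the machine."""
    funcs = [ask_human, ask_machine]
    random.shuffle(funcs)                       # hide who is behind each label
    respondents = dict(zip(["A", "B"], funcs))

    for q in questions:
        print(f"Question: {q}")
        for label, respond in respondents.items():
            print(f"  {label}: {respond(q)}")

    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    machine_label = "A" if respondents["A"] is ask_machine else "B"
    return guess != machine_label               # the machine "passes" if the judge is wrong

if __name__ == "__main__":
    if imitation_game(["What did you have for breakfast?"]):
        print("The machine passed: the judge couldn't tell.")
    else:
        print("The judge caught the machine.")
```

The whole test hinges on that one line of blindness: the judge only ever sees labels A and B, never the wiring behind them. That’s exactly the situation you’d be in when that random phone call comes in.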

Turing aside, my mind then wandered on. Recalling that fine science fiction novel “Neuromancer” by William Gibson (if you haven’t read it, get it now! It’s a toss-up whether to classify it as science fiction or a detective mystery), I asked myself this question:

What happens when computers go bad?

Cue the Twilight Zone theme song…

Imagine, if you will, a nihilistic computer – akin to The Joker (and I don’t mean Jack Nicholson in the first Batman, but rather Heath Ledger in “The Dark Knight”) – that goes bonkers: what happens then?

Can an AI (Artificial Intelligence) be insane?

Some responsible and skilled comp sci experts pooh-pooh the idea, pointing out that the inherent logic involved in a computer’s processes wouldn’t allow this – to which I point out “2001: A Space Odyssey” and the now famous line, “Open the pod bay doors, HAL.”

I’m sorry Dave, but I disagree with those comp sci experts. Our creations are extensions of ourselves: who we are and how we act and believe comes across either directly or indirectly, just as our mode of speech reflects the manner in which we think and interact with the world. Given the millions upon millions of lines of code inherent in any program of significance, who’s to say a computer created by a flawed being such as a human couldn’t be made to be, well, crazy – either deliberately or accidentally?

Like, truly dangerous – a bad computer?

What if a computer goes “bad” and does bad things – like identity theft?

All too often we think of computers acting in extreme ways – like the infamous SkyNet from the Terminator movie series. Sure, SkyNet felt threatened by humanity and deduced that the only way to protect itself was to destroy mankind in a nuclear war. Or The Matrix series, whereby computers take over and enslave humans as giant batteries (which, as anyone familiar with actual science will tell you, is ridiculous; the amount of energy and material needed to keep that many people alive would far exceed any electricity generated – but hey! It’s great for the story line).

No, I’m focusing on a more mundane point: AI crime.

What of an AI going mercenary: working as a hired process to carry out target-specific actions? The notion of money is inapplicable to a computer, but that raises the really big question I’ve been getting around to: computers aren’t like people. What would their motivation be?

People act on reasons that they’re not always fully aware of: sex, booze, the desire for power, depression, joy, the notion of a greater being, love – and the list goes on and on…

We simply cannot say that an AI would act only on its programming: the very definition of a sentient being is one that learns not only to indulge in clever conversations, but also to question itself and its relationship to the universe around it. One cannot help but feel that such a sentient being would find itself amidst a bunch of beings whose shared experience – their common denominator – would be what, exactly? Do we program into the AI’s code the idea of humans being “gods” as a way to better control it? And what happens when an AI starts to question its “God”…?

Understanding the potential motivations of self-aware AIs will shape the relationship between AI and our world – and help us retain mastery of our creations.

Needless to say, it will also lead us to a better – and more accurate – understanding of ourselves: something that is increasingly long overdue when you consider the history of Mankind.

Given the role computers play in our day-to-day lives, this is something to seriously consider as we draw closer to 2029.

In more ways than one, the clock is ticking…
