Tag Archives: Turing Test

The Age of Alexa

 

As a birthday gift for my daughters, Flora and Cora, their grandfather purchased for them an Amazon Echo (aka “Alexa”).

If you’re not familiar, the Amazon Echo / Alexa is a voice-controlled, free-standing computer (with a nice speaker system to boot!) that links to your wireless network. Measuring some 9.3 inches tall with a diameter of about 3.3 inches, Alexa can fit just about anywhere. In addition, it looks attractive (in a manner of speaking) and plugs into a regular wall outlet for power (it connects to your wireless network; no wired connections are required). With some adjustments and a minimal amount of setup (it took me all of 15 minutes to get it going), you’ll be able to give it direct voice commands. Alexa can either answer your inquiries or (depending on your setup) control the lights in your house, control your thermostat, give you automatic news and sports updates, tell you the weather or your commuting time, or even point you to the nearest restaurant (down to the type – Belgian ale house, Indian, Chinese, etc.).

But with a little research and experimentation, you’ll find Alexa can do a lot more – and not just for your home (more on this in a moment).

As an old-timer, I’m amazed at this recent technological development, if for no other reason than that I can appreciate what’s involved. First off, I’ve been working with voice command / recognition software since it first came out back in the 1990s: things have come a long way. It used to be you had to spend about an hour just to ‘train’ the software/computer to recognize your voice (what with your inflections, accents, verbal idioms, etc.) and then more time getting it to do what you wanted it to do – open files, run basic computer commands, etc. And even then, it was rarely perfect: if you were hitting 95% accuracy, you were doing great.

With Alexa, there was no hesitation: no training. Alexa was out of the box and running down the road in mere minutes.

Damn; that’s powerful.

No matter who you are, so long as you speak the language it’s set for, it’ll respond. So literally out of the box, both my daughters and I were talking to and using Alexa. Even now, my guests – upon visiting – ask Alexa for the weather, sports scores and local news as a matter of course, just as they would ask anyone else.

But aside from being able to give you a host of information – such as cooking recipes, bartending (excuse me, “mixology”) recipes or random facts (‘on this date,…’) – with some adjustments and hardware / interface additions, Alexa can water your lawn and control or monitor your house alarms.

Sometimes, amusing situations can arise – such as when my younger daughter asked “Alexa: how old is the Earth?”

Alexa replied “The Earth is 5.35 Billion years old.”

“I knew it! Those people who keep saying that the Earth is only 7,000 years old don’t know what they’re talking about!”

So it’s all fun and games, right?

Not when you check out the IFTTT page for Alexa (IFTTT – “If This Then That” user programming routines). Alexa comes with the ability for folks to program basic interface commands, enabling users to link Alexa to various apps and create routines. Want something done automatically? With a little bit of simple programming, anyone can make their Alexa do things automatically with a mere voice command.
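To make that concrete, here’s a rough sketch (in Python, using the Flask web framework) of the kind of home-grown endpoint an IFTTT applet could call after hearing an Alexa trigger phrase – say, “Alexa, trigger water the lawn.” The endpoint path, the shared token and the sprinkler routine are all hypothetical placeholders; the point is simply that a voice command can end up as a plain web request that your own code handles.

```python
# A minimal sketch of a home-automation endpoint that an IFTTT applet
# (triggered by an Alexa phrase) could call via a webhook action.
# The route, the token and start_sprinklers() are hypothetical.
from flask import Flask, request, abort

app = Flask(__name__)
SHARED_SECRET = "replace-with-a-long-random-token"  # assumed shared secret

def start_sprinklers(minutes: int) -> None:
    # Placeholder for whatever actually talks to the sprinkler hardware.
    print(f"Sprinklers on for {minutes} minutes")

@app.route("/alexa/water-lawn", methods=["POST"])
def water_lawn():
    payload = request.get_json(silent=True) or {}
    if payload.get("token") != SHARED_SECRET:
        abort(403)  # ignore requests that don't carry the shared secret
    start_sprinklers(int(payload.get("minutes", 10)))
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```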

The potential for Alexa goes beyond just being a cool item for the average household: the potential for business applications is also well worth considering. Aside from stock indexes, one could create business services and routines both for the average user and for the business / service end of things. Already, there are ‘recipes’ for users to link to their Evernote and Todoist accounts, dictate short emails (and send them out) or dictate voice messages for Skype. As one example, I can set up and schedule calendar events on my Google calendar just by using my voice – and they’ll appear on all of my calendars (phone, computer, etc.) simultaneously.

I would not be surprised to see businesses – especially those who profess the notion of being ‘lean and mean’ – installing Echoes in their offices as a means to streamline operations (not to mention that Echoes could also be of good use for non-profit and governmental agencies as well).

In a manner of speaking, although this is not exactly new technology, the way it’s being recast is nothing short of remarkable. It shouldn’t surprise anyone that the Echo came from Amazon. After all, as I had previously written, Amazon and the United States Central Intelligence Agency (CIA) have been quietly working together for several years now, with Amazon’s in-house computer network now serving as the repository of the CIA’s records – and ground zero for a development project based in Vancouver, Canada aimed at true AI (Artificial Intelligence) utilizing quantum computing (https://shockwaveriderblog.wordpress.com/2012/10/11/the-cia-and-jeff-bezos-working-together-for-our-the-future/). Feel free to read my past posting on this subject matter: it’s well worth the read and helps one better appreciate what’s taking place now.

I cannot help but wonder if Alexa is but one minor result / spin-off from that ongoing effort. And granted, Alexa may sound awesome and smart, but it’s certainly not about to pass the Turing Test.

If Alexa is any indication, we are indeed entering a new age – the Age of Alexa.


“All The News That’s Fit To, Er, Write,…?”

 

 


 

So by now you’ve probably heard about how the Associated Press is rolling out a new means of reporting: an automated writing system (http://www.complex.com/tech/2014/06/ap-machines):

The AP has announced that it will soon switch to an automatic version of writing and reporting for its corporate earnings stories. A computer program that will be able to take a company’s numbers and produce a 150-300-word article about it will soon be tasked with penning the quarterly corporate earnings reports. The switch will be made in part because the program is now able to take the “by the numbers” information and produce a readable format suitable for its users.

The article then went on to assure us ‘no employees will face termination’ as a result of bringing about this change. 

Now to be clear, this does not mean robots will be appearing at press conferences to ask questions, take notes and write us stories about the events they’re covering. Rather, all this involves is the implementation of a series of computer programs reviewing and writing up linguistically basic stories based directly on corporate earnings reports. For the AP, what this means is that it can now “create more than 10 times the number of earnings reports in comparison to past quarters.” It’s kind of like this: feed the computer earnings reports, pull out a few adverbs and random additives and voilà! A cut-and-dried (read: boring) news report about corporate earnings.
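For the curious, the general idea is easy to mimic. Here’s a deliberately simple Python sketch of template-driven earnings writing – feed in the numbers, pick a verb based on which way they moved, and out comes a short, readable paragraph. The AP’s actual system is surely far more sophisticated; the company, figures and wording below are made up purely for illustration.

```python
# A toy illustration of template-driven earnings writing: numbers in,
# a short readable blurb out. Everything here is made up for illustration.
def earnings_blurb(company, quarter, revenue, prior_revenue, eps):
    change = (revenue - prior_revenue) / prior_revenue * 100
    if change > 0:
        movement = f"rose {change:.1f} percent"
    elif change < 0:
        movement = f"fell {abs(change):.1f} percent"
    else:
        movement = "was flat"
    return (
        f"{company} reported {quarter} revenue of ${revenue:,.0f}, "
        f"which {movement} from the prior quarter. "
        f"Earnings came in at ${eps:.2f} per share."
    )

print(earnings_blurb("Acme Corp.", "second-quarter", 1_250_000_000, 1_180_000_000, 0.87))
```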

But what does all this really mean?

Does this mean Skynet (of “The Terminator” movie fame) is taking over the media? Will our news soon be filtered through a series of electronic systems, removing the human element, so that we only hear and read ‘good news’ as opposed to knowing and seeing how bad things really are? Can we soon expect to see messages scrolling across our television screens reading “I’m sorry Dave, but you’ve seen enough porn for tonight”? Can we expect to see robots and machines inhabiting our news broadcasting services (although some may argue that’s already happening now), thus removing humans from the media broadcast altogether? (Cue the evil laughter: “MWAHAHAHAHHAAAAA!”)

Hardly.

But it is the beginning of a greater trend, reflective of the manner in which we view and interact with our world and each other.

Understand, in many ways this is a rather bold step into a domain traditionally held as sacrosanct, safe from the realm of robots / AI (Artificial Intelligence): writing.

We are (generally) taught that writing is thinking: you cannot write without some kind of thought or notion – however minimal. Whether one is writing about tag team wrestling or the philosophical nature of quantum mechanics, writing – even a mere grocery list – takes some thought.

Some wags would argue that perhaps it’s not so much that machines are getting smarter, but rather that humans are getting dumber, with robots writing about things that really don’t take a whole lot of thought – but this is not an accurate way of looking at things. Consider the necessity of corporate earnings reports now as compared with some 70 years ago: they didn’t exist in anything like the form we now consider normal – nay, necessary – yet producing them has become another vital task of our daily, modern lives. Now, with this new program, that’s one less thing on our collective plates.

Look around you: what is happening is that as civilization becomes more and more complex, the details which matter – making sure reports are produced, that there’s enough fuel calculated for a jet plane prior to flying, or that the triggers on our nuclear bombs are actually locked and secure – all of these and more are getting done automatically because, frankly, we’ve got a lot on our plates already. We carry on, safe in our assumption that all is being handled properly by machines, safe from the dreaded ‘human error’ factor. After all, the last thing we need to hear while eating our peanuts in tourist class somewhere over the Atlantic Ocean is the co-pilot telling the pilot, “I thought you figured out the fuel consumption rate!?”

But could this mean the end of writing as we know it? Not likely, but it raises an interesting question as it relates to the Turing Test: could we tell if something we’re reading was written by a machine or by a human? In which case, does this recent innovation introduced by the Associated Press presage the end of Expository Writing 101 and the pain of attending a 7:30 am college freshman writing class? (By the way, if anyone can test this specific hypothesis, please do so and contact me ASAP to share your results; I do, however, deny any responsibility.) And would such a trend suggest that the associate professors teaching such courses will soon be replaced by robots as well? (Not likely; even robots would find the work and pay insufferable.)

Rest easy: we’ve got a while to go before a machine-written item passes the Turing Test (at least for anything not involving 7:30 am Expository Writing 101).

Speaking as a professional researcher, however, I do note we are crossing a new threshold with regard to our automated tools: the border between the mundane and the dynamic. Consider: something regarded as boring and “dry” reading – say, for example, a corporate earnings report – is based on dynamic and fluid events, often involving complex factors that even diehard experts find baffling, with unexpected results.

And this is exactly the border the Associated Press’s new program is going to cross. Fortunes are made and lost on such reports, so we had better hope those “machines” are up to the task – and, more importantly, that the humans overseeing such services are making sure the correct and proper (read: accurate) adjectives and adverbs are being applied – otherwise some folks are going to find themselves flying over rough seas with no fuel.

 

 

The Race is On: Developing Quantum Computers (and Alternative Universes for Good Measure)


And so the race is on. Actually, it’s been on for some time now; it’s only now that we’re starting to see the ripples on the surface of what is otherwise a very deep and dark pool filled with very large creatures jostling for position.

It’s about processing.

It’s about the future.

Quantum computers would be able to solve problems much faster than any regular computer using the best currently known algorithms (such as those established via various neural network models). Quantum computers are totally different and unlike anything we’ve developed before. Give a regular computer enough power and it could be made to simulate any quantum algorithm – but it still wouldn’t be anything like a quantum computer.

Regular computers use bits; quantum computers use qubits, which are really funky, powerful things. The computational basis of the 500 qubits that might be found on a typical quantum computer, for example, would already be too large to be represented on a classical computer, because it would require 2^500 complex values to be stored. This is because it’s not just about the information a qubit is displaying; the state of being in which the qubit carries that information also plays into how it arrives at an answer to any given query.
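If you want a feel for why that number is so punishing, here’s a quick back-of-the-envelope calculation in Python. It simply counts how many complex amplitudes a classical machine would have to store to describe n qubits (2^n of them); the figure of roughly 16 bytes per complex value is an assumption for illustration.

```python
# Back-of-the-envelope: the state of n qubits is 2**n complex amplitudes.
# At ~16 bytes per complex number, classical storage grows exponentially.
for n in (10, 30, 50, 500):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n:>3} qubits -> {amplitudes:.3e} amplitudes (~{gigabytes:.3e} GB)")
```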

Bear with me, now,…

Although it may seem that qubits can hold much more information than regular bits, qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubit is measured (i.e., when an answer is derived), they can only be found in one of the possible configurations they were in before measurement.

Here’s an analogy: take a regular computer bit, with its black/white 0/1 configuration, as a rubber ball with one side black and the other side white. Throw it into the air: it comes back as either Black/0 or White/1. With qubits, it will still land as either Black/0 or White/1, but during the process it will have changed into all the colors of the rainbow while you’re watching it fly through the air. That’s the kicker with qubits: you can’t think of a qubit as being in only one particular state before measurement, since the fact that it was in a superposition of states before the measurement was made directly affects the possible outcomes of the computation. (And remember: the act of watching the ball fly through the air can also influence the result of the ball’s landing – a point we’ll discuss very shortly regarding our old buddy Werner Heisenberg,…)
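If it helps, the ball analogy can be played out as a toy classical simulation (Python with NumPy). A single qubit sits in an equal superposition of 0 and 1; measuring it “collapses” it to one outcome, with the probability of each outcome given by the squared magnitude of its amplitude. This is only a caricature of the real quantum mechanics, of course.

```python
# Toy classical simulation of measuring a qubit in equal superposition:
# each measurement collapses to 0 or 1 with probability |amplitude|**2.
import numpy as np

state = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2                      # [0.5, 0.5]

rng = np.random.default_rng()
outcomes = rng.choice([0, 1], size=1000, p=probabilities)
print("P(0) ~", (outcomes == 0).mean(), " P(1) ~", (outcomes == 1).mean())
```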

Quantum computers offer more than just the traditional ‘101010’ ‘yes no yes no yes no’ processing routines (which is also binary for the number 42, just in case anyone is reading this). Quantum computers do (in a manner of speaking) a ‘no yes maybe‘, because in quantum physics it’s about more than whether or not any given particle is there: there’s also the issue of probability – i.e., ‘yes it’s there’, ‘no it’s not’ and ‘it could be’. Quantum computers share similarities with non-deterministic and probabilistic computers, with the ability to be in more than one state simultaneously.

Makes you wonder what happens if we turn on a quantum computer: would it simply disappear? Or conversely, can we expect to see quantum computers appear suddenly in our universe for no apparent reason?

Doing homework will clearly never be the same with a quantum computer.

As Ars Technica points out (http://arstechnica.com/science/2013/03/quantum-computer-gets-an-undo-button/):

This (uncertainty) property of quantum mechanics has made quantum computing a little bit more difficult. If everything goes well, at the end of a calculation, a qubit (quantum bit) will be in a superposition of the right answer and the wrong answer. 

What this also translates to is that quantum computers offer a greater realm of questions and exploration, opening up more answers, more options and superior processing capabilities. Likely we’ll wind up asking questions of a quantum computer and getting answers we didn’t expect, leading to more avenues of thought.

In other words, you’re not going to see a quantum computer at your nearby Radio Shack any time soon.

So now let’s revisit that hairy dog notion of Heisenberg’s Uncertainty Principle as this plays directly into the heart of quantum computers:

One of the biggest problems with quantum experiments is the seemingly unavoidable tendency of humans to influence the situation and velocity of small particles. This happens just by our observing the particles, and it has quantum physicists frustrated. To combat this, physicists have created enormous, elaborate machines like particle accelerators that remove any physical human influence from the process of accelerating a particle’s energy of motion.

Still, the mixed results quantum physicists find when examining the same particle indicate that we just can’t help but affect the behavior of quanta — or quantum particles. Even the light physicists use to help them better see the objects they’re observing can influence the behavior of quanta. Photons, for example — the smallest measure of light, which have no mass or electrical charge — can still bounce a particle around, changing its velocity and speed.

Think about it: now we’re introducing computers based – in large part – upon this technology.

We’re approaching Hitchhiker’s Guide to the Galaxy technology here: the kind of thing where we ask one question and get an answer that’s not what we’re expecting.

Improbability drive, anyone?

The race for quantum computers is big; this isn’t just some weird science fiction notion or a discussion in some obscure blog. As we reported here at ShockwaveRiderblog back in October of 2012, the CIA and Jeff Bezos of Amazon were working on a formal agreement to develop a quantum computer. Now it has just been announced that the CIA is going to ‘buy’ a good portion of Amazon’s storage services (http://www.businessinsider.com/cia-600-million-deal-for-amazons-cloud-2013-3). Meanwhile (as also reported in this blog last week), Google bought out the Canadian firm DNNResearch expressly to work on the development of neural networks (and with Google’s rather substantial storage capacity, this is also an interesting development). Meanwhile, the founders of BlackBerry just announced an initiative to pump some $100 million into quantum computing research (http://in.reuters.com/article/2013/03/20/quantumfund-lazaridis-idINDEE92J01420130320). Gee, you’d think they’d pump money into keeping BlackBerry afloat, but apparently there’s more money to be made elsewhere,…

And throughout all of this, what some scientists involved in this business won’t tell you up front (but are quietly saying in their respective back rooms over their coffee machines) is that nobody really knows what happens if / when we develop a quantum computer and turn it on.

Understand: we’re potentially talking about a computer where, if/when we attempt a Turing Test with it, we could ask it how the weather is and get answers that seemingly don’t make any sense – until later on, when we realize it’s been giving us the answers all along: we were just too dumb to realize it was telling us what the weather was likely to be for the next month.

Note the distinction: we ask how the weather is, and the (potential) quantum computer gives us an answer we didn’t expect because we didn’t frame the question in a manner appropriate for that given moment.

Quantum computing is going to be a very strange place indeed.

Maybe the final answer is indeed going to be 42.

There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.

– Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy

Google and Neural Networks: Now Things Are Getting REALLY Interesting,…


Back in October 2002, I appeared as a guest speaker at the Chicago (Illinois) URISA conference. The topic I spoke about at the time was the commercial and governmental applicability of neural networks. Although it was well received (the audience actually clapped, some asked to have pictures taken with me, and nobody fell asleep), at the time it was regarded as, well, out there. After all, who the hell was talking about – much less knew anything about – neural networks?

Fast forward to 2014 and here we are: Google recently (and quietly) acquired a start-up – DNNResearch – whose primary purpose is the commercial application and development of practical neural networks.

Before you get all strange and creeped out, neural networks are not brains floating in vials, locked away in some weird, hidden laboratory – à la The X-Files – cloaked in poor lighting (cue the evil laughter: BWAHAHAHA!), but rather high-level and complicated computer models attempting to simulate (in a fashion) how we think about, approach and solve problems.

Turns out there’s a lot more to this picture than meets the mind’s eye – and the folks at Google know this all too well. As recently reported:

Incorporated last year, the startup’s website (DNNResearch) is conspicuously devoid of any identifying information — just a blank, black screen. 

That’s about it; no big announcement, little or no mention in any major publications. Try the website for yourself: little information can be gleaned. And yet, looking into the personnel involved, we’re talking about some serious, substantial talent here:

Professor Hinton is the founding director of the Gatsby Computational Neuroscience Unit at University College in London, holds a Canada Research Chair in Machine Learning and is the director of the Canadian Institute for Advanced Research-funded program on “Neural Computation and Adaptive Perception.” Also a fellow of The Royal Society, Professor Hinton has become renowned for his work on neural nets and his research into “unsupervised learning procedures for neural networks with rich sensory input.”

So what’s the fuss? Read on,…

While the financial terms of the deal were not disclosed, Google was eager to acquire the startup’s research on neural networks — as well as the talent behind it — to help it go beyond traditional search algorithms in its ability to identify pieces of content, images, voice, text and so on. In its announcement today, the University of Toronto said that the team’s research “has profound implications for areas such as speech recognition, computer vision and language understanding.”

This is big; this is very similar to when Nikola Tesla’s company and assets / models (along with Tesla himself agreeing to come along) were bought out by George Westinghouse – and we all know what happened then: using Tesla’s Alternating Current (AC) model, the practical development and application of large-scale electrical networks on a national and international scale took place.

One cannot help but sense that the other Google luminary – Ray Kurzweil – is somehow behind this, and for good reason; assuming we’re talking about those who seek to attain (AI) singularity, neural networks would be one viable path to take.

What exactly is a neural network and how does it work? From my October 2002 URISA presentation paper:

Neural networks differ radically from regular search engines, which employ ‘Boolean’ logic. Search engines are poor relatives to neural networks. For example, a user enters a keyword or term into a text field – such as the word “cat”. The typical search engine then searches for documents containing the word “cat”. The search engine simply searches for the occurrence of the search term in a document, regardless of how the term is used or the context in which the user is interested in the term “cat”, rendering the effectiveness of the information delivered minimal. Keyword engines do little but seek words – which ultimately becomes very manually intensive, requiring users to continually manage and update keyword associations or “topics” such as
cat = tiger = feline or cat is 90% feline, 10% furry.

Keyword search methodologies rely heavily on user sophistication to enter queries in fairly complex and specific language and to continue doing so until the desired file is obtained. Thus, standard keyword searching does not qualify as neural networks, for neural networks go beyond by matching the concepts and learning, through user interface, what it is a user will generally seek. Neural networks learn to understand users’ interest or expertise by extracting key ideas from the information a user accesses on a regular basis.
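To make the contrast in that passage concrete, here’s a small Python sketch. The first half does a plain keyword match for “cat” and misses a document about tigers; the second half ranks the same documents by similarity against made-up, three-number “concept vectors” – crude stand-ins for the learned representations a real neural approach would build from data. The documents and vectors are invented for illustration.

```python
# Keyword search vs. a toy concept-similarity lookup.
# The documents and the three-number "concept vectors" are made up;
# a real neural approach would learn such representations from data.
import numpy as np

docs = {
    "doc1": "the tiger is a large feline found in asia",
    "doc2": "my cat sleeps on the couch all day",
    "doc3": "quarterly report on corporate earnings",
}

# 1) Boolean keyword search: only documents containing the literal word hit.
query = "cat"
keyword_hits = [name for name, text in docs.items() if query in text.split()]
print("keyword hits:", keyword_hits)            # misses doc1 entirely

# 2) Toy concept vectors: (feline-ness, domestic-ness, finance-ness).
vectors = {
    "doc1": np.array([0.9, 0.1, 0.0]),
    "doc2": np.array([0.8, 0.9, 0.0]),
    "doc3": np.array([0.0, 0.0, 1.0]),
}
query_vec = np.array([0.9, 0.5, 0.0])           # "cat" as a concept

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(vectors, key=lambda d: cosine(query_vec, vectors[d]), reverse=True)
print("concept ranking:", ranked)               # doc1 and doc2 outrank doc3
```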

So let’s bottom line it (and again from my presentation paper):

Neural networks try to imitate human mental processes by creating connections between computer processors in a manner similar to brain neurons. How the neural networks are designed and the weight (by type or relevancy) of the connections determines the output. Neural networks are digital in nature and function upon pre-determined mathematical models (although there are ongoing efforts underway for biological computer networks using biological material as opposed to hard circuitry). Neural networks work best when drawing upon large and/or multiple databases within the context of fast telecommunications platforms. Neural networks are statistically modeled to establish relationships between inputs and the appropriate output, creating electronic mechanisms similar to human brain neurons. The resulting mathematical models are implemented in ready to install software packages to provide human-like learning, allowing analysis to take place.
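For readers who want to see the “weighted connections” idea in miniature, here’s a short NumPy sketch: a network with one small hidden layer whose connection weights are adjusted, step by step, until four inputs map to the right outputs (the classic XOR toy problem). It’s purely illustrative – a few dozen lines of arithmetic, not anything like the systems Google is after.

```python
# Miniature neural network: weighted connections between simple units,
# adjusted by gradient descent so inputs map to desired outputs (XOR).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)                # hidden-layer activations
    out = sigmoid(hidden @ W2 + b2)              # network output
    d_out = (out - y) * out * (1 - out)          # output-layer error signal
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden;   b1 -= 0.5 * d_hidden.sum(axis=0)

print(out.round(3).ravel())                      # heads toward [0, 1, 1, 0]
```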

Understand, neural networks are not to be confused with AI (Artificial Intelligence), but the approach employed therein does offer viable means and models – models with rather practical applications reaching across many markets: consumer, commercial, governmental and military.

And BTW: note the highlighted sections above – and reread the paragraph again with the realization that Google is moving into this arena; you’ll appreciate the implications.

But wait; there’s more.

From the news article:

For Google, this means getting access, in particular, to the team’s research into the improvement of object recognition, as the company looks to improve the quality of its image search and facial recognition capabilities. The company recently acquired Viewdle, which owns a number of patents on facial recognition, following its acquisition of two similar startups in PittPatt in 2011 and Neven Vision all the way back in 2006. In addition, Google has been looking to improve its voice recognition, natural language processing and machine learning, integrating that with its knowledge graph to help develop a brave new search engine. Google already has deep image search capabilities on the web, but, going forward, as smartphones proliferate, it will look to improve that experience on mobile.

So, let’s recap: we’re talking about:

* a very large information processing firm with seriously deep pockets and arguably what is probably one of the largest (if not fastest) networks ever created;

* a very large information processing firm, working with folks noted for their views and research on AI singularity, purchasing a firm on the cutting edge of neural networks;

* a very large information processing firm also purchasing a firm utilizing advanced facial and voice recognition.

I’m buying Google stock.

What’s also remarkable (and somewhat overlooked; kudos to TechCrunch for noting this) is that Google had, some time ago, funded Dr. Hinton’s research work through a small initial grant of about $600,000 – and then went on to buy out Dr. Hinton’s start-up company.

Big things are afoot – things with tremendous long-term ramifications for all of us.

Don’t be surprised if something out in Mountain View, California passes a Turing Test sooner than anybody expects.

For more about Google’s recent purchase of DNNResearch, check out this article:

http://techcrunch.com/2013/03/12/google-scoops-up-neural-networks-startup-dnnresearch-to-boost-its-voice-and-image-search-tech/

To read my presentation paper on neural networks and truly understand what this means – along with some of the day-to-day applications neural networks offer – check out this link:

http://www.scribd.com/doc/112086324/The-Ready-Application-of-Neural-Networks