Tag Archives: AI

The Age of Alexa


As a birthday gift for my daughters, Flora and Cora, their grandfather purchased an Amazon Echo (aka "Alexa").

If you're not familiar, the Amazon Echo / Alexa is a voice-controlled, free-standing computer (with a nice speaker system to boot!) that links to your wireless network. Measuring some 9.3 inches tall and 3.3 inches in diameter, Alexa can fit just about anywhere. It also looks attractive (in a manner of speaking) and plugs into a regular wall outlet for power; since it connects over your wireless network, no wired connections are required. With some adjustments and a minimal amount of programming (it took me all of 15 minutes to get it going), you'll be able to give direct voice commands. Alexa can answer your inquiries or (depending on your setup) control the lights in your house, control your thermostat, give you automatic news and sports updates, tell you the weather and your commuting time, or even point you to the nearest restaurant (down to the type – Belgian ale house, Indian, Chinese, etc.).

But with a little research and experimentation, you'll find Alexa can do a lot more – and not just for your home (more on this in a moment).

As an old-timer, I'm amazed at this recent technological development, if for no other reason than I can appreciate what's involved. First off, I've been working with voice command / recognition software since it first came out back in the 1990s: things have come a long way. It used to be that you had to spend about an hour just to 'train' the software to recognize your voice (what with your inflections, accents, verbal idioms, etc.), and then more time getting it to do what you wanted – open files, execute basic computer commands, and so on. And even then, it was rarely perfect: if you were hitting 95% accuracy, you were doing great.

With Alexa, there was no hesitation: no training. Alexa was out of the box and running down the road in mere minutes.

Damn; that’s powerful.

No matter who you are, so long as you speak the language it's set for, it'll respond. So literally out of the box, both my daughters and I were talking to and using Alexa. Even now, my guests – upon visiting – ask Alexa for the weather, sports scores and local news as a matter of course, just as they would ask anyone else.

But aside from giving you a host of information – cooking recipes, bartending (excuse me, "Mixology") recipes, random facts ('on this date,…') – with some adjustments and hardware / interface additions, Alexa can water your lawn or control and monitor your house alarms.

Sometimes, amusing situations can arise – such as when my younger daughter asked “Alexa: how old is the Earth?”

Alexa replied “The Earth is 5.35 Billion years old.”

“I knew it! Those people who keep saying that the Earth is only 7,000 years old don’t know what they’re talking about!”

So it’s all fun and games, right?

Not when you check out the IFTTT page for Alexa (IFTTT – "If This Then That" – a service for user-programmable routines). Alexa comes with the ability for folks to program basic interface commands, enabling users to link Alexa to various apps and also to create routines. Want something done automatically? With a little bit of simple programming, anyone can make their Alexa do things automatically with a mere voice command.
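For the programmers out there, here's a minimal sketch of the plumbing behind such routines, using IFTTT's Webhooks ("Maker") service. The event name alexa_goodnight and the smart-lights scenario are my own illustrative assumptions; you'd substitute the personal key from your own IFTTT account:

```python
# A minimal sketch (assumptions flagged above): fire an IFTTT Webhooks event
# that an applet can react to - say, one wired to shut off smart lights.
# Requires the 'requests' package and a Webhooks key from
# https://ifttt.com/maker_webhooks.
import requests

IFTTT_KEY = "YOUR_WEBHOOKS_KEY"   # placeholder for your personal key
EVENT = "alexa_goodnight"         # hypothetical event name for this example

def trigger(event: str, key: str, **values) -> None:
    """POST to the Webhooks trigger URL; value1-value3 are the optional
    payload fields IFTTT passes along to the applet."""
    url = f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"
    requests.post(url, json=values, timeout=10).raise_for_status()

if __name__ == "__main__":
    trigger(EVENT, IFTTT_KEY, value1="living room")
```

On the Alexa side, the same applet can just as easily be wired to a spoken trigger phrase – the point being that the 'programming' involved amounts to a single HTTP request.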

The potential for Alexa goes beyond just being a cool item for the average household: the potential for business applications is also well worth considering. Aside from stock indexes, one could create business services and routines both for the average user and for the business / service end of things. Already, there are 'recipes' letting users link to Evernote and Todoist, dictate short emails (and send them out), or dictate voice messages for Skype. As one example, I can set up and schedule calendar events on my Google calendar just by using my voice – and they'll appear on all of my calendars (phone, computer, etc.) simultaneously.

I would not be surprised to see businesses – especially those who profess the notion of being 'lean and mean' – installing Echoes in their offices as a means to streamline operations (not to mention that Echoes could also be of good use to non-profit and governmental agencies as well).

In a manner of speaking, although this is not exactly new technology, the way it's being recast is nothing short of remarkable. It shouldn't surprise anyone that the Echo came from Amazon. After all, as I have previously written, Amazon and the United States Central Intelligence Agency (CIA) have been quietly working together for several years now, with Amazon's in-house computer network now serving as the repository of the CIA's records – and ground zero for a development project based in Vancouver, Canada for true AI (Artificial Intelligence) development utilizing quantum computing (https://shockwaveriderblog.wordpress.com/2012/10/11/the-cia-and-jeff-bezos-working-together-for-our-the-future/). Feel free to read my past posting on this subject matter: it's well worth the read and helps one better appreciate what's taking place now.

I cannot help but wonder if Alexa is but one minor result / spin-off of that ongoing effort. And granted, Alexa may sound awesome and smart, but it's certainly not about to pass the Turing Test.

If Alexa is any indication, we are indeed entering a new age  – the Age of Alexa.

Ashley Madison: Stupidity Singularity Attained


A good friend of mine contacted me some time ago; he's a professional computer programmer with an extensive background and knowledge of many things relating to computers and people – and he spoke of the remarkable notion that Ashley Madison consisted largely not of people speaking with people, but of people speaking with bots. Now the news is coming out, and indeed my friend was right on the money.

I must confess: at first I chuckled at the notion – and then immediately stopped laughing. The implications were rather remarkable, not least the tremendous legal impact for the folks at Ashley Madison, who are facing a series of lawsuits likely to bankrupt them – i.e., the 'I paid to cheat with a person, not a bot!' legal argument.

And now, after the review(s) of the data dump(s), the details are coming out. Irony can be tough: the so-called hackers who obtained the data clearly intended to expose the folks conducting their illicit affairs (among them some thousands of emails involving federal officials and employees – the potential for blackmail is ripe).

And the joke's on everybody: it was just people talking to a bunch of robots – or 'bots' as they are called.

As reported in a recent issue of Gizmodo, several interesting figures came out (http://gizmodo.com/ashley-madison-code-shows-more-women-and-more-bots-1727613924):

Number of Times Bots Sent People Messages on Ashley Madison:

Male: 20,269,675

Female: 1,492

So less than 1% of conversations on Ashley Madison were between people – and nobody noticed.

And not surprisingly, many of the accounts on Ashley Madison were not human either: turns out they were mostly "bots".

Number of Bot Accounts in Ashley Madison:

Male: 43

Female: 70,529

So again, less than 1% of accounts were actually human females.

Just what is a bot?

Simply put, "bots" are software applications that perform automated tasks. Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone (that's the official description).

Bots are far from perfect; normally, they are irritating – akin to the irritation you get when you're on the phone with an 'automated attendant' while trying to pay your bill or get information about your mortgage account, only to find there are no humans inhabiting the mortgage company, just a system that keeps failing to understand what it is you want.

Bots simply repeat what they are programmed to do – such as saying the same old statements in response to whatever queries a user may give: 'Hi', 'hello', 'so what brings you here', 'what's your sign?', etc. A bot programmer types in what the bot is to say, when and how, in response to a specific set of words or phrases – and the bot goes about doing its thing. Ashley Madison users who logged on and started a chat with what they assumed was an attractive female looking for an affair wound up (statistically speaking) conversing with what was essentially a program pretending to be a human being.
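To make concrete just how little is going on under the hood, here's a toy sketch of that kind of canned-response bot (my own illustration – no claim that Ashley Madison's actual code looked anything like this):

```python
# Toy canned-response bot: match a few keywords, otherwise fall back to
# generic chatter. This is essentially the entire "intelligence" involved.
import random

RULES = {
    "hi": "Hey there ;)",
    "hello": "Hello, handsome.",
    "sign": "I'm a Libra. You?",
}
FALLBACKS = ["So what brings you here?", "What's your sign?", "Tell me more..."]

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned in RULES.items():
        if keyword in text:
            return canned
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    for msg in ["Hi!", "What do you look like?"]:
        print(f"user: {msg}")
        print(f"bot:  {reply(msg)}")
```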

And yet Ashley Madison users kept on paying their fees and membership costs, never noticing that they weren't speaking with a human (which makes you wonder about the quality of conversations nowadays) – which raises a question, à la the Turing Test.

'The Turing Test' was first proposed by Alan Turing, the famed English mathematician who was instrumental in breaking the 'Enigma' cipher machine used by the Nazis in World War II, his wartime code-breaking machinery being a forerunner of the modern computer. Turing mused about the nature of computers and the possibility of artificial intelligence (AI), suggesting that if one really wanted to know whether a computer was an AI, all one would have to do is hold a conversation with it. If the human conversing with the machine cannot tell the difference, then arguably that computer is an AI that has attained a degree of intelligence. At this point, the computer will have (supposedly) attained 'singularity'.

The term 'singularity' refers to that magical moment when a computer (or several) attains true consciousness and self-awareness. Kind of like if Deep Blue were to, after defeating another human at chess, spout out 'Loser!' – and actually mean it.

For the record, I don't agree that bots are a form of AI, or that the bots at Ashley Madison attained any degree of singularity. But all of this begs several questions:

1) Is it a matter of bots / "computers" becoming smarter, or of humans becoming dumber? Given the type of conversations which took place on Ashley Madison, it would appear that bots mimicking humans are now far more prevalent than ever before, and that the humans programming them know their audience all too well – i.e., the stupid idiots. In that sense, perhaps the 'singularity bar' has been lowered, leaving one to wonder if it is the humans who need "singularity" more than the computers.

2) Given the number of bots involved and how quickly they proliferated, are these bots actually attempting to establish relationships with their human hosts? Not likely; who'd want to hang out with a bunch of idiots paying money and failing to notice the difference between a robot and an actual human?

3) How many hook-ups were actually attained through the offices of Ashley Madison? Surprisingly, there were some made, but not between bots and humans. Evidently, Ashley Madison had a special bot designed to make such one-on-one connections (from the article):

“RunChatBotXmppGuarentee.service.php,” apparently designed just for interactions with customers who paid the premium $250 for a “guaranteed affair.” When I checked the code, I found Mr. Falcon was right. It appears that this bot would chat up the man, urge him to pay credits, and then pass him along to what’s called an “affiliate.” Likely the affiliate is a third-party that provides a real person for the man to chat with. It might also be connecting him to an escort service.

In other words, Ashley Madison was a front for escort services; this oughta prove interesting to a number of local and state prosecutors.

And the science doesn’t just stop there:

Earlier this year, one Ashley Madison engineer spent a couple of days mocking up a possible system for paying actual human women for engaging the men. The code calculates a 'FemaleValue' (the percentage credited to the woman's account) based on 'MaleProfit' (the amount the man pays to Ashley Madison). If the woman engages the man within 20 to 30 minutes of the time he buys credits, she'll be credited with 5 percent of the profit. It doesn't appear that this system was deployed, but it was obviously something Ashley Madison developers were thinking about.

This is known in scientific circles as "The Horniness Factor": the harder the male appendage, the more likely he'll pay out cold cash for, er, tension release.

4) Did Alan Turing ever consider the possibility of "bots"? Not likely – and especially not the degree to which some folk would fail to notice they're conversing with a bot and not an actual human.

And speaking of Alan Turing: Ashley Madison made it a point to discourage gay male 'cruising', for the only pairing options available to people logging onto Ashley Madison are the following:

1: Attached Female Seeking Males
2: Attached Male Seeking Females
3: Single Male Seeking Attached Females
4: Single Female Seeking Attached Males
5: Attached Male Seeking Males
6: Attached Female Seeking Females

…which kind of makes one wonder why they left out this potentially profitable market segment. Squeamishness on the part of Ashley Madison over gay sex? Fear of drawing unwanted attention? Gee, kind of late for that. This also may suggest that the owners of Ashley Madison are dyed-in-the-wool Republicans who believe in old-fashioned traditional values: ripping off men who want to have sex with women, just as it has been done for centuries (although lesbianism is cool; makes for fun threesomes).

So, to summarize:

  • Ashley Madison became a victim of their own numerous little Frankenstein monsters: the bots.
  • The bots, once released, developed an entirely new degree of human inter-relationship: sex with machines. Users logging on wound up paying good money to converse with robots.
  • These same users – owing to their lack of attention – had no idea they had just participated in a low-level Turing Test.

And while the bots introduced the executive staff at Ashley Madison to a whole new level of legal and financial pain, they remind us once again of that important lesson:

Just as individuals have to manage their urges, so too must larger entities learn to manage their bots.

“All The News That’s Fit To, Er, Write,…?”


So by now you’ve probably heard about how the Associated Press is rolling out a new means of reporting: an automated writing system (http://www.complex.com/tech/2014/06/ap-machines):

The AP has announced that it will soon switch to an automatic version of writing and reporting for its corporate earnings stories. A computer program that will be able to take a company’s numbers and produce a 150-300-word article about it will soon be tasked with penning the quarterly corporate earnings reports. The switch will be made in part because the program is now able to take the “by the numbers” information and produce a readable format suitable for its users.

The article then went on to assure us that 'no employees will face termination' as a result of bringing about this change.

Now to be clear, this does not mean robots will be appearing at press conferences to ask questions, take notes and write stories about the events they're covering. Rather, all this involves is the implementation of a series of computer programs reviewing and writing up linguistically basic stories based directly on corporate earnings reports. For the AP, what this means is that it can now "create more than 10 times the number of earnings reports in comparison to past quarters." Kind of like: feed the computer earnings reports, pull out a few adverbs and random additives and voilà! A cut-and-dried (read: boring) news report about corporate earnings.
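Here's a crude sketch of the principle (my own, not the AP's actual software): numbers in, boilerplate sentences out.

```python
# A hedged, minimal sketch of template-driven earnings prose. The real
# systems are far more elaborate, but the principle is the same.
def earnings_story(company: str, quarter: str, revenue: float,
                   prior_revenue: float, eps: float) -> str:
    change = (revenue - prior_revenue) / prior_revenue * 100
    verb = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported {quarter} revenue of ${revenue:,.0f} million, "
        f"which {verb} {abs(change):.1f} percent from a year earlier. "
        f"Earnings came to ${eps:.2f} per share."
    )

if __name__ == "__main__":
    # Hypothetical numbers, purely for illustration.
    print(earnings_story("Acme Corp.", "second-quarter", 412.0, 388.0, 1.07))
```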

But what does all this really mean?

Does this mean Skynet (of "The Terminator" movie fame) is taking over the media? That our news will soon be filtered through a series of electronic systems, removing the human element, whereby we will only hear and read 'good news' as opposed to knowing and seeing how bad things really are? Can we soon expect messages scrolling across our television screens: "I'm sorry Dave, but you've seen enough porn for tonight"? Can we expect robots and machines to inhabit our news broadcasting services (although some may argue that's already happening now), thus removing humans from media broadcasting altogether? (Cue the evil laughter: "MWAHAHAHAHHAAAAA!")

Hardly.

But it is the beginning of a greater trend, reflective of the manner in which we view and interact with our world and each other.

Understand: in many ways this is a rather bold step into a domain traditionally held as sacrosanct, safe from the realm of robots / AI (Artificial Intelligence): writing.

We are (generally) taught that writing is thinking: you cannot write without some kind of thought or notion, however minimal. Whether one is writing about tag-team wrestling or the philosophical nature of quantum mechanics, writing – even a mere grocery list – takes some thought.

Some wags would argue that perhaps it's not so much that machines are getting smarter, but rather that humans are getting dumber, with robots writing about things that really don't take a whole lot of thought – but this is not an accurate way of looking at things. Consider the corporate earnings report: some 70 years ago it didn't exist anywhere in the form we now consider normal – nay, necessary – yet producing such reports is another vital part of our daily, modern lives. Now, with this new program, it's one less thing on our collective plates.

Look around you: as civilization becomes more and more complex, the details which matter – making sure reports are produced, that a jet's fuel has been calculated before flying, or that the triggers on our nuclear bombs are actually locked and secure – all of these and more are getting done automatically because, frankly, we've got a lot on our plates already. We carry on, safe in our assumption that all is being handled properly by machines, safe from the dreaded 'human error' factor. After all, the last thing we need to hear while eating our peanuts in tourist class somewhere over the Atlantic Ocean is the co-pilot telling the pilot, "I thought you figured out the fuel consumption rate!?"

But could this mean the end of writing as we know it? Not likely, but it raises an interesting question as it relates to the Turing Test: could we tell whether something we're reading was written by a machine or by a human? In which case, does this recent innovation introduced by the Associated Press presage the end of Expository Writing 101 and the pain of attending 7:30 am college freshman writing class? (By the way, if anyone can test this specific hypothesis, please do so and contact me ASAP to share your results; I do, however, deny any responsibility.) And would such a trend suggest that the associate professors teaching such courses will soon be replaced by robots as well? (Not likely; even robots would find the work and pay insufferable.)

Rest easy: we've got a while to go before a machine-written item passes the Turing Test (at least for anything not involving 7:30 am Expository Writing 101).

Speaking as a professional researcher, however, I do note we are crossing a new threshold with regard to our automated tools: the border between the mundane and the dynamic. Consider: something considered boring and "dry" reading – say, corporate earnings reports – is based on dynamic and fluid events, often involving complex factors that even diehard experts find baffling, with unexpected results.

And this is exactly the border the Associated Press's new program is going to cross. Fortunes are made and lost on such reports, so we'd better hope those "machines" are up to the task – and, more importantly, that the humans overseeing such services are making sure the correct and proper (read: accurate) adjectives and adverbs are being applied – otherwise some folk are going to find themselves flying over rough seas with no fuel.


The Party’s Over: It’s A New Generation Now


And so the fallout from Edward Snowden continues. As the saga draws on (is he about to become a Russian citizen or not?) we overlook the bigger story: the Internet, as we know it, is dead.

As reported in The Guardian, the Internet is facing several inexorable trends: balkanization along nationalistic lines, governmental overreach and outright commercial control.

When first instituted, the Internet was regarded as an open, totally free place of informational exchange: an 'Interzone' of sorts (to borrow a term from William S. Burroughs). But as time marches on, this is no longer accurate. Now, China and other nations routinely censor and control the input and output of Internet access: Twitter is throttled, Google is curbed, along with a host of other outlets. In some nations, the notion of a free and open Internet is practically banned outright, while in the so-called bastions of freedom (the United States, Great Britain and Western Europe as a whole) Internet surveillance is now the norm.

In the meantime, we're starting to see pricing schemes reflective of the (overlooked) class system: if you want more Internet access (or faster access), you can expect to pay more for it. Libraries both domestically and internationally are facing cutbacks, limiting access even more for those who do not possess a computer, while premiums are being put in place for those who wish to participate in the so-called medium of 'free exchange'.

In John Naughton’s excellent article, “Edward Snowden’s Not the Story. The Fate of the Internet Is” (http://www.theguardian.com/technology/2013/jul/28/edward-snowden-death-of-internet) these issues were illustrated with a striking clarity.

And if you think you’re safe reading this article, better start changing the way you think. Of course, there’s the old chestnut: if you’re doing nothing wrong, then there’s nothing to worry about.

Wrong.

People make mistakes, especially in government, law enforcement and the military. It's not uncommon for wrongful arrests to take place, false accusations to spread or outright misunderstandings to occur, leaving in their wake ruined lives, reputations and personal financial disasters.

And now, as recently reported by Glenn Greenwald, low-level NSA (National Security Agency) employees can readily access emails, phone records and other information. (Really? No kidding!) So if you're a file clerk who happens to be working for the NSA, you can review your family's, friends' or neighbors' phone records, Internet trolling history or other information (such as keeping tabs on that girl who dumped you last month).

If you just happen to be involved in a domestic dispute or a lawsuit with a government or corporate entity, expect to see your records accessed and reviewed as a matter of course.

It's obvious that 'file access' of these and other types routinely takes place at various levels of government within the United States, beyond just the federal level. Sometimes the data accessed is utilized for political purposes: somebody running for office seeking out information about a worthy adversary. Other times it's for personal reasons: divorce, outright personal hostility, an agenda of revenge. Don't think it can't happen: it does – and it happens more often than folks care to admit. Local governments and their officials have increasingly been caught reviewing private citizens' records through such supposedly secure databases as NCIC (National Crime Information Center), credit history lookups, billing histories and a host of other sources.

But what is remarkable is the lack of public response. You'd think that with Glenn Greenwald's recent exposé there'd be a bigger outcry. In fact, just the opposite: we're witnessing a generational change. What was once a sacred domain – privacy – is now becoming a thing of the past. Younger generations are surrendering their privacy in a multitude of ways – putting up pictures of their 'lost' weekend on Facebook; running commentary and personal attacks on social boards; personal commentary depicting their sexual activity or other 'personal' issues on their Twitter accounts – the list goes on.

Although privacy is still a sore point with a number of folks, the younger generation coming up is akin to the old-timers who lived through the atomic age: that generation held a diffident view of life, expecting to be blown up at some point. Now, in the age of Big Brother, the younger generation is becoming inured to the notion of being watched 24 x 7, going about their business and even posting some of their more intimate scenes in public settings because, well, that's what a lot of people do.

This is one of the fallouts of living in the Age of Surveillance: one becomes used to being watched and, in fact, embraces it to the point of simply letting it all hang out. Expecting our records to be reviewed and exposed is something many now take for granted. Sure, folks aren't thrilled by it, but what are you gonna do about it? – so goes the argument.

All of this is bad enough, but add into the mix the notion of AI (Artificial Intelligence) and bizarre (disturbing) alliances – such as the CIA (Central Intelligence Agency) and Amazon coming together (see my prior post on this development), along with Google's all-out efforts to develop AI (likewise posted about earlier) – and things take on a darker trend: soon it will be more than just being able to read your information, but actually reading who you are – and what you're really about, even if you don't know yourself.

Prediction: expect Internet profiling to become the new norm. Just as we've witnessed the distasteful practice of racial profiling undertaken by state law enforcement officials on the national highways, we can expect to see something similar taking place in the coming years via our records, our book and music purchases, and any other activity we undertake.

So next time, if you can, remember to bend over and give the camera a moon; we all could use a laugh.

Let’s all give the AI’s something to mull over.

Bank Robbery as a Relative Notion

A long, long time ago, in a place far, far away (called the 1980s), an (infamous) collective of anarchistic technofetishists known as "hackers" developed.

At the time, the home-based consumer computer (not to mention the telephone system with its BBSs – Bulletin Board Systems!) was new and exciting: the now old and retired POTS (Plain Old Telephone System) was THE game in town, with the gathering of information and the joy of learning new routines the primary goals. Various stratagems and means were utilized, including dumpster-diving (going through the telephone company's trash), 'social engineering' (a fancy term for sweet-talking somebody into giving you restricted access), regular stops at the nearest 'Rat' (Radio) Shack, and 'trade gatherings' where others of 'their' kind would come together.

All this is mentioned in light of the recent news regarding a group of hackers involved in a massive worldwide effort against banks, to the (publicly reported and admitted) amount of $45 million.

As the so-called experts point out:

Hackers got into bank databases, eliminated withdrawal limits on pre-paid debit cards and created access codes. Others loaded that data onto any plastic card with a magnetic stripe — an old hotel key card or an expired credit card worked fine as long as it carried the account data and correct access codes.

A network of operatives then fanned out to rapidly withdraw money in multiple cities, authorities said. The cells would take a cut of the money, then launder it through expensive purchases or ship it wholesale to the global ringleaders. Lynch didn't say where they were located.

Some things still haven’t changed; nothing new here.

The idea of using a plastic card with a pre-coded magnetic stripe is as old as dirt itself; as to how this is done, much can be found through various sources.

As to accessing banking records to undertake such things (after all, the only way this job could be pulled off is by matching actual account information to the physical magnetic cards used for withdrawing cash): during the 1990s, Citibank's interoffice telephone exchange was openly used by "hackers" for free conference calls – openly planning their next round of activities, exchanging chit-chat or teaching each other the latest trends and routines, no different from any other major corporate personnel utilizing a corporate telephone network. (It's worth noting that, at the time, users had to be mindful of slight distance-delay differentials owing to Citibank's then-odd practice of routing all its calls through its Paris, France office network.)

Any system or service is only as secure as its people make it.

As for accessing bank records: why stop at digging in when you can have the information come to you? Some years ago, a group of hackers went one step further, actually setting up fake ATMs in shopping malls and other public areas. The average user would go to withdraw money, only to be told that the machine was out of service; the information the user had entered was then stored and later encoded onto a magnetic stripe card for withdrawal. (These were among a sub-group who, as part of their routine, would withdraw cash from ATMs while wearing masks of such individuals as Ronald Reagan, Richard Nixon, zombies or a host of others, for the amusing benefit of bank security cameras.)

During the 1990s, banks faced a situation wherein "hackers" (ah, that word again) were accused of replacing security cameras with cameras of their own, 'shoulder surfing' over users to capture account information (an insidious procedure which may sound perfectly suited for nefarious purposes, but in fact can be a real pain to undertake). The smarter ones, however, would set up capture devices in and around the keypad such that users were not aware their information was being captured,…

And then there were the legendary moves on the part of certain "hackers" of the Russian Federation who captured inter-bank transfers, placing 'blocks' or 'capture points / redirects' on the ports where the data were being exchanged (in simplistic terms, attaching listening devices to the Internet / telephone networks, decrypting the data being sent and then using that data to access the raw accounts being managed). The results of this effort? Estimates range widely, with bank losses estimated to reach as high as $50 million in just one such incident alone! Interestingly, the impacted banking houses sought to drop the charges in exchange for the "hackers" becoming their security consultants, so as to avoid any further publicity over the matter – for if the public were to truly know the extent of the lack of security, banking confidence would plummet. (Naturally they settled for financial restitution – but remarkably, for an amount far less than what many suspected was actually taken, suggesting that the breach went deeper than anyone wished to admit and that the skill set involved ensured the money was untraceable – or, more likely, that the appropriate officials were given sufficient 'inducements' to avoid any further prosecutorial action.)

And can you blame them? I’d hate to be the one to tell my clients ‘gee, several millions of (insert your currency of choice here) was taken from your account, but you still want to do business with us – right?’

Which brings us to the other side of the coin, so to speak,…

As reported two months ago, HSBC was directly involved in what governmental officials described as 'money laundering' for major narco-criminal enterprises worldwide (http://www.bbc.co.uk/news/business-21840052) – which makes it interesting how this particularly publicized group of "hackers" targeted money reserves set aside for pre-paid cards, wisely avoiding other accounts,…

The reality is that the only innocents in this entire arena are the average bank account holders (the 'little people'), for many banks themselves are involved in criminal activities of their own, ranging from money laundering to passing along sub-prime housing funds, or simply overcharging people with various account fees because, well, the banks can do this sort of thing. (I deliberately fail to mention the investors, as insurance will cover the costs of such losses; to those who may object, I merely point out that it's all just business – please check your company pride at the door,…)

Much of what is taking place in recent years regarding banking is increasingly a matter of degree and viewpoint. As banks become larger, they will utilize whatever resources they can to ensure their protection, which may include the hiring of those who penetrated their security, indulging in questionable investment practices and serving ‘interesting’ clientele.

It’s all part of doing “normal” business in the 21st century.

Similarly, as banks handle larger and larger amounts of "money" (and we won't get into a discussion of Bitcoin and the significance of that development for international banking and financial systems – after all, when you think about it, what truly defines the financial value of any given currency?), banks are involved in realms and investment practices they would not have dreamed of but twenty (20) years ago – witness the role of banks in the recent housing bubble and the sub-prime mess, along with their various other financial / investment practices. (We're still awaiting the final report on the offshore accounts held in the Bahamas involving high-ranking international governmental officials and other 'outstanding' members of society – $32 TRILLION and rising,…!)

Realize this: we’ve reached a point in our culture(s) and society(ies) where –  like the intrinsic value of money and the actual stability of our financial systems – the very notion of a bank robbery is now relative.

Here’s one brief overview of this incident: http://www2.macleans.ca/2013/05/10/sophisticated-network-of-global-thieves-drain-cash-machines-in-27-countries-of-45m/

The Race is On: Developing Quantum Computers (and Alternative Universes for Good Measure)


And so the race is on. Actually, it’s been on for some time now; it’s only now that we’re starting to see the ripples on the surface of what is otherwise a very deep and dark pool filled with very large creatures jostling for position.

It’s about processing.

It’s about the future.

Quantum computers would be able to solve certain problems much faster than any regular computer using the best currently known algorithms (such as those established via various neural network models). Quantum computers are totally different, unlike anything we've developed before. Give a regular computer enough power and time and it could be made to simulate any quantum algorithm – but it still wouldn't be anything like a quantum computer.

Regular computers use bits; quantum computers use qubits, which are really funky, powerful things. The computational state of the 500 qubits that might be found on a typical quantum computer, for example, would already be too large to represent on a classical computer, because it would require 2^500 complex values to be stored. This is because it's not just about the information the qubit is displaying: the state of being in which the qubit carries that information also plays into its producing an answer to any given query.

Bear with me, now,…

Although it may seem that qubits can hold much more information than regular bits, qubits exist only in a probabilistic superposition of all their possible states. This means that when the final state of a qubit is measured (i.e., when an answer is derived), it can only be found in one of the possible configurations it was in before measurement.

Here's an analogy: take a regular computer bit, with its black/white 0/1 configuration, as a rubber ball with one side black and the other side white. Throw it into the air: it comes back either as black/0 or white/1. With a qubit, it will still land as either black/0 or white/1, but during its flight it will have shifted through all the colors of the rainbow while you watch. That's the kicker with qubits: you can't think of them as being in one particular state before measurement, since the fact that they were in a superposition of states before the measurement directly affects the possible outcomes of the computation. (And remember: the act of watching the ball fly through the air can also influence the result of the ball's landing – a point we'll discuss very shortly regarding our old buddy Werner Heisenberg,…)
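For those who want to see that 'rainbow in mid-air' on their own machine, here's a small sketch of my own (plain numpy bookkeeping about probabilities – emphatically not a real quantum computer): one qubit pushed into an equal superposition, then measured, plus the scaling problem mentioned above in a single loop.

```python
# Simulate one qubit: start in |0>, apply a Hadamard gate to create an
# equal superposition, then "measure" by sampling from the amplitudes.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                       # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                  # superposition: (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2        # Born rule: probability = |amplitude|^2
probs /= probs.sum()              # guard against floating-point drift

rng = np.random.default_rng()
samples = rng.choice([0, 1], size=10_000, p=probs)
print("P(0), P(1):", probs)                             # ~[0.5, 0.5]
print("measured 0:", (samples == 0).sum(), "of 10000")

# The classical bookkeeping blows up fast: n qubits need 2**n amplitudes.
for n in (10, 50, 500):
    print(f"{n} qubits -> 2**{n} ~ {2**n:.3e} complex values")
```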

Quantum computers offer more than just the traditional '101010' 'yes no yes no yes no' processing routine (which is also binary for the number 42, just in case anyone is reading this). Quantum computers do, in a manner of speaking, 'no yes maybe', because in quantum physics it's about more than whether or not any given particle is there: there's also the issue of probability – i.e., 'yes it's there', 'no it's not' and 'it could be'. Quantum computers share similarities with non-deterministic and probabilistic computers, with the ability to be in more than one state simultaneously.

Makes you wonder what happens if we turn on a quantum computer: would it simply disappear? Or conversely, can we expect to see quantum computers appear suddenly in our universe for no apparent reason?

Doing homework will clearly never be the same with a quantum computer.

As Ars Technica points out (http://arstechnica.com/science/2013/03/quantum-computer-gets-an-undo-button/):

This (uncertainty) property of quantum mechanics has made quantum computing a little bit more difficult. If everything goes well, at the end of a calculation, a qubit (quantum bit) will be in a superposition of the right answer and the wrong answer. 

What this also translates to is that quantum computers offer a greater realm of questions and exploration – greater opportunities for more answers, more options and superior processing capabilities. Likely we'll wind up asking questions of a quantum computer and getting answers we didn't expect, leading to more avenues of thought.

In other words, you’re not going to see a quantum computer at your nearby Radio Shack any time soon.

So now let's revisit that shaggy-dog notion of Heisenberg's Uncertainty Principle, as this plays directly into the heart of quantum computers:

One of the biggest problems with quantum experiments is the seemingly unavoidable tendency of humans to influence the situation and velocity of small particles. This happens just by our observing the particles, and it has quantum physicists frustrated. To combat this, physicists have created enormous, elaborate machines like particle accelerators that remove any physical human influence from the process of accelerating a particle's energy of motion.

Still, the mixed results quantum physicists find when examining the same particle indicate that we just can’t help but affect the behavior of quanta — or quantum particles. Even the light physicists use to help them better see the objects they’re observing can influence the behavior of quanta. Photons, for example — the smallest measure of light, which have no mass or electrical charge — can still bounce a particle around, changing its velocity and speed.

Think about it: now we’re introducing computers based – in large part – upon this technology.

We’re approaching Hitchhiker’s Guide to the Galaxy technology here: the kind of thing where we ask one question and get an answer that’s not what we’re expecting.

Improbability drive, anyone?

The race for quantum computers is big; this isn't just some weird science fiction notion or a discussion in some obscure blog. As we reported here at ShockwaveRiderblog back in October of 2012, the CIA and Jeff Bezos of Amazon were working on a formal agreement to develop a quantum computer. Now it has just been announced that the CIA is going to 'buy' a good portion of Amazon's storage services (http://www.businessinsider.com/cia-600-million-deal-for-amazons-cloud-2013-3). Meanwhile (as also reported in this blog last week), Google bought out the Canadian firm DNNResearch expressly to work on the development of neural networks (and given Google's rather substantial storage capacity, that is an interesting development in itself). And the founders of Blackberry just announced an initiative to pump some $100 million into quantum computing research (http://in.reuters.com/article/2013/03/20/quantumfund-lazaridis-idINDEE92J01420130320). Gee, you'd think they'd pump money into keeping Blackberry afloat, but apparently there's more money to be made elsewhere,…

And throughout all of this, what some of the scientists involved in this business won't tell you up front (but are quietly saying in their respective back rooms over their coffee machines) is that nobody really knows what happens if / when we develop a quantum computer and turn it on.

Understand: we're potentially talking about a computer where, if / when we attempt a Turing Test with it, we could ask how the weather is and get answers that seemingly don't make any sense – until later on, when we realize it had been giving us the answers all along: we were just too dumb to realize it was telling us what the weather was likely to be the next month.

Note the distinction: we ask how the weather is, and the (potential) quantum computer gives us an answer we didn't expect because we didn't frame the question in a manner appropriate for that given moment.

Quantum computing is going to be a very strange place indeed.

Maybe the final answer is indeed going to be 42.

There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.

– Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy

Google and Neural Networks: Now Things Are Getting REALLY Interesting,…


Back in October 2002, I appeared as a guest speaker at the Chicago (Illinois) URISA conference. The topic I spoke about at that time was the commercial and governmental applicability of neural networks. Although well-received (the audience actually clapped, some asked to have pictures taken with me, and nobody fell asleep), at the time it was regarded as, well, out there. After all, who the hell was talking about – much less knew anything about – neural networks?

Fast forward to today and here we are: Google recently (and quietly) acquired a start-up – DNNResearch – whose primary purpose is the commercial application and development of practical neural networks.

Before you get all strange and creeped out: neural networks are not brains floating in vials, locked away in some weird, hidden laboratory – à la The X-Files – cloaked in poor lighting (cue the evil laughter: BWAHAHAHA!), but rather high-level, complicated computer models attempting to simulate (in a fashion) how we think about, approach and solve problems.

Turns out there’s a lot more to this picture than meets the mind’s eye – and the folks at Google know this all too well. As recently reported:

Incorporated last year, the startup’s website (DNNResearch) is conspicuously devoid of any identifying information — just a blank, black screen. 

That's about it: no big announcement, little or no mention in any major publications. Try the website for yourself; little information can be gleaned. And yet, looking into the personnel involved, we're talking about some serious, substantial talent here:

Professor Hinton is the founding director of the Gatsby Computational Neuroscience Unit at University College in London, holds a Canada Research Chair in Machine Learning and is the director of the Canadian Institute for Advanced Research-funded program on “Neural Computation and Adaptive Perception.” Also a fellow of The Royal Society, Professor Hinton has become renowned for his work on neural nets and his research into “unsupervised learning procedures for neural networks with rich sensory input.”

So what’s the fuss? Read on,…

While the financial terms of the deal were not disclosed, Google was eager to acquire the startup’s research on neural networks — as well as the talent behind it — to help it go beyond traditional search algorithms in its ability to identify pieces of content, images, voice, text and so on. In its announcement today, the University of Toronto said that the team’s research “has profound implications for areas such as speech recognition, computer vision and language understanding.”

This is big; it is very similar to when Nikola Tesla's company and assets / models (along with Tesla himself agreeing to come along) were bought out by George Westinghouse – and we all know what happened then: using Tesla's Alternating Current (AC) model, the practical development and application of large-scale electrical networks took place on a national and international scale.

One cannot help but sense that the other Google luminary – Ray Kurzweil – is somehow behind this, and for good reason: assuming we're talking about those who seek to attain (AI) singularity, neural networks would be one viable path to undertake.

What exactly is a neural network and how does it work? From my October 2002 URISA presentation paper:

Neural networks differ radically from regular search engines, which employ ‘Boolean’ logic. Search engines are poor relatives to neural networks. For example, a user enters a keyword or term into a text field – such as the word “cat”. The typical search engine then searches for documents containing the word “cat”. The search engine simply searches for the occurrence of the search term in a document, regardless of how the term is used or the context in which the user is interested in the term “cat”, rendering the effectiveness of the information delivered minimal. Keyword engines do little but seek words – which ultimately becomes very manually intensive, requiring users to continually manage and update keyword associations or “topics” such as
cat = tiger = feline or cat is 90% feline, 10% furry.

Keyword search methodologies rely heavily on user sophistication to enter queries in fairly complex and specific language and to continue doing so until the desired file is obtained. Thus, standard keyword searching does not qualify as neural networks, for neural networks go beyond by matching the concepts and learning, through user interface, what it is a user will generally seek. Neural networks learn to understand users’ interest or expertise by extracting key ideas from the information a user accesses on a regular basis.
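To illustrate the gap the paper describes, here's a toy sketch of my own (purely illustrative, not from the presentation): a literal Boolean keyword match next to a crude weighted-concept expansion in the spirit of "cat is 90% feline, 10% furry."

```python
# Toy contrast: Boolean keyword search vs. a crude weighted-concept match.
DOCS = [
    "The tiger is the largest living feline species.",
    "How to install a cat door in twenty minutes.",
]

def keyword_search(query: str, docs):
    """Boolean-style search: literal occurrence of the term only."""
    return [d for d in docs if query.lower() in d.lower()]

# Hand-made concept weights, mimicking "cat = tiger = feline" associations.
CONCEPTS = {"cat": {"cat": 1.0, "feline": 0.9, "tiger": 0.8, "furry": 0.1}}

def concept_search(query: str, docs):
    """Score documents by the summed weights of related concept terms."""
    weights = CONCEPTS.get(query.lower(), {query.lower(): 1.0})
    scored = [(sum(w for t, w in weights.items() if t in d.lower()), d)
              for d in docs]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

print(keyword_search("cat", DOCS))   # misses the tiger/feline document
print(concept_search("cat", DOCS))   # ranks the tiger/feline document first
```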

So let’s bottom line it (and again from my presentation paper):

Neural networks try to imitate human mental processes by creating connections between computer processors in a manner similar to brain neurons. How the neural networks are designed and the weight (by type or relevancy) of the connections determines the output. Neural networks are digital in nature and function upon pre-determined mathematical models (although there are ongoing efforts underway for biological computer networks using biological material as opposed to hard circuitry). Neural networks work best when drawing upon large and/or multiple databases within the context of fast telecommunications platforms. Neural networks are statistically modeled to establish relationships between inputs and the appropriate output, creating electronic mechanisms similar to human brain neurons. The resulting mathematical models are implemented in ready to install software packages to provide human-like learning, allowing analysis to take place.
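And here is about the smallest concrete instance of that description one can write (my own toy, not anything from Google or DNNResearch): two layers of weighted connections, nudged by gradient descent until the network learns XOR. With this seed it should land near the right answers; other seeds may need more steps.

```python
# Minimal feedforward neural network (numpy): two layers of weighted
# connections trained by gradient descent to learn XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # forward pass through both layers of weighted connections
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss): adjust the connection weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # approaches [0, 1, 1, 0]
```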

Understand: neural networks are not to be confused with AI (Artificial Intelligence), but the approach employed therein does offer viable means and models – models with rather practical applications reaching across many markets: consumer, commercial, governmental and military.

And BTW: note the point above about large and/or multiple databases and fast telecommunications platforms – and reread the paragraph again with the realization that Google is moving into this arena; you'll appreciate the implications.

But wait; there’s more.

From the news article:

For Google, this means getting access, in particular, to the team’s research into the improvement of object recognition, as the company looks to improve the quality of its image search and facial recognition capabilities. The company recently acquired Viewdle, which owns a number of patents on facial recognition, following its acquisition of two similar startups in PittPatt in 2011 and Neven Vision all the way back in 2006. In addition, Google has been looking to improve its voice recognition, natural language processing and machine learning, integrating that with its knowledge graph to help develop a brave new search engine. Google already has deep image search capabilities on the web, but, going forward, as smartphones proliferate, it will look to improve that experience on mobile.

So, let’s recap: we’re talking about:

* a very large information processing firm with seriously deep pockets and what is arguably one of the largest (if not fastest) networks ever created;

* a very large information processing firm, working with folk noted for their views and research on AI singularity, purchasing a firm on the cutting edge of neural networks;

* a very large information processing firm also purchasing a firm utilizing advanced facial and voice recognition.

I’m buying Google stock.

What's also remarkable (and somewhat overlooked; kudos to TechCrunch for noting this) is that Google had, some time ago, funded Dr. Hinton's research work through a small initial grant of about $600,000 – and then went on to buy out Dr. Hinton's start-up company.

Big things are afoot – things with tremendous long-term ramifications for all of us.

Don’t be surprised if something out in Mountain View, California passes a Turing Test sooner than anybody expects.

For more about Google’s recent purchase of DNNResearch, check out this article:

http://techcrunch.com/2013/03/12/google-scoops-up-neural-networks-startup-dnnresearch-to-boost-its-voice-and-image-search-tech/

To read my presentation paper on neural networks and truly understand what this means – along with some of the day to day applications neural networks offer, check out this link:

http://www.scribd.com/doc/112086324/The-Ready-Application-of-Neural-Networks