Tag Archives: AI (Artificial Intelligence)

How An AI Defines Customers

[Image: McCann Japan's AI creative director]

Recently, Business Insider ran a story about how a Japanese advertising agency hired an AI (see above picture) to create an ad campaign (http://www.businessinsider.com/mccann-japans-ai-creative-director-creates-better-ads-than-a-human-2017-3).

Surprisingly, it was rather successful.

The inventor, Shun Matsuzaka, “wanted to create the world’s first AI creative director, capable of directing a TV commercial”.

He did it. But before you can say “holy crap!” consider that the AI, like any electronically developed and programmed instrument, must be designed and focused in order to do its job. You gotta tell it what to do and how to do it. So Matsuzaka’s team, “McCann Millennials,” outlined two basic inputs needed to shape an effective ad campaign:

The creative brief: The type of brand, the campaign goal, the target audience, and the claim the ad should make.

The elements of the TV ad: Including things such as tone, manner, celebrity, music, context, and the key takeout.

Confectionery corporation Mondelez took on the contract and hired the team’s AI, and so the contest was on. An industry expert was selected to take on the challenge of creating a winning ad campaign against that of the McCann machine. The process worked like this: the client was asked to fill out a form with all the elements they wanted to appear in the ad, and the AI then combed its database for ideas (humans were still required to actually produce the final creative).
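To make the mechanics a bit more concrete, here is a purely hypothetical sketch of that “fill out a form, then pull ideas from a database” step; the element names, the database entries and the selection logic are all invented for illustration and are not McCann’s actual system.

```python
# Hypothetical sketch of a creative-brief idea generator (element names and data are invented).
import random

# A tiny stand-in for an agency database of tagged ad elements.
ELEMENT_DB = {
    "tone":    ["heartwarming", "absurd", "energetic"],
    "music":   ["acoustic", "electro-pop", "silence"],
    "casting": ["celebrity", "talking dog", "office workers"],
}

def generate_ideas(brief, n_ideas=3, seed=42):
    """Combine database elements into candidate ad concepts, keeping any choices fixed by the brief."""
    random.seed(seed)
    ideas = []
    for _ in range(n_ideas):
        idea = {key: brief.get(key) or random.choice(options) for key, options in ELEMENT_DB.items()}
        idea["key_message"] = brief["key_message"]
        ideas.append(idea)
    return ideas

brief = {"key_message": "Instant-effect fresh breath that lasts for 10 minutes"}
for idea in generate_ideas(brief):
    print(idea)  # humans would still turn the chosen combination into a finished spot
```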

The two spots would then be thrown to a nationwide poll, where consumers could vote for which ad they preferred.

The key phrase around which the ad was to revolve was the following:

“Instant-effect fresh breath that lasts for 10 minutes.”

The winner?

It depends: 54% of the public who participated in the vote went for the human.

But for the ad executives, the AI won hands down. As the article stated: “when the 200-or-so advertising executives at the ISBA Conference were asked which they preferred, they voted for the crazy dog spot, directed by the robot. Clearly those advertising executives were not the target market for this particular campaign, but the experiment appeared to demonstrate just how creative — and funny — AI can be.”

Humor in AI? Viewers familiar with science fiction will recognize the common refrain that ‘robots can’t make people laugh.’ Guess that’s not the case anymore. Meanwhile, the McCann Millennials are at it again – this time, working on a “commercial database for the music industry to see if it can create the next pop smash hit.”

Somehow, I think this latest project will prove far easier for them to achieve.

(To see the ads, go to the link above and judge for yourself).

AI In Our Time?

AI (Artificial Intelligence) development has reached a major milestone: a machine that’s truly capable of learning on its own.

Google (or rather ‘Alphabet’, as the parent company is now known) is using a model / layout different from anything developed before in the rapidly advancing field of AI, building its own version of AI – a machine known as ‘Deep Mind’. What Alphabet has done is to take the storage of conventional computers and link it to a neural network capable of ‘parsing’ the data, determining what is relevant and what is not in terms of problem solving.

This has often been the challenge of ‘learning machines’: determining what is junk and what isn’t. Now, working with a neural network and drawing on large amounts of data, the AI model can more quickly sort through what counts as ‘good’ data versus ‘bad’ data.

Neural networks aren’t new; they’ve been around for some time (see this article about neural networks to learn more: https://www.scribd.com/document/112086324/The-Ready-Application-of-Neural-Networks). Like a typical human brain, neural networks use ‘nodes’ that activate at specific points needed to solve a problem. In the case of Alphabet, the AI is streamlining itself to find the quickest route to a solution. And, as with a human brain, in time the AI will use the data it obtains to become more efficient at finding the right answers – in effect, ‘learning’ how to learn.
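As a concrete (if much simplified) illustration of what those ‘nodes’ amount to in software, here is a minimal sketch of a small network’s forward pass; the weights are random and the example is purely illustrative, not Alphabet’s code.

```python
# Minimal sketch of a feedforward neural network's "nodes" (illustrative only).
import numpy as np

def forward(x, w1, b1, w2, b2):
    """Pass an input vector through one hidden layer of nodes to an output layer."""
    hidden = np.maximum(0, x @ w1 + b1)  # each hidden node 'activates' (ReLU) on its weighted inputs
    return hidden @ w2 + b2              # output nodes combine the hidden activations

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))              # a toy 4-feature input
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
print(forward(x, w1, b1, w2, b2))        # two output values; training would adjust the weights
```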

Or, to put it another way: ‘Deep Mind’ derives solutions from prior experience, recovering the correct answer(s) from its internal memory on its own rather than through human conditioning and direct programming, and then proceeds based on its own ‘experience’.
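That “recovering answers from internal memory” step is, at a very high level, a content-based lookup: the system compares a query against what it has stored and blends the closest matches. A rough sketch of the idea (again, not DeepMind’s actual implementation) might look like this:

```python
# Rough sketch of content-based memory retrieval (not DeepMind's actual code).
import numpy as np

def read_memory(memory, query, sharpness=5.0):
    """Blend stored memory rows by how similar each is to the query (cosine similarity + softmax)."""
    sims = memory @ query / (np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-8)
    weights = np.exp(sharpness * sims)
    weights /= weights.sum()
    return weights @ memory  # a weighted recollection of what was stored

memory = np.array([[1.0, 0.0, 0.0],   # three stored 'experiences'
                   [0.0, 1.0, 0.0],
                   [0.9, 0.1, 0.0]])
query = np.array([1.0, 0.0, 0.0])     # something resembling the first experience
print(read_memory(memory, query))     # the result leans heavily toward rows 1 and 3
```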

Sounds awfully familiar, doesn’t it?

‘Deep Mind’, the AI which Google / Alphabet has been developing, was recently able to beat a human champion at the game of ‘Go’ – no easy feat, since the number of possible placements for each individual ‘stone’ playing piece, and the patterns that branch out from each placement, quickly runs into the millions and far beyond, dwarfing the number of choices (and the impact of each individual choice/move) that a traditional chess game can offer.
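To give a sense of the scale involved, here is a back-of-envelope comparison using commonly cited rough figures – about 35 legal moves over roughly 80 plies for chess, versus about 250 legal moves over roughly 150 moves for Go. The exact numbers vary by source; the point is the size of the gap.

```python
# Back-of-envelope comparison of game-tree sizes (rough, commonly cited figures).
import math

def tree_size_exponent(branching, depth):
    """Approximate game-tree size as branching**depth, returned as a power of ten."""
    return depth * math.log10(branching)

chess = tree_size_exponent(35, 80)    # ~10^123 positions
go = tree_size_exponent(250, 150)     # ~10^360 positions

print(f"Chess game tree: roughly 10^{chess:.0f} positions")
print(f"Go game tree:    roughly 10^{go:.0f} positions")
print(f"Go's tree is about 10^{go - chess:.0f} times larger")
```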

So, by combining Google’s vast database of files and internationally located server warehouses with a neural network, overseen by a rudimentary form of AI, Google / Alphabet now has a machine capable of learning on its own.

The next step would be to pair a quantum computer to a network layout similar to what is described here – but then again, Amazon is already working on that.

Still got quite a ways to go, but singularity is looming ever closer.

The Business of Prediction

A king shall fall and be put to death by the English parliament shall be. Fire and plague comes to London in the year of 6 and 23. An emperor of France shall rise who will be born near Italy. His rule cost his empire dear – Pay-nay-loron his name shall be.

from the Quatrains of Nostradamus

Let’s face it; we want to know the future – and why not? Wouldn’t it be cool and save us a whole lot of trouble if we knew what tomorrow will bring? It’s remarkable to note that, during times of great uncertainty and upheaval, we increasingly turn to prognosticators and seek out answers; this trend is evident in several recent developments:

* Predicting the weather. Knowing when and where Hurricane Sandy was going to make landfall did not stop the storm, but it made for a far more effective and coordinated response. Compare Sandy to Katrina and you can well appreciate how far we’ve come in terms of emergency management and practical planning.

* Election results. Nowhere is this more true than with the numerous pundits who sought out the future and turned to a variety of models, forecasts and other such approaches.

* Business / economic trends and developments. Increasingly, Wall Street awaits word from Washington – from the Bureau of Labor Statistics and the Department of Labor – to learn of the latest trends and developments, seeking to know when and what the forecasts are in terms of employment, investment, trade, commodities and other market developments.

Now enter Big Data and AI.

As sites such as the fivethirtyeight blog have shown (our kudos to Nate Silver!), it’s no longer so much about what the pundits are saying: they’re only in it for the ratings, so naturally they’ll always have a slant (or, as my grandfather used to say, ‘beware a person who believes in their own bullshit‘). Relying on cold, hard facts and level-headed statistics – like those utilized by fivethirtyeight – demonstrates just how far we’ve advanced in terms of prediction in the past ten years alone.

I have to note my own role in the business of prediction. Some fifteen years ago, I developed a means of predicting when and where crime would likely occur, offering a tool for local police to utilize (this was known as – surprise! – Predictive Crime Analysis). It was achieved through data analysis built on GIS (Geographical Information Systems) and did not require a large computer; rather, it required dedicated staff submitting accurate information, conducting close data review, applying the proper statistical tests of relevance to any given data set, and then mapping the results. The result? In one locale, we were able to reduce crime by over 40% in the first year.
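For readers curious about what the mechanics of that kind of analysis can look like, here is a simplified sketch of grid-based hotspot detection – bin incident coordinates into cells and flag the cells whose counts stand well above the rest. The data, grid size and threshold are illustrative; this is not the original system.

```python
# Simplified grid-based crime hotspot sketch (data, grid size and threshold are illustrative).
import numpy as np

def hotspot_cells(coords, cell_size=0.5, z_threshold=2.0):
    """Bin (x, y) incident locations into a grid and flag cells well above the mean count."""
    coords = np.asarray(coords)
    cells = np.floor(coords / cell_size).astype(int)       # map each incident to a grid cell
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    z = (counts - counts.mean()) / (counts.std() + 1e-8)   # crude test of statistical relevance
    return [tuple(map(int, c)) for c, score in zip(uniq, z) if score > z_threshold]

# Toy data: a cluster of incidents near (1.1, 2.3) plus scattered background noise.
rng = np.random.default_rng(1)
incidents = np.vstack([rng.normal([1.1, 2.3], 0.1, size=(40, 2)),
                       rng.uniform(0, 10, size=(60, 2))])
print(hotspot_cells(incidents))  # the cell containing the cluster should be flagged
```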

Fantastic, right?

Wrong.

After a while, nobody wanted it: it cut back on police overtime and, in some instances, pushed the criminals across the border into neighboring towns to conduct their activities – at which point those neighboring towns grouped together and applied political pressure to stop the effort. In the end, it was removed, retired and is now forgotten (although I do still have various articles and papers discussing it; please feel free to contact me if you want to learn more).

Author’s note: Fast forward fifteen years, and the irony is that those very same towns are being asked to “share” their police personnel to help deter the rising crime wave in the neighboring town where crime was once down 40%! Arguably, despite their best efforts to deter the future, the future came forth and changed them!

The point is that knowing the future is not always a good thing, because by knowing the future, we (sometimes) change the future (like that famous Twilight Zone episode in which a man seeking to learn his future finds out that he will die in twenty-four hours; it is suggested that by learning his fate, he only increased the chances of making it happen). It is a conundrum familiar from quantum physics via the famous thought experiment known as Schrödinger’s Cat: the very act of looking into the box changes the outcome of what you’re seeking to understand, because observing a physical phenomenon can affect its outcome.

Now, this is not to suggest that by predicting the future path of a hurricane (or other large-scale natural event) we can change it, but it is not too much of a stretch to suggest that by knowing the trends of business, commodities, trade, employment, voter perceptions and the like, we can also change the nature of what it is we seek to understand – or control.

It is an axiom that in conducting any type of precognition, you need to set aside your beliefs – both conscious and sub-conscious – if you’re going to do a good job. This is not easy, for sometimes we just don’t like what the future is telling us.

But the opportunity! We are in an age of Big Data and information review unlike any ever seen before in the history of mankind. We now have the tools and the processing power. We can download and obtain data on a multitude of subjects and developments, convert it and feed it into systems that we can readily program and design for any variety of applications. Now, more than ever before, we can predict our futures in ways never before realized. The trick is doing it right – and accurately.

Now then, that being said – allow me to enter a prediction of my own.

We shall soon see an AI arising from the bulk of Big Data – sooner than we realize. And quite possibly, it may even already be operating amongst us (as noted in my prior posts),…

Perhaps Nostradamus wouldn’t be such a bad name for such a computer.

Ghost in The Machine: The Mysterious Wall Street Algorithm

Recently, an underreported development regarding Wall Street trading has been quietly making its rounds. Evidently, a mysterious computer program was busy placing “buy” orders – and then would just as quickly cancel said orders. As reported in CNBC’s “Behind the Money” on Monday, October 8th:

The program placed orders in 25-millisecond bursts involving about 500 stocks, according to Nanex, a market data firm. The algorithm never executed a single trade, and it abruptly ended at about 10:30 a.m. ET Friday. “Just goes to show you how just one person can have such an outsized impact on the market,” said Eric Hunsader, head of Nanex and the No. 1 detector of trading anomalies watching Wall Street today. “Exchanges are just not monitoring it” (emphasis ours).

So weird things pop up on Wall Street – what else is new? These are the people who brought about the creation of ‘derivatives’ and ‘spiders’: Wall Street is a haven for taking notions of finance to extremes. Except that in this situation, there’s more to the story. The scariest part of this (single) program was that its millions of quotes accounted for 10 percent of the bandwidth that is allowed for trading on any given day (according to Nanex), with its impact over the week amounting to some 4 percent of Wall Street’s total traffic!
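As a rough illustration of how a monitoring firm might spot this kind of behavior, here is a toy sketch that flags symbols showing heavy order traffic with essentially no executed trades; the message format, field names and thresholds are all hypothetical.

```python
# Toy quote-stuffing detector (illustrative only; field names and thresholds are hypothetical).
# Idea: flag symbols whose order flow shows huge volumes of placements/cancellations
# with essentially no executed trades -- the pattern described in the Nanex report.
from collections import defaultdict

def flag_suspicious_symbols(messages, min_messages=1000, max_fill_ratio=0.001):
    """messages: iterable of (symbol, msg_type) where msg_type is 'new', 'cancel', or 'fill'."""
    counts = defaultdict(lambda: {"new": 0, "cancel": 0, "fill": 0})
    for symbol, msg_type in messages:
        counts[symbol][msg_type] += 1

    flagged = []
    for symbol, c in counts.items():
        total = c["new"] + c["cancel"] + c["fill"]
        fill_ratio = c["fill"] / total if total else 0.0
        # Lots of order traffic, almost nothing executed: looks like quote stuffing.
        if total >= min_messages and fill_ratio <= max_fill_ratio:
            flagged.append(symbol)
    return flagged

# Example: 5,000 placed-and-cancelled orders on "XYZ", zero fills -> flagged.
feed = [("XYZ", "new")] * 2500 + [("XYZ", "cancel")] * 2500 + [("ABC", "fill")] * 50
print(flag_suspicious_symbols(feed))  # ['XYZ']
```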

So let’s get this straight: a strange program pops out of nowhere, runs a routine in which nothing is actually traded, bought or sold, then vanishes out of sight as if it never happened – and wouldn’t even have been noticed save by chance, thanks to a group of dedicated observers?

Earlier, we blogged about AI (Artificial Intelligence) and the coming singularity. To be sure, the evidence suggests that this algorithm was a deliberate act on somebody’s part, an attempt to gain a greater degree of arbitrage by artificially ‘enhancing’ the number of sales / activities: and from all indications, they got away with it, leaving yet another challenge for regulators to deal with.

But then again, in this day and age of the coming singularity, it does make one wonder: who’s to say that it was an actual person or people behind all of this,…?