And so the race is on. Actually, it’s been on for some time now; it’s only now that we’re starting to see the ripples on the surface of what is otherwise a very deep and dark pool filled with very large creatures jostling for position.
It’s about processing.
It’s about the future.
Quantum computers would be able to solve certain problems much faster than any regular computer running the best currently known algorithms (such as those established via various neural network models). Quantum computers are totally different – unlike anything we’ve developed before. Give a regular computer enough power and it could be made to simulate any quantum algorithm, but it still wouldn’t be anything like a quantum computer.
Regular computers use bits; quantum computers use qubits, which are really funky, powerful things. The computational basis of the 500 qubits found on a typical quantum computer, for example, would already be too large to represent on a classical computer, because it would require 2^500 complex values to be stored. That’s because it’s not just the information a qubit is displaying that matters, but also the state the qubit is in while carrying that information – and that state plays directly into how it produces an answer to any given query.
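To see why that 2^500 figure is such a killer, here’s a minimal sketch (the function name is just for illustration) of how fast the classical storage requirement blows up as you add qubits:

```python
# Illustrative sketch: the classical memory needed to store the full
# state of an n-qubit register grows as 2**n complex amplitudes.

def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

# A 3-qubit register already needs 8 amplitudes...
print(state_vector_size(3))                 # 8
# ...and 500 qubits would need 2**500 amplitudes -- more than 10**150,
# far beyond what any classical machine could ever store.
print(state_vector_size(500) > 10 ** 150)   # True
```

Double the qubits and you don’t double the storage – you square it, which is exactly why a classical simulation of a modest quantum register is hopeless.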
Bear with me, now,…
Although it may seem that qubits can hold much more information than regular bits, a qubit is only in a probabilistic superposition of all of its states. This means that when the final state of the qubit is measured (i.e., when an answer is derived), it can only be found in one of the possible configurations it was in before measurement.
Here’s an analogy: take a regular computer bit with its black/white 0/1 configuration as a rubber ball with one side black and the other side white. Throw it into the air: it comes back either as Black/0 or White/1. With qubits, it will still land as either Black/0 or White/1, but during the flight it will have shifted through all the colors of the rainbow while you watch it fly. That’s the kicker with qubits: you can’t think of a qubit as being in only one particular state before measurement, since the fact that it was in a superposition of states before the measurement was made directly affects the possible outcomes of the computation. (And remember: the act of watching the ball fly through the air can also influence how it lands – a point we’ll discuss very shortly regarding our old buddy Werner Heisenberg,…)
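The rubber-ball picture can be sketched in code. This is a hedged toy model, not real quantum mechanics (the function and amplitude names are illustrative): a qubit in an equal superposition collapses, on measurement, to 0 or 1 according to its squared amplitudes – and you only ever see 0 or 1, never the “rainbow”:

```python
import random
from math import sqrt

def measure(amp0: float, amp1: float) -> int:
    """Collapse a toy qubit with amplitudes (amp0, amp1) to 0 or 1.

    The probability of each outcome is the squared (normalized) amplitude.
    """
    p0 = amp0 ** 2 / (amp0 ** 2 + amp1 ** 2)
    return 0 if random.random() < p0 else 1

# An equal superposition: the "ball" is mid-flight.
results = [measure(1 / sqrt(2), 1 / sqrt(2)) for _ in range(10_000)]

# Measurement always yields plain Black/0 or White/1 -- never anything else.
print(sorted(set(results)))   # [0, 1]
```

Run it and the outcomes split roughly 50/50, yet no single measurement ever shows the in-between state – that only existed before you looked.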
Quantum computers offer more than just the traditional ‘101010’ ‘yes no yes no yes no’ processing routines (which is also binary for the number 42, just in case anyone is reading this). Quantum computers do, in a manner of speaking, a ‘no yes maybe’, because in quantum physics it’s about more than whether or not any given particle is there: there’s also the issue of probability – i.e., ‘yes it’s there’, ‘no it’s not’ and ‘it could be’. Quantum computers share similarities with non-deterministic and probabilistic computers, with the ability to be in more than one state simultaneously.
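For the skeptics, the aside above checks out – the bit pattern 101010 really is 42:

```python
# Quick sanity check: the bit pattern 101010 is the number 42 in binary.
print(int("101010", 2))   # 42
print(bin(42))            # 0b101010
```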
Makes you wonder what happens if we turn on a quantum computer: would it simply disappear? Or conversely, can we expect to see quantum computers appear suddenly in our universe for no apparent reason?
Doing homework will clearly never be the same with a quantum computer.
As Ars Technica points out (http://arstechnica.com/science/2013/03/quantum-computer-gets-an-undo-button/):
This (uncertainty) property of quantum mechanics has made quantum computing a little bit more difficult. If everything goes well, at the end of a calculation, a qubit (quantum bit) will be in a superposition of the right answer and the wrong answer.
What this also translates to is that quantum computers offer a greater realm of questions and exploration – more answers, more options and superior processing capabilities. Likely we’ll wind up asking questions of a quantum computer and getting answers we didn’t expect, leading to more avenues of thought.
In other words, you’re not going to see a quantum computer at your nearby Radio Shack any time soon.
So now let’s revisit that hairy dog notion of Heisenberg’s Uncertainty Principle as this plays directly into the heart of quantum computers:
One of the biggest problems with quantum experiments is the seemingly unavoidable tendency of humans to influence the position and velocity of small particles. This happens just by our observing the particles, and it has quantum physicists frustrated. To combat this, physicists have created enormous, elaborate machines like particle accelerators that remove any physical human influence from the process of accelerating a particle’s energy of motion.
Still, the mixed results quantum physicists find when examining the same particle indicate that we just can’t help but affect the behavior of quanta — or quantum particles. Even the light physicists use to help them better see the objects they’re observing can influence the behavior of quanta. Photons, for example — the smallest measure of light, which have no mass or electrical charge — can still bounce a particle around, changing its velocity and speed.
Think about it: now we’re introducing computers based – in large part – upon this technology.
We’re approaching Hitchhiker’s Guide to the Galaxy technology here: the kind of thing where we ask one question and get an answer that’s not what we’re expecting.
Improbability drive, anyone?
The race for quantum computers is big; this isn’t just some weird science-fiction notion or a discussion in some obscure blog. As we reported here at ShockwaveRiderblog back in October of 2012, the CIA and Jeff Bezos of Amazon were working on a formal agreement to develop a quantum computer. Now it has just been announced that the CIA is going to ‘buy’ a good portion of Amazon’s storage services (http://www.businessinsider.com/cia-600-million-deal-for-amazons-cloud-2013-3). Meanwhile (as also reported in this blog last week), Google bought out the Canadian firm DNNResearch expressly to work on the development of neural networks (and with Google’s rather substantial storage capacity, this is also an interesting development). And the founders of BlackBerry just announced an initiative to pump some $100 million into quantum computing research (http://in.reuters.com/article/2013/03/20/quantumfund-lazaridis-idINDEE92J01420130320). Gee, you’d think they’d pump money into keeping BlackBerry afloat, but apparently there’s more money to be made elsewhere,…
And throughout all of this is something that the scientists involved in this business won’t tell you up front (but are quietly saying in their respective back rooms over their coffee machines): nobody really knows what happens if/when we develop a quantum computer and turn it on.
Understand: we’re potentially talking about a computer where, if/when we attempt a Turing Test with it, we could ask it how the weather is and get answers that seemingly don’t make any sense – until later on, when we realize it’s been giving us the answers all along: we were just too dumb to realize it was telling us what the weather was likely to be over the next month.
Note the distinction: we ask how the weather is and the (potential) quantum computer tells us an answer that we didn’t expect because we didn’t frame the question in a manner appropriate for that given moment.
Quantum computing is going to be a very strange place indeed.
Maybe the final answer is indeed going to be 42.
“There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.”
– Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy
Switching gears for a moment,… I’ve been writing extensively on such large-scale mundane matters as AI (Artificial Intelligence), Neural networks, 3D printers, Singularities (think Skynet), quantum computers, hyperdrive / deep space exploration and, well, you get the idea.
Now let’s bring it all home.
No, I do not speak of buying a copy of IBM’s Watson or getting a neural network of your own, but there is a growing trend that shouldn’t be overlooked: home networks.
If you have a computer, a printer and a cable / FIOS modem, you have a home network – which pretty much describes a lot of U.S. domestic households (in point of fact, home broadband access has doubled in the past ten years, now reaching nearly 77% of the total US population); the same can also be said of many other households internationally.
Now add modern consumer technology.
In the past ten years (concurrent with a growing population utilizing home broadband services) we’ve seen the introduction of such things as:
* iPhones, Androids and Windows smartphones;
* online music / movie / television streaming services;
* remote storage and shared file services;
* explosive growth of tablets (iPad, HP, Samsung, etc.).
And increasingly, all are being offered at rather inexpensive prices; ten years ago the mere mention of several of these items for home use would’ve elicited a polite chuckle. Not any more.
It’s becoming increasingly obvious that it’s to your advantage – and your family’s – to seriously consider establishing a home-based network.
To be honest, not everyone needs a cloud-based service, but considering the growing diversity of technology and services, it’s not a bad idea to consider setting something up that’ll enable you and your family to take advantage of what’s out there. And if you have a home-based business or just do occasional work on the side, then you need to consider establishing a home-based network.
And the ever-decreasing cost of storage makes it easy: you can buy an external 2-terabyte drive for $200 or under, when two years ago the very idea of such storage at these low prices was considered absurd. Not anymore! Now one can literally set up a home network with a RAID (Redundant Array of Independent Disks) hard drive system similar in scale to what the pros use for their operations.
Home networks can be as simple as connecting an external hard drive to your home cable / FIOS router, or linking to your PlayStation / Apple TV – or considerably more, depending upon the size and scope you wish to undertake. And incidentally, if you have a small business, a network of this size and scale can also be used to service a small business environment (albeit with different considerations, depending upon the nature, size and scope of the small business involved).
There are a number of help guides out there for review (personally, I think this is probably the best I’ve come across so far, but there are many more: http://www.howtogeek.com/67015/how-to-plan-organize-and-map-out-your-home-network/); regardless of how you go about it, here are several key considerations you need to think about:
1) Ecosystem – what environment are you going to exist within? Are you a Windows-based household? Are you big on Android / PlayStation, or are you an Apple household? This is important, as one dirty industry secret is that not all equipment functions well on all systems. Some external hard drives don’t play as well with, say, Apple as they would with Windows-based environments.
2) Purpose – what is your specific purpose for undertaking this? Just because I’m writing about this doesn’t mean you should go out and get yourself a home-based network: it’s just an idea for consideration (albeit a rather good idea). Some things you can do rather well without the hassle – like backing up your files on Google Docs (despite the fact that it was down the other day), or Dropbox (despite the fact that stored data can be accessed by some folks without your knowledge – you can always encrypt your files), or Time Machine (which, BTW, can sometimes fail).
3) The Long Term – things change fast. Five years ago, the iPhone didn’t exist; now it’s everywhere – and with that, the introduction of ‘apps’ and the revolutionary change that’s brought to our world. Having a home based network – with some planning – can better enable you to deal with any new developments and take advantage of routines and services you never considered.
4) Ease. It’s gotten a whole lot easier to do these things on your own at home. What was something that required a CompSci degree / background some 15 years ago is now commonplace throughout a multitude of homes. Now, it’s relatively easy to take the next step and take control of your data on your terms and in your way.
There are other considerations worth checking out:
* DVD backups – at the risk of incurring the wrath of the RIAA and other associated members, one idea is to back up your various ‘items’ onto your home network (and mind, only for your own personal viewing!). In this manner, you could organize your ‘items’ into pre-specified folders and assign user rights / access. Want to keep the kids from watching your old Sam Peckinpah films? Here’s one way to do just that.
* True Multi-Media capability – your home network becomes your library, enabling you and your family / friends / colleagues to access files at will (within limits, if you so wish) at any time without imposing on anyone else. Your kids could have their own ‘homework folder’ – and you can have greater peace of mind knowing that their access to ‘questionable’ websites can be better controlled (insofar as you can do so) via the installation of various port access controls.
* VPN – With a home network, you now have the capability of a true Virtual Private Network. With your home network you could (depending on how you arrange your cable / FIOS system) enable access to your files remotely regardless of where you are in the world and view them while on the road so long as you have Internet access.
* Small business support – with a home network, your business can now be enhanced, allowing authorized / controlled access. To be sure, Dropbox / Google Docs are excellent, but with a home network you can consider such things as iServer, or true business-network functionality via a viable VPN that you alone control.
* Entertainment options – in the coming decade (if not sooner) we can expect to see major changes within the cable / FIOS market as to how we view our various shows on our televisions / computers (note how the two are becoming more and more interchangeable). With a home network, you’ll be far better able to deal with and take advantage of these changes as they develop.
* Additional back-up – sure, you have Time Machine, Dropbox and even Google. But in the event that your outside internet goes down, wouldn’t you want to be able to still access movies, shows or music? With your home network – so long as the electricity keeps flowing – you can. And BTW: one important aspect to consider is including a UPS (Uninterruptible Power Supply), which can also serve as a surge protector and prevent your network from being ‘zapped’.
Homebrew networks reflect a greater, growing trend: the growing expansion and greater utilization of technology at home, creating a real platform for other practical uses and applications.
And you can have all of this for as low as $250 (assuming you already have broadband internet access and a computer).
Might as well get something back from that monthly subscription for cable / FIOS that you’ve been paying for.
Back in October 2002, I appeared as a guest speaker at the Chicago (Illinois) URISA conference. The topic I spoke about at that time was the commercial and governmental applicability of neural networks. Although well received (the audience actually clapped, some asked to have pictures taken with me, and nobody fell asleep), at the time it was regarded as, well, out there. After all, who the hell was talking about – much less knew anything about – neural networks?
Fast forward to 2013 and here we are: Google recently (and quietly) acquired a start-up – DNNResearch – whose primary purpose is the commercial application and development of practical neural networks.
Before you get all strange and creeped out: neural networks are not brains floating in vials, locked away in some weird, hidden laboratory – à la The X-Files – cloaked in poor lighting (cue the evil laughter BWAHAHAHA!), but rather high-level, complicated computer models attempting to simulate (in a fashion) how we think about, approach and solve problems.
Turns out there’s a lot more to this picture than meets the mind’s eye – and the folks at Google know this all too well. As recently reported:
Incorporated last year, the startup’s website (DNNResearch) is conspicuously devoid of any identifying information — just a blank, black screen.
That’s about it; no big announcement, little or no mention in any major publications. Try the website for yourself: little information can be gleaned. And yet, looking into the personnel involved, we’re talking about some serious, substantial talent here:
Professor Hinton is the founding director of the Gatsby Computational Neuroscience Unit at University College in London, holds a Canada Research Chair in Machine Learning and is the director of the Canadian Institute for Advanced Research-funded program on “Neural Computation and Adaptive Perception.” Also a fellow of The Royal Society, Professor Hinton has become renowned for his work on neural nets and his research into “unsupervised learning procedures for neural networks with rich sensory input.”
So what’s the fuss? Read on,…
While the financial terms of the deal were not disclosed, Google was eager to acquire the startup’s research on neural networks — as well as the talent behind it — to help it go beyond traditional search algorithms in its ability to identify pieces of content, images, voice, text and so on. In its announcement today, the University of Toronto said that the team’s research “has profound implications for areas such as speech recognition, computer vision and language understanding.”
This is big; this is very similar to when Nikola Tesla’s company and assets / models (along with Tesla himself agreeing to come along) were bought out by George Westinghouse – and we all know what happened then: using Tesla’s alternating current (AC) model, the practical development and application of large-scale electrical networks took place on a national and international scale.
One cannot help but sense that the other Google luminary – Ray Kurzweil – is somehow behind this, and for good reason: assuming we’re talking about those who seek to attain (AI) singularity, neural networks would be one viable path to take.
What exactly is a neural network and how does it work? From my October 2002 URISA presentation paper:
Neural networks differ radically from regular search engines, which employ ‘Boolean’ logic. Search engines are poor relatives to neural networks. For example, a user enters a keyword or term into a text field – such as the word “cat”. The typical search engine then searches for documents containing the word “cat”. The search engine simply searches for the occurrence of the search term in a document, regardless of how the term is used or the context in which the user is interested in the term “cat”, rendering the effectiveness of the information delivered minimal. Keyword engines do little but seek words – which ultimately becomes very manually intensive, requiring users to continually manage and update keyword associations or “topics” such as
cat = tiger = feline or cat is 90% feline, 10% furry.
Keyword search methodologies rely heavily on user sophistication to enter queries in fairly complex and specific language and to continue doing so until the desired file is obtained. Thus, standard keyword searching does not qualify as neural networks, for neural networks go beyond by matching the concepts and learning, through user interface, what it is a user will generally seek. Neural networks learn to understand users’ interest or expertise by extracting key ideas from the information a user accesses on a regular basis.
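The contrast described above can be sketched in a few lines. This is a deliberately crude illustration (the documents, names and the hand-built synonym map are all hypothetical): the literal keyword search finds only exact occurrences of “cat”, while the “concept” search – with the synonym map standing in for the associations a neural network would learn from user behavior – also finds the tiger/feline document:

```python
# Toy corpus for illustration only.
docs = [
    "The tiger is the largest living feline.",
    "My cat sleeps all day.",
    "Stock prices rose sharply today.",
]

def keyword_search(term: str, documents: list) -> list:
    """Boolean-style search: literal occurrence of the term only."""
    return [d for d in documents if term.lower() in d.lower()]

# Hand-maintained association, per the "cat = tiger = feline" example above.
CONCEPTS = {"cat": {"cat", "tiger", "feline"}}

def concept_search(term: str, documents: list) -> list:
    """Match any word associated with the concept, not just the literal term."""
    related = CONCEPTS.get(term.lower(), {term.lower()})
    return [d for d in documents
            if any(word in d.lower() for word in related)]

print(len(keyword_search("cat", docs)))   # 1 -- only the literal "cat" match
print(len(concept_search("cat", docs)))   # 2 -- the tiger/feline doc as well
```

The catch, of course, is exactly the one named above: somebody has to maintain that `CONCEPTS` map by hand – which is what makes keyword systems so labor-intensive, and what a neural network learns to do on its own.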
So let’s bottom line it (and again from my presentation paper):
Neural networks try to imitate human mental processes by creating connections between computer processors in a manner similar to brain neurons. How the neural networks are designed and the weight (by type or relevancy) of the connections determines the output. Neural networks are digital in nature and function upon pre-determined mathematical models (although there are ongoing efforts underway for biological computer networks using biological material as opposed to hard circuitry). Neural networks work best when drawing upon large and/or multiple databases within the context of fast telecommunications platforms. Neural networks are statistically modeled to establish relationships between inputs and the appropriate output, creating electronic mechanisms similar to human brain neurons. The resulting mathematical models are implemented in ready to install software packages to provide human-like learning, allowing analysis to take place.
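To make the “weighted connections determine the output” point above concrete, here’s a minimal sketch of a single artificial neuron (all values here are illustrative, and the weights are set by hand rather than learned, as they would be in a real network):

```python
def neuron(inputs: list, weights: list, bias: float) -> float:
    """A single artificial neuron: weighted sum of inputs plus a bias,
    pushed through a simple step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

# Weights chosen by hand so the neuron acts like a logical AND of two
# signals; in a real network these weights would be learned from data.
and_weights, and_bias = [1.0, 1.0], -1.5

print(neuron([1.0, 1.0], and_weights, and_bias))  # 1.0 -- both signals on
print(neuron([1.0, 0.0], and_weights, and_bias))  # 0.0 -- one signal off
```

A real network wires thousands of these together in layers and adjusts the weights statistically against training data – which is precisely the “establish relationships between inputs and the appropriate output” step described above.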
Understand: neural networks are not to be confused with AI (Artificial Intelligence), but the approaches employed therein do offer viable means and models – models with rather practical applications reaching across many markets: consumer, commercial, governmental and military.
And BTW: note the highlighted sections above – and reread the paragraph again with the realization that Google is moving into this arena; you’ll appreciate the implications.
But wait; there’s more.
From the news article:
For Google, this means getting access, in particular, to the team’s research into the improvement of object recognition, as the company looks to improve the quality of its image search and facial recognition capabilities. The company recently acquired Viewdle, which owns a number of patents on facial recognition, following its acquisition of two similar startups in PittPatt in 2011 and Neven Vision all the way back in 2006. In addition, Google has been looking to improve its voice recognition, natural language processing and machine learning, integrating that with its knowledge graph to help develop a brave new search engine. Google already has deep image search capabilities on the web, but, going forward, as smartphones proliferate, it will look to improve that experience on mobile.
So, let’s recap: we’re talking about:
* a very large information processing firm with seriously deep pockets and arguably what is probably one of the largest (if not fastest) networks ever created;
* a very large information processing firm working with folk noted for their views and research on AI singularity purchasing a firm on the cutting edge with regard to neural networks;
* a very large information processing firm also purchasing a firm utilizing advanced facial and voice recognition.
I’m buying Google stock.
What’s also remarkable (and somewhat overlooked; kudos to TechCrunch for noting this) is that Google had, some time ago, funded Dr. Hinton’s research work through a small initial grant of about $600,000 – and then went on to buy out Dr. Hinton’s start-up company.
Big things are afoot – things with tremendous long-term ramifications for all of us.
Don’t be surprised if something out in Mountain View, California passes a Turing Test sooner than anybody expects.
For more about Google’s recent purchase of DNNResearch, check out this article:
To read my presentation paper on neural networks and truly understand what this means – along with some of the day to day applications neural networks offer, check out this link:
Here’s a chance to say goodbye to those student loans,…!
To learn more, check out this post.
Scientists connected two rats – one located in Brazil and the other in North Carolina, United States – via a brain-to-brain interface (BTBI) linked across the Internet. What one rat learned was shared with the other: by pulling a specific lever and earning a reward, one lab rat was able to share this information with the other lab rat. The result was a 95% or greater accuracy rate – far better than if the rats weren’t connected at all.
Before you get overly excited, however, let us understand something: this approach is surgically invasive and thus we can’t readily expect folks to have this kind of thing on the streets overnight.
But what this experiment did prove was that it is possible – and that, given time, a new means of training and education (as but one example) could be offered. Remember those ‘Matrix’ films – more specifically, the parts where the protagonist Neo logs onto the network and is able to ‘learn’ to do things simply through electronic means? You get the idea.
Other studies have suggested that the brain – in close coordination with the body – can indeed learn, and guide the body through mental routines to perform acts which normally would not be readily learned through ‘normal’ means (i.e., repetition, practice and physical action). Some studies suggest we learn as we dream, reviewing the prior day’s events and going over what we’ve experienced in an effort to better cope with our surroundings.
So much for student loans, eh? Plug me in, you say.
Wait a minute; don’t hold your breath just yet. It’s going to take a while before we develop non-invasive means to connect our brains to one another. And given that our brains are substantially more complex than a rat’s, this is going to take some time: where and how we wire up our brains will be another key determinant in making this a reality for people.
Minor issues of morality and invasive technology aside, all of this raises a number of questions – such as (to name but one): somebody’s got to know how to do things, and unless you actually do them, how else will that knowledge be passed on from brain to brain? Does knowledge and practical experience fade over time, like badly Xeroxed copies? Or can it be passed on and on without end?
Imagine the legal liability of folks learning how to drive a car, only to also replicate the same quirks and tics that the original ‘learned’ mind practiced, and having those passed on and on (I can hear it now: ‘you drive just like your great-great-great grandfather!’).
Do emotions, wishes and desires from our subconscious also get passed on to others without our knowing it – like viruses and trojans passed between computer programs and files?
Could people, in time as they undergo these processes, be subconsciously controlled to certain beliefs without their knowledge, making them into ‘good little citizens’ trained to not question authority?
Or be trained as soldiers to act in specific ways and means without their full realization, learning to hate without reason and function with no fear at the cost of their lives?
Another very important point: potentially we could learn more quickly, but what about retaining that knowledge? In the long run, is it better to take this approach, or to follow the tried and true route of repetition and practice to ensure that what we learn stays in our heads?
And what happens to us as a society when, over time, we learn more from the machines as opposed to learning on our own?
What kind of people do we become when the majority of our learning and experience is attained through plug-in modules and not through our own efforts?
These are just some of the questions we need to consider before we start opening up those ‘educational centers’ in strip malls, where we come on down, walk in the door, place our credit card on the counter and learn how to be brain surgeons.
Growing up, there was a saying instilled in us as students while we memorized the multiplication tables and Shakespearean sonnets: the mind is a muscle – it must be exercised.
I recall one line:
Nothing either good or bad, but thinking makes it so.
(Hamlet speaking with Rosencrantz and Guildenstern; kind of ironic when you read the context this quote is derived from,…)
Here is the link to learn more about this groundbreaking discovery: http://www.nature.com/srep/2013/130228/srep01319/full/srep01319.html