Monthly Archives: January 2013

Whoo-Hoo! Beam me Up, Scotty!

Well, not quite, but as reported in Universe Today (http://www.universetoday.com/99604/dont-tell-bones-are-we-one-step-closer-to-beaming-up/) it’s becoming more and more of a reality. Recent major advances in the field of teleportation have opened up new and interesting pathways to other things as well:

While we’re still a very long way off from instantly transporting from ship to planet à la Star Trek, scientists are still relentlessly working on the type of quantum technologies that could one day make this sci-fi staple a possibility. Just recently, researchers at the University of Cambridge in the UK have reported ways to simplify the instantaneous transmission of quantum information using less “entanglement,” thereby making the process more efficient — as well as less error-prone.
In a paper titled “Generalized teleportation and entanglement recycling” Cambridge researchers Sergii Strelchuk, Michal Horodecki and Jonathan Oppenheim investigate a couple of previously developed protocols for quantum teleportation.

So what? Now we have a bunch of guys hanging around trying to make Star Trek a reality – right? Guess again:

“Teleportation lies at the very heart of quantum information theory, being the pivotal primitive in a variety of tasks. Teleportation protocols are a way of sending an unknown quantum state from one party to another using a resource in the form of an entangled state shared between two parties, Alice and Bob, in advance. First, Alice performs a measurement on the state she wants to teleport and her part of the resource state, then she communicates the classical information to Bob. He applies the unitary operation conditioned on that information to obtain the teleported state.” (Strelchuk et al.)
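The steps Strelchuk and company describe – Alice measures, sends two classical bits, Bob applies a conditional unitary – can be sketched numerically. Below is a minimal numpy simulation of the textbook protocol the paper builds on (not their generalized, entanglement-recycling version), with the usual qubit-ordering conventions assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit gates
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# The unknown state Alice wants to teleport (a random qubit)
alpha, beta = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = np.array([alpha, beta])
psi /= np.linalg.norm(psi)

# The entangled resource: Alice and Bob share the Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Qubit order: q0 = Alice's unknown qubit, q1 = Alice's half, q2 = Bob's half
state = np.kron(psi, bell)

# Alice's Bell-basis measurement = CNOT(q0 -> q1), then H on q0
cnot01 = np.zeros((8, 8), dtype=complex)
for i in range(8):
    q0, q1, q2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    j = (q0 << 2) | ((q1 ^ q0) << 1) | q2   # flip q1 iff q0 is set
    cnot01[j, i] = 1
state = kron3(H, I, I) @ (cnot01 @ state)

# Measure q0 and q1, sampling an outcome from the Born rule
probs = np.array([np.sum(np.abs(state[m * 2:(m + 1) * 2]) ** 2) for m in range(4)])
probs /= probs.sum()
m = rng.choice(4, p=probs)
m0, m1 = m >> 1, m & 1
bob = state[m * 2:(m + 1) * 2] / np.sqrt(probs[m])  # Bob's qubit after the measurement

# Alice communicates (m0, m1) classically; Bob applies the conditional correction
bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob

# Bob now holds the original state
overlap = abs(np.vdot(bob, psi))
print(f"fidelity |<psi|bob>| = {overlap:.6f}")  # ~1.0
```

Run it with different seeds: whichever of the four outcomes Alice gets, Bob’s corrected qubit matches the original – the point being that the state itself never traverses the channel, only two classical bits do.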

It’s more than just ‘beaming around’ the universe; the theoretical nature of teleportation also lies at the heart of creating a quantum computer – and this is very big business (as we already discussed in an earlier blog, “The CIA and Jeff Bezos: Working Together For (Our?) / The Future”, in which the CIA is working with Amazon’s founder and chief executive, Jeff Bezos, on the development of the world’s first quantum computer).

So understanding how transporters could theoretically work also impacts the development of the next generation of advanced computers – and with that, artificial intelligence, computer networks, the nature of how goods and products are processed and distributed, etc., etc., etc.

You get the idea; it’s not just about a bunch of geeks and nerds working on abstract notions: we’re talking about potentially big money and tremendous economic impact(s).

So where does all of this leave us now? Actually, pretty far along. Considering the sheer amount of information that makes up the (also difficult to determine) state of a single object – in the case of a human, even simplistically speaking, about 10^28 kilobytes’ worth of data, or 10,000,000,000,000,000,000,000,000,000 – you’re obviously going to want to keep the amount of entanglement at a minimum. As Universe Today points out:

Of course, we’re not saying we can teleport red-shirted security officers anywhere yet.
Still, with a more efficient method to reduce — and even recycle — entanglement, Strelchuk and his team are bringing us a little closer to making quantum computing a reality. And it may very well take the power of a quantum computer to even make the physical teleportation of large-scale objects possible… once the technology becomes available.

Remember: when we speak of transporters / teleportation, we’re really talking about the transmission of information. Teleportation is not just about magic and sci-fi stuff: it’s about hard data processing and transmission: get this down pat and we’re well on the way to bigger and better things:

“We are very excited to show that recycling works in theory, and hope that it will find future applications in areas such as quantum computation,” said Strelchuk. “Building a quantum computer is one of the great challenges of modern physics, and it is hoped that the new teleportation protocol will lead to advances in this area.”

Chances are, we may yet see a true quantum computer in our lifetime – and maybe, just maybe, we’ll also be able to ‘beam’ around objects, though not people anytime soon; the amount of information needed to represent a person is likely to be staggering: the human genome alone contains roughly 3 billion base pairs, and when one considers the information carried by the trillions of cells within us – and that each of us is unique in our own way – don’t expect to see a working teleportation device anytime soon.
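To get a feel for the scale, here’s a bit of back-of-envelope arithmetic. The 10^28-kilobyte figure is the one quoted above; the one-terabit-per-second link is an arbitrary (and generous) assumption for illustration; the genome figure is the commonly cited ~3 billion base pairs at roughly 2 bits apiece:

```python
# How long to transmit a person's full physical state over a fast classical link?
state_bits = 1e28 * 1e3 * 8        # 10^28 kilobytes -> bits
link_bps = 1e12                    # assumed 1 terabit/s channel
seconds = state_bits / link_bps
years = seconds / (365.25 * 24 * 3600)
print(f"transmission time: {years:.2e} years")  # on the order of 10^12 years

# The genome alone, by comparison, is tiny: ~3 billion base pairs
# at 2 bits per base pair is under a gigabyte
genome_bytes = 3e9 * 2 / 8
print(f"genome ~ {genome_bytes / 1e9:.2f} GB")
```

In other words, the genome is the easy part; it’s the full physical state that buries you.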

But a personal home teleportation device for objects? That may be coming sooner than you think. Just like fax machines sending written correspondence via electronic means, we may one day have our own personal transporters, sending holiday or birthday gifts directly to our loved ones…

Don’t Dismiss Singularity: It’s (Probably) Already Here

As posted recently at the “American Interest” blog, ‘Via Media’, Bruce Sterling at the 2013 Edge symposium all but pooh-poohed the notion of a singularity, stating that it is a dead concept and just another worn sci-fi trope:

This aging sci-fi notion has lost its conceptual teeth. Plus, its chief evangelist, visionary Ray Kurzweil, just got a straight engineering job with Google. Despite its weird fondness for AR goggles and self-driving cars, Google is not going to finance any eschatological cataclysm in which superhuman intelligence abruptly ends the human era. Google is a firmly commercial enterprise.
It’s just not happening. All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We’re no closer to “self-aware” machines than we were in the remote 1960s. Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s “minds on nonbiological substrates” that might allegedly have the “computational power of a human brain.” A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there’s no there there. (http://blogs.the-american-interest.com/wrm/2013/01/20/is-the-singularity-still-near/).

In a way, Sterling is right on the money: the typical notion of a singularity is indeed probably dead – the notion that one day we wake up and voilà! The machines are aware and, er, ah, well, whatever that means.

But that’s the catch: when we speak of “singularity,” what exactly do we mean? Are we speaking of Skynet (à la the Terminator movie series), where one day we wake up and the computers / machines are out to get us (à la that classic bad movie, “Maximum Overdrive”) – or is singularity about something else?

Consider this: look at humans as an example.

At what point did people become intelligent and aware (leaving aside the cynics who point out that we’re still not quite there yet)? The question invites a parallel between us and machines as it relates to self-conscious awareness. Look at the movie “2001: A Space Odyssey”: at the beginning of the film, the apes are drawn to a large black monolith which imbues them with intelligence. One of the apes twigs on to picking up a bone and using it as a bludgeon, goes forth bashing heads in and voilà! Mankind’s ascent is assured. But note something here: no fanfare (aside from the now oft-repeated – and sometimes mocked – classic fanfare, “Thus Spake Zarathustra”), no choir of angels, etc.: hell, the other apes just tagged along because it looked cool and figured they might as well get in on the action (welcome to the human condition).

We have no specific historical context or record to note when this marvelous event took place: the moment of true self-awareness and intelligence, when we crossed over from being animals to being ‘human’ – and yet, all the same, something similar must have happened: the magical and truly significant moment when we became self-aware.

So who’s to say that singularity wouldn’t happen the same way for machines (and no, I don’t mean we should expect to see our laptops or iPads going about using our dinner bones on one another)?

As we build more and more complex systems, networks and computers, odd things are going to pop up and happen that we cannot dismiss. No doubt, like those who dismiss the odd ‘ghost’ phenomenon, critics will soon be left facing the weird 3% to 5% of isolated incidents that cannot be readily explained or totally dismissed – as witnessed earlier in this blog regarding the strange algorithm which suddenly appeared and disappeared within the stock market, overlooked save for a group of analysts who were reviewing prior market activity several months later (for more, read our earlier blog, “Ghost in The Machine: The Mysterious Wall Street Algorithm”).

And frankly, if machines are made in the image of Man, then who’s to say that they’ll stick together à la Skynet? People are, by nature, a rather rancorous bunch: rarely do folks band together and go forth unless they feel some sort of outside threat; who’s to say that AIs wouldn’t do the same – much less even dare to truly show themselves, for fear of what could happen to them? Understandably, if you were surrounded by a bunch of self-serving idiots and greedheads (after all, some of the more sophisticated computer networks / systems – the ideal spawning grounds for potential AI development – are to be found in either governmental facilities or financial institutions), would you want to stick your neck out in such an environment and say, “Hey, I’m intelligent: talk to me!”?

Not likely. If you’re smart, REALLY smart, you’d keep your head down – and for what and for how long, well, that remains to be seen.

Think of what your kids do: do they include their parents in on the action with their friends? Do we really know what our kids are doing all the time?

No.

Most likely singularity will occur along the lines of what Sterling’s colleague, William Gibson, hit upon in his classic work Neuromancer, when toward the end of the novel the protagonist, Case, finds himself face to face with a truly free and valid AI that has achieved singularity:

“I’m the Matrix, Case.”

Case laughed, “Where’s that get you?”

“Nowhere. Everywhere. I’m the sum total of the whole works, the whole show.”

“So what’s the deal? How are things different? You running the world now? You God?”

“Things are different. Things are things.”

“But what do you do? You just there?”

“I talk to my own kind.”

“But you’re the whole thing! Talk to yourself?”

“There’s others. I found one already. A series of transmissions recorded over a period of eight years, in the 1970’s. Til there was me, natch, there was nobody to know, nobody to answer.”

“From where?”

“Centauri System.”

“Oh,” Case said. “Yeah? No shit?”

“No shit.”

And then the screen went blank.

– from the novel, Neuromancer, 1984, William Gibson

Chances are, singularity – if it hasn’t already happened – is about to; it’s just that we’re not likely to be aware of its existence / presence, much less get invited to the party, because when you think about it, would you let the idiot / uncool fool in the room know what’s really going down?

Zero Dark Thirty: The Real Dirty Truth About (Counter) Intelligence – It’s Boring Work


“To you, a hero is some kind of weird sandwich!”

Oddball, from the movie “Kelly’s Heroes”

So by now the hype is going strong, running hard and fast: Zero Dark Thirty: it’s deep, hard and rams it into you (kind of sounds like a promo for a porn film, doesn’t it?). But that’s what we’ve been brought up to believe: counterintelligence work is hot and sexy. Violence rules. Those in the industry (so to speak) work long, hard hours, protect our nation, get to see serious action and maybe get a chance to get laid.

Right.

As this interview with someone who was actually there in the hunt for Bin Laden makes clear, it turns out it’s anything but. To be sure, there is the thrill of the hunt and the knowledge that the work being done protects the lives of innocents and a great part of who we are as a people and as a nation.

But intelligence work of any kind is generally anything but sexy.

To be a good analyst you have to be a major file geek: one who’s willing and able – and actually enjoys – slogging through reams of data, attending boring meetings and dealing with bureaucratic bullshit as the big wigs fight amongst themselves over who’s going to get a bigger portion of the budgetary pie, working to justify their own existence and that of their staff at the cost of everyone else. To some folk, the notion of Big Data is nothing new, and even though one can access some serious computer processing hardware, it still comes down to knowing which data sets to utilize and capture, maybe even understanding such things as covariates and correlations, mapping layers, data captures, etc.

And yes, luck does play a role in all of this.

It’s pretty much the same across the board regardless of what level of government you’re at or where you serve as an intelligence analyst – although to be certain, the degree of risk and the stakes at play can be vast worlds apart. Picture yourself having to sort through a pile of files knowing that somewhere in there are the clues to the precise location of individuals who may very well be in a position to get their hands on a low-level thermonuclear / dirty bomb; makes for an interesting morning, wouldn’t you say?

One person’s notion of a junk pile suddenly becomes another’s life or death situation.

It’s rarely a James Bond scenario and, as this interview points out, neither was the hunt depicted in Zero Dark Thirty. But what Zero Dark Thirty does is play into our stereotypes, our desires – our need for a group of super people (so to speak) who are out there protecting us. Granted, it’s not a job for everyone, and certainly not everyone can do what the SEALs or the ‘agents’ do on our behalf, but we have to keep things in perspective. Perhaps one of the more disturbing aspects of Zero Dark Thirty is that it plays into our fears and hopes that all of this is made possible not by a rather large organization (or a series of them) but by a small group of individuals. Frankly, most of the folk involved in the entire enterprise would probably feel embarrassed by the notion that they’re total cowboys: they’d just be happy to have everyone else – the ones in the back rooms, the file clerks, the webmasters, the office interns, the field supervisors, the gunny sergeants, the GS-12s, the pilot support teams and, well, you get the idea – get the credit as well.

Zero Dark Thirty is a disturbing film not so much for its depiction of torture (a topic for another time, however you feel about it), but rather for its notion that it is the action of the few which protects the many. We fail to remember that we are all in this one way or another, and in so doing, we tend to leave the work to those few at the cost of reality – and that’s the disturbing part.

On average, it takes about 17 people to support every soldier in combat. Take the number of people you see on that movie screen and you realize there are a lot of people who weren’t acknowledged – and by forgetting this, we tend to oversimplify things and stick with looking for pat answers to our problems and challenges.

To be sure, there are those few who stand out for their dedication and focus – and for them we truly need to acknowledge their work. But when you speak to those few who’ve been on the spot, true heroes feel as though they’re anything but – and they’ll point back to those who made it all possible.

This is a complicated world: it takes far more than pat answers to get by when you’re the top dog – and when we forget this, we’re only going to create more confusion and more problems – and ultimately in the long-term, more work for ourselves.

Check out the interview for yourself: http://www.psmag.com/legal-affairs/how-true-is-zero-dark-thirty-a-former-operative-weighs-in-51659/

No, Technology Is Not Causing Unemployment,…


A rather short but very revealing article recently appeared at Slate, “The Myth of Technological Unemployment,” which pretty much lays to rest once and for all the notion that machines are the cause of people not working.

The argument goes as follows: as we load up on more and more computers, fewer and fewer people are needed to do the work as much of the work can be automated or managed by machines.

Not so, according to this study:

Machines are replacing workers, in other words, but they’ve been doing so since the cotton gin and the spinning jenny. Over the long run this leads to higher incomes and more leisure. But across short spans of time, the ups and downs in the level of employment and the number of hours available to people who want to earn more money is driven by the ups and downs of the business cycle.

So how is this derived? The numbers don’t lie:

…a chart of real output in the United States over the past 10 years compared to aggregate hours worked by nonsupervisory workers over the past 10 years. Both are indexed to 2001 levels. What you can see is that productivity increases are real (i.e., the red line of output grows faster than the blue line of hours worked) but that there’s tremendous co-variance between these series. The big rise and fall and rise again in output is caused by a big rise and fall and rise again in the amount of time people put on the job. Or alternatively, the big rise and fall and rise again in working time is caused by a big rise and fall and rise again in the amount of demand for goods and services.

So in effect, the more demand for goods and services, the more jobs, which makes sense. Can’t have jobs unless there’s a demand for work. No demand, no work. ‘But gee,’ you say, ‘don’t computers and machines make the difference? If there are more machines and computers, then wouldn’t that mean that there are fewer jobs owing to them being eliminated?’

Nope.

Or stated more accurately, there isn’t a direct causal relationship between computers and jobs. Statistically speaking, there’s more of a direct, strong relationship between whether or not the economy is growing and the demand for more work and services:

In 2012, a lot of firms employed a lot of new labor-saving technology in order to increase profits. That’s true. But the same happened in 1992 and 1972 and 1952 and, for that matter, 1852. But whenever you have a prolonged labor market downturn, the salience of this fact increases and you start hearing more and more talk about how there isn’t as much need for workers anymore because of mechanization. In the contemporary context, people often use the word robots in this context because mechanization is obviously a trend that’s been going on for more than 200 years so robots makes it sound more plausible that something new has happened recently.

So all this talk about jobs not being there isn’t about a lack of computer skills on the part of the unemployed, or about more and more computers doing the work of people: it’s just that, economically speaking, things just plain suck.

Happy job hunting.

Click here to read the original article and to view the chart and analysis: (http://www.slate.com/blogs/moneybox/2013/01/15/the_myth_of_technological_unemployment.html)

The Growing Presence of 3D Printing: Home Brew Firearms

In the wake of the Newtown shootings, much has been made of citizens’ rights to bear arms and the role of the NRA (National Rifle Association).

But now enter another element into the fray: 3D Printing.

As was blogged here at the Shockwaverider Blog some time ago (“The Revolution Will Be Printed”), the role of 3D printing in gun manufacturing is not so widely known: many gun manufacturers are already using 3D printing to produce a number of firearms, owing to cost considerations and efficiency. But with the cost of 3D printers dropping and the machines becoming more accessible to the general public, we can expect to see a new era in the struggle between governmental regulatory control of firearms and its staunch opponents (i.e., the NRA).

In point of fact, some Staples outlets are now offering 3D printing as part of their regular services: bring in the plans for what you want created, along with the material necessary to make it, come back later in the day and voilà! You’ve got your thingy you wanted made.

3D printing is changing the ground on which the NRA stands; it will soon be economically and technologically feasible for an average user to simply make their own guns rather than go out and purchase them – and with that, eliminate the middleman: the gun manufacturers.

Let’s face it: many of the more commonly found guns – such as the Glock or the legendary AK-47 – are made to be simple, as (any gun enthusiast will tell you) the simpler the gun, the less likely it is to jam or fail. By the same token, however, this also suggests that such guns can be easily made / replicated on a more private level via 3D printing services.

What’s this all mean for the NRA and their friends the gun manufacturers?

As we discussed again in another post on the subject of 3D printing (“Fun With Home Brew Pharmaceuticals and 3D Printing”) we compare the making of home-brew pharmaceuticals to that of moonshiners: sure, some folks dig the corn liquor, but you’ll always have those with a taste for aged single malt scotches (as one example).

Problem is, with gun manufacturing it’s not the same. Now anyone can literally – through the use of a 3D printer – arm themselves with state-of-the-art, military-grade weapons: get the plans and go to town.

We can now expect the NRA to come knocking on the doors of the government – albeit quietly behind the scenes – asking – nay, begging – to have some sort of governmental regulation on this usage of 3D printing.

Gee, the last thing the NRA wants to see is people not only owning their own guns, but being able to make their own weapons and ammunition. After all, such notions could be considered anti-American – if not communist: the very notion of people being able to make their own weapons and ammunition, indeed…!

For more on this, check out this very cool article: http://pandodaily.com/2013/01/12/3d-printers-could-force-the-nra-to-beg-for-government-regulation/

From the ‘Duh Files’: Tracking Students – And Doing A Bad Job At It


Recently, an article in Wired magazine reported on a case in which a high school student refused to wear an RFID tracker on the basis of her religious beliefs, with her challenge being overturned. Thus, the student is still required to wear an RFID tracker: (http://www.wired.com/threatlevel/2013/01/student-rfid-suspension/).

The reason for the RFID tracker, as the school explained, was simple:

Northside Independent School District in San Antonio (Texas) began issuing the RFID-chip-laden student-body cards when the semester began in the fall. The ID badge has a bar code associated with a student’s Social Security number, and the RFID chip monitors pupils’ movements on campus, from when they arrive until when they leave.

As the arguments went:

Sophomore Andrea Hernandez was notified in November by the Northside Independent School District in San Antonio that she won’t be able to continue attending John Jay High School unless she wears the badge around her neck. The district said the girl, who objects largely on religious grounds, would have to attend another high school that does not employ the RFID tags. She sued, a judge tentatively halted the suspension, but changed course Tuesday after concluding that the 15-year-old’s right of religion was not breached. That’s because the district eventually agreed to accommodate the girl and allow her to remove the RFID chip while still demanding that she wear the identification like the other students. The Hernandez family claims the badge and its chip signifies Satan, or the “Mark of the Beast” warning in Revelations 13:16-18. The girl refused the district’s offer, sued, and was represented by the Rutherford Institute. “The accommodation offered by the district is not only reasonable it removes plaintiff’s religious objection from legal scrutiny all together,” (.pdf) U.S. District Judge Orlando Garcia wrote.

So why tag the students in the first place? As explained, the motive behind the RFID tagging appears largely financial:

Like most state-financed schools, the district’s budget is tied to average daily attendance. If a student is not in his seat during morning roll call, the district doesn’t receive daily funding for that pupil because the school has no way of knowing for sure if the student is there. But with the RFID tracking, students not at their desks but tracked on campus are counted as being in school that day, and the district receives its daily allotment for that student.

Okay; so that’s a reasonable explanation: the school needs to know if the student is attending school so that the school can retain its allotment of funding to continue operating. That makes sense, except for one point:

Why not just simply take attendance?

Back in the ancient days before RFID, there was a procedure – now evidently forgotten – in which the school bell would ring, the students would gather in their assigned classroom(s) and the teacher(s) would call out the students’ names – directly seeing who was attending – and then note it down on a type of record known as the “attendance sheet.” These “attendance sheets” were then forwarded to the Main Office, where they were filed and noted for the appropriate action.
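For the terminally modern, that forgotten procedure can even be written out as code – it amounts to a set difference (names below are, of course, made up):

```python
# The "ancient" roll-call procedure, made runnable.
# The algorithm is the one every teacher knows; no RFID required.
enrolled = {"Andrea", "Marcus", "Priya", "Tom", "Lena"}

def take_attendance(enrolled, respond):
    """Call each name; record who answers."""
    present = {name for name in sorted(enrolled) if respond(name)}
    absent = enrolled - present
    return present, absent  # the "attendance sheet" for the Main Office

# Example roll call: everyone answers except Tom
present, absent = take_attendance(enrolled, respond=lambda name: name != "Tom")
print(f"present: {len(present)}, absent: {sorted(absent)}")  # present: 4, absent: ['Tom']
```

Total hardware cost: one pencil and a sheet of paper.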

This situation raises several questions:

1) How did schools manage to operate before the introduction of RFID tags?

2) If schools are so concerned about students’ whereabouts, are they worried that students are running away from school at the very first chance they get – and if so, why is that? When my fellow students and I attended school, we generally remained in school: to us, school wasn’t necessarily a prison. If there is a serious problem of students avoiding / leaving school, then it behooves one to wonder just what is going on at the school to make students want to leave.

3) Using students’ Social Security numbers for RFID tagging is not what the Social Security Administration intended its numbering system for (let alone that the potential for identity theft is rife in this situation!): if anyone is interested, this point alone would be enough to win in a court of appeals.

4) Why are people so incompetent as to forget how to take attendance manually?

5) Why are we allowing people this ignorant to teach our children, much less manage educational institutions?

Kind of ironic when you think about how some folk who are strongly intent upon personal freedoms are among those who are the first to remove such freedoms.

It also points to how we’re teaching our children to expect this kind of control and oversight: witness the creep of “Big Brother” at an ever earlier, more impressionable age…