As reported in Wired magazine, researchers at Cornell University and Microsoft have proposed a new data center design. As you can well understand, data centers are not just repositories of data but also of wire – miles and miles of it – linking servers to one another, insuring against faults and failures while enabling rapid communication. Now a new design takes all of this a step closer to the next evolutionary level: a wireless data center.
Imagine a data center with no wires at all: communications chaos, eh? Given how limited wireless bandwidth can be, the skepticism is understandable. But what if the servers were close together – I mean, really close? Then wouldn't it be far more feasible?
The mathematics behind it comes from a nineteenth-century British mathematician, Arthur Cayley (http://en.wikipedia.org/wiki/Arthur_Cayley) – and no, he wasn't thinking about data centers, despite the best efforts of his contemporary Charles Babbage (and banish any thoughts of steampunk / “The Difference Engine” from thy mind!). Cayley was a remarkable and prolific man who produced a voluminous, rich, and powerful body of mathematical work; the piece that matters here is his notion of mathematical designs – what are now called Cayley graphs, which describe highly connected networks. As the folks at Cornell explain:
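To make the idea concrete: a Cayley graph takes a group, treats each element as a vertex, and connects each element g to g·s for every generator s. Here is a minimal sketch using the cyclic group Z_n – my own toy illustration of the general concept, not the construction used in the Cornell paper:

```python
def cayley_graph(n, generators):
    """Cayley graph of the cyclic group Z_n: vertices are the group
    elements 0..n-1, and each vertex v connects to v+s (mod n) for
    every generator s, plus v-s so the graph is undirected."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for s in generators:
            adj[v].add((v + s) % n)
            adj[v].add((v - s) % n)  # inverse generator
    return adj

def is_connected(adj):
    """Depth-first search from an arbitrary vertex."""
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

g = cayley_graph(12, {1, 5})
print(all(len(nbrs) == 4 for nbrs in g.values()))  # every vertex has the same degree: True
print(is_connected(g))                             # True
```

The appeal for a network topology is visible even in this toy: every vertex looks the same (same degree, same local structure), and connectivity comes from the group structure rather than from any single central hub.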
“Cayley’s responsible for showing that we have very strong connectivity,” says Hakim Weatherspoon, a professor at Cornell University who co-authored the paper. “So our wireless center can tolerate a very high level of server failure.” They call their creation the Cayley data center. It hasn’t been built yet, but if it does get funded, Weatherspoon believes it will keep on working until 14 percent of the racks or 59 percent of the server nodes fail.
Networking companies have been working on 60GHz networking products for a few years now. These 60GHz transceivers operate at a much higher frequency than the Wi-Fi network you use at home, which makes them speedier but gives them shorter range. By using a cylindrical rack design and reworking networking protocols, the Cayley researchers think they can cut down on outside interference and keep data pumping at about 10 gigabits per second. That’s remarkable, considering that 60GHz devices are supposed to operate in the 2- to 7-gigabits-per-second range.
Instead of engaging in the back-and-forth communication chitchat you’d see in a typical wireless device, one Cayley server would connect with another and then blast data, firehose-style, before signing off and waiting to receive information. Servers would talk to other machines within the rack using a transceiver on top of the pie-shaped servers, and they’d reach out to other racks using a second transceiver on the back. So each server would route data to the small number of other servers it is set up to communicate with. That means every server is a kind of mini-switch – called a Y-switch – and none of the server racks need traditional networking switches for communications.
Wow – a fully functional data center that keeps working despite 14 percent of its racks or 59 percent of its server nodes failing? Tell me that isn’t Star Trek! The commercial applications are tremendous, but the potential of Cayley’s design goes beyond just regular data centers. If you’ve read my prior posts on AI and quantum computers, you can well imagine – and appreciate – the convergence taking place: different aspects of technology meeting in ways and means heretofore not fully realized.
For more on this, read the Wired magazine article: http://www.wired.com/wiredenterprise/2012/10/cayley-data-center/