The GENI research network points towards a brand new Internet
I’ve touched on GENI (Global Environment for Network Innovations) a few times in this column, mostly regarding its significance to Chattanooga’s bandwidth mastery of the universe. It extends EPB’s one-gigabit-per-second Internet service, still the biggest community-wide ultra-high-bandwidth rollout despite Comcast’s recent announcement of two-gig service in some cities, by connecting UT-Chattanooga to the 60-plus research universities that make up their own ultra-high-bandwidth network through their connection to GENI.
GENI has fascinating implications for Chattanooga’s potential role as a test kitchen for new bandwidth-intensive applications that need a working network for testing. But until I heard Andrew Armstrong, technologist-in-residence at Co.Lab and the Enterprise Center, speak about GENI at the inaugural Chattanooga Salon last month at Lamp Post Group, I was a little fuzzy on the actual technology of GENI.
Here’s how Armstrong explains it... filtered through my nontechnical understanding, of course. Any errors are definitely mine.
GENI is a platform for researching new methods of networking. As technical as that might sound, the crux of the need for a new Internet—which is what GENI is pointing toward, eventually—is that many of the features that made the Internet possible when it was created have become weaknesses as it’s grown to be vastly larger than any of its creators ever imagined.
In the old Internet model, when you click on a link, communication between your computer and the server hosting what you want to see is mediated by infrastructure computers called routers. “Your” information—whether it’s a 150-word book description on Amazon or a streaming video or a CT scan of your heart—gets broken up into smaller “packets” of data, each labeled with its ultimate destination (the address of the computer it needs to reach) and instructions for reassembling the pieces. None of the millions of routers on the Internet knows the whole route to that destination. Each one only knows the routers near it and how to pass data along.
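To make the packet idea concrete, here is a toy sketch in Python. It is my own illustration, not anything from GENI or Armstrong, and the names and sizes are invented; real networks do this inside protocols like IP and TCP, not in application code like this.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    destination: str  # address of the computer the data needs to reach
    sequence: int     # this piece's position, so the receiver can reassemble
    total: int        # how many pieces make up the whole message
    payload: bytes    # the actual chunk of data

def fragment(data: bytes, destination: str, size: int = 3) -> list[Packet]:
    """Split a message into small packets, each labeled for reassembly."""
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    return [Packet(destination, seq, len(chunks), chunk)
            for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> bytes:
    """Packets may arrive in any order; sort by sequence number and rejoin."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.sequence))

packets = fragment(b"150-word book description", "203.0.113.7")
packets.reverse()  # pretend the pieces arrived out of order
print(reassemble(packets))  # b'150-word book description'
```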
“When you push data on to the Internet, there’s immediately so many different routes it could travel,” says Armstrong. “That’s the first decision it makes. At each one of those next nodes, it has an equal number of decisions, and it grows so exponentially that it’s really hard to reason about where your data is going and how it’s being switched. It’s more ‘let’s see what happens’ than ‘this is what I think will happen.’”
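A second toy sketch, again my own and using a made-up five-router map, shows what Armstrong is describing: each router knows only its immediate neighbors, makes a local decision, and hands the packet off, so the full path is only known after the fact. (Real routers consult routing tables rather than choosing at random, but the decision is still made hop by hop.)

```python
import random

# A made-up map: each router knows only its immediate neighbors.
neighbors = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C", "E"],
    "E": ["C", "D"],
}

def forward(source: str, destination: str) -> list[str]:
    """Pass a packet from router to router; every hop is a purely local decision."""
    path = [source]
    current = source
    while current != destination:
        # No router knows the whole route; it just picks one of its neighbors.
        current = random.choice(neighbors[current])
        path.append(current)
    return path

# Run it twice and the same packet may take two different routes.
print(forward("A", "E"))
print(forward("A", "E"))
```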
If that sounds suspiciously like a bunch of stupid people getting together to do something smart, that’s not far from the truth. Remember: the Internet was created in 1969. Computers couldn’t do anywhere near what they can do now. A computer network of any kind was unusual, not the office staple it is now. The genius of the Internet was figuring out how to make widely distributed computer networking happen with extremely limited resources.
“When the Internet was built, this was the only way to make that work,” says Armstrong. “There just weren’t the tools, the computing, even the ability in silicon to build switches that could take instructions from the outside. It was hardware. A switch took the traffic and did with it what it was supposed to, according to burned-in-silicon instructions. That was the only thing that could do it fast enough.”
The problem now is scale, because of the Internet’s explosive growth.
“Individual slices of technology have been upgraded to handle the scale, but you still have approximately the same model,” says Armstrong. “Nobody had time to stop and say ‘we should be rethinking the fundamentals.’ They were so busy building the incremental technologies.”
Those non-deterministic packet pathways cause problems: video conferences stutter because packets take circuitous routes, and security suffers because no one plans where the data will go; it may even be impossible to know where data has gone after the fact. It’s also inefficient.
GENI, sponsored by the National Science Foundation, is a test bed where researchers can build and test experimental network protocols and the applications that use them. When the network is defined by software, not hardware, all the mystery is gone.
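For contrast with the earlier random-walk sketch, here is a hypothetical software-defined version, again my own illustration rather than anything GENI actually runs: a controller that can see the whole map plans the route before any data moves, then hands each switch a simple forwarding rule.

```python
from collections import deque

# The same made-up map as before, but now a central controller sees all of it.
topology = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C", "E"],
    "E": ["C", "D"],
}

def compute_path(source: str, destination: str) -> list[str]:
    """Breadth-first search: the controller plans the shortest route in advance."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    raise ValueError("no route between these routers")

def install_rules(path: list[str]) -> dict[str, str]:
    """Turn the planned path into per-switch forwarding rules: 'send it to X next.'"""
    return {hop: nxt for hop, nxt in zip(path, path[1:])}

path = compute_path("A", "E")
print(path)                 # ['A', 'C', 'E'] -- known before any data moves
print(install_rules(path))  # {'A': 'C', 'C': 'E'}
```

The route is decided once, centrally, and every switch along the way simply obeys it, which is what makes the path knowable and plannable instead of a mystery.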
From a distributed network in which data follows unknown—even unknowable—paths out of necessity, it looks like the Internet is heading toward an orchestrated state, in which a software-defined network can determine the most efficient data pathway and send data more securely. But Armstrong hedges his bets.
“I don’t want to go on record saying an orchestrated Internet would be better, or the future will be this way,” he insists. “I don’t want to predict. I’ll be wrong. But these are the types of experimentation this platform will enable.”
Rich Bailey is a professional writer, editor and (sometimes) PR consultant. He covers local technology for The Pulse and blogs about it at CircleChattanooga.com. He splits his time between Chattanooga and Brooklyn.