Accelerating Network Speeds Bring New Set Of Challenges

For more than five decades one of the defining aspects of the IT industry has been Moore’s Law, which has been folded, spindled and mutilated far beyond its transistor roots into today’s generally accepted guideline that technology will continue to get more powerful and/or smaller and/or cheaper on an 18-24-month cycle. So now we have smartphones that are more powerful than mainframes from the ’80s and ’90s, free cloud storage that would have cost millions not so long ago, and network speeds in excess of 100 gigabits per second, orders of magnitude faster than the 300-baud connections I started with in the early ’80s.
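Just how many orders of magnitude? A back-of-the-envelope check (my arithmetic, not the article's) comparing that 300-baud modem to a 100Gb/s link:

```python
import math

old_bps = 300    # a 300-baud modem from the early '80s (~300 bits/s)
new_bps = 100e9  # a 100 gigabit-per-second network link

ratio = new_bps / old_bps
print(f"speedup: {ratio:,.0f}x")                        # ~333 million-fold
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # ~8.5

# Number of doublings needed to span that gap, and the years it takes
# at Moore's Law's 18-24-month doubling cadence:
doublings = math.log2(ratio)
print(f"doublings: {doublings:.1f}")  # ~28
print(f"years at 18-24 months each: {doublings * 1.5:.0f} to {doublings * 2:.0f}")
```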

However, speed brings with it a new set of challenges, according to Jay Botelho, director of product management at WildPackets, a network analysis and monitoring vendor. Late last year the company released the results of a survey showing that while massive increases in network data have driven demand for higher-speed networks (10G or higher), traditional approaches to network analysis are no longer feasible as organizations seek to capture data 24/7.

Almost all respondents (92%) had already adopted 10G or higher network speeds, but their number one challenge in transitioning to faster networks was limited network visibility. Another major challenge at 10G was the inability to gather real-time statistics or perform network forensics, which WildPackets said is key to finding intermittent network or application errors as well as issues like denial-of-service (DoS) attacks and advanced persistent threats (APTs).
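The survey doesn't spell out what those real-time statistics look like, but the underlying idea can be sketched: keep rolling per-interval counters and flag anomalous spikes, the kind of signal that can surface a DoS attack in progress. A minimal, hypothetical illustration in Python (the traffic numbers are invented):

```python
from collections import deque

class RateMonitor:
    """Rolling packets-per-second monitor that flags sudden spikes."""

    def __init__(self, window=10, spike_factor=5.0):
        self.samples = deque(maxlen=window)  # last N per-second counts
        self.spike_factor = spike_factor

    def observe(self, pkts_this_second):
        baseline = sum(self.samples) / len(self.samples) if self.samples else None
        self.samples.append(pkts_this_second)
        if baseline and pkts_this_second > self.spike_factor * baseline:
            return f"ALERT: {pkts_this_second} pkt/s vs ~{baseline:.0f} baseline"
        return None

# Simulated per-second packet counts; the last value mimics a flood.
monitor = RateMonitor()
for count in [980, 1010, 990, 1005, 995, 1000, 50_000]:
    alert = monitor.observe(count)
    if alert:
        print(alert)
```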

“It’s not a surprise that customers going to 10Gb are finding challenges,” said Botelho in a recent interview. “People still want that kind of information (network analysis), but as we go to 40 and 100 Gb, it becomes an almost unfathomable challenge.” What was doable at 1Gb becomes a challenge at 10Gb, he said. “40Gb is a whole other challenge that nobody is going to crack yet… at least in a single platform and it’s only going to get worse from there.”

And while organizations wrestle with the move to 10, 40 and 100Gb, the next iterations are already in the works. Sometime in the near future (2016-17?) we can expect the next major evolution of Ethernet, 400Gb/s Ethernet (400GbE), and after that, 1.6TbE.

In the meantime, there are a number of reasons why networks must get faster, including the following (a quick arithmetic check of the implied growth rates appears after the list):

-soaring network traffic – 400% growth for datacenter traffic and 600% for cloud traffic by 2016;

-in 2013 the number of mobile app store downloads was expected to reach 102 billion, up nearly 60% from 2012’s 64 billion;

-smartphone shipments were expected to surpass 1 billion units for the first time in a single year;

-vendors were expected to ship more than 1.8 billion mobile phones in 2013, growing to over 2.3 billion in 2017;

-in 2009 there were 2.5 billion devices connected to the Internet with unique IP addresses, most of them devices people carry, such as cell phones and PCs; by 2020 there will be up to 30 billion uniquely addressable connected devices, most of them products;

-spending on Internet of Things (IoT) technology and services will generate revenues of $8.9 trillion by 2020, almost double 2012’s $4.8 trillion.
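Taken at face value, those figures imply steep but quite different compound growth rates. Here is that quick check (my arithmetic, using the endpoints listed above):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Endpoints taken from the list above.
print(f"app store downloads, 2012-13: {cagr(64e9, 102e9, 1):.0%}")       # ~59%
print(f"connected devices, 2009-20: {cagr(2.5e9, 30e9, 11):.0%}/yr")     # ~25%/yr
print(f"IoT revenue, 2012-20: {cagr(4.8e12, 8.9e12, 8):.0%}/yr")         # ~8%/yr
```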

A recent Brocade survey offered the following bad news:

-91% of IT decision-makers stated that their current IT infrastructures still require substantial upgrades;

-33% admitted that their organizations experience multiple network failures each week;

-61% said their corporate networks are not fit for their intended purpose; and

-41% said that network downtime has caused their business financial hardship either directly — through lost revenue or breached SLAs — or from their customers’ lack of confidence.

Throw in increasing network obsolescence and the need for more speed becomes imperative. However, Moore’s Law alone won’t solve the problem: according to a recent Infonetics Research survey, a 2X cost/performance improvement every 18-24 months will no longer suffice.

“Our latest enterprise survey uncovered a solid outlook for network equipment spending, driven by the ever-growing demands placed on network infrastructure,” said Matthias Machowinski, directing analyst for enterprise networks and video at Infonetics. “But not all is well: there is a disconnect between the growth in network usage and enterprise budgets, and cost containment is one of the top priorities over the next year.”

According to WildPackets, which recently released its Omnipliance appliances for capturing and analyzing 1G, 10G, and 40G network data, the number one feature network engineers and IT directors want in a 10G network analysis solution is more real-time statistics, followed by faster forensic search times. Network forensics is the practice of capturing, recording, and analyzing network events to discover and resolve the source of a security attack or other network problem. Yet while 85% said it is a necessity at 10G, only 31% are actually instrumenting for it.
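What does a forensic search over recorded traffic look like in practice? One common pattern is to replay a stored capture and hunt for the fingerprints of an attack. A minimal sketch, assuming the third-party scapy library and a hypothetical capture.pcap file (not anything specific to WildPackets’ products):

```python
from collections import Counter

from scapy.all import IP, TCP, rdpcap

packets = rdpcap("capture.pcap")  # previously recorded network events

# Count bare SYNs per source address -- a crude SYN-flood indicator.
syns = Counter()
for pkt in packets:
    if IP in pkt and TCP in pkt and pkt[TCP].flags == "S":
        syns[pkt[IP].src] += 1

# The heaviest SYN senders are the first suspects for a DoS source.
for src, count in syns.most_common(5):
    print(f"{src}: {count} SYNs")
```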

One of the trends Botelho noted is the growing need “to be smarter on how we analyze this data and how much of this data do we store. Customers will need to make smarter choices.”

It will come down to smarter filtering, and “just smarter decisions,” he said.
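One way to read “smarter filtering” is to capture only the traffic of interest and keep lightweight summaries instead of full packets. A sketch of that idea, again assuming scapy (the BPF filter and the choice of what to keep are placeholders):

```python
from scapy.all import sniff

def summarize(pkt):
    # Persist metadata (here just printed) rather than the full payload.
    print(f"{pkt.time:.3f} {pkt.summary()}")

# Requires capture privileges (typically root).
sniff(filter="tcp port 443",  # BPF capture filter: ignore everything else
      prn=summarize,          # per-packet callback
      store=False,            # don't accumulate packets in memory
      count=100)              # stop after 100 packets, for the demo
```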

Other challenges include reducing the amount of data that is unnecessarily backed up on a regular basis, as well as dealing with security. At the end of the day, though, Botelho said it mainly comes down to speed. “That is what still truly drives this market.”

Author: Steve Wexler
