VMware’s Intention to Acquire VeloCloud…

The announcement of VMware’s intention to acquire VeloCloud signals the broadening of the NSX Everywhere story. SD-WAN is a solution that offers agility, security, orchestration, and other business outcomes for remote and branch offices; it should not be considered just an MPLS replacement for the WAN with savings on bandwidth costs. At their core, both NSX and VeloCloud’s products are based on an overlay network, which offers the flexibility to treat a logical network separately from the physical network, a concept that MPLS popularized for many years. Ironically, it is the perceived inflexibility and cost of MPLS that became the initial drivers for SD-WAN, which promises to modernize branch networks and the WAN. VMware’s NSX Everywhere plan is similar to Cisco’s ACI Anywhere plan in that it enables the core data center networks to reach out into other locations, such as a public cloud.


Reducing Storage TCO with Private Cloud Storage

With the burgeoning growth of data, many legacy storage systems simply struggle to keep the total cost of ownership (TCO) in check. This article looks at the ways that Private Cloud Storage can address the TCO shortcomings of legacy storage. NOTE: This column was originally published in the Storage Switzerland Weekly...

Surveys Say: Network Apocalypse Looming!
Sep27

There aren’t many surprises in the latest market survey from networking vendor Emulex, but it certainly confirms that enterprise networks are facing massive changes to adapt to and support the anywhere, anytime, any-device reality of today’s hyper-connected world. The growing requirement for faster data-center networks isn’t a surprise, says Shaun Walsh, senior vice president of marketing and corporate development, but the rate at which bandwidth demand is increasing is. The biggest difference is that networks primarily ran at up to 1Gb Ethernet for the better part of a decade, but over the next few years networks will be running at speeds of 10, 40 and 100Gb concurrently, he says. “Today, 40% have already deployed 10Gb Ethernet, and in another four years, the majority of those networks will be operating at 100GbE. It’s truly unprecedented.”

The survey of more than 1,500 IT executives in North America and Europe, conducted in August, found that 20% will require 10,000x faster networks by 2016. More than a quarter (27%) say their need for network I/O increases by 100% or more each year. Just over a third (37%) say they currently manage 1 petabyte of data or more, and 11% say they manage 100 petabytes or more. Close to a fifth (19%) say their networks will run at 1Tbps or faster by 2016.

These findings are consistent with a number of recent reports. A study from Dimension Data, the 2012 Network Barometer Report, concluded that 45% of networks will be totally obsolete within five years, and that among devices now in obsolescence, the percentage at end-of-sale increased dramatically from 4.2% last year to 70% this year. Gartner stated that networks remain a top priority for organizations, with telecom equipment and services expected to account for the largest chunk of a global IT spend that will surpass $3.6 trillion in 2012.
Telecom equipment spending will reach $377 billion this year, up 17.5%, and grow another 8.3% in 2013, while telecom services will come in at $1.686 trillion (up 1.4%) and $1.725 trillion (up 2.3%) in 2012 and 2013, respectively. Walsh calls today’s network environment the result of a “perfect storm” caused by four trends – big data, cloud, virtualization and network convergence – hitting at the same time and putting tremendous pressure on data centers to increase network bandwidth and I/O. “Network dynamics are changing.” Cloud isn’t so much a trend as a tool in the toolbox: “We need to pick the right tools for the right job.” Big data, AKA the acquisition, analysis and interpretation of ridiculously huge data sets, is an interesting example. Currently restricted due to costs,...
