RONNIEE: A Better Mousetrap For Storage Network Bottlenecks?

With devices, connected things and data growing explosively, we are evolving from an era of high-performance computing to an era of high-performance data, according to A3CUBE CTO and cofounder Emilio Billi. And this evolution requires a new architecture, one that shifts supercomputing expertise from being centered on computation to being centered on data.

The solution, or at least A solution, is the company’s recently introduced RONNIEE Express Network Interface Card (NIC), designed to bring supercomputing benefits to the enterprise by ‘dramatically transforming storage networking to eliminate the I/O performance gap between CPU power and data access performance for HPC, Big Data and data center applications.’ Its RONNIEE Express data plane, incorporating in-memory network technology, elevates PCI Express from a simple interconnect to a new intelligent network fabric, delivering ‘the lowest possible latency, massive scalability and disruptive performance that is orders of magnitude beyond the capabilities of today’s network technologies, including Ethernet, InfiniBand and Fibre Channel.’

A3CUBE said it is offering a ‘brain-inspired’ network architecture that it claims will improve performance, scalability and latency by orders of magnitude over the capabilities of today’s SSD technologies. It expects to disrupt the market substantially, said CEO and cofounder Antonella Rubicco in a recent interview with IT Trends & Analysis.

“We’re positioning ourselves in the migration from high-performance computing to high-performance data,” Rubicco said. “Ideally we’re going to be the new standard for the high-performance data plane.”

The numbers on data growth are all over the map, ranging from a low of 15% to as high as 90% per year. More storage is not the solution, said Storage Switzerland analyst George Crump.

“We have to change the storage system so it can adapt in real-time to the workloads it is servicing. This is something that can be done if the storage system leverages software defined networking in addition to software defined storage.”

While the current situation is challenging, enterprises are also not interested in forklift upgrades. “Our research consistently tells us that maintaining traditional storage infrastructure is a major pain point for IT managers, resulting in additional cost and sometimes risk for the business,” said Simon Robinson, Vice President, Storage at 451 Research.

One approach is software-defined storage (SDS), but while it is expected to grow to $5.4 billion by 2018, there’s a lot of work to be done, and it won’t all happen this year, said Chuck Hollis, Chief Strategist, VMware SAS BU. However, he believes SDS will be an increasingly important discussion in 2014, and “this is the year it all begins.”

Flash-based storage in its various flavors – solid-state drives (SSDs), all-flash and hybrid storage arrays, server storage and server memory – is gathering momentum. The all-flash array market alone is expected to reach $1.6 billion in 2016, up from about $300 million last year. Then you can throw in hybrid arrays, which are part flash and part hard drive, as well as server- and memory-based flash.

Commenting on recent developments at flash vendor SolidFire, Mark Peters, Senior Analyst, Enterprise Strategy Group, said “[flash] performance is merely table-stakes – whereas such things as simplicity, efficiency, scalability, automation, and integration will determine the winning and losing vendors.”

His colleague, ESG founder and Senior Analyst Steve Duplessie, said that as more and more applications are delivered from shared storage infrastructure, performance predictability and scale have become paramount. “That’s been the problem with traditional storage architectures in the modern era of infrastructure virtualization.”

The huge disparity between computing power and storage performance has created a massive I/O performance gap in the enterprise as well as Big Data, HPC and data centers, said A3CUBE. However, the emergence of enterprise SSD technology has simply shifted the storage I/O bottleneck from the storage device to the interconnection between storage and the CPU, exposing the limitations of conventional PCI Express and other flash architectures.
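
To see why the wire, rather than the media, becomes the limiter, a rough latency budget helps. The figures below are illustrative assumptions, not A3CUBE measurements: a typical enterprise SSD read, a kernel TCP/IP round trip over 10GbE, and the kind of sub-microsecond fabric latency RONNIEE claims.

    # Rough latency budget for a remote flash read.
    # All inputs are assumed, illustrative values -- not vendor measurements.
    ssd_read_us = 100.0           # assumed enterprise SSD read latency (microseconds)
    ethernet_stack_rtt_us = 50.0  # assumed kernel TCP/IP round trip over 10GbE
    pcie_fabric_rtt_us = 0.8      # assumed sub-microsecond PCIe-class fabric round trip

    # Fraction of each remote read spent on the network rather than the media
    print(ethernet_stack_rtt_us / (ssd_read_us + ethernet_stack_rtt_us))  # ~0.33
    print(pcie_fabric_rtt_us / (ssd_read_us + pcie_fabric_rtt_us))        # ~0.008

With disk latencies measured in milliseconds, that network overhead was rounding error; with flash it can swallow a third or more of every access, which is the gap A3CUBE says it is attacking.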

Founded in 2012, A3CUBE claims RONNIEE can provide internode latencies measurable in 100s of nanoseconds and memory mapping that is 10 times faster than traditional PCIe memory mapping. A 5-node storage network using a multipoint direct topology with RONNIEE Express and 40 TB of SSDs reportedly can deliver 4 million IOPS, equating to something like eight times the performance of competing solutions for about one third of the cost.
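
Taken at face value, those numbers imply a large price/performance swing. The arithmetic below simply restates the figures quoted above; the per-node split and the IOPS-per-dollar ratio are our own back-of-the-envelope derivation, not vendor-published results.

    # Back-of-the-envelope arithmetic on the quoted claims
    claimed_iops = 4_000_000
    nodes = 5
    print(claimed_iops / nodes)           # 800,000 IOPS per node, if spread evenly

    perf_multiple = 8                     # "eight times the performance"
    cost_fraction = 1 / 3                 # "about one third of the cost"
    print(perf_multiple / cost_fraction)  # roughly 24x the IOPS per dollar, if both claims hold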

“We’re adding all the features you would find in a network,” said Rubicco. They’ve gone back 20 years to when Cray used shared memory and memory mapping, but with commodity hardware.

RONNIEE Express scales linearly up to 64K nodes with hundreds of I/O per node, providing end-to-end traffic management with up to 16 million unique virtual streams between any two endpoints. The RONNIEE Express NIC uses direct PCIe access to memory via “memory windows,” in combination with a globally shared 64-bit memory address space, to create a shared global memory container that permits direct communication between local and remote CPUs, memory to memory, and between local and remote I/O.
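
A3CUBE has not published a programming interface here, but the ‘memory window’ idea follows familiar PCIe practice: a region of a device’s address space is mapped into a process, so ordinary loads and stores become reads and writes across the fabric, with no send/receive calls or protocol stack in the path. A minimal Linux sketch of that general pattern, using a hypothetical device path rather than RONNIEE’s actual driver interface, looks like this:

    # Minimal sketch of a memory-mapped PCIe "window" on Linux.
    # The device path is hypothetical; this shows the general load/store-over-PCIe
    # pattern, not A3CUBE's actual driver interface.
    import mmap
    import os

    BAR_PATH = "/sys/bus/pci/devices/0000:03:00.0/resource0"  # hypothetical device

    fd = os.open(BAR_PATH, os.O_RDWR | os.O_SYNC)
    window = mmap.mmap(fd, 4096)  # map a 4 KB window of the device's address space

    # A plain store into the mapping becomes a PCIe write toward the fabric...
    window[0:8] = (0x1122334455667788).to_bytes(8, "little")

    # ...and a plain load becomes a PCIe read -- no sockets, no descriptor rings.
    value = int.from_bytes(window[0:8], "little")

    window.close()
    os.close(fd)

Keeping communication in the memory path is presumably where the claimed sub-microsecond latencies come from: the CPU talks to remote memory and I/O the same way it talks to its own.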

“A3CUBE’s In-memory Network fabric leverages an innovative approach to transforming HPC, Big Data and data center environments in order to drive greater performance and efficiencies in the network and storage systems,” said Bob Laliberte, senior analyst, ESG, in a prepared statement. “A3CUBE is extending PCIe capabilities in order to deliver a next generation network that it claims will overcome traditional network bottlenecks utilizing a high performance (Nano-second latency) and massively scalable architecture.”

Using A3CUBE figures, a RONNIEE Express 3D solution for 10,000 nodes typically saves close to $70,000 a year, while the RONNIEE Switch solution typically saves close to $113,000. Over a five-year period, compared with a typical 10GbE network, the 10,000 nodes with RONNIEE 3D lead to a reduction of 805 tons in CO2 emissions, while RONNIEE Switch leads to a reduction of 1,230 tons.
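
The article does not show how those figures are derived, but savings claims of this kind usually reduce to per-port power draw. The sketch below uses assumed inputs – watts saved per node, electricity price, grid emissions factor – chosen so the result lands near the quoted $70,000 and 805-ton figures; none of the inputs are A3CUBE data.

    # How savings figures of this kind are typically estimated.
    # All inputs are assumed, illustrative values -- not A3CUBE data.
    nodes = 10_000
    watts_saved_per_node = 8.0   # assumed reduction in per-node network power draw
    hours_per_year = 24 * 365
    usd_per_kwh = 0.10           # assumed electricity price
    kg_co2_per_kwh = 0.23        # assumed grid emissions factor

    kwh_per_year = nodes * watts_saved_per_node * hours_per_year / 1000
    print(kwh_per_year * usd_per_kwh)                # ~$70,000 saved per year
    print(5 * kwh_per_year * kg_co2_per_kwh / 1000)  # ~805 tonnes of CO2 over five years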

Clearly SOMETHING needs to be done, and A3CUBE makes an interesting case for that something to be its HPD-based set of offerings. However, Network World contributor Mark Gibbs argues the brain comparison doesn’t work, saying the company ‘completely failed’ to explain the relationship. Brain comparisons aside, we’ll have to wait and see if RONNIEE lives up to its promises.

Author: Steve Wexler


1 Comment

  1. The brain reference – The neurons in our brains have computational and storage capacities, and these cells are directly connected with neighboring cells using multiple links. Our RONNIEE Express technology works in a similar fashion, which is why we coined the term “brain-inspired” architecture.

    Our brain-inspired architecture combines RONNIEE with an entity comprising at least one CPU and one SSD. These entities can be connected directly with their neighbors in a multi-dimensional mesh, scaling up to tens of thousands of nodes and creating a massively parallel storage and analytics platform.
