Diablo Whips Up Devilish Brew: Faster, Cheaper Flash

Although flash memory represents just a small portion of the overall enterprise storage market, it is far outpacing disk growth, and that growth has set off a flurry of activity, including recent announcements from LSI (HBAs and SSDs), Hitachi Data Systems, SanDisk, HP and Dell. Now you can add Ottawa-based Diablo Technologies to the list. However, Diablo has come up with a new twist, bringing flash even closer to the CPU than PCIe, and it just may revolutionize the flash industry (at least until the next new twist comes along).

“Memory Channel Storage (MCS) is a transformational new storage and system memory solution that resets the bar for latency and throughput for enterprise applications,” said Diablo CEO and co-founder Riccardo Badalone. “It allows applications to leverage the benefits of flash memory connected directly to the processor’s memory controllers, which will ultimately change the cost/density/performance rules forever.”

Diablo has been around since 2003 as a logic company, developing application-specific integrated circuits (ASICs) that sit between the processor and DRAM, and selling them to chip vendors, who in turn sold them to the OEMs. The company worked closely with vendors like Intel, but a couple of years ago it began looking at what else it could do and started developing what would become MCS. “We were strong on the memory side, but could we leverage this expertise and combine it with the potency of flash?”

Flash SSDs boost system performance, and flash over PCIe is even faster, but the best performance will come from flash on the memory channel, said Jim Handy, Director at Objective Analysis, in a prepared statement. “Diablo is on the right path by providing a way to plug flash right into the DDR memory buses on today’s servers.”

MCS is compatible with any industry-standard DDR3 memory slot, allowing deployment across the full spectrum of server and storage system designs, chassis and form factors, including blade servers, where PCIe slots are severely limited in availability and size. Because it uses the industry-standard DIMM form factor and the native CPU memory interface, MCS can be used to replace RDIMMs. Badalone said configuring MCS as a block storage device enables new performance levels for applications while reducing latencies by more than 85% compared with PCI Express-based SSDs and 96% compared with SATA/SAS-based SSDs. MCS can also be configured to expand system memory from gigabytes to terabytes, providing a 100x increase in accessible memory and enabling the entire application data set to reside in the CPU memory space.
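To put those percentages in rough perspective, here is a minimal, illustrative calculation of what the claimed reductions would mean in absolute terms. The baseline SSD latencies below are assumptions chosen for illustration only; they are not figures from Diablo or from this article.

```python
# Illustrative only: what the claimed latency reductions would imply.
# The baseline latencies are assumed round numbers, not vendor figures.
PCIE_SSD_LATENCY_US = 70.0    # assumed PCIe SSD access latency (microseconds)
SATA_SSD_LATENCY_US = 200.0   # assumed SATA/SAS SSD access latency (microseconds)

# Claims from the article: >85% lower latency than PCIe SSDs,
# and 96% lower than SATA/SAS SSDs.
mcs_vs_pcie = PCIE_SSD_LATENCY_US * (1 - 0.85)
mcs_vs_sata = SATA_SSD_LATENCY_US * (1 - 0.96)

print(f"Implied MCS latency (vs. assumed PCIe baseline): {mcs_vs_pcie:.1f} us")
print(f"Implied MCS latency (vs. assumed SATA/SAS baseline): {mcs_vs_sata:.1f} us")
```

Under those assumed baselines, both claims work out to roughly 8-10 microseconds, well below typical SSD access times, which is the territory Diablo is targeting by putting flash directly on the memory channel.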

Applications range from database and cloud to big data analytics, including high-frequency trading, server and storage virtualization and consolidation, and virtual desktop infrastructure, he said. Diablo has had a hardware prototype of MCS for the last 18 months or so, giving it to vendors, users, OEMs and anyone else who would take it.

“We built a non-form-factor-correct but functionally correct prototype, which allowed us to learn what our performance advantage was. We didn’t know until we built it and started to test it.” The prototype delivered about half of everything the ASIC will deliver, he said, and now that the product exists, it is delivering everything it promised.

Badalone said there are two markets where MCS will shine: the I/O acceleration market, typically addressed with PCIe-based server-side flash because that was the best performance available, and the more disruptive applications, like VDI or databases in a cluster, where only massive amounts of memory could provide the needed performance. These applications were using hundreds of gigabytes of RAM that Diablo can now collapse to just 64GB while doing literally 2-10X more work. “This has nothing to do with storage… it’s a completely different segment of the market, which I believe is truly disruptive.”

 

Author: Steve Wexler
