IBM… Big Step Forward with Flash Storage for the Hybrid Cloud

Eddie Cantor once said, “It takes 20 years to make an overnight success.” That is certainly the case with flash storage, which has been around for many years but whose high cost confined it to a small number of high-performance, high-value applications. Declining prices led to broader acceptance of flash storage across a wider base of high-performance (tier 0) applications. Then came a seemingly overnight (though actually a couple of years long) transition in which flash storage came to be seen as capable of replacing traditional primary disk storage (tier 1). That made the economics of flash quite justifiable to data center owners, and the adoption of flash storage as primary storage is proceeding rapidly.

Relatedly, much of the exponential growth of storage comes from new and emerging trends related to the Internet of Things (IoT), social media and Web services. Big data and the emerging discipline of cognitive analytics thrive not only on the humongous quantity of data these trends produce, but also on the need to process much of that data very rapidly in order to derive the benefits (such as actionable, near-real-time insights) that enterprises seek in pursuing a competitive advantage. The “cloud” in some form is likely to be the recipient of that data, as traditional IT infrastructures are neither cost-effective nor performant enough. With the introduction of IBM FlashSystem® A9000 and IBM FlashSystem A9000R, IBM delivers the purpose-built flash storage infrastructure needed to meet the demands of the cloud in terms of both scale and performance. So IBM is taking flash storage a step beyond primary storage for traditional applications to meet the new and emerging needs of the cloud.
But before we get to the new products, let’s examine IBM FlashCore™, the foundational IBM technology for all FlashSystem solutions, and briefly review FlashSystem 900 for tier 0 application acceleration and FlashSystem V9000, an all-flash array for tier 1 primary storage. To read the complete article, CLICK ON AUTHOR’S BYLINE. NOTE: This column was originally published in the Pund-IT...

Read More

IBM Introduces Platform Conductor for Spark

Apache Spark is an open-source cluster computing framework for managing big data infrastructures that often leverage OpenStack Swift object stores. But organizations managing large clusters of compute and storage resources for multiple projects are finding themselves faced with challenges, such as performance and cost, that Swift by itself was not designed to handle. That is why IBM has introduced IBM Platform Conductor for Spark, a software offering whose capabilities help enterprises meet those challenges. But before we get into IBM Platform Conductor for Spark, we need some context, including an understanding that the new data-driven world where analytics shines is different from the older application-driven IT world. Apache Spark is one of the new technologies that can make the data-driven world work efficiently and effectively, which is why IBM has made a major commitment to it. For more information, CLICK HERE. NOTE: This column was originally published in the Pund-IT...

Read More

Scality: Using…[SDS] to Manage Petabyte-Scale Data

Organizations that have petabyte-scale data storage requirements (and their numbers grow each day) are unlikely to turn to traditional enterprise storage solutions to manage these environments. Rather, they are increasingly likely to turn to software-defined storage, including object-based storage. And that is where products such as the Scality RING come in. Managing petabyte-scale amounts of data requires a different mindset and a different way of performing data and storage management functions, such as data protection, and that is the role of software-defined storage. Vendors large and small are targeting the capacity-driven segment of the revamped enterprise storage space. The fact that Hewlett Packard Enterprise is throwing its weight (i.e., resources, including money) behind Scality qualifies not only as an endorsement of the company’s solutions, but of the capacity-driven market itself. For more information, EMAIL davidhill@mesabigroup.com. NOTE: This column was originally published in the Pund-IT...

Read More

IBM’s Spectrum Storage: Now a Suite, Not Just a Family

Enterprises increasingly need storage management software that works across their diverse environments, and IBM is now providing a comprehensive set of software-defined storage tools under the rubric of the IBM Spectrum Storage Suite to meet that need. These solutions can be applied to data residing both in SAN infrastructures and in storage-rich server environments, enhancing the effectiveness of IT. The new simplified licensing policy for IBM’s Spectrum Storage Suite should also allow enterprise customers to improve both cost and process efficiencies. For more information, EMAIL davidhill@mesabigroup.com. NOTE: This column was originally published in the Pund-IT...

Read More

INFINIDAT: Back to the Future for Enterprise-Class Storage

Technology trends tend to mesmerize us, as we like to follow the latest and greatest trajectories. Software-defined storage, where storage management software is decoupled from the physical storage itself, is one such trend. Another is the adoption of flash for primary storage. Both are important and significant, but do they portend the end of traditional controller-based storage with a heavy dose of hard disk drives? INFINIDAT would argue to the contrary, at least for enterprise-class storage. The company argues that its new controller-based storage architecture provides the performance, robustness, reliability and scalability required of enterprise-class storage and, moreover, can do so at a reasonable price. For more information, EMAIL davidhill@mesabigroup.com. NOTE: This column was originally published in the Pund-IT...

Read More