Lenovo’s Cool Fix for HPC Energy Consumption

High performance computing (HPC) and supercomputing haven’t always been closely associated with energy efficiency. In fact, for the first four decades of commercial supercomputing (beginning in the early 1960s), owners were far more concerned with systems’ computational capabilities than with the electrical energy they consumed. That was mainly because of the unique value of custom-built systems like Control Data Corporation’s CDC 6600 (designed by Seymour Cray and delivered in 1964), which performed highly complex calculations faster than most people could imagine. In addition, the heady price tags of supercomputers limited the market to the deepest-pocketed large enterprises and government labs, organizations that cared more about results than virtually any cost. External events began to change that dynamic in the early 2000s.

Those points resonate in Lenovo’s new ThinkSystem SD650, a high-density commercial solution designed to maximize compute performance for HPC workloads and applications while minimizing energy consumption. Let’s take a look at how power issues are impacting HPC and supercomputing, what Lenovo has achieved, and how its new ThinkSystem SD650 addresses customers’ energy constraints and concerns.

To read the complete article, CLICK HERE. NOTE: This column was originally published in the Pund-IT...

Read More

IBM Advances Cluster Virtualization…

On the classic Groucho Marx quiz show You Bet Your Life, if a contestant accidentally said the “secret word” of the day, he or she would win a prize. There’s no prize included in this commentary, but the secret word of the day is virtualization, especially as it relates to IBM’s new HPC and AI solutions.

IBM defines virtualization as “a technology that makes a set of shared resources appear to each workload as if they were dedicated only to it.” IT professionals are very familiar with this concept, as operating system-level virtualization, server virtualization, network virtualization, and storage virtualization all continue to permeate computing infrastructures and the collective consciousness. So it should come as no surprise that IBM is advancing the concept of cluster virtualization in its latest announcement, tying it closely to cloud and cognitive computing.

IBM’s cluster virtualization initiative combines products from its Spectrum Computing family, namely Spectrum LSF, Spectrum Symphony, and Spectrum Conductor, along with overall cluster virtualization software (Spectrum Cluster Foundation) to manage the whole process. That includes the storage delivered through IBM Spectrum Scale, another member of the IBM Spectrum Storage family. The goal of this approach is to automate self-service provisioning of multiple heterogeneous high performance computing (HPC) and analytics (AI and big data) clusters on a shared, secure, multi-tenant compute and storage infrastructure. Doing so delivers multiple benefits to technical computing end users, including data scientists and HPC professionals. The announcement focuses on these products: IBM Spectrum LSF, IBM Spectrum Conductor, and IBM Spectrum Scale.

For more information, CLICK HERE. NOTE: This column was originally published in the Pund-IT...

Read More

Lenovo at SC16 – A Commitment to HPC Innovation

How companies evolve after major acquisitions is usually interesting and seldom predictable, as numerous examples show. That’s especially true in the rare cases where one company acquires multiple properties from another, as Lenovo has from IBM. In 2005, Lenovo purchased IBM’s PC division and assets, then repeated the process in 2014 with IBM’s System x server organization and Intel-based products.

In the former case, some competitors suggested that Lenovo (then known mainly for its sales in China and other Asian markets) would be a poor steward for IBM’s ThinkPad line and its solid business-class reputation. The company quickly proved those critics wrong, and steadily expanded its PC and notebook portfolios and market position. Then in Q3 2012, Lenovo achieved what many considered unthinkable and surpassed HP to become the world’s largest maker of PCs by volume, a position it continues to enjoy.

To read the complete article, CLICK HERE. NOTE: This column was originally published in the Pund-IT...

Read More

New IBM Software Toolkit Supercharges Deep Learning

A few weeks ago, IBM launched a new POWER-based data center solution for high performance computing (HPC) applications, including artificial intelligence (AI), deep learning, and advanced analytics. The Power System S822LC is a Linux-based offering that leverages a new POWER8 chip and NVIDIA’s NVLink interconnect technology optimized for the Power architecture. Via NVLink, IBM’s Power server architecture can be tightly integrated with NVIDIA’s Pascal architecture and the company’s Tesla P100 GPUs.

Why is this a big deal? Because the new Power System S822LC solutions avoid the bottlenecks commonly associated with conventional PCIe interfaces. That’s a good thing in HPC applications that require sustained, muscular data throughput. It also means that HPC systems utilizing Power System S822LC hardware can deliver considerably higher performance than similarly configured Intel-based systems with PCIe.

To read the complete article, CLICK HERE. NOTE: This column was originally published in the Pund-IT...

Read More

IBM, OpenPOWER and Gaining a Competitive Edge in HPC

High performance computing (HPC) represents a pinnacle of computational excellence that’s also amazingly cool for those of an IT bent. It isn’t just the systems themselves, though there’s much to consider there. What’s more remarkable, and often more impressive, is how HPC can enable flights of the imagination that find landing places in the real world.

Sometimes the effects of those excursions are apparent only to the narrowest of audiences. After all, the results of classified nuclear weapons research aren’t open to the general public, but many other HPC-inspired advancements are. Plus, the same rules of commoditization that impact other IT markets hold true for HPC, too. As a result, HPC capabilities and applications that were unthinkable just a few years ago enable numerous commercial solutions and services today.

Those same commoditization rules make HPC every bit as dynamic as any other commercial IT market, if not more so. That’s because HPC vendors and their customers are always looking for an advantage, a way to get ahead and achieve consistently leading-edge results. At the same time, HPC is one of IT’s least sentimental practice areas. “What have you done for me today?” isn’t a cliché in HPC. It’s a mantra repeated daily, weekly, monthly, and annually by scientists and research professionals on the hunt for the next big thing, and the bigger things to follow.

To read the complete article, CLICK HERE. NOTE: This column was originally published in the Pund-IT...

Read More