The Relevance of Networking at AWS re:Invent

This year was my first re:Invent, and it was an impressive event. More than forty-three thousand people attended, and the show occupied a number of hotels along the Vegas strip. It wasn’t just that there were a lot of people there; it was that there were a lot of people who wanted to be there – after attending hundreds of trade shows and user group events, you get to know the difference. There was a buzz and excitement at the show that reminded me of the early VMworld and TechEd shows. Sessions were sold out and queues were long as people waited for the doors to open. All the attendees I spoke to had specific reasons for attending; many were in the process of moving to a cloud-first strategy and were there to learn. The main point from the keynotes was clear: AWS continues to innovate and to offer the greatest breadth and depth of capabilities of any cloud platform, making it easier for organizations of all sizes to transition to the cloud, ensuring AWS has capabilities for all possible use cases, and potentially expanding its already dominant forty-plus-percent share of the market. On the expo floor it was good to see a mix of networking companies attending to help customers better understand how to connect to the public cloud. In fact, ESG research on network modernization indicates that the top impact organizations report public cloud computing services have had on their network strategy is that they’ve integrated their data center and WAN links to create a seamless network that connects on-premises and off-premises resources (38%). That is why it was important for companies like Cisco, Juniper, and Arista to be at re:Invent to talk about how they can enable seamless connectivity from the data center to the cloud for hybrid cloud environments. To read the complete article, CLICK...

Read More

IBM Advances Cluster Virtualization…

On the classic Groucho Marx quiz show You Bet Your Life, if a contestant accidentally said the “secret word” of the day, he or she would win a prize. There’s no prize included in this commentary, but the secret word of the day is virtualization, especially as it relates to IBM’s new HPC and AI solutions. IBM defines virtualization as “a technology that makes a set of shared resources appear to each workload as if they were dedicated only to it.” IT is very familiar with this concept: operating system-level virtualization, server virtualization, network virtualization, and storage virtualization all continue to permeate more and more of computing infrastructures and the collective consciousness. So it should come as no surprise that IBM is advancing the concept of cluster virtualization in its latest announcement, tying it closely to cloud and cognitive computing. IBM’s cluster virtualization initiative combines products from its Spectrum Computing family, namely Spectrum LSF, Spectrum Symphony, and Spectrum Conductor, with overall cluster virtualization software (Spectrum Cluster Foundation) to manage the whole process. That includes the storage delivered through IBM Spectrum Scale, another member of the IBM Spectrum Storage family. The goal of this approach is to automate the self-service provisioning of multiple heterogeneous high-performance computing (HPC) and analytics (AI and big data) clusters on a shared, secure, multi-tenant compute and storage infrastructure. Doing so delivers multiple benefits to numerous technical computing end users, including data scientists and HPC professionals. The announcement focuses on these products: IBM Spectrum LSF, IBM Spectrum Conductor, and IBM Spectrum Scale. For more information, CLICK HERE. NOTE: This column was originally published in the Pund-IT...

Read More

Enterprise Networks and Telco Clouds on a Collision Course

The Internet of Things will move more processing to telecom suppliers’ facilities. Network engineers have traditionally treated networks managed by their telecom suppliers as outside their immediate domain of concern. The telco network was brought into the data center, appropriate routes or peering set up, and that was it. Enterprise workloads typically don’t run directly on telco networks for many reasons, including governance or compliance requirements. Now, emerging technologies such as the Internet of Things are starting to require workloads to be located within telecom service providers’ facilities. To read the complete article, CLICK...

Read More
Cisco: “The new datacenter is the multi-cloud datacenter.”
Oct12

Already one of the biggest players in the red-hot cloud infrastructure market (it grew 25.8% in the second quarter to $12.3 billion), Cisco Systems — in third place with 8.2% market share, trailing Dell (11.8%) and HPE (11.1%) — has a lot of credibility when it says cloud is transforming the datacenter. “The new datacenter is the multi-cloud datacenter,” said Tom Edsall, formerly a Cisco Fellow, SVP and GM, Insieme Business Unit, Cisco Systems. However, he told IT Trends & Analysis, the challenge now is that you have what is basically a multi-vendor infrastructure: no longer just a collection of hardware and software from different vendors, but one that also takes in the various cloud providers like Amazon and Azure. He said organizations have parts of their infrastructure running on different clouds, with different APIs, and are struggling to make the differences disappear. “The problems that we encountered 10 years ago are happening all over again,” said Edsall. “Then it wasn’t cloud, it was multi-vendor.” He added that the company has had strong success on premises with its ACI (Application Centric Infrastructure) portfolio, with over 4,000 customers. But while those customers really like the application-centric approach, they are frustrated because “they can’t get the same API at Amazon.” They want to know how to get a common experience across these systems, said Edsall. Ever helpful, Cisco recently announced Cisco Intersight, a management and automation platform for its Unified Computing System (UCS) and HyperFlex Systems.
To be available in 4Q17 in two versions — the Cisco Intersight Base Edition will be available at no charge, while the Cisco Intersight Essentials Edition will cost you — it is intended to simplify datacenter operations by delivering systems management as a service, instead of having to maintain “islands of on-premise management infrastructure.” “The longer-term vision of Intersight is spot-on,” noted Matt Kimball, senior datacenter analyst, Moor Insights & Strategy. “Not only does it address the issues IT organizations face today, but it also provides a platform that can accommodate the unknowns of tomorrow. If Cisco successfully executes this vision, it will firmly position itself as a leader in multi-cloud infrastructure orchestration and management.” Unsurprisingly, a canned quote included in the Cisco release was equally ebullient: “Organizations that move to cloud-based systems management platforms will find that service delivery quality is significantly improved, the overall risk to the business goes down, and IT staff productivity is increased,” said Matt Eastwood, Senior Vice President, IDC. “Artificial Intelligence (AI)-infused cloud-based management tools can offer deep insights into the state of the infrastructure, identify troubles before they become major issues, and enable quicker ‘root cause’ identification and analysis...

Read More
Will Cloud DevOps Re-Energize ‘Big Iron’?
Oct05

Not only has ‘Big Iron’ shrugged off its naysayers — suffering neither Monty Python’s ‘flesh wounds’ nor Mark Twain’s ‘reports of my death’ — the mainframe appears to be poised for a renaissance, one that software developer Compuware hopes to accelerate with its recent DevOps announcement for Amazon’s popular AWS cloud platform. “We’ve made Topaz [its flagship solution for mainframe Agile/DevOps] into what customers are evaluating and incorporating as a force multiplier,” said CEO Chris O’Malley. “The next step is bringing Topaz to AWS,” he told IT Trends & Analysis, accelerating DevOps availability to “minutes instead of months. In some cases, it can take more than a year for competitive products.” The mainframe, or at least IBM’s version, has been a staple of IT for more than 50 years, and it shows no signs of disappearing. The numbers speak for themselves: 55% of enterprise apps need the mainframe; 70% of enterprise transactions touch a mainframe; and 70-80% of the world’s corporate data resides on a mainframe. However, the installed base appeared to be shrinking as newer, less costly alternatives proliferated. Annual mainframe system sales have declined from a high of about $4 billion earlier this decade to $2 billion in 2016, accounting for just 3% of IBM’s total revenue (although the associated hardware, software, and technical services accounted for nearly 25% of IBM’s sales and 40% of its overall profit last year). Apparently Big Iron is back in vogue: according to a new study, the global mainframe market is expected to see a compound annual growth rate of 2.58% between 2017 and 2021. In March it was reported that mainframes had reached an inflection point where they will either continue as a revenue-supporting mechanism or evolve into a revenue-generating platform.
“IDC believes that the mainframe has a central role in digital transformation; businesses that do not take advantage of its broad range of capabilities are giving up value and, potentially, competitive advantage,” the research company stated. “The mainframe is not going away, but the way that you use it will change,” noted Robert Stroud, Principal Analyst, Forrester, in a blog entitled DevOps And The Mainframe, A Perfect Match?. “Containers and microservices are coming to every platform, including the mainframe. Gradually breaking large monolithic applications into smaller services will help you transition to a containerized future that promises faster application delivery, greater scalability, and better manageability – regardless of the platform.” A month ago IBM refreshed its z series mainframes with the LinuxONE Emperor II. “LinuxONE is a highly engineered platform with unique security, data privacy and regulatory compliance capabilities that doesn’t require any changes to developer or open source code, combined with...

Read More