
HPE Enters the Era of AI With New Cloud Service

The News: Hewlett Packard Enterprise (HPE) announces a supercomputing hardware as-a-service model to deliver AI workloads as a cloud service. For full details of the announcement, click here.


Analyst Take: HPE took the opportunity at its annual Discover event this week to unveil HPE GreenLake for Large Language Models (LLMs), an as-a-service offering. In perhaps the most unsurprising but significant announcement for HPE, this AI service provides bulk access to NVIDIA H100 GPUs and data science tools curated by its High Performance Computing (HPC) division.

The introduction of HPE GreenLake for Large Language Models gives enterprises private access to the ability to train, tune, and deploy AI at scale, and to do so in a highly sustainable manner. HPE is leveraging its considerable experience in the HPC market, where it is credited with some of the world’s largest supercomputers and a history of innovation.

The announcement is, according to the company, the first in a series of industry- and domain-specific AI applications that HPE plans to bring to market. Initial use cases and potential deployment scenarios on which HPE is focusing include climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation.

Architectural Approach

Citing performance, data management, and security, along with the reliability of job completion, HPE is positioning the new service to stand apart from more general-purpose cloud offerings where multiple workloads run in parallel. Performance and reliable job completion are key concerns for data scientists, and HPE’s historical HPC expertise will likely deliver higher levels of trust and availability than general-purpose offerings. HPE is also deploying a single-tenant node model, creating a more trusted environment for critical data. It will enable clients to run a single large-scale AI training or simulation workload at full computing capacity without having to share the underlying GPU layer.
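HPE did not share implementation details, but the value of single tenancy is easiest to see against standard distributed training practice. The sketch below is purely illustrative, assuming a conventional PyTorch DistributedDataParallel job launched with torchrun on one dedicated node; it is not HPE’s software stack, and the model and training loop are placeholders.

# Purely illustrative: a minimal PyTorch DistributedDataParallel job that claims
# every GPU on a dedicated (single-tenant) node. This is NOT HPE's stack; it
# simply sketches what "one workload, full GPU capacity" looks like in practice.
# Launch with: torchrun --nproc_per_node=<gpus_on_node> train_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    local_rank = int(os.environ["LOCAL_RANK"])      # set per process by torchrun
    device = torch.device(f"cuda:{local_rank}")
    dist.init_process_group(backend="nccl")         # NCCL for GPU-to-GPU collectives
    torch.cuda.set_device(device)

    # Placeholder model; a real LLM job would load a transformer checkpoint here.
    model = DDP(torch.nn.Linear(4096, 4096).to(device), device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                             # stand-in for a training loop
        batch = torch.randn(8, 4096, device=device)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                             # gradients sync across all GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched as torchrun --nproc_per_node=8 train_sketch.py on an eight-GPU node, the job owns all eight devices for its duration, which is the same guarantee HPE is making, only at cluster scale.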

Eliminating egress fees is a clear value point. And the ability to bring your own tools, or to use tools provided by HPE’s cloud, addresses ease of use for data scientists.
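To put the egress point in rough perspective, the back-of-the-envelope calculation below assumes a hypothetical 100 TB training corpus and a roughly typical published hyperscaler egress rate of $0.09 per GB; both figures are illustrative assumptions rather than HPE or competitor pricing.

# Illustrative back-of-the-envelope estimate of cloud data egress cost.
# Both inputs are assumptions for the sake of the example, not vendor pricing.
dataset_tb = 100                  # hypothetical training corpus, in terabytes
egress_usd_per_gb = 0.09          # roughly typical published hyperscaler egress rate

dataset_gb = dataset_tb * 1_000   # decimal TB -> GB
cost_usd = dataset_gb * egress_usd_per_gb
print(f"Moving {dataset_tb} TB out of a general-purpose cloud: ~${cost_usd:,.0f}")
# ~$9,000 every time that corpus leaves the cloud -- a recurring tax on iteration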

One key area that HPE stressed in the closed-door sessions was the software stack, although details were limited. This will be crucial for HPE to get traction, as raw GPU access will not be where this market lands. We will be looking for more clarity on the software stack in future briefings.

It is unclear from the announcement exactly how many NVIDIA H100s HPE has gained access to, but the release quotes “hundreds or thousands of CPUs or GPUs.” We fully expect this service to be close to the front of the line when it comes to access to H100s, although we would have liked more clarity on the volumes. Access to H100s, and for that matter A100s, is a key concern for enterprise buyers as they look to roll out enterprise AI projects.

Who Are Aleph Alpha?

Antonio Neri brought the CEO of Aleph Alpha on stage as part of his keynote to announce the new service. Aleph Alpha is a European AI-focused company based in Germany. The company’s solutions will help streamline model development using its pretrained models. A key element of the solution is that it works in multiple European languages and is sensitive to European cultures and norms. This will be crucial as, based on early statements, Europe is gearing up to heavily regulate AI.

Looking Ahead

Antonio Neri shared in the closed-door analyst briefing that he sees the AI solutions as significant to HPE’s overall performance because they expand the company’s total addressable market (TAM).

One other element that will become crucial as AI reaches scale in the enterprise is the sustainability of the service. HPE has taken this into account, with the initial deployment being at the QScale site in Quebec, Canada. HPE is focused on delivering a sustainable baseline for providing an AI cloud capability and made a connection to the sustainability dashboard that we covered a couple of weeks ago. Sustainability will be crucial going forward for AI, as the GPUs are power hungry and will surely garner attention as deployments ramp up.
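As a rough illustration of the power stakes, the calculation below assumes a hypothetical cluster of 1,000 NVIDIA H100 SXM GPUs at their published 700 W TDP and a data center power usage effectiveness (PUE) of 1.2; the cluster size and PUE are assumptions for illustration only.

# Rough, illustrative estimate of facility power for a hypothetical GPU cluster.
gpu_count = 1_000        # assumed cluster size, for illustration only
tdp_watts = 700          # NVIDIA H100 SXM published TDP
pue = 1.2                # assumed power usage effectiveness (cooling and overhead)

it_load_mw = gpu_count * tdp_watts / 1e6
facility_mw = it_load_mw * pue
print(f"GPU IT load: {it_load_mw:.2f} MW; facility draw: ~{facility_mw:.2f} MW")
# ~0.70 MW for the GPUs alone, ~0.84 MW with overhead -- before CPUs, network, storage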

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

HPE offers Cray supercomputer cloud service for AI models – Tech Target

HPE GreenLake for Private Cloud Enterprise: Delivering Real Value Across the Hybrid and Multi Cloud Ecosystem

HPE Expands Alletra Storage Portfolio to Transform File, Block, and Data Protection Services

The Cost of The Next Big Thing – Artificial Intelligence

Author Information

Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm, as Managing Director.

Her career has spanned all elements of sales and marketing; her 360-degree view of addressing challenges and delivering solutions comes from crossing the boundary between sales and channel engagement with large enterprise vendors and running her own 100-person IT services firm.

Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division, growing it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.

