HPE Swarm Learning Machine Learning AI Framework Delivers Accelerated AI Insights at the Edge in Healthcare, Banking, Finance and More While Maintaining Data Privacy

The News: The HPE Swarm Learning machine learning AI framework enables enterprises to create and share AI modeling results inside and outside their organizations with a significant privacy twist: the actual data run through the models never has to be shared. The HPE Swarm Learning software platform, which works at the edge or across distributed sites, includes compute, accelerators, and networking to help organizations develop and train more accurate AI models more quickly. For the full Press Release, click here.

Analyst Take: HPE Swarm Learning is an exciting announcement for enterprises seeking innovative machine learning tools for their AI initiatives.

I think its boldest capability, preserving data privacy for organizations while allowing them to run and share AI modeling at the edge or outside their companies, means that critical work can be done without having to physically share sensitive data beyond their comfort zones. This is an important and exciting distinction for organizations working on critical projects with machine learning and AI technologies, particularly in heavily regulated markets where data privacy is paramount, such as banking, finance, and healthcare.

This is made possible by the HPE Swarm Learning technology developed by Hewlett Packard Labs, HPE’s R&D organization, which lets customers use containers that integrate easily with AI models through the HPE Swarm API. HPE Swarm Learning then uses blockchain technology to catalog and analyze the modeling data and essentially decouple it from its identifying factors, so it can be used at the edge or elsewhere without the traditional data privacy concerns. The models then reach their conclusions and analyses from the learnings derived from that raw data rather than from the data itself.

Those newly created AI model “learnings” can then be immediately shared inside or outside an organization and with industry peers to improve training, all without sharing the actual data used in the models. Think about that for a moment: the sharing happens without the underlying data ever changing hands. This is a huge benefit when it comes to the enterprise data security and privacy concerns that are omnipresent in the minds of every enterprise IT leader.

By sharing only the learnings from the AI models’ processing, HPE Swarm Learning allows users to leverage large training datasets without constant concerns about data privacy.
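To make the idea of sharing learnings rather than data concrete, here is a minimal sketch in Python. It is not the HPE Swarm API or HPE’s actual implementation: the LocalModel class, the merge_learnings helper, and the simple parameter-averaging scheme are illustrative assumptions that mimic the general swarm-style pattern described above, where each site trains on data it never exposes and only model parameters are exchanged.

```python
# Illustrative sketch only: NOT the HPE Swarm API. It shows the general swarm-style
# idea of each peer training on private local data and sharing only parameters.
import numpy as np

class LocalModel:
    """A toy linear model trained with gradient descent on private local data."""
    def __init__(self, n_features: int):
        self.weights = np.zeros(n_features)

    def train_one_round(self, X: np.ndarray, y: np.ndarray, lr: float = 0.1) -> None:
        # One gradient step on data that never leaves this peer.
        preds = X @ self.weights
        grad = X.T @ (preds - y) / len(y)
        self.weights -= lr * grad

def merge_learnings(peer_weights):
    # Only parameters cross the network; no raw records are exchanged.
    return np.mean(peer_weights, axis=0)

# Hypothetical usage: three sites (say, three hospitals), each holding private data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
peers = [LocalModel(3) for _ in range(3)]

for _ in range(20):
    for model in peers:
        X_local = rng.normal(size=(64, 3))                           # stays on-site
        y_local = X_local @ true_w + rng.normal(scale=0.1, size=64)  # stays on-site
        model.train_one_round(X_local, y_local)
    merged = merge_learnings([m.weights for m in peers])  # share only the "learnings"
    for model in peers:
        model.weights = merged.copy()

print("merged weights:", np.round(merged, 2))  # converges toward true_w
```

In a real deployment, the coordination of which peers participate and when their parameters are merged is what HPE handles through its containers, the Swarm API, and the blockchain-based ledger; the sketch simply illustrates why no raw records need to leave any site for every participant to benefit from all of the data.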

I see this as a development that could inspire even broader use of AI and machine learning capabilities in the enterprise, especially in markets where data privacy is even more critical.

HPE says it developed its HPE Swarm Learning technology to solve a longstanding conundrum in AI model training: it is typically done in a central location using centralized, merged datasets. That approach can be inefficient and costly, not only because it requires moving large volumes of data to one place, but also because it can be constrained by data privacy and data ownership rules and regulations that limit data sharing and movement. Solving these issues with swarm technologies is what now allows enterprises to train models and harness insights at the edge and elsewhere, giving them eye-opening new capabilities.
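To see why moving learnings instead of data matters in practice, here is a rough back-of-envelope comparison. The record counts, record size, and model size are purely hypothetical assumptions for illustration, not HPE figures.

```python
# Back-of-envelope comparison; all numbers are hypothetical assumptions.
records = 10_000_000          # rows held at one site (assumed)
bytes_per_record = 2_048      # ~2 KB per record (assumed)
model_params = 10_000_000     # parameters in the shared model (assumed)
bytes_per_param = 4           # 32-bit floats

raw_data_transfer = records * bytes_per_record       # centralizing the dataset
learnings_transfer = model_params * bytes_per_param  # exchanging only parameters

print(f"centralized data move: {raw_data_transfer / 1e9:.1f} GB")
print(f"parameter exchange   : {learnings_transfer / 1e6:.1f} MB per round")
```

Under these assumed numbers, centralizing the data means moving roughly 20 GB per site, while exchanging model parameters is on the order of tens of megabytes per round, and, just as important, the parameter exchange avoids the data ownership and privacy constraints entirely.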

It will be interesting to watch as the new HPE Swarm Learning machine learning AI framework is adopted by customers and used to further address their AI and security requirements.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum Research as a whole.

Other insights from Futurum Research:

The 5G Factor: AT&T and Northrop Grumman, Intel Lockheed, DOD $600M for 5G Testbeds, HPE RAN Automation, Mavenir and Aspire, Qualcomm and O-RAN, Cisco and Verizon

HPE Dazzles with Host of New HPE GreenLake Capabilities and Partnerships

MWC 2022: Qualcomm and HPE Prep Virtual Distributed Units for 5G Prime Time

Image Credit: HPE
