
Google Cloud Launches Service to Simplify Mainframe Modernization

The News: At its recent Google Cloud Next event, Google launched a service called Dual Run, focused on removing the risks and roadblocks that prevent enterprises from moving off mainframe infrastructure. See the full announcement from Google Cloud.


Analyst Take: Hot on the heels of similar announcements from hyperscale competitors AWS and Microsoft, Google recently announced the launch of a new cloud service focused on helping mainframe customers migrate off their on-premises deployments and transition to the cloud.

Google Cloud’s partner ecosystem plays a big role here. Dual Run is built on top of technology developed by Banco Santander, one of the largest banks in the world with clients across both Europe and the U.S., and integrates tightly with Micro Focus, whose well-known and widely adopted enterprise IT modernization platform is available via the Google Cloud Marketplace.

The rationale for hyperscale cloud providers to focus on the mainframe is pretty straightforward: they want the mission-critical data that sits on the mainframe on their cloud. Coupled with the fact that financial services enterprises (where the mainframe is most widely deployed) are lagging behind other industries in moving to the cloud, the rationale becomes more obvious. Banks and insurers are typically running large transactional ‘systems-of-record’ on mainframe systems and have done so for decades. In fact, more than 44 of the top 50 banks use mainframes to run their businesses.

While many will say these mainframes are legacy systems, in the majority of instances, financial services organizations are running on the latest versions of the hardware purchased within the last two years. They also invest heavily to keep them up-to-date, while leveraging modern DevOps tooling from companies like BMC to ensure that they remain innovative and provide robust levels of service.

Google Dual Run

Although I’ve read the launch material provided by Google Cloud, Santander, and Micro Focus, and despite having had briefings from both Google and Micro Focus, actual technical details on Dual Run are still scant. What I can discern is that Santander developed code internally, called Gravity, that Google has subsequently licensed and renamed Dual Run.

According to Google Cloud, Dual Run enables mainframe workloads to run simultaneously on the existing mainframe and on Google Cloud, allowing an organization to perform testing and gather data on performance and stability with no disruption to its operations. Google described the software as ‘splitting’ transactions so they complete both on the on-premises mainframe deployment and in a GCP environment. Customers can then compare the results of both transactions and, if the results are consistent, decide whether to move the workload to the cloud or operate in a hybrid mode, with operations both in the cloud and on-premises.
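Google has not published implementation details, but conceptually the pattern is straightforward to picture. The minimal Python sketch below is my own illustration of the ‘split, compare, keep the mainframe authoritative’ idea; the process_on_mainframe and process_on_cloud functions are hypothetical stand-ins for the two environments, and this is not Google’s Dual Run code.

```python
# Illustrative sketch only: a conceptual 'dual run' transaction splitter.
# Function names and payloads are hypothetical; this is not Google's Dual Run.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dual-run")

@dataclass
class Transaction:
    txn_id: str
    payload: dict

def process_on_mainframe(txn: Transaction) -> dict:
    # Placeholder for the existing on-premises system of record.
    return {"txn_id": txn.txn_id, "balance": 100.00, "status": "OK"}

def process_on_cloud(txn: Transaction) -> dict:
    # Placeholder for the replicated copy of the workload running on GCP.
    return {"txn_id": txn.txn_id, "balance": 100.00, "status": "OK"}

def dual_run(txn: Transaction) -> dict:
    """Send the same transaction to both environments, compare the results,
    and always return the mainframe result so production is never disrupted."""
    primary = process_on_mainframe(txn)
    shadow = process_on_cloud(txn)
    if primary != shadow:
        log.warning("Mismatch on %s: mainframe=%s cloud=%s",
                    txn.txn_id, primary, shadow)
    else:
        log.info("Results consistent for %s", txn.txn_id)
    return primary  # the mainframe remains the system of record during testing

if __name__ == "__main__":
    dual_run(Transaction(txn_id="T-0001", payload={"account": "12345", "amount": 25.0}))
```

The important design point is the final line of dual_run: during the evaluation phase the mainframe result is always the one returned to callers, so the cloud copy can drift, fail, or be redeployed without any impact on production while parity data accumulates.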

According to Raincode CEO Darius Blasband, with whom I spoke recently, this approach of ensuring the functional equivalence of two systems is not new. Blasband stated that Raincode has been pioneering this same ‘dual run’ approach for the last 12 years.

While the approach may not be new, it is interesting. The ‘dual-run’ approach demonstrates that, by testing the coherence of mainframe applications against their cloud counterparts, enterprises can validate a cloud deployment of mainframe workloads. This is a crucial step, and innovation by big names such as Google and Santander is noteworthy, but it is still only one of the many steps required to facilitate a wholesale migration to the cloud.

Questions Remain

Hyperscale providers such as AWS, Google, and Microsoft have their sights firmly set on the mainframe and are investing heavily to tempt mainframe customers to embark on a wholesale migration to the cloud. Back in 2020, AWS bought Blu Age, and I expect the Big Three cloud providers to continue to either partner with or scoop up the small software providers in this space that are innovating to deliver point solutions to help mainframe customers move their on-premises deployments wholesale to the cloud.

What is less clear to me is how these point cloud migration solutions fully address the requirements of these large-scale, mission-critical migrations. Even with innovative tooling and targeted approaches to refactoring and re-platforming mainframe workloads, this is still a ‘heart and lung transplant’ level task for many organizations. These projects typically take upwards of five years to fully execute, with a heavy services component from the likes of Kyndryl, DXC, or Accenture.

We recommend that any enterprise considering a migration to the public cloud undertake a thorough review of the options available to modernize mainframe workloads where they reside today, as the landscape for mainframe DevOps has changed radically over the last couple of years. We also recommend that organizations looking to embark on a migration to the cloud look holistically at software costs and the implications of running tools in two locations during the transition, and that they get a clear idea of the scope of services and what happens in the event of project overruns. Only when all these factors are considered, and religious dogma is minimized, can enterprises make an informed and considered choice.

Looking Ahead for the Mainframe and Beyond

Mainframe modernization is a hot topic right now, and many enterprises are evaluating how to digitally transform their systems of record to handle ever-changing market and customer demands. Many CIOs who haven’t grown up with mainframes are rightly questioning their place in a modern hybrid multi-cloud architecture. While questioning architectural choices is healthy, even the Google Cloud team acknowledged during my briefing that mainframes can have a place in the enterprise IT landscape, while still looking to position Google Cloud Platform as a target for certain workloads better suited to an x86 architecture.

In my opinion, Richard Baird, VP, CTO and Engineering Lead, Core Enterprise and zCloud at Kyndryl, said it best during a recent interview as part of our Futurum Live! From the Show Floor with Kyndryl at SHARE Columbus video series, which you can view here.

As Richard said, for most customers the issue is: “Are you doing it on, with, or off, or some combination of?” Mainframe customers are rightly evaluating options and, for some customers, the hyperscale options have a role to play. For others, viable options are emerging that allow customers to leverage modern DevOps toolchains to modernize mainframe deployments in situ and not make the transition to the public cloud.

What is very clear from these announcements and the wider mainframe space is that the debate about what mainframe modernization actually means is not yet fully resolved. Underscoring this point, the Open Mainframe Project, a Linux Foundation hosted collaborative project, recently set up a working group to help define the term and provide clarity to the market.

I am confident that the topic of mainframe modernization, and how to achieve it, will continue to be hotly debated in 2023 and beyond, and I look forward to continuing to provide coverage on that front.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum Research as a whole.

Other insights from Futurum Research:

Google Cloud to Use Arm-based Processors in Tau T2A VM Expansion

Google Cloud Open Source Expertise Now for Sale to Customers

Image Credit: Protocol

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.

