
The Six Five On the Road: IBM’s Full Stack Approach to the Future of Computing
by Daniel Newman | August 26, 2022

On this episode of The Six Five – On The Road, hosts Daniel Newman and Patrick Moorhead are joined by some of IBM’s leading minds to talk about IBM’s full stack infrastructure and the future of computing.

Their conversations covered:

  • IBM’s semiconductor vision and ecosystem with Mukesh Khare, VP Hybrid Cloud at IBM Research
  • The benefits of fundamental science and technology innovation with IBM’s Ross Mauri, GM IBM Z and LinuxONE
  • How IBM’s Cloud fits into their full stack and impacts the future of computing with Hillery Hunter, GM, Cloud Industry Platforms & Solutions, CTO IBM Cloud, and IBM Fellow
  • Distributed infrastructure, AI, and how IBM’s vision of the future of computing extends to Edge with Nick Fuller, VP, Distributed Cloud at IBM Research
  • How Quantum Computing is shaping the future of IT with Jay Gambetta, IBM Fellow & VP Quantum Computing at IBM Research

To learn more about IBM Research, check out their website.

Watch our interview here and be sure to subscribe to The Six Five Webcast so you never miss an episode.

Listen to the episode on your favorite streaming platform.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Patrick Moorhead: Hi. This is Pat Moorhead, and we are here for another Six Five On the Road, and we are talking about IBM’s full stack infrastructure, but more importantly, the future of computing. And we’ve spent the last couple of days going between Yorktown, New York, and Albany, New York, talking to many of IBM’s brightest folks at their research facilities.

Daniel Newman: Yeah, there’s a lot of very, very smart people that we’ve had the chance to sit down with, Pat. And for us as industry analysts, we’re really trying to inform ourselves so that we can translate that knowledge out to all of you and to the market, because the nature of computing is shifting so much. We’re seeing how workloads are being distributed. We’re seeing the impact that semiconductors are having on the world. We’re seeing new technologies like quantum rise, and people are kind of wondering, well, how does this contribute to what the future’s going to be? And of course, data is proliferating and exponentially growing. And IBM, interestingly, has a really full stack story that I think needs to be told more, and so many people out there could benefit from hearing it. So that’s what we did. We went on the road, and you’ll see us sitting here, you’ll see us sitting in a few other spots, but really we’re going to talk to some great people, and it’s going to be a great conversation if you can just give us a little bit of time.

Patrick Moorhead: Yeah. I’m super excited about this. And part of an industry analyst’s role is to educate, and that’s what this is about today. And I found it super special because it combined not only deep chip tech, but went all the way up to the practical level of delivering cloud services to clients.

Daniel Newman: And without further ado, we’re going to go to Dr. Khare and we’re going to talk about semiconductor research, hybrid cloud, and so much more. So join us. Mukesh, it is so great to have you here. We’re here in Albany at the NanoTech Center having this conversation about the full stack future of compute. And boy, what a great opportunity to have you join us on the Six Five.

Dr. Mukesh Khare: Thank you. It’s my honor to share what we do here in Albany NanoTech.

Patrick Moorhead: So much history with IBM and chips. I mean, you were doing your own first-party silicon for systems before it was cool. We talk about heterogeneous computing. You’d been doing a lot of that with ASICs, fixed function, accelerators. It is super exciting. And at one point, you were a major manufacturer. In fact, I worked for a systems provider in the ’90s that bought chips from IBM’s broad-scale foundry, which I know that strategy has changed, but here we are. It’s an exciting week here with the CHIPS Act and everything. But can you talk about your vision and your strategy moving forward? And before I forget, I want to point out the incredible amount of IP that you provide, sometimes publicly, sometimes not, to some of the biggest chip producers and designers on the planet.

Dr. Mukesh Khare: Thank you very much for, again, this opportunity. IBM as a company, our strategy is very clear. We are laser focused on our hybrid cloud and AI strategy. And as you said, from the beginning, IBM has been working on the full stack approach. I call the IBM stack the golden stack. It’s a golden stack which includes everything. We have our infrastructure business. We have our software business, which includes Red Hat, which is a hybrid cloud platform. And then we have our consulting business to help clients grow through the journey. So those are the three ingredients for the IT industry. And today, we are here at the infrastructure part of our stack that we are going to talk about. And clearly, as you know, the semiconductor industry started in the US, and IBM was the company who created these scaling laws, the laws that Bob Dennard created, which essentially defined how you scale transistors generation over generation. However, over a period of time, IBM’s strategy has evolved as the industry evolved. Our view is very simple. We want to focus on things which we are good at, which we can do, and we want to partner with companies for the value that they bring to the table.

So today, what we focus on is the research and development part, because that’s our strength, that’s in our DNA. So we drive the research and development agenda, a very strong R&D agenda, for IBM’s business. And we partner with the companies who bring in manufacturing scale, or who need certain IP for their business, so that it’s kind of a complementary relationship. And that’s our new model, where we are focusing on developing technology for our product, as well as helping our partners to build their product for their business. So that’s the new model that we are on right now.

Daniel Newman: Yeah. It’s really interesting. You mentioned… Did you call it the golden stack?

Dr. Mukesh Khare: Yes.

Daniel Newman: Is that what you called it?

Dr. Mukesh Khare: Yeah.

Daniel Newman: All right. I’m going to hold onto that one. I’m watching that. But it is really fascinating, just having spent some time here, seeing kind of how the research informs everything else, and you’ve got a really big role in research. And so we’ve sat down actually on the Six Five with Arvind Krishna, your CEO. He joined us and also illuminated us on the visions for hybrid cloud and AI. And that’s something that it seems you’re very focused on.

Dr. Mukesh Khare: That’s right.

Daniel Newman: How does that continuum function from research to the development and execution of what’s now your hybrid cloud and AI strategy? Which, we’re very clear, those are the areas that you talk about accelerating, and that’s where right now IBM is accelerating.

Dr. Mukesh Khare: Yes, spot on. So with the stack that we work on, we work with clients. We understand the clients’ needs. And then that gives us feedback into what type of system technology, what type of chip technology, what type of process technology we need to develop to solve client problems. A great example of that is our recent z16 announcement, where we introduced an AI accelerator inside our Telum processor, because we understood that there is a very strong need for AI inferencing during transactions. So we were able to translate that into what does it mean at the cloud level, what does it mean at the chip design level, and at the process technology level.

And here in Albany NanoTech, we had launched an initiative called the AI Hardware Research Center. And we used a partnership model to develop the core, the inferencing core, that then goes into our own chip that then solves our clients’ problems. So the fact is that we work on the entire stack: we come from what does the client want, and then we, fortunately at IBM, can go to the lowest possible level, at the process level, at the chip level, to solve the client’s problem in the best possible way.

Patrick Moorhead: So one of the things I appreciate about the folks that are doing research, and by the way, I like to separate research from development. There’s a lot of people who do development. There’s not a lot of people who do research. But one of the things that I think about is business models. Because at the end of the day, doing research and development for the sake of research and development doesn’t make the shareholders happy and doesn’t-

Daniel Newman: Unless you’re a university.

Patrick Moorhead: Yeah, exactly. So what is the IBM business model? You’re in so many different places and I don’t hear a lot about it publicly, so I do want to ask you, can you talk a little bit about that?

Dr. Mukesh Khare: Yeah, so first to answer part of your question, we do R&D not for the sake of R&D. We are a business, and anything that we do here in Albany NanoTech or in IBM Research is to have an impact on a product, an impact on an IBM product, as well as an impact on our partners’ products. So that’s the first part: it’s research which will have an impact on a business objective at the end. So that’s the way we think. And that’s why IBM Research is one of the only, I guess, extremely successful research organizations in the world. The second part I will say is that, yes, the model for research… We at IBM essentially invented a new model for research and development. We started the model of partnership where companies with a shared need co-invest, and we started this partnership model more than 30 years ago, actually, with IBM’s [inaudible 00:09:21] Alliance, then transitioned into the Logical Alliance.

And here we are in Albany NanoTech, which is the world’s most advanced public-private partnership, where we are collaborating and co-investing, which is very important because the co-investment is not only in terms of dollars, but also in terms of intellectual capital. We bring intellectual capital from IBM, our smart researchers from IBM, and the other companies who are partnering with us bring their best people, best talent, their best equipment and best investment. So it’s a very unique model where we are co-creating at the end for the benefit of our business. So that’s the unique model. We do generate a significant amount of intellectual property, IP. That definitely helps us make sure that we will have technology from an IBM lens. We will always have the knowledge, IP and technology that we need for our business. And obviously, that can also help us to monetize that IP through licensing or through other models as well.

Daniel Newman: Yeah. The opportunity for that IP to translate to revenue is tremendous. And we of course know that there are so many products that have hit the market whose development IBM had a part to play in.

Dr. Mukesh Khare: That’s correct.

Daniel Newman: And many of them, the average consumer may not know that, because it got licensed or sold off or abandoned. I mean, the history of research. You just got to go to any of these facilities. You and I have had the fortune of going to headquarters and being in some of these and just seeing things and going, oh, wow, I did not know IBM did that. And so it’s really, really interesting. And one of the areas that is very interesting is some of the IP leadership in semiconductors. I believe it was maybe earlier this year or last year, I’ll get the dates right, but you announced 2-Nanometer. And of course that’s at the very leading edge of the leading edge. Talk a little bit about that type of innovation, where that’s going next, and how being on the front end is a big part of your strategy.

Dr. Mukesh Khare: That’s a great question. And yes, our goal is to develop technology which is absolutely leadership technology, because that’s what IBM’s business wants, right? You will speak with Ross Mauri, who is the general manager of the IBM Z business. The product or technology requirement for IBM’s systems business, the Z business and the Power business, is like the ultimate standard. If you can meet the requirement for that, you can meet the requirement for everything else that is out there.

So clearly to us, leadership is very important for the research division, and for me as a researcher and leader responsible for hybrid cloud technologies. Now that said, we have been driving this, especially in the Albany NanoTech Center. In 2015, we announced 7-Nanometer technology, which was the world’s first announcement of such chip technology using EUV lithography. That got adopted by all the major players. Then we were the first one to announce gate-all-around nanosheet technology.

And that’s very interesting, because the idea of gate-all-around was always there, but how do you make it real for a product? That was not known. And that’s where we, from our product lens, brought in the nanosheet idea. And that nanosheet idea, we believe, is going to be adopted by most of the companies; many have already announced that it will be the structure going forward after [inaudible 00:13:03]. And then yes, last year, in 2021, we announced 2-Nanometer technology, which is a second generation of nanosheet technology. We are working on that technology here right now in Albany NanoTech with many partners. We are co-creating, perfecting that technology, because we really want to make sure both that the technology is manufacturable, so that our manufacturing partners can take it for volume production, and that, at the end, the technology has the features that IBM needs for our own products.

Ross needs it for the Z product, as an example; we need it for our cloud product. So we want to make sure both of those needs are covered as we continue to make progress. On this site, we are working on technologies beyond 2-Nanometer now. You will see more and more come out over a period of time as we perfect that technology with our partners in all of these cases. What’s beyond nanosheet? It’s coming in the pipeline. And another technology that we are very excited about is chiplet technology. And that’s going to be another future which will supercharge [inaudible 00:14:14], in my view, beyond traditional logic scaling. And in fact, during the pandemic, we invested together with New York State to build a fab here for chiplet innovation, and that’s coming online, and you will see more and more of such technology, essentially at the forefront, coming out of this lab.

Patrick Moorhead: Mukesh, I would love to sit here for hours and talk with you about this. But listen, this is the first stop in talking about IBM’s full stack approach and the future of computing. So this is our first stop, and I appreciate us kicking it off with you. We’re going to talk to Ross. We’re going to talk to a lot of different people about this down the line, but thank you so much for your time and for being on the Six Five for the first time. I appreciate that.

Dr. Mukesh Khare: Thank you. Thanks for the opportunity. Thanks for this great conversation. And we can’t wait to share with you all the exciting things that we do at IBM.

Patrick Moorhead: That’s great. And if you want to tell us early, you can too.

Dr. Mukesh Khare: Thank you. We will do that.

Patrick Moorhead: Take care.

Dr. Mukesh Khare: Thank you.

Patrick Moorhead: It was great touching base with Mukesh. I’d only done Zoom with him before, but actually meeting him face to face was great. And I always learn something when I talk to him. We talked about what they do here at the facility in Albany, and it’s transistors, it’s manufacturing, it’s AI, but it’s also packaging. And that’s something I need to learn a little bit more about, what they do here.

Daniel Newman: I like the way he was able to tie together how research really connects to what Arvind has been talking about so passionately, and that’s the hybrid cloud and AI opportunities. And we’re starting to really see, as a result, that this is becoming much more crystallized in the interactions that we’re having with the executives at IBM. And if that’s going to stick in the market, it has to be the first thing on the tip of the tongue of all the executives. And that’s kind of what I’m sensing from these trips, from these tours, and of course, these executive conversations.

Patrick Moorhead: I like the business model reaffirmation too, because IBM can’t talk about all its customers and exactly who they do it with and who they do it for, but it was good affirmation to know, well, first of all, they make chips for Z and Power, and not just the general purpose parts, but also the ASICs, the accelerators. He talked about the block that he puts in [inaudible 00:16:53] for AI, but also the intellectual property that goes along with that. And I don’t think there’s a company out there, a designer and even a manufacturer with a big name, who IBM isn’t touching. They don’t talk about all of them, but that’s okay. I appreciate that anyways.

Daniel Newman: Well, not everyone wants credit for everything.

Patrick Moorhead: And that’s all right. And next up, we’re going to be talking to Ross Mauri who runs the Z division. He’s going to be talking holistically about systems and it’s a great segue coming from Mukesh who talked about semiconductors and what the company is doing in that area.

Daniel Newman: Always great to talk to Ross, connect the dots.

Patrick Moorhead: Ross, it’s great to see you again. I think we bumped into each other at one of the first IBM Think events that was back in person again. I love that venue, by the way. But we’re here in Albany right now, on kind of this IBM road tour, talking about the future of computing and really getting underneath what IBM is doing, looking at your top to bottom approach. And here we are. So it’s great to see you.

Ross Mauri: Thanks, Pat. It’s great to see you too. And I love being here in Albany. The researchers and scientists here really make a difference for the future of my platform, the Z platform. And so it’s great to be in this setting and to see both of you here today.

Daniel Newman: Yeah. The road to Albany is beautiful. It’s very green, very lush. If you haven’t been here, I think people should probably visit sometime. But we’ve had the chance to speak to you, and now a number of your colleagues, and we’re really kind of looking at this future of compute narrative and this full stack story, and IBM is such an interesting, compelling story. There’s so much history, and then obviously so much innovation going on. And I want to start there with you, Ross. We just finished talking to Mukesh, looking at the whole semiconductor research part of the business, where that’s heading. And of course, you’re leading a big part of the systems business and IBM Z. I want to kind of talk about what we discussed with Mukesh about the semiconductors and the research, and how that really informs and drives the business and systems for you.

Ross Mauri: So we’ve had a very long, decades-long partnership with IBM Research. And yes, it is around the silicon. It is around the chips. It is around fundamental computing paradigms, and research is really… I mean, I would say they’re an essential part of the Z business. And I know that heterogeneous computing and full stack integration are coming into vogue now, but that’s something the mainframe platform has basically taken advantage of for decades. And it does start at the silicon, because the fundamental performance, reliability, and security of the system is baked in, from the chip packaging to the central electronics complex, including the memory and the operating system. I can stack it up.

But again, research is fundamental to most layers of the Z system. And again, back to semiconductors. Research has been able to push for us not only in terms of density and speed of circuitry and things that allow you to pack more horsepower and more capabilities into a smaller area, but also some fundamental breakthroughs, like around security, like post-quantum crypto. And we’ve seen the recent [inaudible 00:20:29] announcement. It was really thrilling that all four of the first accepted algorithms had IBM researchers participating in them, and two of them were led by IBM. That type of partnership and leveraging of innovation is key to my business. And I love the fact that research is looking 5, 10, 15, 20 years down the pike. They’re solving hard problems with us, for our clients, even before our clients run into those problems, so to speak. So I think that’s one of the great things. And again, to me, it all really starts at the semiconductor.

Patrick Moorhead: It’s a great time right now. I would say if I look back 20 years ago, people were talking about semiconductors being this commodity, and I’ve always been a big believer that people allow themselves to be commoditized. And here we are now, where I think everybody knows what a semiconductor is, and I’ve always loved your fit-for-purpose approach that seems to be in vogue right now. So I don’t know if you started the trend or you showed people how it could be done, but a lot of companies have jumped on the bandwagon. And here we are talking about this. One other thing that people might not know as well is that IBM Z, the mainframe, is very much cloud enabled. Your clients obviously know this, but a lot of people don’t. So I’m curious though, how does Z fit into the overall IBM cloud story?

Ross Mauri: So I think we fit very, very well. I’m happy to say… I mean, you can take cloud at different layers of the stack again. You could take it, I would say at the highest layer where we have IBM mainframes and IBM power systems fully integrated into the virtual private layers of the IBM public cloud, so that you can access a mainframe capability, but through a [inaudible 00:22:33] service, through a web service.

Patrick Moorhead: In fact, it gives you a big-time security service.

Ross Mauri: That’s right. That’s right. In particular, what clients want is to keep their own crypto keys. They don’t want anyone else touching them. And the mainframe has, I would say, the best cryptographic capability of any system in the world. And so by putting some mainframes in the IBM public cloud, all the services, whether they’re running… They’re not necessarily running on the mainframe, they might be running in VMs on x86, but they’re accessing those crypto services. So again, at the highest level, the mainframe integrates well with the cloud, but then there are all different layers, I would say, that are just as important in a multi-cloud, hybrid cloud world.

Connecting applications via a common platform like Red Hat OpenShift, and having containers be more of a fundamental way to develop applications, not just for portability, but I would say for better manageability, being more agile. And the mainframe integrates with every key Red Hat product in the OpenShift stack. And again, there’s layer after layer, at the middleware layer, at the service layer, like a Kubernetes OpenShift layer deeper down. We’re integrated all the way. And IBM really has two strategies that we talk about at the highest level, hybrid cloud and AI, and we’re integrating at all levels on both of those.

Patrick Moorhead: Okay. Real quick. Wait, are you telling me that the IBM systems can… They run Linux and they run containers and you can address them in the public cloud. And they’re part of a complete hybrid cloud architecture. I’m being a little snarky, of course, but it’s like-

Ross Mauri: Yes, absolutely.

Patrick Moorhead: Not many people know this, but I think it’s a real testament to a lot of the success that you’ve been having over the past few years.

Ross Mauri: Thanks. And our clients have been telling me that they want us to better integrate. And again, with some common software platforms, that’s been very easy. Though we always have to put a little bit of the mainframe into what we do if something does become distributed, say the security model and things like that, because again, our clients, especially the ones in regulated industries, are basically counting on the highest level of security possible that our systems have for their applications and their data.

Daniel Newman: Something interesting that we had the chance to speak about in one of our briefing interactions, Ross, was how the chip technology has advanced so much, the design, the packaging, to be able to do more on chip. And in your newest Z16 iteration, and this sort of ties into the whole hybrid story that we’re trying to tell here, you brought AI much closer. Is that a great example in your mind of the relationship between research and systems and how you develop products? Is that maybe a concrete example, in this most recent generation, of what’s really happening in that relationship?

Ross Mauri: I think it’s a fantastic example, because research had been looking at AI and doing AI silicon for a number of years, and they were doing test beds and demonstrations. And we saw one of the engines that they had built. And it dawned on some of our lead technical folks about six years ago: what if we were to actually embed an inference engine, an inference accelerator, right onto the microprocessor, kind of on a different side of the bus than everybody else has AI implemented? What could we change? And so research worked with us, and it was actually their logic for the AI inference engine that we took. And then we adapted it and made it really robust, with a lot of error checking and all that, as you’d expect in a mainframe, and integrated it onto the silicon. And so now we can do things that no other system can do. With a guaranteed millisecond or less of latency, we can do 300 billion inferences a day on one system, on one mainframe system.

So we’re bringing high performance, low latency inferencing into the decision process of something like a transaction that a bank or a payment company might do. Game changing.

Patrick Moorhead: By the way, I guess the fancy word we use for anything that’s not, let’s say, a general purpose CPU is heterogeneous computing. You talked about AI acceleration, but you do a lot more too. You have crypto, you have FPGAs. You were doing [inaudible 00:27:12] before it was cool. Okay. Now it’s cool too, but let’s dial out a little bit. How is this changing the way that you’re recommending your clients look at the overall estate of their enterprise architecture?

Ross Mauri: Well, it’s in a number of ways, but I would say one that’s really fundamental to what they do today is that they use the mainframe as a transactional engine, and the operational data that’s created is really core business data that’s relevant for many uses within a business after it’s created. And so two decades ago, or maybe more, people started copying the data off the mainframe, because they didn’t think they could do things like advanced analytics or AI on a mainframe.

So they’d copy it off. When you copy data, you open up a security risk, because you now have multiple copies of usually highly valuable data. There’s complexity. There’s cost to it. And one of the things clients are saying is, we want to be able to use this data more in real time. So that means, as opposed to copying it off and post-processing, let’s do more in real time. And that’s where, again, the partnership with research, listening to clients, and then bringing the value of the integrated stack back to the middle of our clients’ businesses comes in. And we’re doing that.

Daniel Newman: That’s great. Yeah. There’s a lot of excitement here. Clearly, this most recent cycle has shown a lot of momentum. It’s always very interesting. And Pat, I think you’d share my sentiment, to sort of hear how these different threads come together, how the research does. You and I always talk about how we need more R and not just D. And there’s a lot of that going on here in Albany, where we’re talking, Ross, and of course it’s always great to sit down with you. Thanks so much for your time.

Ross Mauri: Absolutely.

Patrick Moorhead: Yeah, I appreciate it. Yeah, you did the Six Five Summit for two years, so you’re not a newcomer, but here we are again, Six Five on the road in Albany.

Ross Mauri: I’ll be back.

Patrick Moorhead: Yep. Thanks so much. Thank you.

Patrick Moorhead: It’s always great to talk to Ross, and I’m just getting to know his team and the Power team. They were doing cool SoCs and application-specific integrated circuits and accelerators before it was cool. And I think they deserve a lot of credit for that. And it’s not just for the sake of doing it. They actually get business, and their clients get business value, out of it.

Daniel Newman: It’s a lot of fun when we have the chance to talk to Ross about that too. All that business value and some of those specific customer cases. Look forward to maybe someday sharing those a little bit more publicly. But when you hear how the research turns into value, I think that’s when I as an analyst get really excited because sometimes you hear about a concept and a concept and a concept and you go, what’s the intrinsic value? What’s the market value? What’s the consumption value? And this is what we’re starting to feel is that these investments and really what is a full stack future of computing architecture are starting to come to fruition.

Patrick Moorhead: No, it is good. I mean, you’ve got research to the development, to the product platform, to clients and delivering value. You don’t see that in too many companies, but it is good to see. There’s not enough of it out there, but I always love talking to Ross. So who do we have up next, Daniel? What are we talking about next?

Daniel Newman: Yeah. So we have Hillery Hunter joining us. We’re going to be talking about IBM Cloud and we’re going to be making the connections. Started at the chip layer. We moved into systems. We’re going to go to the cloud now and we’re going to just keep tying all this together.

Patrick Moorhead: I love it. It almost sounds like this was very well planned out, Daniel. Great job. Do I give you credit or somebody else?

Daniel Newman: I cannot take credit for this, Pat, but I do think everybody out there’s probably going to learn a lot here.

Patrick Moorhead: So stay tuned for Hillery Hunter. We’re going to talk on IBM Cloud.

Patrick Moorhead: Hillery, how are you doing? Thanks for coming onto the Six Five. First-time guest. Great to see you.

Hillery Hunter: Great to see you as well. Really a pleasure to be here.

Patrick Moorhead: Yeah. We’re having some great conversations here in Yorktown talking about the future of computing and the role that IBM is having in it. And we’re here to talk about IBM Cloud’s role in that. So thank you so much for coming.

Hillery Hunter: Thanks so much. There’s such exciting stuff in Yorktown and we have such great partnerships there. Lots of technology that comes into our cloud comes out of IBM Yorktown.

Daniel Newman: Yeah, it was really enjoyable to come on campus. Got the tour. We stood at the Watson Jeopardy exhibit. I took a picture standing behind the Watson computer because I wanted everyone to think I’m the Watson machine that won the game. Turns out I’m not, Hillery, but that’s okay. So let’s talk public cloud. Let’s talk about IBM Cloud. It’s a part of the business that I think we could talk about more, and I think there’s a big opportunity there for the company. I’d love for you to start out sharing a little bit about the IBM Cloud, its differentiation, how you’ve focused on some specific verticals, what’s working right now, and what the market should know about IBM Cloud.

Hillery Hunter: Yeah, I think it's interesting to think about what has happened with cloud over time. Cloud is definitely a place and it's a set of technologies. And for IBM it's also a hybrid cloud conversation. And so the specific positioning of our public cloud has evolved a bit over time. And I think where we're at right now is very much in alignment with that overall IBM mission as an enterprise IT provider. It is also very well aligned with our hybrid cloud story, which may be a little bit surprising to say from the public cloud lens. But what I mean by that is our public cloud is positioned as focused on enterprise workloads, front-to-back-office regulated workloads, and on contextualizing public cloud adoption for industries, for example, regulated industries like financial services, like telco, like healthcare, et cetera. Traditional clients who really value what IBM does from a security perspective, what we do from a compliance perspective, et cetera.

But we work with our clients to help them trust the public cloud as a deployment location, even for their most sensitive workloads because of the data protection and other technologies that we have. But we also work with them in the context of IBM being a hybrid cloud company, where they’re looking to have consistency between their on-premises estate and their cloud deployments, or they’re looking to have consistency in a single control plane across their public cloud deployments on multiple providers. And so we deliver technologies that help secure their most sensitive workloads in our public cloud. And we deliver technologies that help them get consistency across their estate.

Patrick Moorhead: So Hillery, for what it’s worth from one industry analyst to you, I do like the strategy. It’s a natural for me. When I look at where IBM has had a high degree of success the last 30 years, it is in these environments. And whether it’s on premises or cloud, it’s very similar types of regulations, security, like you said. So it does make sense to me. Now IBM Cloud does participate though in the overall architecture of a cloud as well with Red Hat, right?

Hillery Hunter: Absolutely. Yeah. And I was deeply involved in the Red Hat acquisition and in the overall hybrid cloud strategy that we established at the time and for us in the public cloud, OpenShift, we run it as a managed service. We help distribute it and other services in a common control plane out into other environments. And a lot of these clients in IBM’s core base within regulated industries, they need that optionality. Cloud concentration is a great concern. And so they want to be able to deploy their OpenShift based workloads that they have on premises. They want to be able to deploy them in the public cloud, so as to have more flexibility, more elasticity, be able to respond to things happening in the market. They want to be able to deploy it consistently across environments, et cetera. And so the Red Hat alignment for us has been really great. It is a key part of how we help our clients envision the transforming of their estate into public cloud.

Daniel Newman: It’s interesting, Hillery, you’re talking about… We’re working from the public cloud backwards. Now we’ve gone back on prem. We’ve talked a little bit about the Red Hat involvement. And here in Yorktown, we spent a lot of time in the research labs, looking at the things that are being developed. And IBM does a lot of research and development. We saw the quantum machines today while we were here, but we also know that IBM has a big role in leadership in the semiconductor space. A lot of research, transistor research, things that are being done that will probably be seen as breakthrough on ultimately some leading edge part that’ll be put into a phone or into a computer, but bottom line IBM has a big part to play in that. But what about in terms of the cloud? How do you see the IP, the research, development, and leadership that’s done here in Yorktown and other IBM R&D facilities participating in the evolution of your IBM Cloud offering?

Hillery Hunter: Yeah, it’s truly soup to nuts and we’ve had this luxury for the entire time we’ve been a cloud provider of being able to do that soup to nuts type of approach, custom hardware, all those other kind of opportunities afforded by that skill base that you were describing.

Maybe just to highlight a couple of things, because I could talk about this for hours, but our cloud, in order to do data protection at the industry’s highest possible standards, leverages technologies on cryptography that are done jointly with IBM Research and that get built into our LinuxONE systems at IBM. And it’s custom hardware design all the way down to the transistors and all of the math that then helps our clients know that their data is their data and only their data in the cloud. So whether or not it’s on the cryptography side, on quantum-safe cryptography, we have deployed the algorithms that have recently been selected by [inaudible 00:37:41] to protect us against a future quantum state.

And we also bring IBM’s quantum capabilities, both the developer environment and actual access to the machines, to market through the cloud. So whether it’s protecting data running today on any type of server in the cloud, leveraging advanced cryptography and hardware techniques, through to protecting against the future, and then developing and exploring the future of IT, including quantum, we bring all of that to market and all of that to bear in our cloud.

Patrick Moorhead: That’s excellent. So Hillery, thank you for bringing Z technology and LinuxONE to the cloud. I know you and I had a conversation a few years back on that and it’s great to see an area that just makes sense, like security. I love that. So Hillery, you talked about how some of the things going on in Yorktown are making their way into the service line today, but what about the future? We’ve seen some amazing quantum innovations here, in fact. We went through the lab that was here. We’ve talked about AI and things like AI at the edge and moving data around. What about the future? How are you looking at IBM Cloud and future innovations?

Hillery Hunter: To be honest, we’re getting all those things that you were mentioning out to our customers through the fluidity of cloud. The quick turnaround time, the ability to deliver stuff as a service, put things out in beta, help people try and experiment with them, I don’t think there’s anything you mention that we’re not actually doing even today with our clients, whether it’s the latest in AI stuff or quantum technologies or experimentation with homomorphic encryption, so people can privately compute on things. But honestly I’ll say also further up the stack, serverless computing and changes in orchestration models for workloads and how they’re developed. We have a very tight partnership with research and with Red Hat on many of those different topics and technologies.

And in edge computing with IBM Cloud Satellite, we can then help our clients deploy those technologies where their data is as well. So whether the data is in IBM Cloud or on their premises, in retail branches they have, or other things like that, we can help them quickly deploy those latest innovations and chatbots and virtual assistants and all that other kind of stuff. So I think we see hybrid, and I guess hybrid forms of computing, in the future. IT is taking on new types of computing with quantum and stuff. So both on the hardware side as well as in the software and the development, we’re continuing to build out the latest models together with research.

Daniel Newman: As I read into it, Pat, really she’s telling a story of a very distributed compute future that requires a combination. It requires public and multi-cloud, which by the way, aren’t the same necessarily as hybrid cloud. And I think the market sometimes gets these things conflated a little bit. And that enterprises and organizations are going to have data and compute and workloads all over. And it’s going to take a very thoughtful, well laid out, cohesive infrastructure to address that. And it sounds like that’s what you’re really focused on Hillery with IBM Cloud. And I think in those specific industries in those vertical use cases, it’s really been a winning formula. And it’s going to be very interesting at least from my analyst view to watch how that sort of expands into other industries, other workloads, and other opportunities in the future. But I think it sounds, Pat, like they’re really on the right trajectory.

Patrick Moorhead: I like where we’re going. And Hillery, just want to thank you so much for coming on the Six Five, talking more about what the company’s doing from end to end computing, the future of computing, and how IBM Cloud fits into that picture. Thanks so much.

Hillery Hunter: Thanks so much for having me.

Daniel Newman: That was a lot of fun, very interesting conversation with Hillery. The cloud is something you and I spend a ton of our time thinking about, discussing, and of course where it’s all going, multi, hybrid. And then where does it connect to there? Infrastructure, semiconductors, we really covered it all.

Patrick Moorhead: Yeah, we did, and I really did appreciate the clarity that it brought to IBM’s strategy. They’re not a cloud for every single workload out there. They’re a cloud for, I think, the customers and for the applications that they’ve had decades of success with. Highly regulated, highly secure, and stuff that you just can’t have mistakes on or it’s really, really bad for you as an enterprise.

Daniel Newman: Well, you look at things like security. It’s a rising topic of importance at the board room, at the CEO level. Companies know that the vulnerabilities and the risks of security are massive and the implications are massive. And as data continues to proliferate, these organizations need to be thinking, “Am I partnered with the right organization?” And that’s where the whole cloud concept of, it’s not just IaaS and it’s not just PaaS, and it’s not just software as a service. It’s really all those things. It’s the big C, and it includes the consulting and the service, Pat, and the design. And I think that’s really the roots of IBM. And I love the fact that they’re not saying, “We are for every workload.” They’re basically saying, “We have parts that we are confident we’re the best,” and then in some cases it might be AWS. It might be Google. It might be Azure. These players are all there, and I think they understand their role.

Patrick Moorhead: Yeah, and I’ll put an asterisk on it. When you do overlay Red Hat, Red Hat can be everywhere for every workload, but that is not necessarily the tip of the spear for IBM Cloud. And that’s why in many ways, even though Red Hat powers IBM Cloud, IBM Cloud is not Red Hat. And that’s why we appreciate the separation that they put in there.

Daniel Newman: You and I have had plenty of time, we’ve had chances to sit down with Arvind. We know that hybrid cloud and AI data story is really at the core of what IBM believes. Red Hat was a big enabler. Their public cloud offering is significant, and it’s in these specific enterprise workloads, these highly regulated, it makes sense, Pat. Another area that really is going to just continue to grow, continue to proliferate and create opportunities for the cloud and for this full stack future of compute story is at the edge.

Patrick Moorhead: That’s exactly right. And with that tee-up, thank you very much, we’re going to talk to Nick Fuller, who runs research for the distributed cloud. Talk about how AI basically permeates in everything outside of the data center. So super exciting. Stick with us here, folks, this is going to be great.

Nick. It’s great to see you. Thanks so much for coming on the Six Five, first time.

Nick Fuller: Thank you, and good to be here, Patrick.

Patrick Moorhead: Absolutely. Yeah, I appreciate, and we’re continuing this conversation talking about, first of all, holistically in the market what clients are looking for in terms of the future of computing, where hybrid cloud comes into that. And hey, we’re really lucky today because you run research for distributed cloud, AI, data, everything. So thank you so much.

Nick Fuller: It’s great to be here with both of you. Thank you. Thanks for having me.

Patrick Moorhead: Yeah.

Daniel Newman: You didn’t use the E word, the edge. But yeah, Nick, reading your bio, first timer on the Six Five, super excited to have you here. You’re covering a lot of ground, and I think as we tell this future of compute story, you really can’t tell it without talking a lot about what’s going on in the distributed cloud, and like I said, the edge. Big story, big opportunity. We’re all hearing the data numbers about the proliferation, exponential growth and volumes of data in enterprises around the world, and every other organization that uses compute is focused on the opportunity at the edge. So I’d love to get your take on how IBM perceives the edge, the opportunities, the challenges, and what you’re working on in this particular space.

Nick Fuller: Absolutely, Daniel. So when you look at edge, there’s a key vector that plays a role here, and that vector is the increased disaggregation and decentralization that’s happening from an infrastructure point of view. The emergence of cloud suggested, when you go back to 2006, that customers would move their workloads from their data centers to cloud, and those cloud data centers would be in specific locations. As time went by, you had the emergence of more providers like Ridge, like Cloudflare. Compute got better, the advancement in AI with GPUs and so on, accelerators, and customers needed to solve problems at the edge in addition to moving their workloads to cloud.

So that vector of disaggregation is a key one from an edge standpoint. Additionally for us, when you think through edge, because of that strong rise in hyperscalers there’s this notion of edge in versus cloud out. What does that really mean? Cloud out simply means you take your cloud stack architecture and you run that stack on a customer’s edge, whether that be a retail appliance, whether that be in a manufacturing floor, a warehouse, what have you. Whereas edge in lends itself to being cloud agnostic, but also really critically elevating the data plane to a first class citizen. From a cloud out point of view, the control plane dominates and data is not really first class. We see that as a key differentiation for us as a company.

Patrick Moorhead: So the industry has had a lot of changes over the last 30 years. And one of the biggest ones, interestingly enough, even though it’s maybe 25% of the data, moved from the on-prem data center to the public cloud. And there were a ton of lessons learned there, right? It’s been 10 years, maybe it’s glass half full where you would have expected more to be there if it were so easy, but there are some things keeping that from happening. But IT has learned a lot. What are some of the lessons going from on-prem data center, colo data center to the edge that you’re seeing? Good lessons of how to do it and maybe some things to stay away from.

Nick Fuller: Yeah, fantastic question. And when you go back to that original trend that you highlight, data privacy obviously, compliance and regulatory issues as it relates to different geographies obviously all played a role. And then the value prop that many companies thought they would be gaining by moving their workloads to cloud, namely to enhance developer opportunities, to grow new business, and certainly to reduce technical debt, some of those didn’t actually pan out. But really what still matters at the end of the day to an enterprise, to a CISO, to a CIO, to a CTO, the same factors rear their beautiful head. Security, software delivery life cycle, and overall manageability of that portfolio. These things continue to be relevant from an edge standpoint. And we see that to be true as we look at the various enterprise clients with whom we work as it relates to the innovation we’re building for edge computing.

Patrick Moorhead: Interesting.

Daniel Newman: So IBM has clearly gone all-in on a platform approach, basically a common platform for your distributed infrastructure. That was a mouthful, but I got it. How does this provide an advantage? Because this is one of the things I think a lot about, and I do have an answer for it, but I’m going to let Nick answer this. Talk about why you’ve gone down that route, the advantage it creates, and the challenges companies face as they try to move in this direction if they don’t use that common platform.

Nick Fuller: Yeah, fantastic. So the platform weighs into what architecture you ultimately are adopting from an enterprise standpoint as you go on that journey. At the heart of this all, you’re trying to solve some sort of challenge that grows your business. Whether that be from a savings standpoint, from a revenue standpoint, that’s really what an enterprise is aiming to address. Our platform, based on OpenShift and various extensions of that as it relates to footprint and location, so single node OpenShift, MicroShift, et cetera, running on a range of infrastructure, gives customers that flexibility as it relates to running their workloads on the edge to solve problems in quality control, for example, retail ordering, you might have seen IBM’s recent acquisition in the quick service restaurant space with McDonald’s.

All of these are critical. These types of workloads, whether it be in the case of quick service restaurants as far as natural language processing for order taking, whether it be for visual inspection for quality control in manufacturing, or a range of other applications, when anchored on that platform, the various versions of Red Hat’s platform that I mentioned, MicroShift, single node OpenShift, and full-blown OpenShift, give you an architecture with data and AI capabilities that we’re building for what? Scalability. That ultimately becomes the challenge that you face moving from a proof of concept to getting into full-blown production.

Patrick Moorhead: So we talked a little bit in the green room about, this edge thing is new. Well, okay, well, we’ve had compute on the edge for a long time. What changed? And I think we can agree that first of all, there’s a whole lot more data being electronically captured on the edge versus maybe paper tallying, doing cycle counts. It’s automatic when somebody takes a loaf of bread off the shelf at a grocery store. And we finally have enough compute power and we have machine learning algorithms to run against it that are very efficient. But listen, we’ve talked a lot about the infrastructure, but I think I’d really love to hear about the data. And I want to hear about how IBM is leveraging AI, machine learning, and even let’s say a distributed data fabric to make all of this easier and more effective.

Nick Fuller: Fantastic question. And I touched on this briefly a second ago leading into this question. When you solve a problem initially and you demonstrate feasibility, it gives you an idea of how practical that can be as far as addressing that issue, usually with AI machine learning, whether that be NLP, whether that be computer vision, what have you. When it comes to going from that proof of concept to running that model in multiple places, running many more models scaled by orders of magnitude in a variety of locations, that cannot be done with the same type of infrastructure. The anchoring platform helps, but you need an architecture that’s scalable. And the scalability in that architecture, you’re able to leverage that first vector we touched on, the disaggregation and infrastructure. And that choice is up to the client, right? Whomever you’ve chosen as your hyperscaler provider, that’s your choice, that’s where you’ll do your model training.

That model will then be served at a particular location, a retail branch, a warehouse, a manufacturing plant, et cetera, for some type of issue to address. And let’s take the visual inspection one with manufacturing. You solve that issue, but then as you build more models, are you really going to take the point you made with all the data being generated at those locations back to the cloud? It’s not practical. You need an architecture that allows you to take some of that back to the cloud. And what we do here is build a range of data and AI capabilities that are platform centric. So for example, imagine you’ve generated a ton of data from various locations, and now you need to retrain the model because there’s no supervision there.

When you pull up, maybe you still do, I don’t, maybe none of us do any more for that matter, and you order something at McDonald’s and you say, “I want a large fry,” and they got it wrong, well, there’s a way for the model to be supervised there. But if there’s a shift on the manufacturing floor, no one’s supervising that. You need a way to ultimately determine if that model has drifted. You need a way to determine if you can take that set of data that has been generated and only take a sample of it, because the images are fairly similar. So we use AI and machine learning to cluster data, so we figure out what goes back to the cloud, what stays on the manufacturing floor. We infer whether the model has drifted or not using a variety of techniques. And that helps with not only the onboarding of new models, but the ability to scale that infrastructure from plant to plant to plant.
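Nick’s description of clustering edge data and inferring drift can be sketched roughly as below. This is an illustrative sketch, not IBM’s implementation; all function names, thresholds, and the toy data are hypothetical. The idea it demonstrates: keep only one representative per cluster of similar samples (the small set shipped back to the cloud), and flag drift when new data falls far from every known cluster.

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster(features, radius):
    """Greedy one-pass clustering: keep one representative per cluster.
    Only the representatives (a small sample) go back to the cloud."""
    representatives = []
    for f in features:
        if all(euclidean(f, r) > radius for r in representatives):
            representatives.append(f)
    return representatives

def drift_score(baseline_reps, new_features, radius):
    """Fraction of new samples falling outside every known cluster.
    A high score suggests the deployed model may have drifted."""
    if not new_features:
        return 0.0
    outside = sum(
        1 for f in new_features
        if all(euclidean(f, r) > radius for r in baseline_reps)
    )
    return outside / len(new_features)

# Toy example: feature vectors from a stable process cluster tightly...
random.seed(0)
baseline = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(200)]
reps = cluster(baseline, radius=0.5)

# ...until a process shift moves the data away from known clusters.
shifted = [(random.gauss(3, 0.1), random.gauss(3, 0.1)) for _ in range(50)]
print(len(reps))                          # only a few representatives
print(drift_score(reps, baseline, 0.5))   # 0.0: no drift
print(drift_score(reps, shifted, 0.5))    # 1.0: drift detected
```

In a real deployment the feature vectors would come from the model’s embeddings rather than raw pixels, but the sampling and drift logic follow the same shape: decide what stays on the plant floor and what goes back for retraining.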

Daniel Newman: Yeah, Nick, no question there’s a ton of opportunities and challenges being presented, and federating certainly can help solve some of them. I think about the examples that you give, and we certainly have the opportunity now with all the data at our disposal to keep getting better, to keep getting sharper, and to improve all the different types of edges, the retail, the bread on the shelf, the next generation of shopping certainly, and of course factories of the future, the edge, the opportunity there significant. But the challenges because of distributed architecture are still, they’re palpable, they’re significant.

Nick Fuller: Absolutely.

Daniel Newman: And something that we expect and we’ll be watching as analysts for you and your team to continue to evolve and innovate upon. And Nick, we’ll look forward to having you back to talk more about all the things you’re doing at the edge, distributed cloud, and more as part of our future of computing story.

Nick Fuller: Awesome. Thank you, Dan.

Patrick Moorhead: Thanks Nick, appreciate it.

Nick Fuller: Thank you, Patrick. Appreciate it. Pleasure.

Patrick Moorhead: So I really liked that conversation with Nick. Not only did it show the power of the edge, but also some things to think about in practical terms if you’re looking at moving a lot of your applications and moving a lot of data around. Data’s going everywhere, there are some things that enterprises really have to think about.

Daniel Newman: Yeah, we’ve spent the last few years doing a lot of research, spending a lot of time discussing the edge and what this looks like, the rapid proliferation of data, the exponential volumes of data that companies, enterprises, and organizations can benefit from. And it also creates immense challenges. The more you move around, the more “edges” you create, that really requires more thoughtfulness in how you build out your architecture, create that common platform that we talked to Nick about, because that edge is only going to get bigger. While the data centers, at some point there’s only so much physical footprint, and by the way, maybe they even get smaller, the edges are going to be more prevalent, more voluminous, and that’s going to create a lot of challenges.

Patrick Moorhead: Yeah, I get the question a lot, Daniel, which is, what changed? We’ve had compute on the edge for 40 years. And the way that I like to explain it is that you have more machine data now. You have sensors that are 50 cents and you have cameras that are taking pictures of things going on, and not even necessarily for security, but things like inspection on an assembly line. “Is this part good?” And that massive amount of data that’s being captured can’t all be shipped up to the data center or the cloud to be worked on. It has to be done in a much more intelligent fashion, like you said.

Daniel Newman: And there are some organizations that probably wouldn’t mind if all that data went up to the cloud.

Patrick Moorhead: Exactly.

Daniel Newman: But it’s not sensible. And going back to the example you used with Nick about the loaf of bread, it’s not a sensor necessarily on the loaf of bread. It’s computer vision and it’s that computer vision taking snaps in real time over and over of a retail environment and everything that’s happening. And you need that algorithm to be able to process to say, “Hey, that loaf of bread went off the shelf. What does that mean for restocking? What does that mean for revenue turnover? What does that mean for our margins? What does that mean?” So this is both the opportunity and the challenge of the edge. But I love it, because basically it’s what also brings our world to life. It’ll be the creator of the metaverse. It’ll be the creator of the next generation of customer experiences. And of course it will be an opportunity for so many enterprises to do more and be more successful.

Patrick Moorhead: Yeah. So I feel like the key message here is, listen, you have an architecture for your on-prem data center. You have your architecture for cloud. You need an architecture for the distributed edge, and an architecture that ties all those together from a data perspective. So I think it’s a good way to end this here. Great topic, but hey, I am super excited because next up we have Jay Gambetta, VP of quantum computing at IBM Research, and he is going to close out our chapter on the future of computing and things that IBM is doing to lean in and lead and help its customers.

Daniel Newman: Yeah, it’s been great to see this full stack story start from the inner workings of research into semiconductors, move all the way through the cloud, the data center, the prem, we just got through the edge, and now we’re going to look at really the next wave of accelerator, which Jay will help us do. And it’s going to be a great way to wrap up our future of computing.

Patrick Moorhead: Let’s dive right in.

Jay, it’s great to see you again, and thank you so much for kicking off the quantum computing track at this year’s Six Five summit.

Jay Gambetta: Great.

Patrick Moorhead: Had a lot of people watching, it’s really exciting. But we’re here to talk quantum computing, but in the context of the big picture. We’ve been talking to a lot of your fellow compatriots about the future of computing, IBM’s full stack approach, but it’s time to talk quantum. You’re the last in this series of the future of computing, and let’s knock it out of the park here.

Jay Gambetta: Sounds great.

Daniel Newman: Yeah, Jay was a great guest at the summit. And so hopefully we can tie this together a little bit. It is a really big story, this whole full stack approach, and quantum is probably the one that people know the least about. And so I love maybe starting with that big macro view, Jay, of how IBM sees quantum fitting into its full future of compute and full stack view.

Jay Gambetta: Yeah, I think I would take it one step, I think the future of computing is not going to be computing without quantum. So if you think what quantum does, is it does some math that is really, really hard for classical conventional computers to do. So if we’re going to build a future of computing and it doesn’t have quantum, you haven’t got the future of computing. So it fundamentally has to be part of it.

Patrick Moorhead: We can just do the mic drop now, right?

Jay Gambetta: Exactly.

Patrick Moorhead: Interview’s done.

Daniel Newman: Well, I actually like something you said off camera though, because you said something about quantum computing and maybe using some other vernaculars there, because you’re alluding to that already about being part of the story and quantum accelerators and quantum. And I think that’s an important point to make early on in this conversation, is that part of what’s going to make it so important and such a big part of this future computing story is when people realize where it really fits in.

Jay Gambetta: Yeah, I think you’re exactly right. If you think of compute, it’s everywhere. You check your weather, you’re calling a compute. You do anything, it’s compute. Computing is in our lives, it’s everywhere. With quantum computing, and if I could get rid of the name computing, as we discussed offline, what quantum really does is it adds something new to computing. And that something new is something we’ve never been able to use. And there’s so many problems that we have trouble solving, be it business problems like optimization, chemistry problems simulating new materials, even going into some of the ideas in finance and math. There’s all these really, really hard problems, and we have new math. And when we can scale that math, that’s where it opens up a lot more business.

Patrick Moorhead: I think generally people understand that it is the next generation of computing. I really do appreciate the notion of quantum acceleration, maybe a QPU or something like that. It makes total sense, because when you look at the grand scheme of it, people understand accelerators and how to address them. So I like that a lot. I might take that and use it in the future.

Jay Gambetta: Please do.

Patrick Moorhead: So the other thing, people generally agree that this is so big that amazing things in the future that we’ve never even thought of can be solved. But I also get the question, “Hey, what can we do right now? What kind of tasks and applications can we do right now?”

Jay Gambetta: Yeah. So we’re in this really interesting time. So I would say we’ve been doing a lot of lab experiments and we put it on the cloud and we’ve got numbers that are really, really great of how many users are doing it. But most of them are still studying the noise in the devices. If you want to do something of business value, we’ve got to move beyond that. And so what I’m most excited about is, I agree that there’s this thing called error correction, everyone talks about it, and we’ve got ideas of error mitigation. But we’re charting a path where very soon we’ll be able to run these things we call quantum circuits faster than a classical computer can do it. And so we’re right at that tipping point of creating a tool that you cannot simulate with a classical computer.

So we’re doing that, we’ve charted that, and we’ve got technical roadmaps. But at the same time, when you create that tool, you got to start talking to the client. What problems map to that tool? And so what we actually see with all the clients we work with is, we are actually learning, they’re learning, we’re understanding their use cases, and we’re trying to understand how we can take that use case and map it to this new math that we know will have that tool.

So what they’re doing right now is exploring, but they’re exploring this new type of math that does things like change machine learning with different types of what we call kernels. Or it allows you to simulate quantum physics by emulating it with a quantum computer, rather than just using a big HPC computer. But it’s doing everything a different way with a different set of math. And so we’re at that point, I think in the next year, where you’re going to see this breaking out, and how you use this tool, I think, is going to be the exciting thing over the next few years.
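Jay’s point about quantum circuits that classical computers cannot simulate comes down to state size: an n-qubit state requires 2**n complex amplitudes, so the classical cost grows exponentially with qubit count. The toy statevector simulator below (a hypothetical sketch, not IBM’s software stack) makes that concrete by applying a Hadamard gate to each of three qubits, producing an equal superposition over all eight basis states.

```python
import math

def apply_hadamard(state, target):
    """Apply a Hadamard gate to one qubit of a statevector.
    The statevector has 2**n amplitudes for n qubits -- the reason
    classically simulating large circuits becomes intractable."""
    h = 1 / math.sqrt(2)
    new_state = state[:]
    for i in range(len(state)):
        if not (i >> target) & 1:       # index i has target bit = 0
            j = i | (1 << target)       # partner index with target bit = 1
            a, b = state[i], state[j]
            new_state[i] = h * (a + b)
            new_state[j] = h * (a - b)
    return new_state

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                          # start in |000>

# A Hadamard on every qubit spreads amplitude over all 2**n basis states.
for q in range(n):
    state = apply_hadamard(state, q)

print([round(amp, 3) for amp in state]) # eight amplitudes of 1/sqrt(8) ~ 0.354
```

At 3 qubits this is trivial; at 50 qubits the statevector alone would need petabytes of memory, which is the tipping point Jay describes where the quantum hardware becomes a tool you cannot reproduce classically.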

Patrick Moorhead: That is exciting. And we talked in the green room too, kind of the flag plant of, you have to give them the tools that are useful to get business advantage out of, and we talked about 2023.

Jay Gambetta: Yeah.

Patrick Moorhead: Gosh, I think you did six or seven announcements at IBM Think, I think it was Kookaburra that was the flag plant.

Jay Gambetta: Yeah, so Kookaburra was 2025.

Patrick Moorhead: Oh, excuse me. I was getting ahead of myself. In 2023, you’ve set the table so an enterprise could actually take your system and create something themselves. So maybe in 2025, they might see some value. And I’m just making this up on my own, adding two years. I can do this. I’m an industry analyst, I don’t actually have to do this.

Jay Gambetta: So, yeah. The one thing that’s important is, we’re thinking long term. So our roadmap goes beyond, right? We have Heron, which is a 133-qubit processor, and we have multiple of them, and we have Crossbill and Flamingo, then Kookaburra as you said. And so I imagine we’ll keep building these up into really, really big systems so we can do more and more with them.

What’s exciting about 2023 for me is, if we can cross that point of being able to do something we couldn’t do classically, and then how we map it to clients, and that’s going to start. I agree with you, it’s going to take a couple of years to turn something into real business value, but my hope is in 2023, it’s not physicists talking about the noise in these systems, it’s us trying to understand how business problems can run on them, and the noise is all handled in the software.

So I agree, in our roadmap we talked about going much further in the hardware, because we want to make it bigger. But we also talked about simplifying it and making it easier for people to use. And so we start to invent these things, like quantum serverless [inaudible 01:06:20] runtime, things that start to abstract away the noise, so physicists are not characterizing it, and you can start to actually use it. When you use a classical computer, you don’t worry about the voltages.

Patrick Moorhead: Exactly.

Jay Gambetta: You call a library, and that library does the math that classical computers or GPUs are really good at. So it’s an important inflection point, because that’s going to be a point where I think we can talk much differently. I mean, it’s not about how many papers do you see or how many people talk about error mitigation, error correction. Hopefully all that gets buried and it becomes, “How are we using it?”
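That library analogy can be sketched like this; the classical call below is the familiar pattern of a library dispatching to hardware-tuned math, and the quantum call in the comment is purely hypothetical, shown only to make the parallel visible:

```python
import numpy as np

# Classical analogy: calling a library that dispatches to hardware-tuned math.
# You never see the voltages or the cache hierarchy; you see a linear-algebra API.
a = np.random.default_rng(0).standard_normal((512, 512))
b = a @ a.T  # dispatched to an optimized BLAS kernel under the hood

# A hypothetical quantum runtime call could look the same from the caller's side
# (illustrative only, not a real API):
#
#   result = quantum_runtime.run(circuit, shots=4096)
#
# with calibration and error mitigation handled inside the runtime, the way
# BLAS handles SIMD and threading inside the matmul above.
print(b.shape)  # (512, 512)
```

The design point being made: users of a mature quantum stack should interact at the library level, with the noise handling buried the way hardware details are buried in a classical math library.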

Patrick Moorhead: Daniel, you might like that. [inaudible 01:06:57]

Daniel Newman: I’m pretty sure that the first year of quantum briefings I took were almost entirely about error mitigation, gates and how [inaudible 01:07:04] do you keep a qubit? You know.

Patrick Moorhead: Yeah, and I think most of the people in the room when I was doing it, it was like PhD, PhD, PhD. They got to me, not PhD. But no, I’m super excited. 2023 to me is the flag plant where, not that it all starts, but this notion of enterprises having the tool, not focusing on error correction, but maybe working on a security application, maybe working on something like that. So.

Daniel Newman: So classical computing and just computing, because we don’t normally call it that except when we talk about quantum, tends to be built with a vibrant ecosystem. You’ve got startups, you’ve got big players, you’ve got a lot of collaboration. Quantum’s kind of interesting.

So you’ve got this full stack story that IBM’s trying to tell. You’ve got this full stack quantum approach that you and your team are working on building. How do you balance trying to take the whole problem on, from the hardware, the software, and all the other abstractions that you mention, and at the same time create that vibrant ecosystem and be inviting? Because that’s what’s going to make this really practical, is when the right applications mapped to the right customers become readily available to be run on quantum circuits.

Jay Gambetta: I agree. I think the first part is, yes, I call it classical computing, quantum computing. We’ve got to start calling it computing. And when we get that quantum in it, I actually think when we say full stack, we’re talking pretty low in the stack. You can totally imagine a startup creating a library or a software application that calls computing.

So I envision us creating something very similar to accelerators, NVIDIA and things like that. There is software. It’s not just hardware that gets that accelerated to work. We have to create that software, because we know our hardware best, that gets that to work. But if we’re going to create this industry you’re talking about, that software’s got to connect to their software. It’s got to connect to data. It’s got to connect to other clouds. And all of that has to work together.

So we see ourselves creating, yes, some verticals all the way up, but really focusing on a compute layer that includes software and hardware interacting very much together. And I think this is what’s different about accelerators versus CPUs. Traditionally with CPUs, you build your hardware, and then someone builds the operating system. When you have an accelerator like a GPU, there is software there. Quantum’s going to be the same. You’ve got to have that software that gets the most out of that accelerator. That is how you build this full stack.

Daniel Newman: I love that analogy by the way. The GPU is such a better analogy than the CPU for quantum.

Patrick Moorhead: It is. And by the way, before that, there are hundreds of ASICs through history that have done the same thing, they just didn’t get enough of that play. But I think for understanding purposes, I love it. This is an awesome accelerator to do some cool stuff.

So I’ve been interacting with IBM probably since the mid-nineties, IBM semiconductor. And you’ve developed a lot of IP around semiconductors, but also around high performance computing. I was struck at IBM Think by many of your announcements. I’m thinking, “I think I’ve seen this before. I think I’ve seen something that’s similar to that.” And then, with a big company like IBM, I’m wondering, “Gosh, is a big company really changing the game, versus maybe a smaller company?”

And I’m wondering, is this an advantage for IBM, versus maybe a startup that doesn’t have a whole lot of IP in semiconductors and HPC?

Jay Gambetta: The short answer is yes.

Daniel Newman: Leading the witness.

Patrick Moorhead: I think I led the witness on that one, so yeah.

Jay Gambetta: But if you look at the details, the reason we’ve accelerated so much on the packaging and everything else is, we can leverage everything that we’ve done in semiconductors in the past. We’re taking semiconductor physics, superconducting materials and physics, microwave technology, and we’re merging that. And so that semiconductor history, we’re using it all the time, be it from bump bonds to through-substrate vias, to all the things that make traditional computing work really well, we’re leveraging and putting it [inaudible 01:11:43].

I think this is what gives us an advantage and is why I am confident, and we’re working so hard to win that sort of accelerator space. But I do think there will be startups that will come up with key IP in the stack that work with us, or work by calling those accelerators. But to compete in the accelerator space, it’s going to be hard to compete with the rich history of all the semiconductor knowledge and all the infrastructure that is needed to build these.

Patrick Moorhead: Well, and for years, IBM was king of the hill in HPC. How does HPC relate to this? I don’t want to put words in your mouth again, but I look at the scaling and things like that. How does that help?

Jay Gambetta: I think it comes back to, what is the future of computing? I’m putting this word out and starting to see if it sticks: quantum-centric supercomputing. And the idea there is, we’re going to think of our accelerators, but then we want our accelerators to work with HPC, or some more advanced general-purpose classical computing, to be able to do more.

So how do we actually start to make workflows that call an HPC and call a quantum accelerator, and how do we integrate that tightly? Can we learn from where classical has gone with serverless and these other technologies to do it? So I think the story of the future of computing is quantum, HPC, AI, all of these things converging.
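A hedged sketch of that quantum-centric supercomputing workflow, assuming nothing about real IBM APIs: a classical stage reduces the problem, a hypothetical accelerator handle runs the quantum sub-task, and a classical stage post-processes the counts.

```python
# Illustrative workflow only; QuantumAccelerator is a hypothetical stand-in,
# not a real IBM API. The shape of the loop is the point: classical compute
# brackets a quantum accelerator call, the way CPU code brackets a GPU kernel.

def classical_preprocess(problem):
    """Classical/HPC stage: reduce the problem to a small quantum sub-task."""
    return {"num_qubits": 2, "depth": 4, "params": problem}

class QuantumAccelerator:
    """Hypothetical accelerator handle; a real one would submit circuits."""
    def run(self, subtask, shots=1024):
        # Stand-in result: the measurement counts you'd get back from
        # hardware or a simulator for a Bell-state-like circuit.
        return {"00": shots // 2, "11": shots // 2}

def classical_postprocess(counts, shots):
    """Classical stage again: turn raw counts into an answer."""
    return counts.get("00", 0) / shots

accel = QuantumAccelerator()
task = classical_preprocess(problem=[0.1, 0.2])
counts = accel.run(task, shots=1024)
answer = classical_postprocess(counts, shots=1024)
print(answer)  # 0.5
```

Tight integration, in this picture, means minimizing the latency of that middle hand-off so the classical and quantum stages can iterate quickly.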

Daniel Newman: Yeah, there’s a really strong symbiotic relationship between what… you want us to now say classical computing and quantum. Stop. We’ll stop there. And I think it’s important for people to understand that. You’ve done a nice job here of doing some of the mapping, talking about how, A, the R&D and historic intellectual property development of IBM… Which by the way, I think often doesn’t get enough credit.

Patrick Moorhead: I think, not that patent counting is the ultimate way to view it, but IBM’s been top in the number of patents for, I don’t know, 30 years.

Daniel Newman: A lot of innovation, though, in transistor technologies and IP for semiconductors. But I’m also a guy that likes to talk to markets. I like to talk about practical business value, and you kind of started going down that path. But let’s fast forward a couple years ahead.

We talked about 2025, ’26. What are some of the things that you’re advising to the ecosystem of customers that are going to be adopting this? Financial services, healthcare, chemistry, and of course academia, but all the places that want to really put quantum to use. What does that next few years look like as they prepare themselves for a quantum future?

Jay Gambetta: Yeah. It’s one of the things that we’ve tried to do differently in our IBM Quantum team, is how do you create an offering where you can work with a client that is not research based? Traditionally, if it’s this type of technology, a lot of them start like, “Let’s get together and research on algorithms.” That we still do and is needed, but what’s more important to a lot of clients, they’re asking, “How does quantum fit into my future? What use cases will map to it? How will I be able to explain to my customers quantum, how will I be able to explain to my external stakeholders quantum?”

And so we’ve tried to develop a way. We actually brought people from IBM Consulting, and we made a small team inside IBM Quantum, which we mixed with a few researchers. And they’re doing exactly this with a lot of clients, and why it’s so important is you’ve got to answer all those questions. “How’s quantum going to matter for my business? How am I going to communicate that I’m using quantum to my clients? How am I going to get internal stakeholders understanding the value of this?”

And this is all a discussion and relationship going backwards and forwards. And at the same time, we are learning what use cases matter for these industries. And then our researchers, who are researching these algorithms, get a bit of guidance on what type of algorithms we should actually be determining the quantum circuits for, because we can start to connect those dots. So right now it’s about connecting dots. As we said, our goal is 2023 to have something that’s useful and keep scaling beyond, but it takes a while to connect dots. And I agree with you about ’25, ’26 being when it really starts to matter to business, but it’s connecting dots and understanding what the long term return is.

And so I think the future of quantum computing is going to support various different businesses. We’ve talked about the compute one. It’s going to be an accelerator as well. There will be companies that will have expertise in chemistry. Will they be able to use this compute to come up with a new catalyst, and then be able to use that compute once, but then use that catalyst in many different places? They’ve really got to get the expertise of how to use that compute. And so are the chemistry companies getting involved to work out the compute? But eventually they will want to provide some type of solution or create something.

And this goes all the way across finance. They may want to consume the compute to redo calculations, or they may want to do some logistics optimization, and so they may want to create solutions. So I think we’re going to see all this emerge, but right now it’s about, “How do I understand the value of quantum, and how can I map it to all my stakeholders?”

Patrick Moorhead: That was one of the most understandable, “What do I do next?” So first of all, thank you for that. Maybe it’s just because I have to hear it two or three times to fully understand it. That’s a possibility. But I think it’s a lot of your ability to put it into simpler words. And I think as part of what we need in quantum is maybe a simpler vernacular. And I think naturally we’ll get there. Again, ’25, ’26, I think I’m super excited about.

But Jay, I really appreciate the time. Once again, an incredible discussion about quantum. I learned a lot, which, I guess that’s a good thing, right? Or a good thing or a bad thing if I didn’t study the notes upfront well enough. But I just want to thank you very much for closing out our discussion of the future of computing and what IBM’s doing about it.

Jay Gambetta: Thank you very much.

Patrick Moorhead: Thanks. It’s great talking to Jay. I mean, my biggest takeaway was when things are going to happen. And I know it was on the slides, I know I got the briefings, but 2023 is when the technology’s going to be there for enterprises, not physicists, to start to create something, to provide business value. So we can extrapolate that out to ’25 or ’26. That was my biggest takeaway.

Daniel Newman: Yeah, that really resonated with me as we were sort of looking for that answer. I like to find a way to always create a thread between the technology and the business value. Otherwise it’s pretty nascent, it’s a science experiment, it’s for academia. But we know, as we saw the rise of AI, of HPC, of accelerators, we know that the goal, especially with a lot of things with complex math, is to be able to go faster.

And quantum is sort of the next, well, quantum leap in terms of being able to make workloads go faster and solve some really complex problems that historically classical computing has not been able to solve, or solve very quickly. And so that’s really exciting. And also, Pat, I thought it was a really nice way to sum up the whole future of computing discussion that we had here at IBM.

Patrick Moorhead: Sure, and it’s interesting. I do like the notion of a QPU, right? This is an accelerator, right? It’s not going to run on an operating system. Sure, you’re going to have APIs and low level things like that, but this is an accelerator similar to a GPU, which then when you come out and you dial out, it’s just another version of… It’s the biggest quantum leap in computing, but it’s heterogeneous computing, which I think we can all relate to, with GPUs as accelerators.

And then I take it a step further to how accelerators help the cloud and the cloud model, the quantum cloud model that IBM is creating, where you have an API and you can use it even if you’re not booting up an IBM server or something like that; you can get access to the IBM quantum accelerator and put it into your application. Maybe even an application that you’ve had for 10 years, that you just want to accelerate and make better.

Daniel Newman: Yeah. Well, the future of computing is tying all these components together. Right? It starts with the R&D, it starts with the stuff happening in the labs, building out the next technology innovation thinking five years and 10 years out. And then it’s a lot of the stuff you and I talk about every day. It’s the more practical stuff. It’s the data centers full of compute and GPUs, and then applications and data, and companies, businesses being able to do something meaningful with it.

And so what I really liked about this last conversation was how it tied together everything else. The future is all about getting more from our data. It’s getting more from our systems, it’s being able to solve the bigger, more complex business problems. And so hopefully everybody that joined us for these sessions really got that. They saw how all these threads start to tie together to create real, meaningful enterprise value through a full stack approach.

Patrick Moorhead: That’s right. Hybrid cloud, the edge, AI, semiconductor IP and technologies, and here we are with quantum. Great place to end. This is a great, fun day. I love Compute, and I know you do too.

Daniel Newman: Absolutely. So thanks everybody for tuning in, we really appreciate you joining The Six Five on the road here at IBM.

About the Author

Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise. Read Full Bio