
The Main Scoop, Episode 10: How to Scale AI as a Transformational Business Technology

On this episode of The Main Scoop, Daniel Newman, CEO of The Futurum Group, and Joe Doria, CMO of Broadcom Mainframe Software, talk with Tarun Chopra, Vice President of IBM Cloud and Data at IBM, about how the rise of generative AI, driven by ChatGPT, DALL-E, and others, has reimagined how organizations unlock new value. But like any new technology, organizations must be able to scale it to meet core business needs. Learn how to get the most value from these models across your business.

It was a great conversation and one you don’t want to miss. Like what you’ve heard? Check out Episode One, Episode Two, Episode Three, Episode Four, Episode Five, Episode Six, Episode Seven, Episode Eight, and Episode Nine of The Main Scoop, and be sure to subscribe so you never miss an episode of The Main Scoop series.


Disclaimer: The Main Scoop Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Daniel Newman: Welcome back to the Main Scoop. I’m your host, Daniel Newman, and I’m joined today by … Wait, where’s Lotko? Oh, my gosh. Joe Doria. How are you, my friend? Welcome to the Scoop.

Joe Doria, Jr.: Thank you. It’s great to be here on the Scoop. It’s great to be here today and with you.

Daniel Newman: We’re live here in New York City, sitting outside of Times Square. Beautiful billboard in the background, if anyone sees it. I’m pretty sure Lotko was on the billboard, and that’s why he is not sitting here next to me. Is that what happened?

Joe Doria, Jr.: I think there might be more to it than that, but he is definitely not here, and I am, so I’m looking forward to-

Daniel Newman: He got warped into the machine.

Joe Doria, Jr.: He got warped into the machine.

Daniel Newman: And now we’ve got Joe, you and I, but you and I have shared a stage before. We haven’t shared a camera. Well, there might have been a camera there, but where were we, in Madrid together?

Joe Doria, Jr.: We were in Madrid. I remember there was a live audience numbering 400, 500 people, and I was actually ahead of you, I didn’t know you then, but you came on and you killed it. And that’s probably why I’m here right now doing the Main Scoop.

Daniel Newman: That’s probably why you’re here right now doing the Main Scoop. Well, it’s really fun to have you here. We will miss Greg, of course, but Joe, I’ve seen you before. You’re no slouch. We’re going to talk some AI today.

Joe Doria, Jr.: No slouch. Is that a compliment?

Daniel Newman: I’m just trying to warm you up, buddy. Yeah, we’re going to talk about some AI today on the Main Scoop. This has been pretty pervasive. It’s timely. AI’s been around us for a long time. Anybody that’s used Siri on their phone understands the natural language and understands some of the ways that we’re getting engaged, and that’s just an example. And, of course, if you’re running an enterprise, we’ve seen how analytics are being used to drive decisions up and down. But I think in recent times, OpenAI, ChatGPT, these things have gotten hot. Have you played with them at all?

Joe Doria, Jr.: Yes. I had my first experience over the last couple of months with ChatGPT, like probably a few million other folks, and I could not believe it. My first prompt was “Write a limerick for cloud and mainframe.” And I don’t know, three seconds later, it came back to me, and I looked at it, and I was like, “Wow, that’s amazing quality.” It was actually stunning for me. I’m somebody who does write a lot, I’m a marketing guy, and I love limerick writing for family and friends because there’s a creative dimension and there’s an analytical dimension to solve for the lines and the syllables and the words. So it’s complicated in a way. It takes me some time to do good quality ones. ChatGPT pumped that out, and I started running a bunch more prompts in there and seeing what it could do. It tells me that AI is really on some kind of steep acceleration.

Daniel Newman: Well, every company on the planet right now that’s either in tech or that uses tech is looking at the ways that AI can shape the future of their business, like I said, whether it’s on the customer experience side of how a customer interacts with the brand, or whether it’s technological, asking how do we do more with all our data. You hear all kinds of things: data’s the new oil, data is growing exponentially. These are true, but the real thing is that most people who are trying to solve the business problem just want to understand, “Okay, I have all this data. How do I put it to use?”

And so, in the case of the ChatGPTs, these are things, I like to say, that democratize and popularize something like AI. But in the background, enterprises have been doing really important work to try to get their data in the right organization to start to utilize it at scale. And companies like yours at Broadcom, another one, IBM, which we’ll be talking to today, this is something that’s become really part of their everyday focus of how they’re going to deliver value to their customers. So let’s talk AI, but let’s bring in a guest. What do you think?

Joe Doria, Jr.: I think that would be a great idea, and we have Mr. Tarun Chopra here from IBM. Welcome, Tarun, to the show.

Tarun Chopra: Thank you. Thank you, guys. Thank you so much for having me here.

Joe Doria, Jr.: And for the audience, I just want to say Tarun and I have intersected in our paths in the mainframe space. I know you’ve moved well beyond that into data and AI for IBM. What’s your take on where AI is at today?

Tarun Chopra: Well, first of all, I was just listening to you guys. It was a fascinating intro. So for me, the ChatGPT discussion, Dan, that you started with is good for us because, at IBM, as you mentioned, we’ve been talking about AI with enterprises and working with them. For example, we have 100,000 customers and 40,000 AI engagements that we do with customers over the year. ChatGPT, to me, brought that into the limelight, into the normal nomenclature, and it has created more interest in the field.

So to me, as you guys talked about, we are already working with clients all over the globe. So to me, it’s not a question of whether AI is in the enterprise. It’s really customers working toward how to scale it. We have already seen many, many examples across industries. For example, in the insurance industry, we are working with clients who are using real-time AI to price policies. In the healthcare sector, from the creation of new medicines to medical imaging and those kinds of things. In the financial sector, from real-time fraud to pricing assets. So from an enterprise’s perspective, we see the applicability of AI cutting across the globe, cutting across industries. And I think ChatGPT just brought this into the mainline. So I love it. I love it because we get asked more and more questions.

Joe Doria, Jr.: Let me ask you a question. I’m obviously a marketing guy for Broadcom software, so I care a lot about client experience. I’m curious what your thoughts are, and maybe we can tie in a little bit of that mainframe world with respect to all the data. Some would say there is a trove of business data whose value is yet to be unlocked. How could AI play into that with respect to customer experience, changing customer experiences in a way that actually distinguishes the folks who are listening to us?

Tarun Chopra: It truly is, because of what customers are looking for. Again, with everything, even from an AI perspective, customers are looking at how they can solve their clients’ pain points and business problems. I’ll give you a real example with CVS. During the COVID timeframe, their call volume got so high that they didn’t have the manpower to answer all the calls from a vaccination perspective. So at IBM, we helped them create conversational AI models that took on almost 60% to 70% of their call volume during those times and helped schedule vaccinations.

Conversational AI is a big topic in front of our clients, and ChatGPT made it a little bit more human and put it in front of the general public, but that conversational AI topic has been front and center with our customers. We work with huge banks, even on their compliance regulations and streamlining that process, because there is so much information out there. Our customers have thousands of auditors and regulators sitting in the banks and in the financial and insurance industries, but they just can’t handle all that volume themselves.

So leveraging AI to streamline that process and quickly come to conclusions, both from a regulatory and an audit perspective, saves our clients billions of dollars, but also provides that experience that you talk about, quick resolution from a customer endpoint perspective. So AI is a transformational technology from that perspective, and our clients are leveraging it to really solve their main business pain points.

Daniel Newman: You brought up some good points, and I think ChatGPT and this OpenAI thing has become a bit of a backdrop, but really it’s more symbolic of where we’re at. For the better part of a decade now, Netflix has been using sophisticated filtration of their data recommender engines to create that experience that we all know on Netflix now, where it’s, “Hey, I’m getting the right content sent and put in front of my eyes.” When you go to Amazon, and you go shopping, there’s a ton of AI and ML being powered there to make sure that the thing you want to buy is landing on the right part of the page. All that data and all that activity, it’s a circular output of better, more targeted content, better, more targeted shopping experiences, and these are the real AI-powered experiences that have been going on for some time.

And so I love the example of CVS, and I think a lot of it also is about creating better, more human-like experiences. So empathy’s kind of a gap, empathy and being able to look forward are two of the gaps I still think are there. We don’t mind a chatbot, but we want a chatbot that feels more human. And so we don’t want chatbots that feel like, “Oh, we can answer the same three questions.” That’s a little different. But maybe this leads me to the question I’d like to ask you, Tarun, how do you scale it? So everything you mentioned is like a, “Oh, my gosh. CVS came to me. We’re in a panic. What can we do?” Working with these large enterprises, what are the challenges to actually scaling AI to address these needs?

Tarun Chopra: That’s a great question, and to me, it comes down to two fundamental things. One is access, and the second is trust. You talked a little bit about empathy, and I will categorize trust in that segment as well. First thing is, there’s no AI without data, and our customers are still struggling with it. Believe it or not, if you ask any CDO or any business owner, “Are you satisfied with the quality of data that you have? Are you satisfied that you’re getting all the information out of your data?” I will say 9 out of 10 times, the answer you get is no. I personally feel, and at IBM we feel, that the demand for data has far outpaced how we are managing these infrastructures and complexities.

So the number one thing is access to the data. How can you give them access and the right governance? Policy-based governance is super, super critical. And the mainframe, which you brought up, Joe: how do you bring some of your transactional data, that super important data, into the mainstream, but in the right way, to make sure you get the information with the right quality and the right process in place? So access is number one, Dan. That is the number one fundamental problem in terms of scaling AI.

And the second is trust. ChatGPT is good. You might want to ask some random questions and get information out of it. It really has democratized AI, as you said, Dan. But from an enterprise’s and a customer’s perspective, you can’t give wrong answers. You can’t give answers that are biased. If you talk to the C-suite, one out of four execs now will say that AI governance is a board topic. You have to show the board the models that you have built. Are they appropriate and fair? Are they free of bias? Are they ethical? All those kinds of questions.

So a lot of these enterprises are looking at ways on … Look, I build one model, two models, but once you have hundreds of models in your enterprise, how do you make sure that they are doing what they’re supposed to be doing, so that when you open up the black box, you can show that this is the right thing I’m supposed to be doing? So, to me, helping our clients put AI governance around it is a huge topic for us. So, one is around access to the data and concepts like data fabric or data mesh, and the second is around AI governance: how we make sure that when, as a consumer, you are getting the experience from that innovation, you can trust it, you can make sure this is the right thing. To me, solving those fundamental problems is the way to scale AI that our enterprises are looking at.

Daniel Newman: Tarun brings up a good point, Joe, and I’d love for us to take this to the mainframe a bit, because it is the Main Scoop. But one of the things you were talking about, with some of these early tools that we’re all familiar with, and we’ve referenced one of them a lot, but we can talk about any of them, is that what is going to differentiate AI in the future for a lot of companies is the data that’s not publicly available. Now, we’ve got a whole bunch of use and license issues with things like ChatGPT. When you start creating content with other people’s content, there are problems with that that we need to solve, to your point about ethics.

But the other problem is that once it’s easily available to everybody, then it’s no longer a competitive advantage. So what I’m saying is if anybody can go in and ask the question and get the output, it’s not an advantage. Now, what makes an Amazon, a Netflix, or a CVS is that they have an immense amount of data, and that data has to live somewhere: unique data, unique customer insights, unique supply chain data, unique transactional information over history. When they feed that into a system that also has the widely and publicly available data, they can start to create unique advantages, models, algorithms, and that’s really powerful. And by the way, where does all that data live, Joe?

Joe Doria, Jr.: It all lives on the mainframe.

Daniel Newman: I think a lot of it does.

Joe Doria, Jr.: The business production data does.

Daniel Newman: A lot of it does.

Joe Doria, Jr.: We’re seeing use cases, too, with sentiment data that comes from social media channels, for example, so you can look at consumer behaviors and reactions and have that real-time capability inside of these AI models. I think another thing to talk about here, since you mentioned taking it back to the mainframe, is the infrastructures of the enterprises that are out there and running those infrastructures optimally: looking at how AI and machine learning can look back at data, see what’s happening, know what the green highway of performance is and operate in a performant way, versus hitting spikes or anomalies that you didn’t expect to see, and dealing with them proactively.

Daniel Newman: So maybe the question to ask Tarun is about all this infrastructure, all this mainframe where all this data lives; it needs to be unlocked to be part of this AI story. It’s not just cloud data and edge data. It also needs to be the data on the mainframe. So what are your thoughts on the approach to unlocking all that mainframe data as part of an AI story?

Tarun Chopra: Now, look, in my career, short career, I have the privilege to work both on mainframe and data and AI, and one of the things we’ve been doing in the mainframe is making sure it’s open. That’s the fundamental thing we are doing over the last, I will say, 10 years or so.

Joe Doria, Jr.: Yeah, we’re on that same exact strategy.

Tarun Chopra: Making sure customers have access to the data that they want in the right, governed, policy-oriented ways. So, if you look at our software stack on the mainframe, as an example, before I get into the infrastructure: make sure customers can easily access their Db2, IMS, VSAM, and all their data in the right way without having to move the data. A lot of people think you have to really move the data around to all these different repositories to get the value of the data, but technology has advanced so much over the last five or six years that you don’t have to move data nowadays to get the value out of it. So first of all, from a software ecosystem perspective, we have done that, both within IBM and with Broadcom and other ecosystem players, to make sure you can access the data.

The second thing is, as we mentioned, infrastructure. People think software runs magically, but you need the hardware behind the covers to make it work. If you look at our recent IBM z16 announcement and the Telum processor that we announced, we purposefully built AI inferencing right into the chip itself, because for our customers to do real-time analytics on their core mission-critical transactions, you just can’t move the data far away and then do the transactions and the analytics in the same millisecond or microsecond.

So doing the training somewhere with the data that you have, mixing in the mainframe data from the platform itself, and then bringing those models back onto the mainframe to do the real-time transaction is a great example of how we are unlocking value for our clients. But then, as I talked about, bringing the tooling on top of it, the right governance tools and the right access tools, to bring in the whole enterprise ecosystem: to me, that’s where we and our partners are really moving to bring value to our clients.

Daniel Newman: Well, it’s good that he told him about Telum. I wanted him to tell them about Telum, but now he’s told them, so he told you. All right, so bringing this all home, we always like to take a look at the crystal ball, Tarun. Where do you see all of this heading? Where does AI intersect with the mainframe, the data, and the future, and how do all these things come together?

Tarun Chopra: Yeah, I think you touched on that, Dan, in the beginning. I don’t think AI is going to be some separate magic. It is going to be infused into our processes. So let’s not wait for some separate tool or separate software to come in and say, “Oh, this is what AI is.” To me, it is going to be infused into our existing processes and existing ecosystems. It’s not going to be some separate thing. That’s point one. Second, as I talked about, as we look to scale AI in enterprises, the questions of data access and AI governance are going to be front and center, because we will have to explain what we are doing to auditors, regulators, and government entities, above and beyond.

And then finally, bringing it back to the mainframe. Look, as we all know, most of the transactional data sits on the mainframe. So how can enterprises build architectures around that, so the mainframe is part of that ecosystem in the right, open way, making sure it’s not a closed-loop system where the mainframe is over here and then you’re doing something else over there? I think the customers that are succeeding in the marketplace, the customers that are winning the game, are really making sure the mainframe is a central part of the policy around AI they are building, because once they do that, then, as you said, the precious data that only they control resides on the mainframe. So bringing that data and then merging it with the data that is available to everybody gives the unique insights that those customers can drive, that only they can drive. So building their architecture in such a way that the mainframe and the other technologies play cohesively, to me, that’s the winning strategy that a lot of our customers are working toward or adopting in the marketplace.

Joe Doria, Jr.: Let me just build on that and say, for me, the crystal ball is about how fast things are going to be coming at us in terms of new use cases that we can’t even necessarily conceive that’ll come to the fore, and there’s a lot of opportunity around that for businesses, if you can be on top of that.

Tarun Chopra: And I will say, Joe, like you said, just for the audience that is listening, people think, “Oh, this is some new territory.” I just want to leave the audience with this: our customers are already doing it. So if you’re not doing it, that’s a different discussion, but our customers are already doing it and leveraging it.

Daniel Newman: But I think the point you’re both getting at is the architecture has to be flexible to be able to endure these new use cases as they come up. You can’t be in a situation where, like you said, “Oh, my data’s here, and it’s hard to get it there.” And by the way, even if it’s possible, you have all kinds of things like security to deal with. The data that exists on the mainframe tends to be extremely proprietary, confidential, security-rich data. So we’ve seen services like Hyper Protect being developed to make sure you can actually get the data from the mainframe to the cloud in a safe and secure way so it can be utilized. Because if AI has to use it in a segmented way, you get less value, meaning if all the data can’t be used to build the models and derive the algorithms, derive the outputs, it gets harder. Tarun, it was great having you here on the Main Scoop.

Tarun Chopra: Yeah, thank you so much. It was fun.

Daniel Newman: It was a lot of fun.

Joe Doria, Jr.: It was.

Daniel Newman: Joe, thanks for filling in for Greg. We do love having you here. It’s always good to bring IBM on the show, and of course, it’s always fun to do the Main Scoop live in person. We are here in beautiful Times Square in Manhattan, New York City, overlooking the beautiful skyline of New York. Thanks, everybody, for tuning in. We can’t wait to see you again soon.

 

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-5 globally ranked industry analyst, and his ideas are regularly cited or shared in appearances on CNBC, Bloomberg, the Wall Street Journal, and hundreds of other outlets around the world.

Daniel is a 7x best-selling author; his most recent book is “Human/Machine.” He is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

