We are Live! Talking IBM, Slack, Google, Tableau, GlobalFoundries, and Luminar

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The six handpicked topics for this week are:

  1. IBM Think
  2. Slack GPT
  3. Google I/O
  4. Tableau GPT
  5. GlobalFoundries Earnings
  6. Luminar Earnings

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Webcast so you never miss an episode.


Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: Hi, this is Pat Moorhead. We are back for another episode of the Six Five Podcast. And we are both back in Austin, spent the week in Orlando, two-timing events. It was great. We’re glad to be here. Daniel, how you doing, man?

Daniel Newman: Good morning. Hey, don’t forget we went to New York.

Patrick Moorhead: Oh gosh, I’m sorry. So many cities. So little time.

Daniel Newman: I’m just saying, I know New York’s a little town. But I mean, don’t forget we did that.

Patrick Moorhead: We did. We did shoot one of our day openers for the Six Five Summit in New York. It was beautiful, had a background view of the Empire State Building. Stayed in a nice – we were in New York, I think a total of 13 hours.

Daniel Newman: It was not long. It was a came, we saw, we conquered kind of thing. And it went well though. And it actually created a pretty great photo op, off the wing where we saw, Pat, the sun, the moon, the stars. Maybe your best tweet of the week.

Patrick Moorhead: Yeah, I appreciate that. The best tweets of the week have nothing to do with the intelligence that actually goes into them, as we had discussed.

Daniel Newman: Picture of your dog.

Patrick Moorhead: Yes.

Daniel Newman: Picture of your dog, dude. Not to go on a bit of a tangent, but-

Patrick Moorhead: But you’re going to go on a tangent.

Daniel Newman: But I mean, look, is there anything more frustrating than when you really put some thought together, you put some great tweets out, maybe about Google I/O, or you’re really doing some thoughtful analysis and assessment of the state of generative AI and you… bullet one, bullet two, bullet three, great analysis. All into that character limit and it’s like three likes. And then the next picture is like, “Here’s me and my bestie sitting awkwardly in chairs,” 75 likes.

Patrick Moorhead: Yeah. “Here’s me hitting myself with a hammer after realizing I’m going to be working on vacation.”

Daniel Newman: Don’t. Just tweet about-

Patrick Moorhead: 115,000 likes.

Daniel Newman: Yeah, just tweet about Elon’s new CEO. That’s fun.

Patrick Moorhead: Yeah, wouldn’t it be great to hire a CEO, Dan?

Daniel Newman: God, it’d be… I don’t want to put anything on the record right now.

Patrick Moorhead: All right. Hey, we’re getting so off track here. We’re sorry. But hey, if it’s not your first time at the Six Five Podcast, you’re familiar with this. If it is your first time though, we’ve got to wonder kind of what’s wrong with you. We do cover six topics, a little bit of news, get some context, but we’re all about hot takes and analysis. If you want the longer take, go on to our corresponding websites, moorinsightsstrategy.com and the Futurum Group, where you can pick one of 14 different companies to get your sauce from. We’re going to talk about publicly traded companies too, but don’t take anything that we utter as advice that we’re inferencing.

Daniel Newman: Inferencing.

Patrick Moorhead: Yeah, I like it better than inferring.

Daniel Newman: Are we training?

Patrick Moorhead: So much more fun.

Daniel Newman: Are we training?

Patrick Moorhead: Or that we’re training. And if you looked at my portfolio, you would do nothing that you might infer from anything that I say. So no, we had a great week. There’s so much stuff to talk about. We’re talking about IBM Think, Slack GPT, the Google I/O event, Tableau GPT, GlobalFoundries and Luminar earnings. Dan, let’s jump in, I’m going to call my own fricking number on IBM Think. So many announcements, so much to talk about, and I’m going to do it in three minutes. So a little bit of backstory. When Arvind came in, really, the thought of the day, or the leadership statement, was that we’re going to lead in hybrid cloud and AI; and I totally got the cloud part because, quite frankly, the company had bought Red Hat, right? It makes total sense.

They have a hybrid multi-cloud stack that connects with the AWSs, the GCPs… But I was always wondering about the AI, was that a bolt-on? I got quantum, but AI was like, “Oh, are you talking about the old Watson you brought out a decade ago?” I don’t get it. Well, this event was all about the AI part. It was really the coming out party for generative AI and foundational models. Not that the company hadn’t talked about it before with foundational models. Not that the company hadn’t actually opened up an entire data center that was optimized for AI, but this is really what it was all about. And if I step back, some of the big messages for me were that IBM offers a full stack for AI to clients, all the way from the applications at the top, all the way through to the actual silicon itself and then everything in between, and then wrapped around services that can help clients, if IBM wants to lead them to the water, do that as well.

So on the whole, some of the more important elements of their strategy, aside from obviously delivering real value to clients and their clients’ clients, is that it’s multi-cloud, multi-model and open-model. And those are the three characteristics. And I’m getting to the… I still have to do a lot of work on really understanding why this needs to be hybrid, because there are companies like AWS who have the end-to-end. Now, AWS is not putting Outpost capabilities on prem yet. So I think you might argue that that’s not possible. So the company came out with watsonx, which is foundational models, generative AI, has a studio, a data store, and a governance toolkit. Watson infusion came in as well for code, AIOps, digital labor, security and sustainability, which are the key areas, like the partnership with Hugging Face. Who isn’t partnering with Hugging Face? I think everybody’s partnering with Hugging Face.

Daniel Newman: Hugging Face, what a dumb name. Did I just say that?

Patrick Moorhead: You can’t forget it. I had a group chat somewhere along the line where someone said that to me too. I was in a chat and they’re like, “So-and-so wants an introduction to so-and-so at Hugging Face.” And then the next person was like, “God, what a dumb name.” But at the same time, maybe that’s it. You can’t forget it.

Daniel Newman: Maybe both you and I are too old to appreciate this. Can you imagine? Maybe that’s something in your 20s or something in your 30s. I mean, I get what it is. Hugging Face, emoji, yada yada. But anyways, I think it’s a dumb name.

Patrick Moorhead: So back to IBM Think. So in the future, here’s what I’m going to be looking for, here’s what I’m going to be analyzing with this. First of all, speed. Okay, I would say that on the whole, IBM doesn’t stand for speed, it’s about safety and trust. And can IBM amp up the speed of this to move forward? Because that is important. But I do think that IBM won’t cross the line on trust. We did hear about the parallelization between research and products when we were talking to Dario and Kareem, thought that was interesting. The second thing I’m going to be looking for is how does it actually operate with the data layer?

You have watsonx.ai and watsonx.data, and my question is, are they – I never saw one AWS, GCP or Azure logo that actually showed that this solution was multi-cloud. I know I heard the word multi-cloud, but I just haven’t seen that yet. In the past, IBM has been a little bit afraid to show any of those logos. And then I saw them starting to be infused in. I didn’t see a single one of those multi-cloud logos at the show, aside from, obviously, sponsorships, but not in slideware that made it absolutely simple that this company is supporting multi-cloud. But again, a lot of research to do, good time. We spent three and a half days there, good show.

Daniel Newman: It was a big moment, Pat, for IBM. I’ve been writing endlessly about the kind of convergence of enterprise AI. And over the last two years, I think there’s been a lot of interest in AI in general, Pat. And generative AI has actually been something that we’ve been experiencing for some time. I don’t think a lot of people realize that. But when Google’s finishing your sentences in Google Workspace, that is a version of generative AI. The ability for multi-turn conversation that we’ve been having with Amazon, with our devices, it’s still in its early days. But we’ve been seeing large language models being deployed, whether it’s Jarvis from NVIDIA, that model that they developed some time ago. But the truth is that a lot of the real value of generative AI is unlocked in enterprise data. The enterprise data that we hold, that’s in our systems of record, that’s in CRM, ERP and our HCM solutions, within our supply chains, ambient data that exists within our ecosystems, it’s video data, it’s customer interaction data, CX data.

Companies want to build workflows. And with this onset of generative AI that’s taken place in the last six months, companies want to build sophisticated proprietary generative AI capabilities where they can add value to products and services that they’re bringing to market as well as deliver better customer and employee experiences within their organizations. IBM is basically standing up and saying, “We want to be to enterprise AI what ChatGPT or Google Bard or any of these sorts of large language models, Facebook’s Llama, has been to consumer AI.” And I say consumer, I just mean user interactions with the open internet, okay? That’s basically what has popularized generative AI. It’s the user interaction with search. It’s the user interaction with a chatbot that feels very human, or a little human, depending on which one you’re working with.

The bottom line is this, is that companies have to have the data to train, they have to have the system and fabric to build this on. They have to have the applications to deploy this. And they need some consulting in order to actually figure out how to build these workflows out. It’s not as easy as… I know we’ve heard things about speech to code and yes this is happening. The role of the developer in the future is going to change. The role of the data scientists in the future is going to change because these are models that have the capability to be continually reinforced, learned. And with a company like IBM offering their foundation models, which is basically validated models for different kinds of things, digital labor, IT observability, they have the potential to basically say you can plug right into this and then put a layer of your own proprietary data on top of this. A much smaller subset of data.

It can cost millions of dollars to train a large language model. So you can take a data set that might be 10% of the size of a traditional data set required to train a large language model. And then you can deploy it on IBM watsonx and you could therefore implement meaningful generative AI capabilities into your business at a much lower cost, with the architectural support of a hybrid fabric that is Red Hat. And then you can take that all the way to utilization. So that’s both my assessment and question mark. The assessment is IBM offers the toolbox. It’s literally the toolbox, it’s the actual data layer and fabric. And then interestingly, Pat, you didn’t mention this much, but the governance, and the governance is really an important thing because we’re deploying this so fast. We went from zero to a million in six months and we actually don’t have very good policy frameworks and regulation around how we’re going to allow this to continue to proliferate into society.

So that’s super interesting. The watsonx governance piece is going to come later this year. This is not an entire framework for governing all of AI, but it’s kind of within the work you’re going to do with watsonx. It’s, hey, how do we make sure our model doesn’t drift and change and become something we don’t want it to be as new data is introduced to it? Governance is going to be able to help with that. So lots going on, Pat. And I think you hit the big – this is the home run comment you made and I don’t like agreeing with you because I want everyone to think I’m smarter. But in all seriousness, it’s speed.

Patrick Moorhead: But you know you still do.

Daniel Newman: It’s speed. Speed is the question mark. I think we both have, we were in many executive meetings, one-on-one conversations with the… And I kind of just kept saying, how fast can you get traction? The hyperscale cloud providers are the biggest threat. And while IBM certainly has partnerships with all of them, you can absolutely be certain that Microsoft is not going to limit its stuff to consumer or search. They’re already embedding it into plenty of applications. They’re going to make it OpenAI trainable, you can train right on top of it with your own data. How does this compare to maybe using a foundation model plus your own smaller data set, which one develops and delivers better outcomes? The other thing is you’re going to see tons of stacking, and with Auto-GPT, you’re going to see models stacked. You’re going to use Watson plus OpenAI plus Bard plus Llama, because you’re going to take the best, just like we’ve seen with cloud, Pat.

And just like we’ve seen, you’re going to take the best of all these different models and you’re going to start gluing them together and you’re going to start inferencing against more and more and more, which is going to drive tons of compute, tons of interest and tons of excitement. So listen, I’m stoked. Let’s go Gen AI. Pat, in two weeks, three weeks, three years, maybe you and I can be having the Pat & Dan Show while we’re still in bed getting some sleep, some rest in by the poolside. That’s what I’m hoping for because boy, we are moving quick.

Patrick Moorhead: That would be great. Based on some technologies that we’re going to talk about Google a little bit later, there would be a sign that says machine-generated. And I wonder if that’s what people want or do they want the real Pat and Dan, we will see. Because I’m looking forward to trying out these technologies.

Daniel Newman: Hey, Pat.

Patrick Moorhead: Yes.

Daniel Newman: Oh no, I was saying we should talk more about generative AI on this show.

Patrick Moorhead: I know, we really should. Hey, one thing I wanted to sneak in here, I mean, it was all about AI at IBM Think, but they did bring out a pretty incredible set of quantum safe technologies. You may or may not be aware, but there are bad actors who are harvesting data now to be used in the future when they can apply crypto-breaking technology to get at that data. And IBM brought out a full array of quantum safe technologies: Quantum Safe Explorer, which looks at source code and object code; Quantum Safe Advisor, which is essentially a view of cryptographic inventory; and Quantum Safe Remediator. And this, by the way, is to add to the quantum service that they have that leverages quantum safe cryptography, which IBM customers can use today. And I’m not pumping this just because I was in the press release or anything, but you can also read a lot about it in Paul Smith-Goodson’s Forbes article. Let’s move on and talk about more generative AI. This time, Slack GPT from our friends at Salesforce. What’s happening here?

Daniel Newman: I think we are legitimately going to talk about generative AI for our first four straight topics and then we’re going to talk about more AI when we talk about some semiconductor companies. So can we just change our titles to AI analyst, because I’m pretty sure that’s the next thing.

Patrick Moorhead: I think it’s Six Five GPT.

Daniel Newman: You know sometimes a company comes early, and you know how they say being early and being wrong are the same thing. I still remember, one of my favorite movies is The Big Short, and I love the part where they’re arguing about his short on the housing market in 2007 and he’s like, “I may be early, but I’m not wrong.” And then the guy’s like, “It’s the same thing.” It’s the same thing because when you’re trading… Well, this is the condition that is Salesforce. I think it was what, three or four years ago now that Marc Benioff really came to the market with Einstein. And the idea was basically that you would have this chat-type assistant, and he actually made some pretty ambitious statements in those early days that you were going to basically have Einstein in the boardroom. That you were going to have this AI that’s going to be in the room, that you’re going to be asking questions, that you’re going to be making business decisions.

And it’s not going to be just your board members, it’s going to be your board members plus your machine and you’re going to talk to the machine. You’re going to say, “Hey, do we need to cut staff by 10% in order to meet our cash flow requirements that we’re committing in this quarter’s earnings report?” And you’re going to have some AI assistant that’s going to do that. Well, it didn’t happen very quickly and there’s been a lot of skepticism about Salesforce. But Salesforce has leaned into a partnership with OpenAI, which by the way, if you ever want to know if someone’s partnering with OpenAI versus maybe a different transformer model, just if they use GPT, that almost certainly means they’re training and building on OpenAI, just FYI for those of you that are out there. But anyways, long and short is for three or four years people just said, “What the heck happened to Salesforce?”

They did this thing and they fell kind of flat, Pat. And now though, the era of generative AI has brought this back to the surface. So what you’re seeing now is basically across the whole portfolio. A few weeks back you saw Salesforce GPT, now you’re seeing Slack GPT, and in a little while we’ll talk about Tableau GPT. And what the company is basically doing is they’re adding generative AI to help create automated processes in Slack, code free, using a workflow builder or a no-code automation tool. You hear what I said earlier about developers, Pat, and their role changing in the future? I just want to reiterate that I’m right. It’s very important to me that we get that on the record.

Patrick Moorhead: Listen, the victory lap is important because we’re not always right. We’re certainly not going to bring that up, but…

Daniel Newman: We never actually go back and talk about when we’re not right. Thankfully, it’s so rare.

Patrick Moorhead: Actually, I do on things that really aren’t important that nobody cares about anymore.

Daniel Newman: Like sometimes at home it’s like, “You’re right, honey, I did buy decaf.” Anyway, but they give some really great examples in this, but think about sales and lead workflows. And by the way, this is very similar to some of the stuff we’ve seen with Dynamics, with what Microsoft is doing. So the comparative now is that we’re starting to see the unwinding of early competitive advantages where you were like, “Oh my God, we could take a Teams meeting and then we could take the transcript of the Teams meeting and that would automatically populate something into Dynamics and then it would create a proposal and then it would…” So now you’re seeing that basically you could have an alert that a new lead came into Sales Cloud… your GPT can then take a workflow from that sales lead, draft a personalized prospecting message, and put it in a document you can share in the channel.

Now, the whole sales team has visibility into the deal, and they’re building this all natively into Slack. So it’s going to get you up to speed faster on messages. It’s going to help you prioritize what’s important. It’s going to help you create HR workflows for new employees to come in, get welcomed, get up to speed more quickly on what’s going on. It’s going to give you better customer data. So it’s going to integrate right in with Customer 360, service data. It could do case summaries of service where you get quick abstracts of what’s going on with a case. So what’s really going on here is that basically, for all the chat and day-to-day asynchronous interactions, we’re going to get to a place where you can quickly, with very little to no programming and developer capabilities, get your employees up to speed on service workflows, sales workflows, project workflows, HR workflows.

And it’s going to be built on a bunch of pre-created, simple-to-train sales bot types of interactions. And it’s going to be available to all companies using Slack. And now, another important point to make, by the way, is there are a lot of interesting integrations across the Microsoft-Salesforce ecosystem. So while both companies are obviously competing head-to-head for certain parts of the business, there are a lot of integrations now where things will work. And because of the OpenAI partnership, you’ll see more and more integration across that kind of Microsoft-Salesforce portfolio. But Salesforce is going for it, man. And they have to. And I absolutely believe they have to. But this is a great indicator that Marc Benioff was actually on the right track three or four years ago. But now everything’s coming to fruition and we’re going to start to see some really powerful features inside of our day-to-day productivity tools.

Patrick Moorhead: Yeah, good analysis. So context here is that Slack GPT fits into the entire end-to-end AI platform. There are embedded GPT features inside of Slack. And as you mentioned, it is closely correlated with Einstein GPT for overall Salesforce. So my first thought is absolutely expected, you didn’t say it this way, but if you think about how Slack started and the ability to invoke different superpowers that it has, you could do many of the types of things that you can do today in Slack GPT, but this just does it better. Maybe you couldn’t do some of the summaries, but if you wanted to order this from a store or get reservations, you could actually invoke one of the bots that were integrated in and actually do this. What this does is it’s an easier and more effective and more powerful way to integrate into these chat services.

It’s funny, I look back, Dan, and it’s almost like Slack was architected or built for this day when you could do almost anything automatically inside of a chat type of motif. And I think you said it well, which is like, okay, these guys have been waiting for this day, but literally, this is everything that the founders of Slack said it would be when they founded the company. Now, is it a little bit sad that they’re not going to get as much credit for this and that others have kind of stormed in? Kind of. I’m an ex product person and you kind of hate to see that. But then again, having to compete with the likes of Microsoft and Google on things like this is just the real world and that’s what ferocious competition is all about. It’s going to be interesting to see the antitrust stuff between Slack and Microsoft inside of the EU. And my guess is they’re going to use this very example which says, listen, we were kind of hobbled by Microsoft giving away Teams inside of its E-class licenses for free. And we basically couldn’t do this.

We didn’t grow as quickly as we could. We weren’t able to fund this and therefore had to be acquired for that paltry amount of money that Salesforce paid. Actually, it was a lot of freaking money. But anyways, this to me is the essence of what Slack was created for, and we have it now, and I can’t wait to use it. So Dan, let’s shift topics here and gosh, we’re going to go to a completely different topic, Google I/O, and talk about generative AI. And I love this. So I was not able to go to the event, but I did watch the show on YouTube. It was pretty good, it was over three hours long. I didn’t watch the whole thing, but I watched the parts that I thought were super important. So I’m going to rattle off some of the announcements. Holistically, this was all about generative AI and Google’s incorporation of it into their products.

So first and foremost, Bard is open to everyone. I made a comment that it was a very good sign when Microsoft opened Bing Chat to everybody, and it signaled to me that the company was ahead of the curve. And I mean, that’s just black and white. You open up to everybody, you’re making a huge commitment, and it’s in all your data centers all around the world. You’re abiding by international laws for each country. So Bard is now open to everybody. It is not, however, integrated into search, and I’ll get to that. PaLM 2, which is the new super awesome model. You can compare that to a GPT-4 type of thing, it’s Google’s version of it. They said it’s already powering Bard and getting better. I wish there was a “this is your brain on PaLM, this is your brain on PaLM 2,” kind of like we see with OpenAI, where you can pick GPT-3.5 or 4.

Next big announcement was Workspace, which is the number two productivity and collaboration app across the globe behind Microsoft 365. It brought out its own copilot. I know Microsoft is using that name, and a couple of other people, but Google’s is called Duet AI and it’s doing exactly what you’d expect it to do, like create templates and spreadsheets and slides. I am totally jumping on that wait list because our backend is Google Workspace. I’d mentioned before, Dan, Google’s introducing automatic watermarking for AI-generated content. So if you think you’re going to crank out a blog, rip off somebody’s content or make people think that… Not you, I’m talking to the audience here, but you’ve created this amazing piece of artwork and it was actually cranked out by something that was generative, or maybe by Adobe Firefly, it’s going to let you know.

And I actually think that’s a good idea, and quite frankly I think it’s going to put quality of content into classes, machine-generated and human-generated. And actually, I just think that’s a good thing. And by the way, sometimes the machine-generated content is going to be better than the human-generated, but it’s really a thing to keep your eye on. Vertex got an upgrade, that’s Google Cloud’s end-to-end AI pipeline. I’m probably going to write up something on that. Google Search got a new look and flirted with the idea of integrating a ChatGPT-like experience, but it’s not integrated. This is behind what Microsoft is doing with Bing, where if you go to Bing search and you have a question, it will, on the right-hand side, pull in Bing Chat information if it’s appropriate.

I mean, this is not a compliment. I know this stings if you’re at Google and I say this, but it’s just the truth. I think the context was that Google was way behind in foundational models even though they had literally written the defining paper on it. And then Microsoft just absolutely comes in at torrential speed, puts it into everything, opens it up to everybody beforehand. And just from a timing perspective, Microsoft is ahead. Google stock went down 5% and then there was another disclosure where the stock went down another 5%. So taking a beating here, but-

Daniel Newman: It’s up more than Microsoft year to date now.

Patrick Moorhead: Yeah, interesting. Here’s my net-net: Google made the case that it isn’t hugely behind on foundational models for consumers, B2B cloud or developers. I wish I could give you something more definitive, but they brought out 50 products and it takes a long time to digest it all. Opening up Bard is a really good sign; putting it into search would’ve been a lot more impressive. I still don’t know how Google search integrates the foundational models and can afford it. It’s one thing when you have very small market share like Microsoft with Bing, and then you can make the case, like they did, that every incremental profit dollar I get in advertising is a good thing for the company. When you’re the incumbent, though, and you have a certain profit model, and your story is potentially lose share and increase costs, that’s not a good sign. Now, in the last earnings, we talked about how CapEx is only going to be an anomaly. I just don’t know the puts and takes. I don’t know where they’re going to take the money to invest the money, but I am standing by. Dan, what do you think, buddy?

Daniel Newman: Well, thanks for taking all seven minutes, you left nothing. No, I’m kidding. So listen, first and foremost, Google is back in the eyes of the street. This event was big for the company, and after it, the stock kind of got some upgrades and rallied. The actual stock was up more for the year than Microsoft, at least at some point yesterday following Google I/O. So what does this mean? I think it means the message was well received. Now, Google took immense criticism for its reactive response to GPT and Microsoft’s announcements with Bard, and then it kind of fell on its face. So you’re absolutely right there. Having said that, this is really a classic buy versus build story. Microsoft bought the hottest, prettiest, best-looking date and brought it to the prom. Google had been in a lab building, developing, investing, and it just wasn’t totally ready, and has been forced to continue to accelerate its build and come out with something new.

Opening Bard to everybody was a really good sign. But remember, when you open up Bard, it says experiment right next to it. And I was talking to a journalist about this yesterday and I said, “Really? The whole internet’s an experiment.” I said, if you think about it, everyone’s like, “Well, what if the information’s not correct?” Well, right now when you search, you then choose your own adventure and then you click on something and then you’re like, “I’m going to source this.” And you’re like, “How the heck do you know that that was right?” So I think the entire experience is just sort of the evolution of the internet experiment. And I think for Google, you may as well roll it out, start training on it, building more utilization. Now I totally agree with you – having Bard exist in sort of a parallel versus having it immediately integrated into search – I think it does a couple of things.

One is, using PaLM versus Bard for these different large language models for generating, it gives them the ability to do some A/B testing. It also gives them the ability to train in parallel, and then it gives the opportunity to stack models to create value. Because I’m betting you each model probably has its pros and cons. So now you’re seeing it actually building a multi-model experience, kind of delivering different user experiences inside of Bard versus Google Search, allowing people to stay within their familiar search, that is Google. And by the way, Google has so much search volume compared to Bing at this point. The expense would be astronomical if they were going to immediately open up generative and use it at the same capacity that Bard is for every search query they get. Remember, 90% of society probably has no idea generative AI is even a thing yet.

As much as we think it’s everywhere, Pat, that’s just because that’s what we do. It’s our world. It’s the people we are around. And yes, our parents are aware of it, but I don’t know that they care yet that they get generative responses inside of Google. My folks have just become comfortable using traditional Google search at this point. It’s moving really quickly. I think in time, it’ll get picked up. Pat, I think the Duet stuff, is that what it’s called? Duet. That’s killer. Because we are also on a Workspace backend. Look, I write things. I would love to see them quickly be turned into slides. I love to – I am the worst PowerPoint creator on the planet. I mean, if you want to talk about making ugly slides, have me create your next slide. It is horrible.

But just anything that can be done to expedite productivity is really sweet. I think there were some nice upgrades to Android in terms of capabilities. And then also it’s kind of interesting, Pat, they quietly revamped or relaunched their Home app. And this is kind of a weird thing, because I’ve been wondering whether it’s Apple Home, whether it’s Google Home, whether it’s Amazon: everybody wants to build the smart home, but nobody’s really gotten there yet. Is Google the company that could do that? I don’t know. I have a fairly smart home here, lighting systems and drapes and things like that that are under control, using a system called Savant. I mean, look, there’s no great home control system, but with everything being connected to the internet, there really should be by now. So that’s a weird one.

But I thought it was kind of cool that they brought that back to the surface. But listen, make no mistake, this was an AI event: AI in search, AI in apps, AI in their phones and AI in their devices. And I guess what I would walk away with is that Google’s on the right track. And you and I say this all the time, and I’m going to say it again before we get onto the next thing because we’re going way too slow: competition in AI is good. We want competition in this space. So all right, I’m going to pause there so we can keep this thing moving.

Patrick Moorhead: No, good way to end on a competitive front. Dan, let’s move into the next topic. Surprise! Tableau GPT, Tableau owned by Salesforce.

Daniel Newman: I always feel like we could have consolidated Slack and Tableau into one, but I guess each deserves its own moment here.

Patrick Moorhead: No, this is exactly what they would’ve wanted, right? Different brands, different sub-companies, different CEOs, right?

Daniel Newman: Yeah, I’m just saying though.

Patrick Moorhead: I got you, bro.

Daniel Newman: So I’m going to make this kind of brief here. Tableau is on the other end. Slack is more that everyday user interface, while Tableau is kind of the decision-driving engine of a company. If you’re a CEO, Tableau is the visualization, the dashboard insights that you would need to run your company, run your business unit, run your sales team in a very concise way. So I guess the company is now – just like we’ve been talking about with Microsoft – looking across their portfolio and all the different products. So sales, the data with Tableau, Slack with chat, and they’re basically looking to say, how do we make GPT and OpenAI capabilities enable companies to make better decisions? And the bottom line, number one, is that over 80% of IT leaders believe that generative AI’s biggest opportunity is to help organizations make better use of data.

I’ll file that under duh, kind of obvious. But at the same time, what’s going on is companies have these vast data ecosystems, and how do you utilize a technology like GPT, something generative, to pull from all the data sources in a very comprehensive way, using natural language to query, to get the visualization of what you need to know in really short order? That’s really what Tableau GPT is going to enable. It’s going to enable faster decisions, more personalized analytics. It’s going to basically give decision makers more capability to get the dashboard they want. So if you were building in Tableau historically, you needed kind of a developer, or at least someone who understood low code to a good extent, to develop a dashboard for you. But when you wanted to get any inquiry or insight that was not part of your dashboard, it was really hard.

So think about being able to query your business analytics system the way we are querying a Bard or ChatGPT or Bing with generative AI, to say, “Hey, I want to know how many refrigerators we sold in the UK between January and March of this year.” So while you may have had a dashboard that was full year to date, you just wanted to see three months. And then you wanted to see it by salesperson, or by region, or by a certain retail store, and so on. The ability to use natural language generative AI to get those dashboards updated in real time is huge for executive decision making. It goes back to what I said about Benioff saying you’re going to have Einstein in the boardroom.
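The natural-language querying described here comes down to the model translating a sentence into a structured query that ordinary analytics code can execute. Below is a minimal, purely illustrative sketch in Python of that second step; the data, field names, and `run_query` function are all hypothetical, and this is not Tableau’s actual API:

```python
# Illustrative sketch (not Tableau's API): the kind of filtered aggregation
# a natural-language question like "how many refrigerators did we sell in
# the UK between January and March?" would resolve to once a generative
# model has parsed it into a structured intent. All names/data hypothetical.
from datetime import date

sales = [  # toy sales records standing in for a BI data source
    {"product": "refrigerator", "region": "UK", "date": date(2023, 2, 14), "units": 120},
    {"product": "refrigerator", "region": "UK", "date": date(2023, 5, 2),  "units": 90},
    {"product": "refrigerator", "region": "DE", "date": date(2023, 3, 1),  "units": 75},
]

def run_query(intent: dict) -> int:
    """Sum units matching a parsed natural-language intent."""
    return sum(
        row["units"]
        for row in sales
        if row["product"] == intent["product"]
        and row["region"] == intent["region"]
        and intent["start"] <= row["date"] <= intent["end"]
    )

# The generative layer's job is turning the user's sentence into this
# structured intent; the aggregation itself is conventional filtering.
intent = {"product": "refrigerator", "region": "UK",
          "start": date(2023, 1, 1), "end": date(2023, 3, 31)}
print(run_query(intent))  # 120
```

The hard part in a real product is the translation step the model performs; once the intent is structured, updating the dashboard is conventional BI filtering.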

This is that kind of insight with analytics, with data visualization, that becomes the genius in the boardroom that can instantaneously get you those numbers. If you ask anyone on our team at Futurum, I ask these kinds of questions all the time. How much did we sell to this customer year to date versus how much had we sold at this point last year? These are things that are buried in your CRM. It’s never been impossible to get, Pat, it’s just been a really big pain in the rear end to pull up, and it ends up taking time from someone who could be doing something else. Now they can be doing that something else.

Patrick Moorhead: That was a great breakdown, Dan. So a couple of things here. This Tableau GPT is based on Einstein GPT technology, as you would expect. It’s a little bit confusing having different names across different products. My guess, or my hope, is that if you have customers using the entire Salesforce suite, you don’t want different names. You don’t want Slack GPT, Tableau GPT, Einstein GPT. I don’t like that move, but that’s branding. And it got us to talk about Salesforce flavors of GPT three times now, so mission accomplished, marketing folks. One thing that I wanted to point out too is this is not pulling data from the internet. This is not pulling data that was trained on publicly; this is information coming directly from your enterprise.

So anything that’s in Salesforce Customer 360, it gets access to that. Anywhere you had previously pointed Tableau to pull data in from, it will work across all of those platforms. And as I said on Slack GPT, this is kind of what it was made for. If you look at what GPT is adding, it’s making Tableau better. When Tableau first came out, it essentially worked intelligently through analytics technologies; you could script different queries to make things easier. It did have the ability to ask questions, but what came back, based on analytics and machine learning, wasn’t always the best. So this is supersizing it. I haven’t used this, I’ve seen some videos on it, but it should lead to better answers more quickly. And at the end of the day, that’s what a tool like Tableau is made for: to pull in the best data when you need it and keep people like Daniel Newman, if you have to work for him, very happy.

Daniel Newman: Sorry about that.

Patrick Moorhead: Not at all. I mean, whether you’re Daniel Newman or a PC company that wants to know the notebook volumes, revenue acceleration, and profitability in Malaysia in the third quarter, you should get that information, as opposed to having to go out to an analyst who has a giant spreadsheet and has to put together some data and pull it in. So I like it, very consistent with some of the data plays that we’ve seen from the other folks, whether it’s BI or some stuff from Oracle. Dan, let’s move into the earnings segment, and that is GlobalFoundries. I am going to pick my own number on this. For those of you who might not be aware, GlobalFoundries is the largest international foundry out there, defined by operating in more countries than any other. They’re not as big as TSMC, but they’re very focused on that international footprint.

So for the quarter, they beat on EPS and they met/beat on revenue. The thing that’s weighing on the stock right now is light guidance. And it’s the first time in a long time I’ve seen sequential declines from the company in revenue, gross margin percent and earnings. But listen, it’s everything the company told you before. I read the transcript, and Tom basically said, “Hey, we anticipated that the first half, and the first quarter in particular, would be the low point in revenue and the peak of the inventory cycle.” And basically, it happened, and the declines are exactly where you would expect. Smartphones saw big dollar declines, and the company is a big supplier of analog chips, like RF, that go into these phones, particularly on the Android side. PCs saw a huge decline as a percentage, but it’s not really a big dollar value. At some point, I’m pretty sure GlobalFoundries will take that out; that’s really historical, based on the business they used to do with AMD.

IoT, which currently combines consumer and commercial, was down. If I had to guess, the biggest decline was on the consumer side, and they probably didn’t have big declines in industrial. They were up exactly where you’d expect: automotive. I don’t know of a single automotive chip supplier or foundry that’s down. Now, I do think the automotive gravy train is likely going to slow down a little bit, because I feel like we’re in catch-up mode right now. And when I balance that across internal combustion engines, I look at interest rates going sky-high, which will disincentivize new buyers from coming into the market. Automotive could be the last one to go, but it is good to see, because the foundries are the canary in the coal mine, and they did well there. Now, the final comment that I thought was positive: they signed five long-term agreements worth $1.4 billion in revenue. LTAs are an important thing, because they basically guarantee revenue and guarantee slots in the foundry. So overall, exactly what you would’ve expected from GlobalFoundries.

Daniel Newman: Yeah, I’ll hit this one pretty quickly, Pat. I mean, look, we knew smartphones would be down, we knew PCs would be down, we knew consumer IoT would be substantially down. What we’re sort of watching is the turning point for data center; automotive’s been rock solid. That’s got a lot more to do with automotive never having caught up from the supply chain shortage. And then of course, you’ve got an industry where vehicles are significantly more semiconductor-intensive than they used to be, so every car is getting more and more chips in it. So even if we sell less volume, you’ll see the manufacturers and the semiconductor players in the automotive space all perform better. GlobalFoundries is unique: it handles a lot of the lagging-edge processes. What a lot of people still don’t realize, even though we’ve talked about it God knows how many times on this show, Pat, is that this was a big part of where the actual chip shortage became a problem.

And so, Global has benefited significantly from it throughout, because they were always at capacity, selling through every single chip they could manufacture. So we’ve finally sort of seen that come home to roost, and that’s led to some year-on-year declines. But I think, as you said, the company had a good handle on where it’s at. This sort of seems to match what we’re hearing from Qualcomm, some of what we heard from Pat Gelsinger at Intel, some of what we heard from Lisa Su. We seem to be hovering around the bottom right now. And it seems that most feel that that inventory glut is starting to be sold off, that we’re starting to move directionally into a next wave of increased revenue, which I think will have to be accelerated by all this AI momentum. And that’s one of the things I keep saying: all the AI that we want to embed in new phones and new PCs, to support data center servers, to make cars smarter, is going to mean we’re going to have another wave of replacement cycles.

When you want a new PC that’s going to be able to do edge inference, you’re going to see a new chipset come into these devices over the next one or two years, and that’s going to create a forced upgrade cycle for people who want to be able to take advantage of new capabilities. For a long time, there hasn’t been a new, got-to-have-it capability. There was more of an “oops, we’re all going to work from home” thing that led to a lot of PCs being bought, just as a for instance. But to do all this generative AI, we’re going to have data center buildouts at a pretty massive scale. Companies like GlobalFoundries supply critical components to complete those products.

And so, they will be along for the ride, even though they’re not necessarily the ones building on the three- or five-nanometer technologies that everybody gets really excited about. So congratulations, I guess, to Global for surviving this really rough period of time for semiconductor companies. It seems that they were able to push their margins up a little bit, which is a good thing, because when you sell less, you at least want to try to make more per sale. So all right, let’s keep going, Pat, because I’m almost at time and I think we both have to run at the hour.

Patrick Moorhead: Speaking of automotive, Luminar had their earnings. Luminar is the high-flying lidar company growing 100%-plus a year. Dan, how’d they do?

Daniel Newman: Yeah, so I mean, look, they did. They doubled revenue. They shrunk their loss, though I think they lost a little more than what was expected. But this is a story about design, and this is a story about adoption. I want to be very clear: Luminar is doing about $14 million in revenue. Which, again, is not small, but with most of the companies we talk about when we talk about earnings, we’re talking about billions of dollars in revenue. So why is this so relevant? This is really more about a massive shift in adoption of new lidar, new sensing technology, that’s going to exist in vehicles. So while you have a company that did $14.5 million in revenue this quarter and grew significantly, you’re talking about tiny volumes in these automotive deals. But you’re talking about design wins. This is a company that won a design agreement with Mercedes. This is a company that won a design agreement with Volvo. This is a company that’s won a design with the new Polestar. This is a company that’s got a number of different vehicles in China committing to using its technology.

And as we’ve shown in demonstrations many times, its lidar is just more capable of offering the sensing technology that’s going to be required for true driverless and next-generation ADAS capabilities. So what I keep saying is, if you’re really an investor in a company like Luminar, looking at the quarter-to-quarter revenue growth is one indicator. But looking at the designs, the number of design wins, and when those design wins are most likely to come to fruition is another. Much like what we’ve seen with Qualcomm, where revenues are in the hundreds of millions but the design-win pipelines are in the billions, you’re going to see a similar trajectory. This right now is Luminar’s core focus area. When these new designs get ramped up and come to market, you’re going to see scale of revenue, scale of volume and scale of the company’s growth. So the fact they’re being picked and chosen, even at this size, to me was the most important thing. And that’s what I look for in every earnings report.

Patrick Moorhead: Good breakdown, Daniel. Some adders: I think it’s important to reinforce that this is a startup with massive growth that happens to be public. What’s unique about the company? Well, it’s actually shipping. And if I look for other companies able to ship compact, high-performance lidar, they just don’t exist. And therefore, when the company does not just break ground on a facility but opens it and starts producing, that is a big deal. And the company is already set up globally to create a ton of these devices for their Western and Chinese customers. So that’s a positive, 100%. And this is why their competitors are clawing all over them and complaining, and sometimes making up information about the company. Volvo and Mercedes would not choose the company if it weren’t the safest, most credible, most accurate technology out there. And these are long-term agreements. So you’re going to be able to take advantage of the being-first price, which always carries higher margin than the trailing edge.

And if you’re Mercedes, or any car company that has a decent brand, do you want to save five bucks and take the risk of killing passengers or having recalls? It’s just not worth it. You go with the best, particularly in the beginning. And then what you see as markets mature is they get cheaper and move down the cost curve, and the people who deliver premium technology move on to the next premium thing. So Dan, great show. Good to see you. I’m going to be gone for a while. I’m going to miss you. I don’t even know if I can do the pod next week, but we’ll see.

Daniel Newman: We’ll see. We’ll figure it out. And I just want to give a shout-out to my daughter Hailey, graduating from college tomorrow. I’m proud of you, girl. Keep rocking.

Patrick Moorhead: It’s so nice and Happy Mother’s Day to everybody out there.

Daniel Newman: Oh yeah, that too.

Patrick Moorhead: If you are on this planet, you can thank your mom, which is basically everybody. So don’t be a dork. Contact your mom regardless of what kind of relationship you might have with her. If she has passed, like mine, just have good conversations about her. Remember the best. She’s your mom, and that will never change. So with that, thanks for tuning in wherever you are on the planet. Good morning, good afternoon, good night. Take care. We appreciate you. Hit that subscribe button.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

Daniel is a 7x best-selling author; his most recent book is “Human/Machine.” He is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
