Cloud Performance Made Flexible with Intel’s Rebecca Weekly – Futurum Tech Webcast Interview Series

On this episode of the Futurum Tech Webcast – Interview Series I am joined by Rebecca Weekly, VP and GM, Hyperscale Strategy and Execution for Intel. Rebecca leads the team that ensures that all products across the spectrum are optimized for hyperscale.

Our discussion centered on Intel’s 3rd Generation Xeon Scalable Processor launch and the role cloud innovation has played in Intel’s ability to deliver high performance for the most in-demand workloads.

Cloud Performance Made Flexible

My conversation with Rebecca also revolved around the following:

  • The role cloud technologies have played not only in business but in our everyday lives in the past year
  • A quick look into Intel’s agile and flexible approach
  • How Intel unlocks the full potential of hardware with software optimizations
  • The drivers behind the world’s largest cloud providers using the 3rd Gen Intel Xeon Scalable processor
  • An inside look at Intel Software Guard Extensions

Cloud computing is, as Rebecca said, how wonderful gets done. This technology is driving the future of digital transformation and will likely lead to other advancements for years to come. If you’d like to learn more about how Intel is leading in the cloud space, be sure to check out their website. And while you’re at it be sure to hit the subscribe button so you never miss an episode of the podcast.

Watch my interview with Rebecca here:

Or listen to my interview with Rebecca on your favorite streaming platform here:

Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Daniel Newman: Welcome everybody to the Futurum Tech Podcast. I’m your host, Daniel Newman, principal analyst, founding partner of Futurum Research. Back again with another edition of the Futurum Tech Podcast Interview Series, or as I like to call it, FTP the Interview Series, because saying entire words has become hard these days. But in all seriousness, very excited about this show. The Futurum Tech Podcast Interview Series brings on top-level executives from some of the world’s most innovative companies. And on this show today we’ll have Intel joining us on the back of their 3rd Generation Xeon Scalable announcements. We’re going to be bringing Rebecca Weekly on to the show, and we’re going to be talking about the Cloud and a lot of other things. It should be a great conversation.

I’m excited, I’ve had a couple of Intel conversations following this big news. It’s been a big news month for Intel with the IDM 2.0 announcements, and you’ve heard me talk a lot about it, but here at Futurum we love chips, and we love chips and SaaS. And by the way, there is no SaaS without great chips. Anyhow, quick disclaimer: this show is for information and entertainment purposes only, and while we will be talking to executives about publicly traded companies, please do not take anything we say as investment advice. Without further ado, I’d like to welcome Rebecca to the Futurum Tech Podcast. Rebecca, how are you?

Rebecca Weekly: I’m great. How are you, Daniel?

Daniel Newman: It’s great to see you. You’re in the green room, everybody’s always in the green room and always wondering what they’re thinking when I’m making that introduction. But you know, we just have to assume that you’re thinking, “Let’s go, let’s do this.”

Rebecca Weekly: Let’s go, let’s do this.

Daniel Newman: So, quick introduction for everybody out there. Talk about your work, your role at Intel.

Rebecca Weekly: Well, I run the Hyperscale strategy and execution team, which is all about making sure that our products across the portfolio and our solutions for our customers are optimized for Hyperscale. So it’s an amazing opportunity. We get to work closely with the world’s greatest innovators to deliver a portfolio and to deliver services and solutions at scale. Nothing more interesting in life, frankly.

Daniel Newman: How long have you been there?

Rebecca Weekly: I’ve been here, it’s crazy to say this, almost six years. I think I have a month and like a week before I hit six years.

Daniel Newman: That’s a good tenure. I’ve had people on this show that have been there over 20 and you get people that are like, “I started two weeks ago. This is my first podcast that I’ve done since I got here.” So either way I love hearing from all of it. It’s funny, you get somewhere for a while and you kind of get that insulated perspective. So it’s always great to get out and talk beyond. But then at the same time, it’s always kind of a little bit fun to get the newbies in and you ask them the hard questions to keep them on their toes.

Rebecca Weekly: It’s definitely, you know if you introduce yourself by the length of your sabbaticals, like you’re on your second sabbatical, then obviously you’ve been there forever. I haven’t yet hit sabbatical tenure, so I still think I’m a newbie.

Daniel Newman: Well, I mean, there’s probably a lot of people when you get back to walking the halls, and we’ll get to that, because nobody’s doing a lot of hall walking these days. And certainly not if your headquarters are in California; we’re a little ways away from doing that. I’m very excited to have you on the show. Yesterday was a big day for Intel, long overdue. If anything, the company has kind of faced a little bit of scrutiny for, it’s not been what you put out, it’s when it came out. And I think a lot of people yesterday probably looked at the sky, took a big sigh of relief, took a long breath and said, “Yes, we did it. Ice Lake is here, 10 nanometers.” Because, by the way, for a lot of people out there, they hear 10 nanometer, they go, “Oh, AMD’s on seven nanometer.”

Well, when it comes to the specs and performance, Intel’s 10 nanometer plus and AMD’s seven nanometer are very close. I just have to remind people that all the time, because they get really caught up in numbers. And I didn’t want to force you to have to tell me that. But you know, we talked about walking the halls, Rebecca. 2020 was a wild year. It really was, it was depressing at points; for tech it was kind of thrilling, because tech was a huge winner. And if you think about just keeping our economy going, tech and the Cloud, your world, were particularly influential. I’d love to hear your take on the importance of Cloud as Intel sees it, because I don’t think people always realize how closely intertwined Intel as a chip maker is with pretty much all the major Cloud players.

Rebecca Weekly: Yeah. The Cloud runs on Intel. That is for sure. Certainly Cloud had an amazing year. It was our biggest year ever. We are an essential part of everyday lives. You saw in 2020, as people really started to look at their business continuity planning and disaster recovery planning, a really different mindset towards Cloud from a business perspective. And then obviously consumer changes: everything that we’re doing with video conferencing every day, which is an enterprise use case now. But certainly, the number of times you’re FaceTiming your grandparents, because your kids haven’t seen the grandparents in almost a year, right? Vaccine discovery, everything that happened in that domain space for remote research across different regions, all of the remote learning activities, the e-commerce, the content streaming, Netflix had a really big year. It was definitely a year where user behavior and business behavior all converged in Cloud Computing growing more than ever before.

So in terms of how Intel views that and how we partner in that, obviously it’s everything from supplying that and being able to supply the insatiable demand for compute in that domain space, dynamically with all the capabilities there. But it’s also how we are helping people get better efficiency across the data center, across the different components. And one of the things that I loved, yesterday especially is that we talk about 3rd Gen Intel Xeon Scalable and of course that’s so critical to our business, but it wasn’t just about that. It was everything we were doing across the portfolio, the work with Optane PMem, the work that we’re doing in our SSDs, the work that we’re doing with our networking cards, everything that we do together collectively, holistically to solve end user problems across high-performance computing, artificial intelligence, Edge use cases with consistent strategies from Edge to Cloud.

I mean, that is where, as the world goes Cloud native. So we have more consistent experiences for end users, whether they’re on their phone or their laptop or logged in to their actual networks for work. That is where we’re going to see such an amazing breadth of the portfolio that Intel can bring to support these Cloud service providers in their aspirations to connect the world.

Daniel Newman: Yeah, absolutely. I love that you brought up the breadth. I wrote up a piece and I kind of said there were like two major narratives that came out. One was this robust, flexible, agile platform that is still serving about 90% of the data center CPU market, which is a pretty respectable market share for, let’s say, a company that many in the media have been pretty critical of at times in recent years. And then there’s a second one, which is like this very sort of small but important group of benchmarkers that were kind of going crazy about, well, what about on OpenSSL? How does this one perform versus this Milan or this SKU, and how did this floating point compare to an A100? And I’m like, this stuff matters.

But I said, I really look at Intel as sort of this, hey, we’ve got this general purpose computing set of resources that is not just about CPU, but also really all the way from networking, security, memory, storage. And my point is, it can inform almost every workload, the very Pareto asks, meaning for that Pareto portion of workloads, you will get all the performance you need and more to do what enterprises and Cloud providers and Edge network deployments need. And so I kind of look at that, and then I’m much more reaching that business media and I’m reaching that big exec suite and just really saying, and by the way, the partnerships, the software and all these other things, and we’ll get to software by the way in a little bit.

But I thought it was a very successful announcement. In certain places you guys absolutely killed it, and in some places it was just very good next gen iterative, but overall there was not a lot of bad. And like I said, other than a few benchmarkers that want to find those like, oh, we’re … this one very specific workload, it’s like, just stop. Like, you’re always going to find that in anything that you’re going to test, but I want to talk a little bit about-

Rebecca Weekly: And that’s the nature of general purpose, right? You won’t always be good at everything or best at any one thing; the ideal is that you’re good at so much. So I love the haters [crosstalk] it’s where we aspire more.

Daniel Newman: I’m going to be fine with someone that’s 80-90% of what I like, and maybe has a few flaws, rather than try to find the perfect friend, because you’ll never have any. It’s simple, there’s your most simple way of digesting what happened yesterday. But in all seriousness, one of the things that really did catch my attention was you had this whole wonderful … how wonderful gets done, and that’s cute, but I mean, the agility and continued optimization was something that caught my mind: flexibility, agility, optimization, improvement, iterative and innovative software workloads, enabling digital transformation, and also being very diverse, looking at your enterprise network needs, the Hybrid Cloud, the Cloud itself and the Hyperscalers, and the Edge, which is where so much of the data is going to reside in the coming years. But this has been an evolution. So Intel has evolved to be able to be so comprehensive, meeting general compute needs in all of these areas, but also still being able to address the more custom and specific application needs as well. Talk a little bit about the whole agile, flexible approach.

Rebecca Weekly: Yeah. I mean, obviously there’s also the announcement that Pat made in terms of how we want to get even more agile and flexible in the IDM space. So the way we tend to look at this is, because we have the opportunity to work across such a wide swath of different use cases, we can look at where people are doing specialization of compute and see the mix of utilization at scale to say, “Hey, this is an opportunity to improve and deliver better performance in a general purpose CPU. It’s a common enough workload now that we really need to look that way.” So a great example of this is DL Boost on our CPU. The goal here is not node-to-node level performance comparisons against what Nvidia is going to do with the latest and greatest GPU. That’s not the right way to think about it. The right way to think about it is there’s a ton of recommendation systems that happen in a CPU in line, in an overall flow, right?

Like somebody grabs their cell phone, they log into your app. The app is rendering a DOM to you. There’s going to be some sort of a database fetch of you, who you are, and then some sort of an inference of what should be served to you in that web feed. That is all happening with DL Boost, four times faster on a standard CPU. You could of course go across the network to another node that has an accelerator on it, and that node would perform faster, but you also then have the memory to network card hop, the transfer over the network to the other CPU, and the transfer out to the GPU. And when you look at that end-to-end system time for the workload, that’s where we see these great opportunities to just have it on the CPU, running a little faster, and it will actually allow you to use general purpose compute for more.

And I think, honestly, that ups the game for the accelerators. It’s like, great, if we can do that on general purpose now, what is the next massive parameter set that we should be focusing on for XPUs of the future? So there’s this fabulous tension, and healthy tension I think, between the two domain spaces as it pertains to applications. But you hit on the right thing, which is when you are involved in the developer ecosystem, when you’re involved in the software optimizations that help unlock the full potential of hardware.

That’s really where we can find different opportunities across the system to do optimizations, and where and when and how we should bring it into a general purpose framework, because there’s that higher utilization, versus in line to a network card, versus on an XPU happening in some sort of a scaled-up node that is really focused on deep learning training. So it’s that combination, and we’ve always taken that approach with an open source environment, open developer tools: create a platform where people can innovate and leverage, and then Jevons paradox. Ideally, we are able to participate and join in that process, but it’s about benefiting, in many cases, the overall ecosystem, and then competing to win.

Daniel Newman: Yeah, absolutely. And you bring up a lot of good points and, by the way, you get me a little bit excited about like the XPUs, DPUs, I know there’s different terms, the disaggregation that’s going on. Because you know what we have come to find out, and Intel’s obviously been pretty vocal about some of its diversification strategies, its chiplet and sort of XPU, disaggregated chips on … As we can let CPUs do more of what CPUs are supposed to do. Because especially in your world, in Cloud, so much of the data center infrastructure workload is being handled by the CPU, which is reducing the power. So now all of a sudden all those cores and all that capability become more usable for things like accelerating those workloads, and not for networking, not for handling security requirements or for storage and memory requirements.

You start to be able to say like, “Hey, we can put this really optimal package together,” and that’s clearly where you’re going. And of course, there’s great innovation happening across the industry. I mean, you mentioned it: for certain things, certain training of volumes of data for high performance, you’re right, maybe going across to a GPU that’s just dedicated to accelerating training makes a ton of sense. But for a lot of those day-to-day applications where you’re just saying, “Hey, we’ve got this database and we want to be able to identify a subset of customers that we want to reach with a certain message at a certain moment, or we want to be able to optimize our billings,” things like that, you can just make what I would call iterative differences in performance.

Why would you want to go through that whole exercise when you can accelerate right on that CPU and right on that instance that you have at your disposal? That resource is just sitting there, and it’s performing better, by far, than its last generation, but also as well as or better than a lot of competitive products for those particular specialized needs that you guys have worked to optimize, because you’ve really focused on those high volume needs and optimizing them. All right, you said it better than me. I’m repeating it, so I can like … In all seriousness, it was a really good explanation though, Rebecca, and I appreciate that. I mentioned earlier the relationships the company has across the Cloud landscape. I think Lisa Spelman in the presentation yesterday said every one of the major Cloud players is going to be running the 3rd Generation Xeon Scalable.

So I think it’s great to know before you even officially announce the product that you have all the big customers, which is not atypical, but the case. What have been some of the reasons and the ways that you’ve been able to partner to build such loyalty? I mean, and let me give you a caveat, because I’m not going to make this so easy for you. I’ve seen and heard about Homegrown, and we’re all seeing ARM infiltrating and stuff, but Intel, again, I go back to that 90% number. Even with all these new architectures, all these new advances, the Cloud is really heavily dependent on Intel. What’s driving that?

Rebecca Weekly: There’s a lot of things driving that. Partially it’s who is coming to the Cloud. So again, we talked about this change of 2020, and nothing is the same anymore. There are so many different workloads coming to the Cloud, whether it’s SAP HANA or people who are using VMware, whether they run VCF in a more Cloud native environment On-prem, or then are looking to flex capacity into the public Cloud. Increasingly, the people who are showing up in the public Cloud are not who were there 10 years ago. So those customers we’ve been partnering with as well. So as much as I love to talk about how I love to support and partner with my beloved Hyperscalers, and they really truly are, I mean, they push you to be best in class and I love it, but it’s also that, what Pat mentioned, right?

There’s 80 million lines of code, I think he said, written for x86 out there in the ecosystem. And as more and more of those customers look at hybrid strategies, they are looking for a trusted brand that has had the ability to run their product forever. And we’ll continue to support the ISV community, everybody who lives and works and breathes computation, from the Cloud to the Edge. And that’s true with our telco partnerships, it’s true across the board. So I think that’s part of it. But in terms of what we’ve done with our Hyperscalers directly, we have worked so hard to ensure, through partnerships, through forums like OCP, so much that we do enables vanity-free hardware at scale. We’re really trying to support them with the highest reliability, the best TCO, the best methodologies for management at scale. And when you think about how much of general purpose compute starts in a mobile footprint, if you’re talking about an ARM architecture, or starts in a client footprint, if you’re talking about even x86, right, where AMD and others are playing, what the principles of the Cloud are is very different.

And a very simple example is, when I’m on my laptop at home, I expect IT is going to force me to do a reboot at least once a week, right? We’re all trained: you’ve got your Microsoft update and it’s going to push your patches and they’ll do a reboot, and this is what we do. This is not how the Cloud works. We cannot come to the world, when someone is running their Fortune 500 company using flexible capacity from the public Cloud, and say, oops, sorry, we need to do a reboot, and you’re just going to lose all that data or have to get migrated, or downtime, oops, 404 error. That doesn’t happen; they can’t do that. And what it takes to support at scale is completely different around reliability, availability, serviceability, and the work that we do with transparency and visibility into our performance counters, out-of-band telemetry, how we support custom BIOS versions, where we are able to lean in with seamless firmware updates.

These are not like the performance things that everybody talks about in those benchmarking blogs that you were mentioning, but these are the things that allow you to operate at scale, and it’s really critical. So when we talk about how we support our partners here, that’s number one: quality, reliability, availability, making and meeting their expectations in this domain space. And I believe fundamentally, more than anything else, more than supply, of course there’s lots of other great things that we do, but more than anything, that is why the Cloud runs on Intel: we are that trusted platform that is everywhere, that is able to deliver a reliable experience to end users.

Daniel Newman: Yeah, actually, it’s one of those things that I think everyone’s kind of got their eyes on right now, Rebecca: as the market continues to shift, as you have these ASICs being built and these workloads, what happens. But yesterday was definitely a day where a compelling case was made that so much of what companies are doing, and you didn’t specifically say it, but Hybrid Cloud sort of has become the accepted standard of enterprise now. Like, you love your Hyperscalers, they push you, like you said, there’s no downtime. There’s no downtime for the enterprise data center either, though, none of these things. But the point is that building a kind of congruence between Prem and Cloud, Intel has been focused on this, which sort of leads me to my next question, because the people leading IT want that consistency of experience.

So what they know is, as these workloads migrate and the data flows between Prem and Cloud, consistency in compute and resources, and the way resources perform, gives a certain amount of dependability in terms of how their enterprises run. And again, these are people trying to access applications and do stuff. They don’t care. They really don’t care about the blinking lights of the data center; they’re in their application and they’re trying to do something. What did you guys say, trying to get wonderful done at work? But a lot of this also is software dependent, right? I mean the software that’s co-developed, your Select Solutions, the overall ecosystem, I know, oneAPI, or OpenVINO, and different things where you have all these partnerships, but software is a big focus for the company.

It’s also a sticky point, right? It’s been a big sticky point. People don’t want to refactor workloads. They want workloads to be portable, easy to move and migrate and update, and anytime you change, even from x86 to x86, when you change the platform, it does require a lift, which has to be something that’s been super good for keeping people on Intel. Talk about that.

Rebecca Weekly: I believe people will do whatever is necessary for best performance and best time to revenue, no matter what. There’s a ton of code out in this ecosystem, and aspects of virtualization, certainly for encapsulation, where the multiple generations of Intel in the Cloud and On-prem have massive value to just give them that seamless migration capability you mentioned, right? And I very much agree, but I wouldn’t bank on that forever. Only the paranoid survive, to quote a pretty incredible founder. Our job in the Cloud is also to see where the world is going, and there are runtime environments and there are things that are easier to lift and shift anywhere. So we always have to compete for what is critical to our customers every day. So I look at it as, it’s almost a grocery store, right? You’re going to need to have the right trusted platforms with the right capabilities that will run anything and everything, everywhere.

And that is Intel today, and that is Intel where we are, across our Clouds, across On-prem and in the public Cloud, supporting best-in-class hybrid solutions, seamless migrations. I could talk about SGX for days, or our new crypto features, just to improve the security of your fleet and experience, but those are today’s most important things. And don’t for a second think we are discounting where the world is going with Cloud native environments, containerized environments. We are innovating in that domain space as well, around isolation capabilities, around quality of service, around reliability, the way that we were talking earlier. So software is critical, it’s important, we will always invest in that broad ecosystem. And we have zero hubris, I would say, in the domain space; that is necessary but insufficient, and we will continue to optimize and innovate across.

And I view the work that we’ve been doing with AES-NI in this 3rd gen platform as a perfect example. You go look at OpenSSL and how much faster it runs on Intel versus any other architecture. I don’t care how many cores you have. Like, that’s the kind of thing where it’s just ubiquitous: who doesn’t do SSL handshakes in the Cloud? And that is a Cloud native workload, and that is a, name your favorite flavor. So I get really excited by looking at all of these markets and the way we serve them, and the agility to serve them even better in the future through things like IDM 2.0. I mean, the sky is the limit, Pat is amazing, we’re all fired up.

Daniel Newman: It’s great, and we could do a whole other podcast. I’ve got a lot of opinions on that, and I thought overall that-

Rebecca Weekly: Oh, I want to hear them.

Daniel Newman: Well, they’re out there. I’ll send you some links. They’re well published; my opinion has been well documented on that topic. Anyway, now, where I want to take this home, because we only have a few minutes left, but it is something that tends to be talked about a little but really needs to be focused on a lot, and that’s security. You started alluding to SGX. One area I’ve taken a lot of interest in recently has been confidential computing, which is a big part of what Intel’s doing with SGX. You talked about partitioning data and being able to actually have enclaves, secure certain data for like highly regulated industries and stuff, but there was a lot of talk.

Lisa talked a lot about it. Pat talked a lot about it. Whether it’s been SolarWinds, the Exchange hack, Intel had a few things happen over the last few years, but the point is the security has been at the forefront of attention for many people, and you guys seem to be really serious about this. I’d love to kind of get your take on that. Like, how is buying into and investing in more secure architectures at the chip level, and for the Cloud providers, critical to hardening and reducing the threat surface, and is this something you guys are finding to be a winning formula when you’re talking to the Cloud providers?

Rebecca Weekly: Absolutely. So when I think about security, I think that in some ways it’s gotten a bum rap in the Cloud world for a while, because it’s always been this trade-off between, “Well, you can have performance VMs or you could have secure VMs, but you can’t have both.” Right? And that is usually when we get the best feedback from people, of like, “No, that’s unacceptable. We got to go further. We got to go faster. We got to go bigger.” So what I love about 3rd gen, obviously, is Software Guard Extensions. This is the most validated, tested, researched TEE out there. Nothing else can be said about that. I mean, we started this back with the Skylake generation in our client business. This has been out there for a while as an attack surface and an opportunity to really make sure we’re doing the right thing at scale.

So I am ecstatic that now it is in our Xeon Scalable product line, to be able to drive even more interesting opportunities for the market. We had seen a lot with key management opportunities, we had seen some HSM capabilities engaged with that first generation of SGX, but as we get to these larger enclave sizes, as you can bring more data into the CPU in a secure enclave, now federated learning opportunities unlock. I mean, that was one of the examples that Lisa walked through, around brain tumors and a federated learning model with secure, highly regulated HLS use cases, and there’s financial services opportunities and any number of industries. But to me, it’s that combination of the availability, the better performance because you’re on that scalable platform, much more memory capacity. It’s just the beginning of what we will see happen in the market.

And I personally, not that I predict these things, you’re the analyst, but I look at some of the controversy that’s happened in the last year around people’s privacy and their data and end users’ desire for privacy. We always think about secure technologies and we think of these regulated industries, but to me, don’t you want to know that the article that you’re reading actually is what it was and hasn’t been modified, that the pictures you’re looking at haven’t been tweaked in some interesting ways? I think that there will be so much more opportunity as users understand that they can be sure of the information they’re consuming. And that desire for privacy, I think, will just create a desire for confidential computing with ubiquity everywhere: not just at the Edge, not just for key management services, not just for regulated industries, but literally for us, the regular old people, who want to know that the news we’re consuming is actually correct and validated, from a validated source, or some other capabilities like that.

So again, to take it full circle, when general purpose compute absorbs capabilities that have been niche before and democratizes them, that’s us at our best, right? That’s our best platform. That’s our best day, because we are just starting to see where the world will go with confidential computing.

Daniel Newman: Absolutely, I can tell you’re passionate about it. I’ve just published a pretty long report on this topic, and I’ll make sure I share that with you, because it sounds like that’s the kind of material you want to read at night. And hopefully you can take a break from your paranoia and know that the material I’m sharing with you is, in fact, the material that I wrote. But it is actually really funny you point that out, Rebecca, because over the course of the last few years, all this stuff we’ve heard, quote unquote, fake news. No matter what side you’re on or no matter which part of the political spectrum you take, it wears on you. You hear it enough, you start to go, “Oh man, what am I reading? Can I trust the source? Is this really that person? Is that a deep fake?”

I mean, “What happens to that picture of me on the internet? Where’s my data going?”, all these questions are going to continue to rise, and as a society we’re going to expect more answers, more clarity, and we’re all going to be looking for more trust. Rebecca, I could talk to you for a lot longer about all kinds of these things. Unfortunately, this show is sitcom length, 30 minutes, [crosstalk] so I have to let you go, but thank you so much for joining me here today on the Futurum Tech Podcast Interview Series. I’m going to send you off to the green room and we’ll be back with you in just a minute. Wow. What a great show. So much to learn. I hope you had the chance to check out the announcement from Intel on the 3rd Generation Xeon Scalable so you could see it for yourself, and if you didn’t, hopefully you got to listen to this interview today and will have the chance to listen to the other interviews I did.

Whether it was the analysis my Six Five partner in crime, Patrick Moorhead, and I did in a little live show yesterday, which you can catch on our YouTube channel, or the interview I did yesterday with Wei Li at Intel focused on the AI and ML announcement, another really great podcast. But Rebecca was terrific, with a lot of insight, especially if you’re interested in Intel’s role working with the cloud providers and a lot of other things that go into that, and, by the way, I learned a few things today. And I might even write about that later. But anyhow, for this episode of the Futurum Tech Podcast Interview Series, I’ve got to say goodbye. Hit that subscribe button and check out our show notes. I’ll put some of the links I mentioned in there, but we’re out of here. See you later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
