
Intel Architecture Day: All Things Datacenter

The Six Five team provides insights into Intel Architecture Day and all things datacenter.

Watch the clip here:

If you are interested in watching the full episode, you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Daniel Newman: Pat, take us through Intel's data center announcements from Architecture Day.

Patrick Moorhead: Yeah, so I took everybody through the cores, and the cores can go client and data center. And then I hit Alder Lake, which was the client product. And now I’m going to hit all things data center, which was equally impressive. And from a data center point of view, you’re looking at AMX, you’re looking at Xe HPC, which is a graphics architecture for HPC, AI, and ML. You’ve got Mount Evans, which is an IPU, also known as a DPU in Marvell’s vernacular and in the vernacular that NVIDIA uses.

But let me jump in here. So first off, if anything, Intel’s going straight after NVIDIA’s A100 with a product called Ponte Vecchio. And Ponte Vecchio is just … it’s the most beastly product that I’ve seen after the NVIDIA A100, over a hundred billion transistors. Essentially what it’s doing is accelerating AI and HPC workloads, and that’s exactly what the A100 does.

And Intel’s doing it, gosh, with a ton of technology. They’re 3D stacking this thing, then they’re connecting the 3D stacks via EMIB. TSMC is doing part of it with its N5 process, which is the real name for the fake five nanometer. And then the base tile is on Intel 7. This thing is a work of art. And quite frankly, this thing is either going to go down in flames as the second biggest bust in Intel history, I would say the first was the many-core architecture that they were trying to put together years back, or this is going to be an absolute stunning success. But I believe, just based on some of the commitments that some of the national labs have made, they are going to sell many of these things.

Let me go to Sapphire Rapids. Sapphire Rapids is essentially the code name for the next Xeon, but it’s not a monolithic die, it’s a distributed die that’s more similar to AMD’s EPYC architecture. And then it’s put together via EMIB technology. And that’s really good. Intel’s issue is not necessarily design, it’s execution. And what we know from a distributed architecture is that your compute tiles are really the only things that have to be on a leading-edge node. So the compute tiles are on Intel 7. I don’t know what the rest of the chip is. My guess is that’s probably 14 nanometer, but essentially it lowers risk for manufacturing. I know that’s not sexy for everybody to hear, but it’s the reality of where Intel is.

It’s all a bunch of P cores. There are no E cores in this. We have no idea how many P cores. We have a little bit of an idea as to the performance, but in the server world, quite frankly, it’s about delivered performance, not some benchmarked performance. Interestingly enough, even though it’s a radically different architecture, Daniel, the messages between this and Ice Lake are very similar. You’ve got a ton of acceleration. You’ve got acceleration for data streaming, crypto, compression, decompression, AI, which we had before, but there’s an adder which really hits some of the issues with some of the major CSPs out there. And those are things like data streaming and microservices.

So with microservices, there’s this start-up penalty and this wind-down penalty, and Intel is going to be accelerating that. Now, with acceleration, some of it requires you to write directly to an API, which takes heavy lifting. And some is just inherent in the design itself. Obviously the ones that aren’t going to take that heavy lifting are things like microservices acceleration. But again, this wasn’t a product launch. I have no idea how this compares to AMD. My guess, if you had to put a gun to my head, is that AMD is still going to win on multiple things and Intel is going to win on everything that’s accelerated out there. Because quite frankly, AMD doesn’t accelerate anything in its last three generations, it’s just hardcore integer and floating point and just a ton of cores down there.

My final thing I’m going to talk about is, as you know, one of the hot topics out there: acceleration at the edge. So you offload the main server to run apps, and everything else gets offloaded: networking functions, crypto, security, even storage acceleration. Intel came out with its first ASIC-based design called Mount Evans. And that’s an ASIC versus an FPGA. That means it’s hardened. That means it’s going to be lower power. It’s going to be higher performance in the same die space. But this is flying primarily in the face of Marvell and a little bit in the face of NVIDIA. NVIDIA is really focused right now on data center networking offload, and Marvell is primarily focused on, and seeing success in, CSP and carrier networking offload. That’s it. That’s the tweet.

Daniel Newman: That’s it? That’s the tweet. Oh my gosh. Yeah. Follow Pat, tweet, stream. Pat, write an article for crying out loud. Jeez, Louise, you’ve got this covered. But no, lots of things here. I’m going to spend just a few minutes, not even, because you spent all my minutes talking. Do you ever sense that, all of our listeners out there, that sometimes he and I get a little jelly of each other when one of us takes all the oxygen out of the topic?

But look, when it comes to the data center, Intel’s got a few major opportunities and constraints. The opportunities and constraints are, one, what I talked about with the core: it’s returning to process leadership, and it’s execution. Delays have been the biggest thorn in its side; the recent Sapphire Rapids delay, even though it was a very brief one, was looked upon very negatively.

You’ve got a lot going on with both DPUs and acceleration. You saw on the client side some announcements about Intel and discrete GPUs, but in the space NVIDIA plays in, that company has been very successful. It has migrated from training to inference. Intel has some things, Ponte Vecchio and oneAPI, that it’s trying to build to become more the center of an AI acceleration GPU existence, but that’s a huge market space. And these architectures are going to be key to the company participating and competing in those spaces.

I like what you talked about with the edge. I like everything DPU. It’s coming in a number of different fashions though right now. And it’s going to be very competitive. Marvell executed very well in the DPU space. So Intel is going to definitely have to work very hard to compete there. And I have no doubt that it is capable of doing so.

And then of course you have to compete with what cloud providers are doing on their own. You look at things like AWS Nitro and what has been created in the public cloud to be able to do these things. But of course that’s for public cloud workloads. Intel is great not only in the public cloud and being a cloud partner, but also obviously to all of those on-prem data centers that are going to require compute power and all those OEMs that are going to build hardware to support that investment.

So good coverage, Pat. Write an article. If you don’t write an article, everybody just follow his tweets. I’m sure it’s in our show notes. He likes to share tweets. I don’t do that. So you’ll have to read my articles.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

Daniel is a 7x best-selling author, including his most recent book “Human/Machine,” and is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

