
The State of Persistent Memory with Intel’s Kristie Mann Part 1–Futurum Tech Podcast Interview Series

In this special edition episode of the Futurum Tech Podcast Interview Series, Daniel Newman welcomes Kristie Mann, Sr. Director of Product Management for Intel's Optane DC Persistent Memory products. Over the course of the last year, Futurum Research and Intel partnered on extensive research around memory in compute and the future of memory in the data center. Our society has come to depend on data consumption and creation for almost all aspects of our lives, and our data architecture needs to keep up. Intel's Optane, in the works for 10 years, was launched eight months ago. It merges the best of storage and memory to improve data accessibility and performance in today's data centers.
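For readers who want to see what "merging the best of storage and memory" looks like to software, here is a minimal sketch using libpmem from Intel's open-source Persistent Memory Development Kit (PMDK). The /mnt/pmem path is an assumption (a DAX-mounted persistent-memory filesystem), and error handling is trimmed for brevity.

```c
/* Minimal libpmem sketch (PMDK); build with: cc demo.c -lpmem
 * Assumes a DAX-capable filesystem is mounted at /mnt/pmem (illustrative path). */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create and map a 4 KiB file backed by persistent memory. */
    char *addr = pmem_map_file("/mnt/pmem/demo", 4096, PMEM_FILE_CREATE,
                               0666, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary byte-addressable stores -- no read()/write() syscalls involved. */
    strcpy(addr, "hello, persistent world");

    /* Make the stores durable across power loss. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);  /* true pmem: user-space cache flush */
    else
        pmem_msync(addr, mapped_len);    /* fallback on ordinary storage */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```

The is_pmem check is the interesting part: on real persistent memory, pmem_persist makes data durable with user-space cache flushes alone, which is what delivers storage-like persistence at memory-like latency.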

The world is changing and becoming more data-centric. The amount of data we are creating grows exponentially year over year, but the amount we are able to process isn't growing at the same rate. Kristie shared an interesting statistic: 90 percent of the world's data was generated in the last two years, but only 2 percent of it has been processed. Businesses that aren't able to take advantage of the massive data economy will fall behind.

Throughout the podcast, Kristie and Dan discussed issues impacting memory in business today, like the need for business leaders to proactively plan for tomorrow's data architecture, recognizing that data is the engine of a business in today's marketplace. That means thinking beyond step-function improvements, such as upgrading systems every few years, to planning years in advance and optimizing on an ongoing basis as tech changes. Doing so will help companies take full advantage of the massive amounts of data being produced today.

Kristie notes that data used to be a liability, but now it's an asset that can create value for businesses. Still, most of the data produced today isn't used because we don't have the technologies in place to pull value from it in a timely manner. Optane aims to bring the amount of data produced into greater alignment with the value companies are able to gain from it by making data instantly available for processing.
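Later in the interview, Kristie describes this as a second tier of memory: "hot" data stays in DRAM while "warm," capacity-hungry data lives in Optane. A minimal sketch of that idea using Intel's open-source memkind allocation library follows. It assumes the persistent memory has been exposed to the operating system as a system-RAM NUMA node (KMEM DAX), and the hot/warm split shown is purely illustrative.

```c
/* Two-tier allocation sketch using the memkind library; build with: cc tiers.c -lmemkind
 * Assumes persistent memory is exposed as a system-RAM NUMA node (KMEM DAX). */
#include <memkind.h>
#include <stdio.h>

int main(void)
{
    /* Hot, latency-sensitive structure stays in DRAM. */
    double *hot_index = memkind_malloc(MEMKIND_DEFAULT, 1024 * sizeof(double));

    /* Warm, capacity-hungry structure goes to the larger Optane tier. */
    double *warm_table = memkind_malloc(MEMKIND_DAX_KMEM,
                                        (size_t)64 * 1024 * 1024 * sizeof(double));

    if (hot_index == NULL || warm_table == NULL) {
        fprintf(stderr, "allocation failed (is KMEM DAX configured?)\n");
        return 1;
    }

    /* From here on, both tiers are ordinary byte-addressable memory. */
    hot_index[0]  = 1.0;
    warm_table[0] = 2.0;

    memkind_free(MEMKIND_DEFAULT, hot_index);
    memkind_free(MEMKIND_DAX_KMEM, warm_table);
    return 0;
}
```

The point of the sketch is that the application, not the hardware, decides which data earns DRAM; everything else gets near-DRAM access to a much larger, cheaper tier.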

Optane is still in its early phases, but Kristie notes that more than 500 POCs are already under way, and Intel is seeing a conversion rate above 90 percent, with significant cost savings and performance improvements reported by users.

According to Kristie, a number of challenges still exist in pushing Optane to the forefront of memory in the data center. First and foremost, many companies simply don't have the time or resources to try new technologies like Optane, so there's the problem of overcoming the "momentum behind the status quo" that exists in business today. Second, Optane requires a full ecosystem of partners to work to its fullest potential, and right now many of those partners are still in development.

Want to know more? Check out our recent study, The State of Persistent Memory, which forecasts how businesses will use persistent memory. The study, created in collaboration with Intel, includes useful data and information that will help businesses navigate the years to come. Also be sure to tune into Part 2 of this interview coming later this month.

Transcript:

Daniel Newman: Welcome to this special edition of FTP, the Futurum Tech Podcast. I’m Daniel Newman, Principal Analyst of Futurum Research and your host for this edition and I am excited to be joined by Kristie Mann of Intel.

Now, before we have Kristie join the show, I do want to let everybody know that this special edition is being done in partnership with Intel and this show, of course, is for information and entertainment purposes only. We are not providing any financial advice, so do not buy stock because of anything you hear in this show, but tune in, listen in because there is a lot here to learn and we’re going to have a great conversation here with Intel and Kristie Mann.

So moving into this, Kristie, are you there, and welcome aboard on this week’s edition of the Futurum Tech Podcast Interview Series.

Kristie Mann: Hi, Daniel. I’m here and I’m so glad to be here.

Daniel Newman: Yeah, I really do appreciate you checking in. Now, for everyone out there, Futurum Research partnered up with Intel over the course of the latter half of 2018 and 2019 to do some extensive research around the topic of memory. When I say memory and Intel together, hopefully, you know I’m talking about the memory in compute and not being able to remember where you placed your phone or your keys. But we did a big study specifically looking at nonvolatile memory and the technology that’s really being put into place to help bring data closer to where compute happens.

Kristie happened to be the executive sponsor of the project, so I’m really excited to have you here. But better than I could ever do to introduce you, can you go ahead and tell everybody out there that’s listening in on this edition of the Futurum Tech Podcast, tell them about yourself, your role at Intel and a little bit more about the work you’re doing there?

Kristie Mann: Sure. You got it. I’m the director of product management here at Intel for Intel’s Optane DC Persistent Memory product. I manage the product managers, the marketing teams and the solution development organization for this amazing new product category. I always tell my kids this means that I come to work each day and teach people about an amazing new technology and how it’ll transform the data center. They always seem to be a little less impressed than I think they should be.

Daniel Newman: Yeah, I talk about that sometimes with my kids, too. As a technology analyst, I get to play with all kinds of cool toys and tech and my kids are oddly unimpressed with it. You think they’d think it’s really cool, but I guess it’s so ubiquitous for them that that’s just sort of an expectation. Like, “What do you mean you don’t do cool stuff with technology? Look at what we’re doing.”

Kristie Mann: I know.

Daniel Newman: But, no, it sounds like a really cool role. Obviously, we spend a ton of time looking at computer architectures, including memory, and what Intel is doing is really special and really important. So I'm hoping to take the next 15 minutes and just talk to you a little bit about it for everyone out there.

I'm excited that I'm not only going to actually get to do this with you once, but I have lined you up twice for two different conversations. So you'll definitely want to tune back in next month because we'll have a little bit more from Kristie.

But I wanted to start this one talking a little bit broader than just about the memory. We will dig into that as it continues, but Intel as a whole has historically been known more as a microprocessor company. So even your role alone says a lot to this, but talk a little bit about how that is changing.

Kristie Mann: Yeah, absolutely. Intel really grew up by becoming a superpower in processor development and manufacturing. But as our data centers have developed, the complexity of our data center architectures has become immense and our society has come to depend on data consumption and creation for almost all aspects of our lives.

So to support these changing data requirements and architectures, Intel is really shifting our company focus from being a microprocessor company to a data-centric company, and you see that across all aspects of our strategy. We want to be there to provide that silicon foundation that allows our customers to analyze their data no matter where it sits, from the data center to the edge. To do that we need to move data faster, store more data and process everything.

So you see us investing in silicon and solutions to deliver the performance that'll unleash that data for our customers. You see us investing in networking technologies, edge technologies, accelerator technologies, fabrics, client devices, software and even automation companies.

Daniel Newman: Yeah, there's been so much investment. From where I sit analyzing it this week, the company is announcing new custom ASICs built for AI training and inference and expanding edge AI, so AI is definitely a big focus.

In order to do that, though, so much depends on the architecture in the data center and the connectivity and networking that allow the data center and the edge to connect. So up and down the stack, Intel is definitely contributing and participating in ways that go beyond the Intel Inside that so many people think about with a laptop or a tablet. The company is doing so much more, so it's always really good to have a chance to share that vision.

One of the key terms or trends that Intel is really building on, and that actually came out of your data-centric event, is this term data-centric, rather than data center. As you're seeing it, what are some of the trends that are really driving the need for that more data-centric approach?

Kristie Mann: Sure. I think that enterprises really need to realize that the world is changing and it's not a step function. The example that I use is that just within my lifetime, and we won't talk about how old we are, we've gone from paper maps, where there was really no data being generated or consumed, to a world where my Tesla drives me when I need it to and we're actively generating data and using data in real time in our day-to-day activities.

This is really just the beginning. The amount of data that we're seeing every day is growing immensely. A statistic that I hear Nevine use a lot is that in the last two years, 90% of the world's data was generated, and only 2% of that data has been processed. And it's a shame. The businesses that aren't able to quickly adapt and make use of that data are going to be left behind. So we need to be thinking about how we can use all the tools in our toolbox to take advantage of this massive data economy.

The equation used to be buy your hardware and then upgrade every three years. But now we have to be smart about architecting our data center to balance and optimize all the tools that are available. Do we put it in the cloud? Do we go hybrid? Do we use CPUs, GPUs, accelerators? How do we orchestrate all these pieces? And it’s just become very complex. Businesses have to ensure that the tools are sharp and that they’re utilizing them effectively. So I always tell my customers and my partners, we have to start now, realize that software optimization and investment in the future is the key to really stitching together all of these pieces.

Daniel Newman: Yeah, I think you said that really well. I’m not sure I could have even said it better myself. It really is all about tying together not just where we are today, but really where we want to go and how we want to set up… I use the term modern IT architecture, much more flexible. We need to be able to handle these hybrid workloads, migrating between on-prem, cloud. Data is going to be in motion. We need to be secure. We need to have the right data available for quick access in the right place so that it can basically support enterprise productivity.

Because in the end, that’s really what it’s all about. It’s not really about how the tech works. It’s about how people work and the tech enables that.

Kristie Mann: Exactly, yeah.

Daniel Newman: So with that in mind, then let’s switch from the trends in the tech and talk a little bit about the enterprises. When it comes to the enterprise and what the CIOs and the CEOs and these executives are thinking about when it relates to the ability to be more data-centric, what are your thoughts there?

Kristie Mann: Well, and that’s really a little bit about what I was talking about. I mean they have to be thinking now three years ahead. As we see all of the data-centric thinking, it used to be focused so much on the core compute and the blogs. But now if you think about how you get the insights from your data, we have to be thinking about how we use analytics to harness that data. And where data used to be a liability to us, we just stored it and kept it for future use, it now becomes an asset. So what are we doing to architect our data centers so that we can be making real-time decisions based on that data?

I use the example of real-time fraud detection with a credit card company. What's it worth to that company if they're able to architect their compute solutions and their data-centric solution so that the minute fraud happens, they're able to detect it, rather than catching it an hour or a day after it happens? If you can catch it while it's happening, you can shut it down, stop it and minimize the loss that today amounts to billions of dollars. So-

Daniel Newman: Yeah.

Kristie Mann: Oh, go ahead.

Daniel Newman: No, no. Actually, I just really liked that. I was thinking to myself, these are really micro-innovations in a lot of ways. It's these small deltas in strategy and changes of architecture, like how data is encrypted in flight, that can make the difference and save an immense amount of dollars. And if companies just think a little bit more strategically and partner a little more deeply, they can architect something that's going to be that much safer.

Kristie Mann: Yeah, absolutely.

Daniel Newman: I don’t know if I cut you off, did you have anything else you wanted to add on that one?

Kristie Mann: No, I guess to tie it all together, the statement I would make is that when you start thinking about data being the core engine of your innovation and the core engine of your business, it changes the way you think about architecting your compute solutions. So I think if you start with that as the basis of how you optimize and how you invest, it might change the conversation a little bit. So the advice that I give to the CIOs and the enterprises I work with is to really think about data being at the center of the universe, or data-centric, which is where we came up with that terminology, and you'll be making the right level of investment.

Daniel Newman: Basically, you build your compute around what you anticipate and desire to accomplish with the data at your disposal and the data you anticipate having in the future, and most people can't even accurately predict just how significant that volume of data is going to be.

Now I'm going to pivot a little bit here, Kristie, because I want to talk. We started off, I mentioned the study, and you talked about how you focus on Optane DC Persistent Memory. Now it's a mouthful, so sometimes I just call it Optane Persistent Memory. Don't hold it against me. The brand police might get me, but I think just to show that we're all human, the term is a mouthful, but the technology is really, really important and this is what you focus on deeply. So talk a little bit about what you're building and, when it comes to memory, what Intel is building to drive this data-centric enterprise that you're so passionate about.

Kristie Mann: Yeah. You've asked a great question that I love to answer. You're going to start me talking about my favorite topic, and that's persistent memory. Intel has been working hard, it's been a 10-year journey, but we're working to build out a portfolio of products to fill in the gap in what we call the traditional memory-storage hierarchy. I know you and I discussed this at VMworld last year. Large gaps in performance and cost have existed between various memory and storage technologies for over 40 years. But at Intel we're introducing a portfolio of Optane-based memory and storage products to help fill those gaps so that software developers and architects have the ability to better optimize for these huge, growing datasets.

That brings me to my favorite offering, which is the persistent memory. This is a product that takes the best elements of memory, that's the speed and the byte addressability, and the best elements of storage, that's the persistence and the capacity. You combine them, provide it at an affordable price, and that becomes a game-changer. It leads to significant overall solution savings at the data center level. It's just amazing.

It's actually a memory DIMM. It's physically and electrically compatible with DDR4 memory DIMMs, but it's built using 3D XPoint technology as the media, rather than DRAM. So with this new category, the previously impossible is now possible. It allows much larger datasets in memory right next to the processor on the DDR4 bus, real-time analytics at large scale. I mentioned the real-time fraud detection, faster financial trading, real-time video recommendation. I don't know if you've heard of TikTok, if you have kids. I know you have.

Daniel Newman: Oh yeah.

Kristie Mann: I know, right? Better automated driving technology. I mean, it's just amazing the things that you can do now that you can have these huge datasets right there next to the processor.

Daniel Newman: Yeah, I think that's the key. And for anybody out there, we have a diverse listening audience from very technical to business-savvy, but bringing the data closer to the compute is a key and critical opportunity. It's kind of like hot and cold storage: when data is in cold storage, it's not immediately accessible, and the resources it takes to make it accessible are more complex, right?

Obviously, Kristie, you're the expert here, so correct me if I'm wrong and please feel free to elaborate, but essentially by having the data in memory, it makes it instantly available to the application so that it can be accessed. So in database-driven applications, ERPs and CRMs and other applications critical to running the business, it really does provide that uptime, that real time, and it's going to be so important as you're tying it to things like inference, for instance, for AI.

Kristie Mann: Yeah, and you got it exactly right. I actually love the analogy that you used for storage where you talk about cold and warm data in storage. We’re kind of doing the same thing with memory. We now have two tiers of memory. We have the hot and the warm memory, so I love that analogy and you got it exactly right.

Daniel Newman: Awesome. I love that. You hear that? I’m quoting that out. That’s going to be the title quote for the social media share when I share this post out, Kristie.

I got one more question for you, and thanks for bearing with me. You know, on this show we talked a little more broadly about Intel and the data-centric approach. The next time I have you on, by the way, I'm going to hit you a lot harder on Optane because I do think this is a super-interesting technology.

But I do have one really Optane-specific question I want to ask you on this episode, and that's around what's happening with Optane Persistent Memory. It's clearly gaining momentum in enterprise applications. But how is it being received? What have been the challenges? You said it was a 10-year process, so clearly this wasn't built overnight, and market penetration doesn't come at huge volumes straightaway. You're doing the work. How's that going?

Kristie Mann: Yeah. It’s been a long journey. It took us 10 years to get here and April is when we launched, so we’re about eight months in. It’s a very exciting time for us and I would say that, by and large, we’ve had great interest and reception in the market, but we’re still very early in the journey.

Some of the key metrics that we look at are how many PoCs we have going and how those PoCs are turning into production. At this point in time, we have over 500 PoCs underway, covering multiple industries and vertical markets. Of these PoCs, a hundred have completed, and I'm really excited to say that we're seeing a very high conversion rate. It's greater than 90%, and for a new technology that's phenomenal.

The largest adoption that we've been seeing has been in cloud and financial services: 26% in cloud, 25% in financial services. For cloud, it's been primarily internal usages, like search and recommendation. Some of them have been in services like large memory or SAP HANA. After that, we're seeing several enterprises looking to do one of maybe two things. Either they're looking for TCO savings through infrastructure consolidation or they're looking for faster analytics or IO storage. So, by and large, really stellar results for most of these customer trials and adopters. We're typically seeing TCO savings between 30 and 50%, or performance improvements between 1.5X and 8X, depending on what you're measuring.

But you also asked about the challenges to adoption and growth and I think that’s the other part of the story that we really should discuss. I think it’s important to understand that every enterprise is challenged to find time and resources to try new technologies and it’s always natural for customers to say, “I’ll just wait until everyone is using it.”

I think one of my favorite quotes came from a CIO at a manufacturing company who said, “Kristie, no one ever got fired for using DRAM.” And it really struck me because even if there is a pain point that we can address, whether it’s costs, capacity, second tier of memory, high availability, data security, we still have to overcome the momentum behind the status quo.

I think the other challenge we're facing is just ecosystem readiness. Because this technology is more than just a commodity DIMM, we rely on a full ecosystem of partners for ease of adoption in the enterprise. We need our OEM partners to offer systems designed around the memory, and they've only just become available in the last few months. Then, to get the benefits of persistent memory, you really need software optimization. We have a large and growing ecosystem of software and OS partners, but not every application out there has finished that optimization yet. This is just going to take time and investment by the entire industry.

I guess I’ll wrap it up by saying that in our first six months we’ve done amazing things. We’re thrilled with the momentum and the penetration, but we are just beginning this amazing journey.

Daniel Newman: I think you hit on a couple of things. Our research and what you're finding in the real world are matching very closely. It's these very database-intensive applications that are driving the adoption. It's precisely what we talked about: the critical nature of having large datasets that can be accessed by compute very quickly and rapidly for analytics.

On the challenge side, my analysis is you will scale with Xeon Scalable because, obviously, there's that tight integration, and for Optane DC Persistent Memory to be adopted, it needs to be in applications where Xeon Scalable has been adopted. Is that correct? Just double-checking my math.

Kristie Mann: It does. It only pairs with Cascade Lake Xeon Scalable processors to date, too, so that is another limiter.

Daniel Newman: Correct. And thank you for the Cascade Lake point, I'd missed that. But with Cascade Lake's growth, it's obviously going to help Persistent Memory grow more naturally because it integrates so seamlessly. It just has to be aligned to where that adoption has already happened.

Kristie Mann: Yes.

Daniel Newman: That’s really, really valuable, Kristie, and I want to thank you for joining me today on this episode of Futurum Tech Podcast interview series. I think the whole shift in data-centric architecture is really, really an important one and it’s important for companies to consider.

And it's very neat to talk to someone who's working deeply in one of the particular technology sectors within that data-centric strategy. Sometimes things like processors and CPUs and memory cards aren't "the sexy technology," but they really are so critical. If you're an ITDM, if you're driving strategy, if you're a company that's building your infrastructure around data, these are critical considerations that you're thinking about. What is the best memory technology? Where could a nonvolatile memory technology potentially help us get more from our applications, be more efficient with our compute and be more productive with our workforce? So very, very helpful.

I look forward to having you back on the show very, very soon, Kristie. Thank you so much.

Kristie Mann: Yes. Thank you for having me.

Daniel Newman: And that wraps up this edition of the Futurum Tech Podcast interview series with Kristie Mann, director of product management for Intel Optane DC Persistent Memory. I had that memorized in real time. I want to thank everybody for tuning in. Hit that Subscribe button, join us again. We will be back really soon with more episodes, more interviews, talking to some really, really smart people. But for now, I got to go, so thank you very much for joining and we will talk to you soon.

Thank you for joining us on this week’s Futurum Tech Podcast, The Interview Series. Please be sure to subscribe to us on iTunes and stay with us each and every week as we bring more interviews and more shows from our weekly Futurum Tech Podcast.

Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Thank you to Intel for sponsoring this edition of the Futurum Tech Podcast.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared by CNBC, Bloomberg, the Wall Street Journal and hundreds of other outlets around the world.

A seven-time best-selling author, most recently of "Human/Machine," Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
