Data Protection is Driving New Business Opportunities – A Futurum Tech Webcast Interview
by Daniel Newman | December 14, 2022

On this episode of the Futurum Tech Webcast – Interview Series, I am joined by Brian Richardson, Security Marketing Lead for Intel Data Center & AI (DCAI) Marketing Product Strategy. Our conversation takes a look at the security market and what enterprise IT leaders need to be aware of for the future.

In our conversation, we discussed the following:

  • What role Intel is playing with security in today’s business market
  • How Intel’s approach to security is influenced by their mindset, technology, and assurance policy
  • How Intel is contributing to the global ecosystem
  • Intel’s latest innovations
  • Recommendations for enterprise IT leaders for the future

It was a great conversation on a timely topic, and one you won’t want to miss. To learn more about Intel, check out their website here.

This webcast is sponsored by Intel.

Watch the full video of my conversation with Brian here:

Or stream the audio here:

If you’ve not yet subscribed to the Futurum Tech Webcast, hit the ‘subscribe’ button while you’re there and you won’t miss an episode.


Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.


Daniel Newman: Hey, everyone. Welcome back to another episode of the Futurum Tech podcast. I am your host, Daniel Newman, principal analyst and founding partner at Futurum Research. I'm excited for this interview series, brought to you in partnership with Intel. We're going to be talking to Brian Richardson today about something near and dear to my heart, and it should be near and dear to all of yours: security. But before we jump into this topic, let's go ahead and bring our guest onto the show. Brian Richardson comes from Intel, super interesting guy. I had the chance to talk with him a little bit backstage and learn about his thoughts on security, and that's exactly why we decided to bring him out to all of you. So Brian, welcome to the show. How are you doing?

Brian Richardson: Pretty good, thanks for having me on. By on, I mean on your laptop because we’re not in the same time zone, I think, much less room.

Daniel Newman: Thanks for inviting me to be a square on your screen.

Brian Richardson: It’s just rectangles all the way down.

Daniel Newman: Thanks for not Zoom bombing me. Thanks for getting permission to send these packets. We could talk about all that stuff. Brian, I guess first off, before we jump in, quick introduction and tell everyone about yourself and your role at Intel.

Brian Richardson: Right now, at Intel, I am a marketing strategist for security in the Data Center and AI group. We call it DCAI for short, we love our acronyms at Intel. Prior to that, I worked in general marketing for security, and I refer to myself as a reformed BIOS guy. I've done firmware since … I don't want to say exactly when, but I did actually write assembly code for money, so that'll give you a sense of where I was in that part of the industry. I learned about security as I progressed through firmware evangelism and open source projects related to firmware, like TianoCore and the Open Compute Project (OCP), and have drifted into more purely security marketing roles, looking at the full picture of what we're offering to customers, how we describe the different security problems to them, and how they can actually get solutions for those issues.

Daniel Newman: Well, just note that I’ve got bias for people that have BIOS experience, so …

Brian Richardson: Yeah.

Daniel Newman: I practiced that one before we moved on.

Brian Richardson: Nice work. I have hit delete far too many times in my life.

Daniel Newman: Oh my gosh. Okay. I've got this thesis right now that as we hit this slightly more difficult economic period, Brian, we're going to see security come more and more into focus. It's one of those things that just can't not be in focus, so let's start with the macro. In your view, you're in the space, you've been in the space, how do you see the overall security market? What's happening in this space today and in this current business environment?

Brian Richardson: Yeah, if we look at trends in security, I think the thing we're seeing overall is that even as spend might go down for a lot of capital investment in the general compute market, I don't think a lot of companies are in a position to reduce or dramatically cut security spending. Now, a lot of these numbers come from earlier studies, but if we look at cyber-attacks in general, we're looking at what's now a cybercrime economy. I didn't coin that term myself. If you look at other research companies, Mainstream Technologies for example, they estimate that cybercrime economy at about $1.2 trillion in 2022 alone, and that is, of course, a rough estimate. This is the amount of money that people are making off of attacks.

Now we’re looking at cyber security being something that a company has to continue investment in even when they’re looking at potential redirects and spending in other areas. The other thing that we’ve looked at from a macro trend, along with the fact that most companies are looking at increasing their security spend, increasing their hiring of security talent, is that even if they keep everything up to date, a software approach alone isn’t really cutting it. I pulled up a SOFO story from 2019. They’re looking about 75% of companies that were attacked by ransomware. They were running what they considered to be up-to-date security protocols. They had their malware protection, antivirus protection up to date. People are going beyond just the normal software style attacks.

Now, once somebody gets into a network, and we work with CrowdStrike a lot here, CrowdStrike has looked at the progression. Once you get inside of an infrastructure, inside of a firewall, inside of a VPN structure, you're looking at roughly two hours or less to move laterally from the first infected device to a second infected device. Those kinds of trends are showing that whether you're doing things on premises, working at home, working in the office, or in the cloud, there is a larger need for security overall within an industry, within an ecosystem. It's motivated a lot by the financial side of the attacks. The cool hacking of the '80s, like, "Let's just dial up a bunch of random numbers, see what the modem connects to, play around with BBSs, war-drive in people's driveways to figure out if I can get on their Wi-Fi network," which is a thing that friends of mine did in the early 2000s, has now moved on to entire attack-as-a-service infrastructures that we have to defend ourselves against.

Daniel Newman: Yeah, there’s a ton of just visibility to the impacts of security and it’s really interesting, as you mentioned about a multi-trillion, you said this is the amount that people are making. I mean, it’s crazy that that’s, A, the case, and B, the other word could be stealing, misappropriating. I mean, this is money that’s supposed to be going one place and these hackers have been basically moving this much … By the way, this expands and changes and creates real impacts to the broader economy and we’ve entered an era now where people don’t … well, people do, but it’s not about pulling up in your unmarked white van or the bank anymore. It’s literally with masks on and guns. I know it’s Halloween time at the time of year, but it’s really now where most crime is being committed behind keyboards in dark rooms or bright rooms in the middle of days with high speed connectivity and people that are finding vulnerabilities.

That’s something that a company like Intel, when you’re building CPUs that are running servers and that businesses are dependent upon, it certainly has to be top of mind for you, Brian. I guess talk about that, because people think of Intel, they think about their PCs, they think about it with the CPs and the data center. I think obviously people understand there’s a security implication, but I’m not sure I understand just how important the role of a company like Intel is in security. Can you just give us a little bit of color on that?

Brian Richardson: Yeah, we think about security quite a lot more than people honestly give us credit for in some areas, whether it's the types of features our customers need or the regulations we have to comply with, both as a company ourselves and also what our customers have to inherit and turn into products. We're the base of a product for multiple markets, whether we're talking about edge devices and the Internet of Things, your traditional client gaming PC and laptop environment, or data center, cloud, hyperscale, and high performance computing. There's a security element in all of that, and in marketing I try to be a little bit of an optimist: I don't look at this as a problem, I look at it as an opportunity for a solution. Data protection really drives the way we think about the design of our products and the opportunities we're looking for our customers to get into.

If we look at this from the security standpoint at Intel, we look at it as a security mindset across design, technology development, the actual ways our technology or its features get surfaced to a customer, and then the assurance and cybersecurity part of it. From a mindset standpoint, we're doing a lot of things in the design of our products. Starting with the simulations that years later will turn into, as I call it, a magic rectangle made of magic sand, those processors start out as RTL, as simulations, as a design pipeline on the back of a napkin six years previous. Those ideas have to be developed securely as they go through.

Whether we’re isolating our own IP or third party IPs that get incorporated into a design for a customer, whether we’re looking at the way that we can update software in the field after something comes out to the way that we essentially firewall parts of the system from other parts of the system so we can perform proper isolation, those things get extensively designed and tested internally. We do have our own red teams. We also work with a lot of the top researchers through bug bounty programs or Project Circuit Breaker, which is like an enhanced bug bounty program that we work with select folks on pre-release products, time-bound bug bounties for instance. We’re doing that as part of internal and external validation before we even release something into the market. The technologies themselves are the way that you get value out of that, so how do we design a technology of customers going to want and need and that’s a lot of work with our OEM and ODM designers, a lot of work with maybe customers two levels out the door.

The old market of designing specifically for a hardware manufacturer doesn't exist anymore. Now we design more in concert with our end customers, maybe with the hyperscalers that are buying our processors or integrating them into a hardware solution. The assurance and cybersecurity part, I think, is a big piece people miss as well, because once you buy an Intel processor, let's take a Xeon for instance and put it into a traditional cloud service rack, they're going to run that for between three and seven years, basically getting as much capital investment value out of it until all the smoke comes out. In that kind of environment, you have to keep those systems up to date, and with the firmware and software components on these platforms, we're still continuously putting out updates through a long part of that system life cycle.

At some point, if there is a vulnerability, we work with the industry to resolve it through the traditional CVE, bug bounty, and reporting processes, and we have a mechanism to keep those customers up to date so that a machine, even if it's several years old, is still keeping up with current security trends. We're trying to make sure we do that on a cadence. We're all used to Patch Tuesday, for instance. We have a similar system with our product updates. It's not every Tuesday, but there's a cycle our vendors can depend on so they have a chance to validate everything and put it out into the world before all the disclosures come out. There's an entire life cycle that we're looking at as one of the core component manufacturers for the industry.

Daniel Newman: It’s really important that you point that out because you have so many interdependences in your ecosystem, companies that … Intel Inside. I mean, I know that’s not the main marketing campaign anymore necessarily, but Intel legitimately, whether it’s in a hyperscale cloud, whether it’s in a private data center for an enterprise, whether it’s in someone’s commercial or personal use PC, Intel’s architectures are dependent upon and whether that’s running applications, whether that’s at the processor level, whether that’s part of the system on a chip, they depend on your company to be continuously on top of the security challenges and making the meaningful upgrades.

Talk about it through that lens. Obviously, Intel has to, first, be super focused on building secure products and services for its customers, and second, you have to focus on doing it for your ecosystem. Talk about that inside-out approach. How do you start with Intel and make sure you're building the most stringently secure products on the planet, and then how do you make sure that your ecosystem can trust that that's coming from you?

Brian Richardson: Yeah, I think the thing we try to do when we start a design is, one, it's not as insular as you think. It's not necessarily just six engineers sitting down in a cafe and writing something on the back of a napkin. There's a lot of customer input and feedback that comes into that. The ideas might be internally generated, they may be generated by looking at a trend, or they might come from talking to customers and saying, "Hey, you know what? These six or seven customers all have the same problem. They all need to isolate data a certain way. They all need to provide a root of trust for their systems in a certain way," and that generates the initial thinking of how we develop a product. But the entire thinking we have is that once we create that hardware capability, how does somebody unlock it through soft … I'm sorry, let me say that again.

Once we come up with that technical capability, once we decide, "Hey, we're going to make this cool technology," how do we get the hardware's value proven through software? Because if the software can't actually do anything with the hardware itself, then you've just created a space heater. That's an ecological problem, which is another entire podcast series to talk about. But we're looking at things like, "Hey, what do we already have in a system that might, say, help malware be detected early?" If people are trying to sneak around the operating system to install their malware, then how can we as the processor provider give information to your endpoint detection that says, "Hey, you should check out this process. It's doing something kind of fishy."

Because at the low hardware level we can do things like, with our threat detection technology on our client platforms for instance, look at the CPU processing telemetry, the same kind of information we use in debugging, and look at the pattern it's running and say, "Well, that looks like encryption. Is it the good encryption where you keep your files safe at rest? Or is it the kind of encryption somebody does when they're about to lock your entire system out through ransomware?" That kind of hardware-software cooperation through ecosystem companies is one way to get that value out. We're also looking at things like accelerating workloads. Greg Lavender, who's our CTO, talks a lot about post-quantum cryptography, meaning that at some point, and it's not a fixed point in time like Y2K, which we previously dealt with, quantum computing is going to get so available that it will be able to very quickly break weaker encryption keys.
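As a side note, the kind of pattern-based detection Brian describes can be made concrete with a toy sketch. This is my illustration, not Intel's actual Threat Detection Technology or any real telemetry API; it simply flags a stream of write buffers whose contents are consistently near-random, which is characteristic of bulk encryption:

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def looks_like_bulk_encryption(buffers, threshold=7.5):
    """Flag a write stream whose payloads are consistently near-random.

    Encrypted or compressed output sits close to 8 bits/byte of entropy;
    typical documents and source code are far lower. Real detection via
    CPU telemetry uses much richer signals than this single heuristic.
    """
    return all(shannon_entropy(buf) > threshold for buf in buffers)

# A run of plain-text writes vs. a run of random ("encrypted-looking") writes.
plain = [b"quarterly report draft, revision 3\n" * 100] * 4
random_like = [os.urandom(4096) for _ in range(4)]

print(looks_like_bulk_encryption(plain))        # False
print(looks_like_bulk_encryption(random_like))  # True
```

The point of the sketch is the division of labor: the low-level observer only reports a statistical signal, and the endpoint-detection software decides whether "this process is suddenly emitting high-entropy writes" is good encryption or ransomware.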

Now we want our customers to up their encryption strength. That means more compute resources, so how do we help them cut down on the overhead that may take up? Maybe that's offloaded processing through a network accelerator, or through a special set of instructions that handle matrix multiplication faster and are specific to certain kinds of workloads you can push off without burdening the main CPU. One of the larger trends we look at, which has a huge ecosystem behind it right now, is confidential computing. As more companies want to move things to the cloud, software developers now have a problem: they have data that needs to be isolated in some way. There may be a technical isolation they need to perform through hardware isolation and attestation. There might be a regulatory one.

A lot of the things that we’re doing, a lot of the trends we’re seeing in security are not just a cooperation between us and the developer ecosystem, it’s also between the ecosystem of generating regulations. If a lot of people hear the term BIOS and think, “Well, that’s one of my favorite four letter words in computing,” so is GDPR.” There are a number of regulations that we have to help our customers comply with and help our ecosystem providers, whether they’re ISVs system integrators, the end result is they need to not only perform a service to provide the privacy technology, but also the compliance to prove that it matches the regulations and whatever geospace that they’re operating in.

Daniel Newman: Okay. Brian, first of all, I'm glad you touched on confidential computing because I'm going to want to talk about that a little more in this particular question. I think you make a good case for the way the company approaches the ecosystem, and I think it is sort of leading by example: Intel leads with its own strengths and then, of course, works closely with its partners to make sure you're building, A, economies of scale and, B, the most secure products that they can then bring to market jointly. That's, by the way, a lot of how Intel's built scale. You build scale by having these partners that implement your technology into their portfolios of products, and that moves a lot of volume. But on the innovation front, this is that million dollar question. I'm going to throw you the softball. You got the bat? I don't know, do you play baseball? Doesn't matter.

Brian Richardson: Keep in mind, I used to write assembly code for money, so maybe team sports were not my first priority.

Daniel Newman: Tennis? Badminton? But in all seriousness, I'm going to lob it over and give you the chance to peg the ball out of the park here. Talk about your innovation a little bit, because that's one of the things I found most interesting as I was going through all the security innovation. When I look at accelerators, I often look at speeding up workloads. I look at speeding up training, speeding up inference. I don't always think about security. That's why I was super excited to talk to you. Talk about the innovation you're making in that particular space to enable all this delivery, all this scale, for both Intel and your partners.

Brian Richardson: When you look at security, there’s a couple of different problem sets and it’s a big world so I’m just going to focus on the data center stuff because I think that if we get too far into all the different security scenarios … how long is this podcast?

Daniel Newman: Fair. Not that long.

Brian Richardson: So, let’s just look at data center. Most of your data center problems come with trying to figure out an idea called a trust boundary. So a trust boundary, let’s take an example that maybe not everybody has had in the past couple of years but is getting back into, which is travel. Look at an airport as a trust boundary. There’s a certain area where everybody can hang out. You’ve got parking lots, you’ve got lobbies, you’ve got the weird Uber/Lyft pickup, which they moved from the actual airport to the fourth floor of a parking deck somewhere offsite to cut down on the number of Priuses assist circling the airport. Cool.

Now you’ve got the trust boundary inside which is I need an airline ticket and ID and the TSA background check to go through here and then you have some people with elevated privileges. They’ve got clear or pre-check, maybe they get through some boundaries a little bit faster because they have some additional credentials that puts them inside of this new trust boundary, which is beyond the security gates. Now you have an additional trust boundary where now I want to get on a plane. Okay, there’s a re-verification of that, re-verify the same ticket, re-verify the ID. Maybe an airline has a photo ID verification they used to speed that up. Just because you got on the plane doesn’t mean you necessarily get to fly, right? There’s an additional verification for the pilot, for the flight attendants, for the co-pilot, for the mechanic when they come off and on. Within this area you’re seeing that there are different trust boundaries, even if everybody hangs out on the same plane for instance and they’re now magically part of the cloud. They literally flew into a cloud.

That represents another metaphor for the cloud: once I'm on an instance, which part of this do I consider to be mission critical and private versus a more generalized part of this area? Once we start looking at companies that want to move their data into the metaphorical cloud, not the one with the airplanes, they have the same trust boundary issue. The traditional security model for data privacy in the '80s is: the computer is in a room, the room has a lock, and outside of that locked door there's a mean-looking person in a blue shirt with an iron-on badge and a stick. If somebody tries to get in the door and they're not supposed to be there, you hit them with the stick. I'm not advocating violence, by the way, I'm pretty much against that.

But that’s like a physical perimeter. I mean, I do take martial arts if you need some help with the training with a stick, I know a guy, but it’s all above board. Don’t worry about it. But the idea is that now we’ve traded some of that for, okay, maybe we don’t have the physical perimeter person, we have a badge, we have passwords, we have two factor authentication. We have different ways of handling that isolation. Once you’re on your own premises, once you move things into the cloud, now there’s a concern that you are … now you’ve changed your trust boundary and you’re relying on a third party to provide that trust. I’m not saying those companies aren’t trustworthy, but you’ve now changed your mentality from a security standpoint and the people that you cooperate with when you collect their data under those various GDPR, HIPAA, the new regulations coming out of China, a lot of the U.S. regulations and guidelines coming from NIST. Now you have to make sure that that data, if it falls into a certain category as personally identifiable information or PII, how do you segregate that?

That’s where we’re looking at these isolations within the cloud infrastructure. So even if you are the machine owner, I own Brian Cloud Incorporated, I need a way for my customers to maintain their own trust boundary within my own cloud. Even if somebody can pseudo on Brian Cloud Incorporated, they still can’t see what the customers’ data is. There’s a certain amount of isolation of that data from the rest of the system or attestation externally to say, “Yep, this is the instance I’m expecting. Somebody didn’t just pick up this VM from Brian Cloud incorporated and move it over to a machine outside of his trust boundary.” There’s a way to prove that this is the machine I’m expecting and that data on that machine is isolated from other parts of the system, other parts of the VM, the operating system, the firmware, so that it has its own little magic pilot’s cabin inside of that plane, inside of that cloud.

That’s the crux of confidential computing in general is providing that in a standardized way so that if I say … let’s say I’m using a consulting company to get my cloud application up and running, they can come up with a solution that works with the underlying isolation and hardware components and keep that separate so that they can comply with keeping a personal … that PII. That’s why I make the acronym, it’s hard to say, but that’s how we keep PII separate in those instances is we use that confidential competing concept, which is a hardware software cooperation. We built the hardware fences inside of the Xeon processors so that they are able to take advantage of it if it’s implemented at that software level.

Daniel Newman: This has been a super-hot topic because data in use is the next big frontier for security. We've talked about data in flight and at rest, and we've had that down, but in use has been a new set of vulnerabilities, and this is what you're set up to solve here with confidential computing. We are getting a little short on time, but I can make some time here because, Brian, I do want to talk a little bit more about SGX. That's another big one for Intel. Talk a little bit about the motivation and, if you could, maybe even more so, could you share some examples of how customers are putting that technology to use?

Brian Richardson: Yeah, so Intel SGX, which stands for Software Guard Extensions, is our fundamental confidential computing technology. It's honestly been the most researched and the most deployed in the industry, and it's the one that has the smallest trust boundary. Within the operating system, you will have applications taking advantage of SGX for its attestation and isolation capabilities. Those instances, once they're properly enrolled, attest back to an Intel SGX verification server and are isolated through hardware and IO isolation on that platform. With SGX, we're talking about the three different types of data: data at rest, data in use, and data in transit. At rest, we all know how to encrypt files we're keeping on our systems. In transit, we're using HTTPS right now to have this conversation, and you're probably using the same technology to view it from a known trusted server.

In use is where we really have the biggest problem, because you normally have to decrypt the data, basically lay it all out, and work with it within that compute space. SGX is the fundamental hardware technology Intel provides that allows software to take advantage of data-in-use protection. This is called a trusted enclave or trusted execution environment, so if you hear me use the term enclave, I'm referring to that TEE, the trusted execution environment we set up within a confidential computing enclave. The uses for this are interesting because they mostly show up where you've got heavy regulatory issues, when you are using personally identifiable information or sharing data between multiple parties.

Let’s take a couple of cases there. In general, if you look at it as a top level, most of your Intel SGX deployments are going to be places that handle a lot of regulated data. So healthcare, financial services, there’s a government level concept called sovereign cloud, which allows governments to operate, put their data into a cloud environment but keep that protected. You can think of a lot of places where even if it’s not, say, a state secret, there’s a lot of government data that’s personally identifiable, your social security number for instance. There’s ways of governments having a compliance space for that. The two that I think of the most is we’ve done some work with a Bosch, so they’re coming out with an automated driving system and they’re training that on what ends up being personally identifiable information.

When you drive down a street doing some kind of gathering of driving data, you're getting people's faces, you're getting their license plates, you're getting mailbox numbers, and that kind of stuff, especially in Europe where Bosch is headquartered, is PII under the GDPR definition. So they stratify that data, they separate it out like a Photoshop layer. The license plates and people's faces get moved out into a separate database, but they can't just blur those things out. If they blur out license plates and train the driving robot on that, the driving robot will just decide, "Okay, there's no such thing as a blurred number plate in the real world," and run into cars. I am not a smooth-faced person after all that assembly programming, so if you train it on smooth faces, people with visible worry lines like myself are just going to get flattened in the street by robot cars. Don't want that to happen.

One, I want a self-driving car. Two, I don't like getting hit by cars. Turns out my threat model is pretty ordinary. In those cases, when they recombine it, that's a data-in-use scenario, and they may want to share data between multiple instances. They may get their data from multiple providers, but they can't mix that data together in the same database, because, again, you end up with a sharing issue. So they can keep that data federated and train the model on it without running into compliance issues. Then, in medical research, we did some work a while back with UC San Francisco. There, again, they're gathering data from multiple hospitals, which in the U.S. has HIPAA concerns. There are other state and federal regulations that require them to protect that data a certain way, but they want to be able to put it together in a training database to look for trends and patterns in their medical research.

Using Intel SGX cuts down on the anonymization they have to use, because if they had to further abstract that data before they could put it into a shared pool, one, it's time consuming, two, it's expensive, and three, it creates a possibility that, by modifying the data, they might miss a pattern of some kind. So this allows them to do the kind of wide-scale research that comes up with very beneficial medical outcomes.

Daniel Newman: Yeah, I think the long and the short of it is there's an opportunity here, Brian. By the way, if you start to think about the bigger healthcare problem of why we have not been able to really use EMRs, and why people can go from hospital to hospital and their records cannot actually follow them, you start to think this is something that, as we modify the rules, as we look to the future of how the cloud, compute, and databases could enable universal systems like that to work together, could help, right? The reason, with HIPAA, is that there's certain data that cannot be exposed. There are very strict rules for that. However, if a patient could give permission and you knew that the data could move in transit, be in use, and stay secure, suddenly some of these problems we're having could be addressed. I mean, it's amazing, in the era that we live in, that you could be brought to an emergency room and a doctor cannot see your records for critical care in real time.

I mean, think about how crazy that is. Now, I’m not saying this is going to solve it, I’m just saying it could solve it. What great technology does not do is fix bureaucracy. It just doesn’t, and that’s a topic for another day. That’s something we could have a whole series of podcasts on. Brian, it’s been super helpful to have you here. I’ve kept you this long because this topic is so interesting to me. I’m a student of it, I’m passionate about it and, of course, like I said, I think security’s one of the most exciting opportunities in the market right now. For enterprise IT leaders, what are you recommending, what is Intel recommending that they focus on to be as secure as possible in the future?

Brian Richardson: Again, I’ll stick with more of the data center part because that’s where my brain’s at.

Daniel Newman: Yeah, absolutely. Goes without saying.

Brian Richardson: One is, a lot of people, especially with the economic changes, I’ll use the nice, polished words for the economic possibilities of the future, a lot of companies are looking at a transition from a capital expenditure to an operational expenditure model. They want to SaaSify all the things. Remember, the cloud is somebody else’s computer unless you do it wrong, and then it’s everybody else’s computer. You have to start thinking about your constraints when you start moving things to cloud or hyperscale. What are your sensitivities around data, what are your regulatory issues, and how do you actually figure out if you are compliant with all of the operational parameters, all the regulations, all the government constraints you’ve been handed in your particular industry? Second is, you really have to think about what hardware the services run on.

I mean, SaaS and the general concept of SaaS is great because it allows smaller companies to scale with a lower initial investment, but you don’t want to do that at the expense of security. Security is basically plumbing. Plumbing is not sexy. One of the hard parts of my job is that if I do my job well, nobody notices, because there aren’t a ton of problems to patch over. Think about it as a plumbing exercise. You have to lay in the base of your secure implementation, and that includes figuring out the right hardware. I do a lot of renovations on my house. My house is 50 years old. I’m fixing a lot of plumbing assumptions that the previous owners made that are incorrect, like the size of a drainpipe that probably should have been a little bit bigger. It’s harder to fix once it’s in place, and the same thing applies to security. The minute you notice that there is something bubbling up, it might be a little too late, and you should have thought about that more in the architecture.

But don’t think of this as a limitation. You need to look for new possibilities. Confidential computing is a good example. This is an opportunity for a solution to the data management problem. How can you take advantage of something like confidential computing? Do more possibility thinking. What could I do if I could scale my data? What could I do if moving to the cloud didn’t create compliance issues? Take that approach and engage earlier with Intel or whatever ecosystem partners you work with to figure out if that possibility is there and which providers within that ecosystem will help you solve that problem. Think about it as an infrastructure problem. You are building a bridge to a future opportunity. Make sure that bridge is stable. It’s not a temporary thing to get you across the chasm. It is the foundation for how you’re going to operate over the next few years. Because once you lay in this infrastructure, that’s an investment. You want to take advantage of that. So make sure it’s as secure as possible when you actually do the initial build.

Daniel Newman: We hit a lot there, Brian, and I like some of those analogies. The plumbing analogies should make a lot of sense to a lot of people, and I also like the fact that infrastructure is something we learn about here as a country every day. When you build roads that aren’t wide enough for the future, you end up with big, complicated projects to widen those roads later. When you put in drains that are too small, you end up with backed-up drains … and you can’t plant trees too close to the house either. So all these analogies I think you could apply to security, but I think one of the things that companies have to be thinking about is where and when security gets into focus for the business.

I think for too long it’s been sort of a minimum viable product approach, what’s the least secure we can be and get away with it. And I think in the midst of this rapid growth, of proliferation, of AI, of hacker sophistication, of zero trust being a board-level concern, partnering with companies that put security in focus, and that truly get security, is going to be a non-negotiable. So Brian, I want to thank you so much for joining me here on the Futurum Tech Podcast interview series. Great talking to you. There’s a ton of information here. Hope I can have you back again soon.

Brian Richardson: Yeah, happy to do it. I won’t have to travel far if we keep doing it in the rectangles, but hopefully we can do the next one sitting in some chairs.

Daniel Newman: Chairs or outside or surfboards. Not playing baseball.

Brian Richardson: Next to a bridge to illustrate all of our … we’ll find a nice bridge somewhere around Portland and-

Daniel Newman: It’ll be a metaphoric show with tons of visual support. Brian, thank you so much for joining the show. Hey, everybody out there, check out those show notes. We’re going to put in a bunch of links to some of the things that we talked about throughout the show, so you can learn more about Intel security and of course the DCAI business. They do love their acronyms, although as an industry tech analyst, I can tell you it’s not unique to Intel. We’d love to have you as part of our community. Hit that subscribe button, follow us on Twitter and on social across the board. Lots of great interviews like this one on the channel. But for now I’ve got to go. Thanks for sticking with us for this one. We’ll see you all really soon. Bye-bye now.


About the Author

Daniel Newman is the Chief Analyst of Futurum Research and the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise. Read Full Bio