A Deep Dive on Edge+Topologies – Futurum Tech Webcast Interview Series

On this special episode of the Futurum Tech Webcast – Interview Series, I am joined by Rhys Oxenham, Director, Field Product Management at Red Hat, for a conversation around one of our favorite topics: Edge+Topologies. This conversation is the final webcast in a four-part series with Red Hat.

In our conversation we discussed the following:

  • How Red Hat defines “The Edge”
  • An overview of how data is being created and saved within the cloud
  • An exploration into how Red Hat simplifies and creates processes that can be managed by enterprises
  • A dive into Red Hat Advanced Cluster Management for Kubernetes

It was a great conversation and one you don’t want to miss. Interested in learning more about Edge+Topologies? Be sure to read our latest research brief A Deep Dive into Edge+Topologies. Want to learn more about what Red Hat is doing with open-source edge computing? Check out our latest report — A Deep Dive on Edge+Topologies — done in collaboration with Red Hat.

Also, make sure you check out our first episode in this four-part series with Red Hat, A Deep Dive on Edge+Automation with Dafné Mendoza, our second episode, A Deep Dive on Edge+Scalability with Bradd Weidenbenner, and our third episode, A Deep Dive on Edge+Security with Ian Hood.

Don’t forget to hit subscribe down below so you won’t miss an episode.

Watch my interview with Rhys here:

 

Or listen on your favorite streaming platform here:

 


Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Daniel Newman: Rhys Oxenham, welcome to the show, how are you doing today?

Rhys Oxenham: I’m doing very well. Thank you. Thank you very much for having me.

Daniel Newman: Yeah, it’s great to have you here. This is part of a multi-part podcast and webcast series that we’re doing in partnership with Red Hat, talking about what’s going on at the edge. Thank you all for joining us. You are a Director in the Customer and Field Engagement Organization at Red Hat. Give me the rundown. What does that mean you do every day?

Rhys Oxenham: Sure. So I’ve been at Red Hat for about 13 years now, and the role has evolved over those years. I started off doing a lot of solutions engineering type work, and then we moved into what we call a field product management direction. And over the years I’ve built up a team that has two primary responsibilities. The first one is strategic customer engagement. So we work with customers worldwide, help them solve a lot of their technical challenges, and help understand the problems that they have with technology and how Red Hat can potentially help address them. My team also builds a lot of technical assets, be those blog posts, demo videos, explainers, white papers, those sorts of things, that help our field teams go out there and scale the message and help customers architect and implement solutions that really help them solve those challenges.

Daniel Newman: Sounds like a big job, and a lot of opportunity. Right now we are seeing exponential growth. The Edge is the biggest growth opportunity in IT, for sure.

Rhys Oxenham: Yeah.

Daniel Newman: Just based upon volumes of data. Of course, we’ve seen a back-and-forth architecture approach over the years, whether it’s been client-server, then centralized compute, then back to clients, and this is kind of that next iteration. And now we’ve got this big infrastructure in the cloud, the public, the hybrid, which is something that Red Hat has a ton of pedigree in. But…

Rhys Oxenham: Yeah.

Daniel Newman: You’ve been extraordinarily active in building this Edge business, which is really interesting to me. Having said that, I’ll start with the question that I like to ask every expert or person that kind of proclaims to have some knowledge about The Edge: how do you define it?

Rhys Oxenham: Yeah. So the way that I like to think about The Edge is that really we’re talking about a paradigm shift that fundamentally changes the physical location of equipment. So servers, networking, infrastructure, storage, and so on. Moving the processing capabilities and the data storage away from the quote unquote “legacy centralized data centers,” and much more towards the subscriber or the end customer, or it could even be the source of the data if we’re talking about a sensor or something like that. And there’s a number of different reasons why this is happening and why Edge is such a big thing today. And of course it really ultimately depends on the use case or the customer, but more and more customers are looking at Edge configurations and shaking up the way in which they’ve been building their infrastructure to solve challenges around things like scalability for content delivery.

You think of the Netflixes of the world. Rather than having lots of these big centralized data centers, with all of the huge bandwidth requirements of shipping all of that video content out to hundreds of thousands of subscribers all over the world, and you think of the huge increases in consumption during the pandemic, can they push that data and the processing right close to those subscribers? So you’re putting out lots of these different Edge sites. You’ve also got things like reliability. You don’t want to be necessarily reliant on these core data centers. Can we not spread that out and push it much, much closer to where it’s actually going to be consumed, with lots more, smaller sites rather than a few larger ones? You’ve also got to consider things like bandwidth, latency, and performance. Your data and the processing of that data are going to be much closer to those end users.

And really, all of these are going to be increasing that efficiency, and efficiency is king in all infrastructure. So where that Edge actually is really depends on the customer. There’s certainly no one-size-fits-all when it comes to Edge, and you’ll likely hear me say this a few times: for some of our customers that edge location could be just as simple as a remote office away from the corporate headquarters. For others, it sounds crazy, but it could be a train or a cruise ship. And going right to the other end of the scale, it could be a telco provider that has to install infrastructure at an antenna site and try to serve their customers in a single cell. So the location of the Edge site, and the requirements, can certainly vary.

Daniel Newman: So you covered a lot of ground there. And one of the things I heard that I think a lot about is complexity. So cloud or centralized data centers offer a somewhat streamlined approach, right? All the applications, all the workloads, all the data reside in one place, and it gets distributed from there. But, like you said, that also creates a lot of challenges, especially as applications require greater resources: you want zero latency, and you’ve got egress. If the data is being created at the edge, how does it get back to the cloud? So we’re kind of in this era of right workload, right location, data needs to move at the right speeds, but that adds a lot of complexity. So how do you address that complexity?

Rhys Oxenham: Yeah. Data centers, and the way that we build data centers and infrastructure, I don’t want to say it’s a solved problem. Nothing is a solved problem, right? But it’s something we’ve been doing for a very long time. And when we start looking at shaking up that entire model and pushing infrastructure into much more complicated scenarios, perhaps where infrastructure has never really been before, at least never at the scale at which we’re trying to implement it, there are, as you say, a huge number of complexities and requirements that we’re having to try and address as we start to embrace some of these edge opportunities. And as I sort of said just a couple of minutes ago, our customer requirements vary hugely. It can be all the way from situations where you might have just a single machine, or just a collection of machines, in a back office somewhere.

In those cases, people will be there in the location, so if the worst thing happens they can at least go in and they have physical access to those systems. And then on the flip side, we have customers that have very stringent requirements. A previous example I mentioned would be something like the telco space, where the infrastructure is expected to operate in isolation, right? Zero intervention. And if there is any downtime, any problems, how do you update and upgrade or anything like that? The resulting maintenance and operational costs of that are huge. And so we really have to change our model and really think about tools and solutions to help address some of these customer challenges in a very, very changing world. Sorry, go ahead.

Daniel Newman: Well, so let me ask you, what is the Red Hat answer? Because, you know, you’re not the only player in the space. So what are you guys doing to address this, to simplify and create a process that these enterprises can manage with all that complexity?

Rhys Oxenham: Yeah. So at Red Hat we’ve been working hard on our OpenShift solution, of course based on Kubernetes. We’ve been doing this for a very long time, and we’ve been trying to build a solution that’s capable of supporting the widest variety of configurations. We have to try and tolerate some of the most challenging requirements that our customers are asking of us. We also have to try and maintain stability and security. It has to be very, very performant, and it has to be very, very scalable. And we have to make sure that it can be operationalized; lifecycle management is, of course, incredibly important. And we have to try and do this for a wide variety of workloads. You know, customers are coming from a traditional virtualization platform, they also want to think about bare metal capabilities, and containerized applications are of course the norm.

And so we have to try and do this in a standardized way, whilst also understanding that customers have a wide variety of hardware configurations. Some have the luxury of running full, more traditional hardware that you’d find inside of a data center, just on a much smaller scale, and they have connectivity back to a centralized core. Some customers want to run much smaller, very specialized infrastructure, even in a disconnected configuration, sometimes just on a single node. And so we have to really think about some of these solutions. Today we provide three different core topologies for how our customers can deploy our OpenShift Container Platform for Edge configurations. The first one is what we call single node. The second one is something which we call three node compact. And the third one is remote worker nodes. And what we’re trying to do, thinking about consistency, is make sure all of these can be deployed with support for Red Hat Advanced Cluster Management for Kubernetes, for consistency in management, deployment, lifecycle management, and so on and so forth. They also have to be secured end to end, and that’s where Red Hat Advanced Cluster Security for Kubernetes comes in. And I’d be more than happy to talk about several of these topologies, if you’d like.
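To make that topology picture a little more concrete, here is a minimal, hypothetical sketch rather than anything discussed in the interview: it assumes access to a Red Hat Advanced Cluster Management hub, which exposes the ManagedCluster API (cluster.open-cluster-management.io/v1), and an illustrative, made-up “edge-topology” label on each cluster. It uses the Kubernetes Python client to group the managed fleet by that label.

```python
# Hypothetical sketch: group the clusters an RHACM hub manages by an
# illustrative "edge-topology" label (e.g. single-node, three-node-compact,
# remote-worker). The label name is invented for this example; ManagedCluster
# is the cluster-scoped custom resource exposed by the RHACM hub.
from collections import defaultdict

from kubernetes import client, config


def clusters_by_topology(label_key: str = "edge-topology") -> dict:
    # Load credentials for the hub cluster from the local kubeconfig.
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # List all ManagedCluster objects registered with the hub.
    managed = api.list_cluster_custom_object(
        group="cluster.open-cluster-management.io",
        version="v1",
        plural="managedclusters",
    )

    groups = defaultdict(list)
    for item in managed.get("items", []):
        meta = item["metadata"]
        topology = meta.get("labels", {}).get(label_key, "unlabeled")
        groups[topology].append(meta["name"])
    return dict(groups)


if __name__ == "__main__":
    for topology, names in sorted(clusters_by_topology().items()):
        print(f"{topology}: {', '.join(sorted(names))}")
```

Run against a hub kubeconfig, a sketch like this would simply print the cluster names under each topology bucket; the point it illustrates is that single node, three node compact, and remote worker node clusters can all be inventoried and managed through the same hub API rather than through per-site tooling.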

Daniel Newman: No, absolutely. For this episode I want to keep it moving, keep it short, and get that education out there for the listener, but this could definitely provide an opportunity to dive deeper into this stuff, Rhys. You started on something there; first of all, I just want to simplify everything you just said: getting to some specific use cases is really important, as it helps streamline that expediency in deployment at scale. Despite the fact that there are special complexities, there are also consistencies that you’re identifying, and that’s why you’re taking those three approaches, to address the fact that there are volumes of certain types of workloads being deployed at the Edge. I want to wrap with a quick question from the developer side. So when you’re developing apps with the Edge in mind, and Edge topologies in mind, what are you seeing? Are there any big changes, or what should developers and IT teams be thinking about that might be different than previous generations?

Rhys Oxenham: Yeah. I guess I’d probably want to give you a fairly simple and straightforward answer to that. This is still OpenShift, right? We have the same principles around application deployment and lifecycle management; all of those things still exist. Of course, edge applications have their own limitations, requirements, and expectations, and in the way that we manage that type of infrastructure we’ve had to adapt our tools, but it’s still done in the same way as any other OpenShift cluster. Consistency is incredibly important for us here at Red Hat. And that’s why we are trying to embrace this new world, various different application types, new infrastructure requirements, and build a tool that can move with our customers and provide that consistency, regardless of whether you’re building a more traditional, on-premises type of infrastructure, utilizing the public cloud or a managed service offering, or indeed pushing the boundaries with some more Edge configurations. Consistency is absolutely key for us.

Daniel Newman: Well, I think that’s been a big part of what’s made Red Hat successful as hybrid cloud has proliferated, and now as Edge proliferates: the consistency, making sure that those that have been building and developing with Red Hat don’t undergo major shifts in how they approach their continued deployments and the growth of their architectures on a global scale. Rhys Oxenham, I’d love to talk to you more, but we’ve got to go for this episode. Thanks so much for joining me.

Rhys Oxenham: Thank you so much for having me. Have a good day.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
