
A Deep Dive on Edge+Scalability – Futurum Tech Webcast Interview Series

On this special episode of the Futurum Tech Webcast – Interview Series, I am joined by Bradd Weidenbenner, a Product Manager for Advanced Cluster Management for Kubernetes with Red Hat, for a conversation around one of our favorite topics: Edge+Scalability. This conversation is the second in a four-part series with Red Hat.

In our conversation we discussed the following:

  • A deep dive into real world edge use cases
  • A look at the most prevalent challenges within specific industries
  • An exploration into the five pillars of Advanced Cluster Management
  • A deeper dive into Advanced Cluster Management and how it is helping deploy multiple clusters at scale

It was a great conversation and one you don’t want to miss. Interested in learning more about edge+scalability? Be sure to read our latest research brief, A Deep Dive into Edge+Scalability. Want to learn more about what Red Hat is doing with open source edge computing? Check out our latest report, The Value of Open Source for Modern Edge Computing, done in collaboration with Red Hat.

Also, make sure you check out our first episode in this four-part series with Red Hat, A Deep Dive into Edge+Automation with Dafné Mendoza.

Don’t forget to hit subscribe down below so you won’t miss an episode.

Watch my interview with Bradd here:

Or listen on your favorite streaming platform here:


Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Daniel Newman: Hey everyone. Welcome to another episode of the Futurum Tech Webcast. I’m your host, Daniel Newman, and I’m excited for this continuation of our Edge+ Series that we’re doing in partnership with Red Hat. As part of our interview series, we’re talking about the Edge. We’re talking about Edge+ Security, Edge+ Topology, Edge+ Automation. And today I’ve got Bradd Weidenbenner, and we’re going to be talking about Edge+ Scalability. Bradd, welcome to the show.

Bradd Weidenbenner: Hello. Nice to meet you. My name’s Bradd Weidenbenner. I’m a Product Manager for Advanced Cluster Management for Kubernetes. It’s part of OpenShift Platform Plus, and I’m excited to be here today. Thank you.

Daniel Newman: It’s great to have you here. It’s been fun having these conversations. The Edge is such a big topic. It’s basically the fastest growing “not cloud” in the world, and it is participating in the growth of every part of the enterprise infrastructure because data, data, data is driving the future. So your topic and the work you’re doing, Bradd, is all about scale, or at least that’s what we’re going to be talking about here today. In the spirit of these shows, which we try to keep short and to the point so our audience of enterprise decision makers can get to the facts fast, I’m going to get to the questions quickly. I want to start off talking to you about the challenges that you see with Edge use cases, specifically the biggest management challenges from all the conversations you’re having with customers out there.

Bradd Weidenbenner: Absolutely, those are certainly some of the challenges we look to solve with Advanced Cluster Management, or ACM. You’ll hear me refer to it as ACM, but they really boil down to three things: complexity, scale, and vendor lock-in. We’re working in a distributed compute paradigm, with the traditional data center now flowing out to different geographies. So you have the far Edge, and as you move further away from the traditional data center, the latency requirements matter: getting the compute closer to the user and actually being able to process it where it matters the most. That’s crucial. With Advanced Cluster Management, we’re able to help solve those challenges of complexity, scale, and vendor lock-in.

Daniel Newman: Yeah, that’s always a big challenge. You mentioned a few things, and Red Hat has really always stood for open, right? So you guys are building the Edge business, but the people who know RHEL and know OpenShift know that the whole idea was that cloud is going to bring in a ton of complexity. One of the complexities you can remove is the lock-in, being forced to go down the path of any one vendor.

So it’s good to see you guys carrying that philosophy from core to edge. And to be honest, I wouldn’t have expected anything different. It would’ve been a weird departure from the Red Hat provenance. So let’s talk about the industries. You’re out there working with customers. These challenges that you’re talking about, in which industries are you seeing them as most prevalent?

Bradd Weidenbenner: Telco comes to the forefront, and we’ll talk about that more. It could be shipping, cruise lines, any of the industrial use cases, factories, retail deployments, where you have thousands of satellite locations that aren’t connected to that traditional data center compute power. We need to be able to do this compute at these different locations at the Edge. And I do like how you called out Red Hat’s open nature. I’m proud to say that Advanced Cluster Management was accepted into the Cloud Native Computing Foundation back in November of 2021.

So you’ll see it there, as well as some of the components. There are really five pillars inside of Advanced Cluster Management, which is all about management and advanced management, but we have a networking pillar with a project that was also accepted into the Cloud Native Computing Foundation, Submariner. So we’re proud to have that open source. Thank you for bringing that up. And as far as being agnostic, we have this single pane with ACM that allows you to bring these OpenShift clusters in on different platforms. We’re now even able to import non-OpenShift clusters and look at them in the observability pillar. So from a vendor lock-in standpoint, we certainly allow you to view your clusters in a single pane.
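
For readers who want to see what registering a cluster with that single pane looks like underneath, here is a minimal sketch of an ACM ManagedCluster manifest; the cluster name and the environment label are hypothetical placeholders, not values from the interview.

```yaml
# Minimal sketch: registering a cluster with the ACM hub so it shows up in
# the single pane of glass. The name and labels below are illustrative
# placeholders, not values from the interview.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: edge-store-001          # hypothetical cluster name
  labels:
    cloud: auto-detect          # let ACM detect the underlying platform
    vendor: auto-detect         # works for OpenShift and non-OpenShift clusters
    environment: edge           # hypothetical label used later for placement and policy
spec:
  hubAcceptsClient: true        # the hub accepts this cluster's registration agent
```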

Daniel Newman: Yeah, absolutely. And congratulations on the recognition. Like I said, I’d almost be surprised not to see that approach. It’s just become so inherent in the way Red Hat’s built its business. And it seems to be aligning very well with the architecture, the selections, and how businesses are going to market with their adoption of cloud. So it’s always good to be on the right trajectory. You’ve mentioned advanced cluster management, ACM, and clusters in general a few times. For the technical audience, this probably won’t come as a surprise, but of course, when you’re in IT, there are so many acronyms, so many terms that come up every single day, and I don’t want to take that for granted. Talk a little bit about what ACM is and the specific challenge that it’s addressing.

Bradd Weidenbenner: Yep. So Advanced Cluster Management really comes down to five pillars in the management realm of deploying your OpenShift clusters. We have the cluster lifecycle, the ability to lifecycle your clusters: create, update, scale. You can remove those clusters reliably and consistently in an open source programming model. We also have the application lifecycle, so we can deploy our applications using open standards with CI/CD pipelines integrated in. We also have governance, risk, and compliance, which allows us to use policy to automatically configure and maintain the consistency of our controls and desired state. That’s something we’ll talk about more as we dive into one of the industries. And then we have observability, the end-to-end visibility. That’s also powerful, allowing your different ops personas to view their system alerts, critical application metrics, and really just overall system health, and have that single pane where they can identify and resolve their issues.

And then lastly, I mentioned the fifth pillar being multi-cluster networking, where today we have the Submariner project, which is going GA. Advanced Cluster Management 2.4 was released back in November, and we’ll be releasing 2.5 at the end of April, the last week of April. OpenShift 4.10 just released yesterday, and we trail OpenShift by a couple of weeks. We spent a little more time this release getting our bits in order for the contribution to the CNCF. So yeah, those would be the big, main use cases of Advanced Cluster Management.
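
To make the application lifecycle pillar a bit more concrete, here is a minimal sketch of ACM’s Git-based application model: a Channel pointing at a repository, a PlacementRule selecting clusters, and a Subscription tying them together. The repository URL, namespaces, names, and the environment label are all hypothetical.

```yaml
# Minimal sketch of ACM's application lifecycle model: a Channel pointing at a
# Git repository, a PlacementRule selecting clusters, and a Subscription that
# deploys the repository's contents to them. Names and the repo URL are hypothetical.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: edge-app-repo
  namespace: edge-app
spec:
  type: Git
  pathname: https://github.com/example/edge-app-config.git   # placeholder repo
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: all-edge-clusters
  namespace: edge-app
spec:
  clusterSelector:
    matchLabels:
      environment: edge          # hypothetical label on the managed clusters
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: edge-app-sub
  namespace: edge-app
spec:
  channel: edge-app/edge-app-repo   # <namespace>/<channel name>
  placement:
    placementRef:
      kind: PlacementRule
      name: all-edge-clusters
```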

Daniel Newman: There’s quite a few there, five pillars. Significant opportunity to address scale when it comes to Edge+Topology. So that is… I’d say it’s not a little thing, Bradd, it’s quite a bit that you guys have going on there. I think it’s always helpful for people if you can provide a little more of an in-depth or specific example. You said earlier, hey, we could dive deeper, so I’m going to take you up on that offer right now. Give me a little bit of a deeper dive on how Advanced Cluster Management is helping deploy multiple clusters at scale. Give an industry or a specific type of customer example.

Bradd Weidenbenner: Absolutely. And allow me to set the stage here with a little backdrop on the Edge+Topology. I’m glad you brought that up because we will have a few acronyms to expand here, but it starts with the furthest out. In the example I’m going to give, you’ll hear us refer to it as Single Node OpenShift, SNO. As an Edge topology defined by Red Hat OpenShift, that’s the furthest out. As I mentioned, as you go away from your traditional data center, the latencies are at their most extreme and the environments are the most disconnected. Then we have remote worker nodes, and closer to the data center we have three-node compact clusters.

But the theme is that the further away you get from the data center, the less compute you have, whether it’s the space or the connectivity. So the footprint gets smaller. The control and management plane ends up getting condensed down into a single node, as opposed to the three-node cluster, and in the middle, the remote worker node. So those would be your Edge topologies, and we have some great literature and blogs out there on all of these. I’ll pause if you have any questions on the topologies. Otherwise, I’ll jump into the actual telco use case.
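
To ground the far-edge end of that spectrum, here is a rough, abridged sketch of what a single node OpenShift install-config.yaml can look like, with the control plane and workers condensed onto one node. The domain, cluster name, disk, and credentials are placeholders, and network ranges are omitted for brevity.

```yaml
# Rough, abridged sketch of an install-config.yaml for Single Node OpenShift (SNO):
# the control plane and worker roles are condensed onto one node, matching the
# far-edge topology described above. Domain, name, disk, and secrets are placeholders,
# and network ranges are omitted for brevity.
apiVersion: v1
baseDomain: example.com
metadata:
  name: sno-edge-01              # hypothetical cluster name
controlPlane:
  name: master
  replicas: 1                    # a single control-plane node
compute:
- name: worker
  replicas: 0                    # no separate workers; this one node runs everything
networking:
  networkType: OVNKubernetes
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/sda     # disk used for the in-place bootstrap on that node
pullSecret: '<pull-secret>'      # placeholder
sshKey: '<ssh-public-key>'       # placeholder
```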

Daniel Newman: No, actually I won’t, but I will use this as a moment to plug that there will be an Edge+ video as part of the series that will focus on topologies. That link will go into the show notes, so if you do want to know more about it, go over there. For the sake of keeping this moving though, Bradd. Yep? Absolutely.

Bradd Weidenbenner: Thank you. So we’re going to jump into the telco industry. In particular, we have service providers deploying their distributed mobile network architecture in a modular, functional framework that’s part of the 5G standards. This allows the service providers to move from their appliance-based radio access network, the RAN, to the open cloud RAN architecture, an initiative Red Hat is a big part of. So we’re getting more flexibility and efficiency in delivering these services by providing these solution sets. And I’m going to jump right into it: ZTP is your next acronym, Zero Touch Provisioning, and it uses a GitOps deployment methodology and set of practices for the deployment. It allows your developers to perform tasks that would otherwise fall under their IT operations team. GitOps achieves this using declarative specifications in a Git repository.

So it’ll be YAML and other constructs they’re able to use as their source of truth, the record of truth, for the infrastructure as code checked into Git. And this declarative model is leveraged by ACM; that’s the whole nature of Kubernetes being a declarative state model. And let me back up real quick: Advanced Cluster Management is an operator on top of OpenShift. We’re just an extension; we’ve made Kubernetes more extensible. So talking about scale with this story, with the current release of 2.4 we’ve been able to deploy 2,000 single node OpenShift clusters through ACM using this ZTP solution set that we have as part of ACM.
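
For a feel for what those declarative specifications checked into Git look like in the ZTP flow, here is a heavily abridged sketch in the spirit of the SiteConfig resource the ZTP pipeline consumes. Treat the field names and values as illustrative rather than a definitive schema, and refer to the ZTP documentation for the exact format.

```yaml
# Heavily abridged sketch of a ZTP site definition checked into Git, in the spirit
# of the SiteConfig resource the ZTP pipeline consumes. Field names and values are
# illustrative; consult the ZTP documentation for the exact schema.
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: sno-du-site-01           # hypothetical site name
  namespace: sno-du-site-01
spec:
  baseDomain: example.com
  clusterImageSetNameRef: img-4.10   # placeholder OpenShift image set
  sshPublicKey: '<ssh-public-key>'
  clusters:
  - clusterName: sno-du-site-01
    clusterLabels:
      du-profile: "latest"       # hypothetical label the DU policies can target
    nodes:
    - hostName: node1.example.com
      bmcAddress: idrac-virtualmedia://192.0.2.10/redfish/v1/Systems/System.Embedded.1   # placeholder BMC endpoint
      bootMACAddress: "AA:BB:CC:DD:EE:FF"                                                # placeholder
```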

And so the real motivator of this GitOps approach is the reliability at scale when you’re stamping these things out, in this case with these ZTP solutions tailored to the far Edge, the single node OpenShift, and applying a DU RAN policy. That’s for the distributed unit in the RAN architecture. And we’re able to do this at scale reliably. They just stamp them out one after another. We’re treating these clusters as replaceable. There don’t have to be loads of care and maintenance put into these things. If you have a challenge with them outside of a hardware failure, you’re just going to redeploy it. All the configuration information, everything, is in a GitOps methodology, able to be pulled down from there. And so we really address some reliability issues by providing traceability back to the single source of truth, to that desired state. And that’s really based on the structure and the tooling we use, with the event-driven operators, through webhooks. And so that’s when we’re deploying the DU.

That’s using a policy generator, which is part of Advanced Cluster Management. This goes back to the governance, risk, and compliance pillar I talked about: we’re using policy to set that desired state, and you can have inform or enforce. This is powerful because that’s part of knowing that the cluster is in the state it was intended to be in.
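
To ground the inform-versus-enforce distinction, here is a minimal sketch of an ACM governance Policy wrapping a ConfigurationPolicy. All names, namespaces, and the managed ConfigMap are hypothetical, and in practice a PlacementBinding and PlacementRule (or Placement) would select which managed clusters receive the policy.

```yaml
# Minimal sketch of an ACM governance Policy wrapping a ConfigurationPolicy.
# remediationAction "inform" only reports drift from the desired state, while
# "enforce" corrects it automatically. All names and the managed ConfigMap
# are hypothetical.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: du-desired-state
  namespace: ztp-policies
spec:
  remediationAction: enforce     # switch to "inform" to report without correcting
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: du-example-config
      spec:
        remediationAction: enforce
        severity: medium
        object-templates:
        - complianceType: musthave       # the object below must exist as specified
          objectDefinition:
            apiVersion: v1
            kind: ConfigMap
            metadata:
              name: du-tuning            # hypothetical configuration the DU profile needs
              namespace: openshift-config
            data:
              example-setting: "value"
```

The policy generator Bradd mentions in the ZTP flow produces policies of roughly this shape from source custom resources kept in Git, which is how the desired state stays traceable back to that single source of truth.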

Daniel Newman: A whole lot there, Bradd, and hopefully everybody out there caught that. There is a lot to think about. You of course have to think about architecture. You have to think about policy and governance. You have to think about scale. Throughout this series we talk about the overall topology, about how you automate, how you secure, and, in the conversation Bradd shared with us today, how you scale. Bradd, I’ve got to let you go here. Got to keep it short. Keep it punchy. Lots to learn. Check out the show notes. Lots more links there. Appreciate y’all tuning in.

Bradd Weidenbenner: Thanks everybody.

 

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book being “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

