On this episode of the Futurum Tech Webcast – Interview Series, Ron Westfall and Steven Dickens are joined by distinguished guests Andy Hartman, Senior Consultant at Mainline Information Systems and Andrew Gracey, Product Manager for Developer Experience at SUSE.
Our discussion focused on the major trends, drivers, and challenges in optimizing container management capabilities across swiftly evolving developer and IT operations environments. Aligned with our discussion, we drilled down into our latest white paper, SUSE’s Innovation Engine Roars: SUSE Rancher Now on IBM zSystems and LinuxONE — done in partnership with SUSE — to explore why organizations such as Mainline Information Systems rely on IBM zSystems and LinuxONE infrastructure to fulfill their container and Kubernetes requirements. We also examined how organizations embracing open source and running Kubernetes on IBM zSystems and LinuxONE platforms can use SUSE Rancher to drive transformation while adopting rapidly emerging cloud-native operational paradigms.
Our conversation with Andy and Andrew highlighted the following top considerations:
- What were Mainline’s main objectives for adopting SUSE Rancher
- Why customers want SUSE Rancher on their mainframes
- Mainline’s experience in adopting SUSE Rancher
- How customers are gaining value from SUSE Rancher implementations
- Why organizations should consider adopting SUSE Rancher over do-it-yourself Kubernetes implementations
It was a great and stimulating conversation and one you don’t want to miss. Interested in learning more about SUSE Rancher on IBM zSystems and LinuxONE? Want to learn more about the SUSE Rancher portfolio and why it’s well-suited to fulfilling the pivot to containerized microservices and reimagining cloud native infrastructures? Check out our latest report — SUSE’s Innovation Engine Roars: SUSE Rancher Now on IBM zSystems and LinuxONE — done in collaboration with SUSE.
Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Steven Dickens: Hello, and welcome to The Futurum Tech Webcast. I’m your host Steven Dickens, and I’m joined by fellow Futurum analyst, Ron Westfall, and Andy Hartman from Mainline, and Andrew Gracey from SUSE. And today we’re going to be talking about SUSE Rancher on zSystems and LinuxONE. Welcome to the show, everyone.
Andy Hartman: Thank you.
Ron Westfall: You bet.
Steven Dickens: So let’s just go quickly around and do some introductions. This is going to be fantastic. We’ve got an Andrew and an Andy on the show today, so we’re going to have to sort of have some fun here. But, Andrew, just first off, position your role.
Andrew Gracey: Yeah. So I’m one of the product managers working on Rancher at SUSE.
Steven Dickens: Fantastic. And Andy?
Andy Hartman: Yeah, I’m a senior consultant at Mainline and I focus on IBM Z and LinuxONE workloads.
Steven Dickens: Excellent.
Andy Hartman: Yeah.
Steven Dickens: And, Ron, just introduce yourself quickly for the listeners here.
Ron Westfall: You bet. I’m Ron Westfall, research director and senior analyst here at Futurum Research, and I head up coverage and areas related to cloud, Kubernetes, et cetera.
Steven Dickens: Fantastic. So let’s dive straight in here. Andy, we’ve spoken before, but let’s just get orientated here. What were some of Mainline’s objectives for adopting SUSE Rancher? What were the sort of things that were in your mind as you headed into the project?
Andy Hartman: I think from our standpoint, we had basically two objectives. The first major one was ease of implementation. Kubernetes environments are complex, and we wanted to make sure that Rancher on Z was the same to implement as Rancher anywhere else. That was very important to us, because if you have a hard time implementing a product, it’s harder to take advantage of the reasons you’re actually using the product. So we were very concerned about that, and from what we’ve seen, they’ve come through with flying colors.
And then the second objective, obviously, was Rancher itself. Being able to manage multiple Kubernetes clusters from one pane of glass makes it much easier and much simpler for initial application deployment, as well as management and other things like that. So very happy with that as well.
Steven Dickens: Fantastic.
Ron Westfall: Thanks, Andy. I think that definitely tees up Andrew for a burning question that I believe folks out there would love to hear more about, and that is why do customers want SUSE Rancher on their mainframes? What is spurring broader adoption out there?
Andrew Gracey: Yeah. So I think we’re seeing, kind of in the larger IT industry, we’re seeing a lot of adoption of Kubernetes, and there’s a lot of good reasons for this, but one of the main things that Rancher allows for people adopting Kubernetes to do is basically shorten the time that it takes to actually get up and running on Kubernetes, both from the actual technical implementation portion, where for example, RKE2 is literally you just say, “RKE2 up,” and it starts going. K3s is you’ll literally run a single line script, and then you’ve got a cluster.
So being able to kind of shorten the technical portion of how to spin up a cluster that has everything that you need already kind of pre-built in, as well as the human process and the learning that you have to do, there is so much to learn at Kubernetes. And again, it’s continually changing and updating. So being able to have some nice abstractions, a good user interface to both allow for discovery, as well as your day-to-day work, we believe that we provide quite a bit of value between those two kind of ease of use value points.
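For reference, the one-line installs Andrew alludes to look roughly like this. These are the upstream install scripts; note that Andrew’s “RKE2 up” phrasing echoes the older RKE CLI’s `rke up` command, while RKE2 itself is brought up by running its install script and starting a systemd service. The kubectl and kubeconfig paths shown are the projects’ defaults, not details from the conversation.

```shell
# K3s: a single script stands up a working single-node cluster.
curl -sfL https://get.k3s.io | sh -

# RKE2: same pattern, then enable and start the server service.
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server

# Sanity check on each (bundled kubectl, default kubeconfig locations):
k3s kubectl get nodes
/var/lib/rancher/rke2/bin/kubectl \
  --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
```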
Steven Dickens: And, Andy, that leads on to a really good question for me. Andrew there talked about that experience of the deployment. What’s been the Mainline experience as you guys have gotten to play with this technology? Does it align with what Andrew was saying?
Andy Hartman: Yeah, sure. So what we do, we have what’s called a business partner innovation center, where we bring customers and other partners in to either test products or show customers new products and technologies. Within this environment, I have a z15. So, using some very simple instructions that I was sent, and based on SUSE’s recommendations, I created in essence six Linux guests, all running under z/VM as the hypervisor, with some very simple characteristics. They all had the same amount of disk, they all had the same virtual processors, all that stuff. So very simple to implement.
As far as RKE2 and Rancher itself, I don’t think it could be much easier. Mike Friesenegger, who I worked with closely from SUSE on this, sent me a two-page document, literally it was two pages, and then sent me like four videos. And these videos were like five minutes apiece. So, following the videos and the two-page documentation, I went ahead and implemented both RKE2 and Rancher. Basically what that entails is you deploy your six Linux guests, you then deploy RKE2 on the first cluster, which contains three Linux guests, so a control node and two agent nodes. On top of that, you deploy Rancher. And then in the second set of Linux guests, you deploy RKE2 again, this time without Rancher. And then this is where you’re going to deploy your applications. Once that’s done, you connect the two clusters together so Rancher can manage both clusters, and then you deploy your user apps. It’s that simple.
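For readers who want to picture the steps Andy walks through, a minimal sketch of the two-cluster layout might look like the following. The hostnames and the Rancher hostname are placeholders (not from the conversation); the install scripts, config paths, and Helm chart repo are the RKE2 and Rancher project defaults.

```shell
# Cluster 1 (management): one server node plus two agents, with Rancher on top.
# On the server node, install and start RKE2:
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server

# On each agent node, point at the server and join with its token
# (the token lives at /var/lib/rancher/rke2/server/node-token on the server):
mkdir -p /etc/rancher/rke2
cat > /etc/rancher/rke2/config.yaml <<'EOF'
server: https://mgmt-server-1:9345
token: <paste node-token from the server here>
EOF
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -
systemctl enable --now rke2-agent

# Rancher is then installed on cluster 1 via its Helm chart
# (cert-manager is also a chart prerequisite; omitted here for brevity):
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm install rancher rancher-stable/rancher \
  --namespace cattle-system --create-namespace \
  --set hostname=rancher.example.com   # placeholder hostname

# Cluster 2 (workload): repeat the RKE2 steps on the other three guests,
# without Rancher, then import that cluster into Rancher from the UI so
# both clusters are managed from one place.
```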
Steven Dickens: So, Andy, how long was that taking from sort of start to finish?
Andy Hartman: Probably took me, I would say, less than two hours. That includes the deployment of the Linux guests, and I did those manually. So, I mean, with a little bit of simple scripting and stuff, you could probably get this down to a couple of commands. I mean, it’s not that hard.
Steven Dickens: So anybody who’s experienced in this space.
Andy Hartman: Yeah. Anybody that is experienced can do this. Absolutely. Even a junior system admin could do this. So it worked out very well. And a lot of the steps in this are repetitive, since I’m deploying two Kubernetes clusters. So it’s very easy, and it’s very easy to do once you’ve done it a couple times.
So I did this, I created three different test environments, so I tested this on SUSE, so SLES 15 SP3. I deployed it on RHEL 8.5 and I deployed it on Ubuntu 22.04, which at the time were the current versions for those distributions. And all of them worked the same. There were no big differences. One of the other things I liked about this was the fact I didn’t have to come up with a lot of prereqs. There wasn’t a lot of stuff I had to do beforehand. It was pretty much I came up with six IP addresses for the guests and that’s about all I had to do. So very simple to actually implement.
Steven Dickens: I mean, that’s one of the key differentiators here, the multi Linux distro support. Did you find that as easy as it kind of sounded there in your comments?
Andy Hartman: Yeah. Yeah, I do. I mean, it’s very important. We do not want to have that battle with customers where they have a long heritage of a particular distribution and we’re introducing a new product, and now you’ve got to swap out your distribution or run multiple distributions. That never works out well. So being able to deploy this across distributions makes it agnostic and it makes it very, very easy.
Andrew Gracey: Yeah. And I’ll jump in there and say that’s one of the things we were very intentional about: providing that support for basically any enterprise Linux distribution, because we recognize that the tool change cost can be quite high. If there isn’t value associated with that tool change cost, why should we force somebody to incur it? Customers already have tooling built out for their chosen operating system, and if there’s not a technical reason to require a change, we shouldn’t force one.
Ron Westfall: That’s an excellent point, Andrew. What do you see are some of the other benefits, the value that customers out there are gaining from adopting SUSE Rancher? What are the key takeaways here?
Andrew Gracey: Yeah, so besides the… I shouldn’t say simplification, but besides bringing the use of Kubernetes to a more reasonable level of experience, we also see a lot of value around our flexibility: we’re able to meet you where you are in your process. And we hope to provide a lot of value by continuing the engineering and continuing to push the envelope a bit in multiple different aspects: security, observability, developer experience, right? These are all things that are on our roadmap, and we hope to continually push forward in each of those areas.
Ron Westfall: And yeah, I think that brings to mind why customers should go this path, because after all, there are do-it-yourself Kubernetes implementation alternatives out there. And from your perspective, why is SUSE Rancher simply an advantageous approach versus, say, do-it-yourself?
Andrew Gracey: Yeah. So, I mean, do-it-yourself is a perfectly valid way to go, but it does obviously increase your risk as a company, and, at least in my opinion, it does that in a couple of different ways. The first is all the tooling that you build up to go the do-it-yourself way: you’re going to be maintaining that, you’re going to be supporting that, and you’re going to be doing that over a long time. And when the Kubernetes ecosystem is iterating so quickly, the risk of breakages between different Kubernetes releases becomes much higher, especially if you aren’t releasing on the right cadence. Once you start falling behind, it becomes really hard to catch back up, especially when you’re the one who has to write all the tooling to catch back up.
And so, with RKE2, for example, we do that for you, so you don’t have to worry about it. You just say, “Okay, cool. I’m using RKE2, and I’m going to upgrade.” You go into Rancher, you upgrade, and you’re done.
Ron Westfall: Makes sense. Yeah. What’s not to like? From my perspective, yes, it sounds very compelling.
Steven Dickens: I think that support, Andy, is going to be really interesting for some of the clients. Do you see the same sort of dynamic there?
Andy Hartman: Oh, absolutely. I would say in our customer mix, they’re not going to have huge numbers of people to go out and do this on their own. These are enterprise customers, so they need support. They need a structure behind whatever they deploy, so that you’re not rolling your own. You’re supported. You have someone to call if something happens, those kind of things, and it’s just a much more streamlined way of implementing a Kubernetes cluster or a strategy.
Steven Dickens: And is that because of the criticality of these environments? I mean, obviously we’re talking here about LinuxONE systems.
Andy Hartman: Oh, yes. Absolutely. I mean, these are mission critical workloads. These workloads run your business, so they cannot go down. You can’t have outages because I’m upgrading my Kubernetes cluster and have to take my whole system down to do it. That can’t be done anymore. So it becomes paramount to be able to have this managed and supported around the clock.
Steven Dickens: Fantastic. As we look to wrap up here, Andy, what are some key thoughts and takeaways from your experience here with SUSE Rancher on LinuxONE and zSystems?
Andy Hartman: Well, I think most of our customers have probably already looked at Rancher or some other Kubernetes implementation, or RKE2; they’re already going down this path. So I think Rancher on Z is a great place to take advantage of the reliability, the scalability, the security, and especially the co-location capabilities you have if you’re using data or applications from z/OS. It’s a very easy thing to implement, it’s easy to manage, you get up and running quickly, and I think those are all great benefits.
Steven Dickens: And you, Andrew, what would you add to that?
Andrew Gracey: Yeah, I think there’s also a lot of benefit that comes with adopting the Kubernetes paradigm, especially around the human processes, right? The way that Kubernetes kind of pushes you to structure your application and manage it really leads to a much more healthy team dynamic. I mean, doesn’t have to, but it allows you to have a more healthy team dynamic and break up roles and responsibilities in a way that makes potentially quite a bit more sense. And it just enables a lot of the lessons that the rest of the industry has been learning in the last five to 10 years, for example.
Steven Dickens: And I think we focus so much on the technology we sometimes don’t take into account some of those wider sort of people and process benefits.
Andrew Gracey: Yeah. What’s the point of the tech if it doesn’t help the people who are having to deal with it?
Andy Hartman: Exactly.
Steven Dickens: I think that’s about as fantastic [inaudible] as we can have there to wrap up. Really great discussion, guys, I really enjoyed this. I’d direct every one of the listeners and viewers here to download the research, a fantastic deep dive on the benefits of SUSE Rancher on LinuxONE and zSystems. And thank you very much for listening. We’ll speak to you next time. Thanks very much.
Ron is an experienced research expert and analyst, with over 20 years of experience in the digital and IT transformation markets. He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including software and services, infrastructure, 5G/IoT, AI/analytics, security, cloud computing, revenue management, and regulatory issues.