The Six Five team discusses the SAP Datasphere.
If you are interested in watching the full episode you can check it out here.
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Daniel Newman: We talk a lot, because a lot of this stuff we talk about is the cool and the exciting and the hyped, but underneath all these architectures, Pat, there have got to be some practical, usable technologies. And SAP announced what it calls Datasphere. And Datasphere is basically built on its Business Technology Platform. And what it’s really trying to do is create an output for companies to more efficiently and effectively use their data. And again, liken this to the generative AI conversation, Pat, you’ve heard me say this many times on this show over the last few weeks: some of the most interesting opportunities to be successful with generative AI are going to be based upon companies being able to expose and utilize the vast proprietary data that lives inside of their business.
Much of this proprietary data, much of this unique usable dataset that better understands customers, workflows, business performance, that’s where it lives. It lives in your systems of record. Systems of record like Oracle, like SAP. So this becomes a big challenge for companies like SAP to say, “How do we develop tools that enable our customers and our users to unlock the power of all the data?” And that’s really what Datasphere is. It’s the SAP data warehouse cloud, it’s a data lake warehouse, river stream, mountain view, valley brook. It is going to enable discovery, modeling, distribution of critical data. And it’s going to do so in a way that both lives inside of SAP, inside of Datasphere, but it’s also going to be ecosystem friendly. And so it’s all about A) being able to use your existing data, B) being able to use data models and curated data sets that come from SAP in their Datasphere marketplace.
And then of course it’s about simplification and ecosystems. So building integrations with Databricks, with Confluent, with DataRobot. And these are going to be really important things going forward because Pat, another thing that’s critical for success is going to be connectivity. You’ve got your structured data, your unstructured data, you’ve got your real-time streaming data, you’ve got your legacy and stored data, and having all of that data be accessible, all of it utilized, and then commonly shared and collaborated upon, which is part of the Datasphere solution, is really important. In the end, what did we get, Pat? We get a data fabric just like my vests. By the way, I just want to say I was the only guy at that party last week that didn’t have a suit coat on.
Patrick Moorhead: Maybe I think I… Did you see Axe there by chance? Was he invited?
Daniel Newman: Axe is my hero, except for all the bad things he does. But anyway, so seeing a fabric created that basically allows companies to enrich all the data across all their ecosystems, and then integrate it. And then finally, I guess my last note on Datasphere: the tool, the warehouse, the fabric, it’s all really promising. And of course SAP has hundreds of thousands of customers, so this is not something that will go unnoticed; it will be importantly and critically looked at. But the advisory firms and large SIs should be beneficiaries of this. And the company is planning to roll this out with IBM, with Capgemini, with Deloitte, with Ernst and Young, and of course Accenture being the biggest of the bunch.
So it’s early days for this tool, Pat, but I think this is the kind of technology that’s saying how do we take all our data, make it usable, representable, collaborative and accessible, and then connect it to the other core data applications that we’re using and then make it SI-friendly because most of this stuff gets done for large enterprises by SIs. And so this is what SAP’s doing. This is what SAP I think has historically always attempted to do, but in this particular moment when data’s going to become even more critical to apps and AI, it’s an important launch from the company.
Patrick Moorhead: Good breakdown, Daniel – a few adders here. This is very consistent with I think both of our talk tracks related to data about this notion of a data pipeline, all the way from bringing the data in, streaming or batch, any way you want. Let’s clean it up along the entire pipeline, know who gets access to it, so the governance, the privacy, the compliance, and then teeing it up either in a data lake, a data warehouse, a structured database, an unstructured database, and then setting it up ultimately if you want to augment it with AI or analytics, and then deploying those models to be run. And by the way, at every step of that way, a company called Cloudera that you and I have been covering, they actually do every single thing that Datasphere talks about. And the difference is that SAP is bringing this out with what looks like Cloudera’s biggest competitors: Databricks, Collibra, Confluent and DataRobot.
So it’s super interesting. What I really like about it is the value that it does bring to the table: its inflows and outflows. So if you’re a customer and you have things going on in Databricks, Collibra, Confluent, DataRobot, you can not only pull in data from those services and use it inside of an SAP environment, but SAP can also elegantly export that information out to these environments as well. It kind of reminds me of these data sharing alliances that SAP and Microsoft have done in the past, but this is with some of these smaller niche players.
And exclamation point Daniel, on what you said about the different types of data, the advantage that SAP has is the operational data is there. So instead of ETL-ing it, do it there. So it’s very similar to a mainframe story as it relates to, let’s say, financial transactions. Why ETL it out when you can do it right there? We all know you add extra cost by moving data. Anytime you have to move data somewhere, it costs you money, and it also opens you up to security risks every time you move data. So interesting announcement from these folks. Also, I don’t know if it replaces SAP Data Warehouse Cloud, but it certainly looks like it.
Daniel Newman is the Chief Analyst of Futurum Research and the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.