The Role of Open Source in AI Acceleration

In this episode of the Futurum Tech Webcast Interview Series, I speak with Ken Exner, Chief Product Officer at Elastic. We discuss Elastic’s role in the open-source community and how it connects to the company’s wider strategy. Exner explains how Elasticsearch, an open-source search and indexing technology, gained popularity and enabled Elastic to expand into different industries. We then discuss the relationship between open source and commercial interests, highlighting the increasing use of Elasticsearch in the context of AI.

Our conversation covered:

  • The market trends and opportunities brought about by AI.
  • The advancements in generative AI that allow for the generation of data and content rather than just the analysis of historical data.
  • The impact of large language models, such as GPT-3 and GPT-4, in the AI space and how they have accelerated the capabilities of AI by at least a decade.
  • How Elastic is focused on helping customers leverage generative AI in various enterprise settings, such as security, observability, and search.
  • Elastic’s new offering, which aims to bring generative AI to a company’s proprietary enterprise data.
  • How Elastic sits between public language models and enterprise data, bridging the gap by providing context to the models using proprietary information.
  • The importance of relevance and context in shaping the answers from the models.
  • The role of Elastic as a ‘picks and shovels’ type company in the AI Gold Rush.
  • How Elastic provides the foundational capabilities and building blocks for developers to create unique solutions on top of their platform.
  • Elastic’s investments in foundational capabilities, transformer model integration, relevance capabilities, and vector database technology.
  • How Elastic has been working on search capabilities for years and is now well positioned to connect public language models with enterprise data, allowing businesses to harness the power of AI in a relevant, context-aware manner.

I invite you to watch the episode here to learn more about our discussion:

Or grab the audio on your streaming platform of choice here:

If you’ve not yet subscribed to the Futurum Tech Webcast, hit the ‘subscribe’ button while you’re there and you won’t miss an episode.


Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Steven Dickens: Hello and welcome to the Futurum Tech Webcast. My name’s Steven Dickens, your host, and I’m joined today by Ken Exner, the VP of Product at Elastic. Hey, Ken, welcome to the show.

Ken Exner: Hey, Steven. Thanks for having me.

Steven Dickens: Let’s just get started. Maybe position your role first off and what you do for Elastic, and then we’ll dive straight in.

Ken Exner: Sure. I’m the Chief Product Officer, which means I manage the product, but I also manage engineering. I manage both the engineering side and the PM side of product development at Elastic.

Steven Dickens: You’re the smart guy who gets to run all the engineers?

Ken Exner: I’m the person who has to pull product and engineering together. It’s a fun role. I have a GM-style role where I manage both sides of the development process, similar to what I’ve done in other places as well. I actually enjoy managing both engineering and product.

Steven Dickens: Oh, fantastic. As we jump in here, I think of Elastic, I think of open source, I think of a company that’s been deeply rooted in open source. As we talked off camera, what I’m really keen to understand is a little bit around the role you see the organization playing, how that fits into the wider strategy at Elastic, and really what you see as that accelerative engine around open source for your business.

Ken Exner: Sure. Historically, Elasticsearch was the thing that powered Elastic. It’s what got us on the map. It was the open source, the free and open version of Elasticsearch, that gained popularity with millions and millions of developers around the world. I think it’s the most popular Java open source project of all time, one of the most popular and most successful open source projects. People became familiar with Elasticsearch and started taking it into a bunch of different industries, started using it for log analytics, started using it to do threat hunting in security. It showed us all these unique use cases for how you could use this indexing technology, this search database.

It created familiarity with Elastic and Elasticsearch that has allowed us to go into these different businesses. People typically begin with the free and open version of Elasticsearch, and then begin using us in a more commercial sense. Similar to a lot of the other open source vendors, the familiarity that developers develop using Elasticsearch, the free and open product, leads to a commercial relationship. I think what we’re doing now in the AI space is kind of similar. You see developers start to use Elasticsearch in the context of AI and start to figure out other ways to use it in an enterprise setting as well.

I think in the broader AI space, there is a lot of energy and excitement in the open source world. Everything that you see in Hugging Face, for example, is all about these free models and open source models that are living alongside more commercial context for using AI. I think there’s a good relationship between open source and commercial interests, enterprise use cases, as well as open source use cases, both historically in Elasticsearch, but also in the new AI models as well.

Steven Dickens: We managed to get three minutes into the call, into this session and we got straight into AI.

Ken Exner: Everything’s going straight into AI these days, right?

Steven Dickens: Yeah. I mean, it’s capturing the market’s attention right now. We’re seeing it as a major trend across the board. Obviously I think you guys are at the center of that and an enabling technology. What do you see as some of those big market trends? We can focus in just on AI, or we can go more broadly. But what are some of those market trends and opportunities you see brought about by the focus on AI and the developments that are going on in that space?

Ken Exner: Well, I mean, the big thing that’s happened in the last half year is the advances in generative AI. Where historically AI has been about learning from and analyzing historical data, it’s now not only about analyzing and learning from data, it’s about generating data, generating content from the data. I think that’s been the revolutionary step forward that’s happened. That has changed everyone’s perspective on what’s possible in AI, brought the future of AI forward by probably a decade in terms of the capabilities that people expected at this time.

This is the work that’s happened in the various large language models, the transformer models like GPT-3 and GPT-4 that power things like ChatGPT, but also any of these generative approaches for doing code generation or art generation or music generation. We’ve taken a phenomenal step forward in the capabilities that the industry has for generating content that is compelling and seems very human. I think everyone’s timeframe for the power of AI has suddenly shifted by at least a decade or so. I think we’re also trying to figure out, “How do we live with this? How do we use this? How do we take advantage of this in commercial settings?”

How do businesses take advantage of this? I know inside of Elastic, we use code generation quite a bit. It’s interesting technology, but how do we also help our customers use the power of generative AI for their unique scenarios? How do we use it in a security setting? How do we use it in an observability setting? Or if people are using Elasticsearch to power search, how do we improve that experience by pulling in the powers and capabilities of generative AI? That’s a lot of what we’re focused on these days.

Steven Dickens: I’ve been getting closer to the technology. Your team’s been briefing me, and we’ve been digging deep. I understand there’s a new offering coming, which is bringing that generative AI to a company’s proprietary enterprise data. We hear a lot about these large language models scraping the public internet and building from those, but I see the opportunity around the corpus of data that sits within an enterprise.

Maybe that’s PII data, maybe that’s patient record data inside the hospital network. Whatever that enterprise data set is, I think that’s going to be the next wave of opportunity. Obviously you guys are well positioned in that space from the work you’ve been doing in search. Can you provide some more insights, some more thoughts on how you see the enterprise use case and the role Elastic’s going to play?

Ken Exner: I think you nailed it, Steven. We’re right in the middle between the public LLMs and the research that’s going on in transformer models and what enterprises have. Enterprises have all this proprietary data. Sometimes it’s their internal knowledge wikis or their Slack, their entire history of Slack messages. Or if it’s a legal firm, it’s all the contracts that they have. Or if it’s a retailer, it’s their product catalog and all the information about their products. This is all proprietary data, and all the LLMs have been trained on generic data. One of the interesting things that happens is when you combine generic base-level LLM information with information that’s particular to a context, you get really interesting scenarios.

You get the ability to not only take advantage of the language capabilities of the LLM, you get the context of a particular business or a particular product catalog. This allows a retailer, for example, to help answer questions about their products with knowledge of their products, things that the public LLMs are not trained on. Or if you are a business and you want to help your employees figure out how to update their tax elections or sign up for benefits and things like that, the public LLMs are not going to know anything about that. How do you create that bridge between proprietary data and the public LLMs?

This is where Elastic comes in. We’re in between those two, and we create the bridge. We pass the context to the LLMs so that they can understand how to answer it in the context of that business, in the context of what someone’s asking.

Steven Dickens: I mean, the one that stands out for me in the examples you mentioned is that product catalog. I think people can understand that retail shopping experience. Maybe it’s a Walmart or a Target or somebody with thousands and thousands of product SKUs that have come in from millions and millions of suppliers and there’s a huge product catalog. Being able to provide that product description information into an LLM, easy for me to say, that then can be either an internally facing tool or an externally facing tool, but built and trained on that internal dataset. I think that’s an interesting model for me.

I mean, obviously the public models are fascinating, but you touched on it there when we were discussing it. I want to understand where the Elasticsearch Relevance Engine fits in that context. How do you see that helping organizations as they optimize their infrastructure and use their talent? Maybe, and I’m setting you up because you’re the Chief Product Officer, let’s go towards product, but just frame that in for me a little bit around the Relevance Engine.

Ken Exner: I like to think of it as context shapes relevance, but relevance also shapes context. Let me explain what that means. In order for these LLMs to give meaningful answers, they have to have a little bit of context, and that context needs to be passed through prompting, which means that you give it context when you ask a question. You say, “Tell me a story about whatever as a nursery rhyme.” You’re giving it context. You want it to do it this way, or you give it context by saying, “Here’s some information. Summarize it.” You’re giving it context.

You can also pass context using context windows, which means that you can pass information to an LLM in a private setting, like in a single tenant environment, that gives it more context on what to be answering something about. If you’re trying to get a summary of a legal brief, you can pass a legal brief and say, “Summarize this.” Or if you are trying to scope the answer to a product catalog, you can pass information about the product catalog that is useful in answering it. This is essentially what we’re trying to do is trying to provide the context that gives the answer from the LLM more relevance.

One of the challenges is there are limitations on what you can do here in terms of how big these context windows can be. It’s expensive if you’re trying to send lots of tokens over. What you end up wanting to do is send over the most relevant information. This is why I say relevance also shapes context, because you need to decide what is the most important information for this LLM to answer with, and then that allows the LLM to focus on that. If you are asking something like, “Tell me the most current way to update my tax elections,” it goes and looks at a company’s information and figures out the relevant documents or the most relevant wiki items to send the LLM.

Using our Relevance Engine, you can send the most relevant information to the LLM that allows it to have the context to give you the most relevant answer.
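
To make that retrieve-then-prompt pattern concrete, here is a minimal sketch in Python. This is not the Elasticsearch Relevance Engine API itself; the index name, field names, and model choice are illustrative assumptions, simply pairing the official Elasticsearch client with an LLM client to pass the most relevant documents as context.

```python
# A minimal retrieve-then-prompt sketch, assuming a hypothetical
# "company-wiki" index with a "body" text field. Not Elastic's
# product API -- just the general pattern described above.
from elasticsearch import Elasticsearch
from openai import OpenAI

es = Elasticsearch("http://localhost:9200")
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What is the most current way to update my tax elections?"

# Relevance shapes context: retrieve only the top few matching
# documents, because context windows and token budgets are limited.
hits = es.search(
    index="company-wiki",                  # hypothetical index name
    query={"match": {"body": question}},
    size=5,
)["hits"]["hits"]
context = "\n\n".join(hit["_source"]["body"] for hit in hits)

# Context shapes relevance: the LLM answers in terms of the
# proprietary documents it was handed, not its generic training data.
answer = llm.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer using only the context below.\n\n" + context},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```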

Steven Dickens: I’m trying to break this down and get this in my head. This is that pre-staging component, pull the documents back that I want to put in front of the LLM type model. Obviously there could be thousands and thousands of documents to search from, maybe even hundreds of thousands of documents. How do I pull back the five that are most relevant and then present them to the model? Is that the way of thinking about that?

Ken Exner: It does, but all this happens at query time. All this happens in real time. It’s blazing fast. All this stuff has been indexed beforehand by Elastic and Elasticsearch through our integrations with the various LLMs. We then pass the context that allows the LLM to understand the most relevant information.

It allows a company to take their proprietary information, take their private information, and use it to inform the LLM so that they can get the most relevant answer that’s specific to their business, that’s specific to their use case. I think it’s an exciting space. It’s an exciting way to bridge enterprises and private use cases with the power of these large language models.
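
For readers who want to picture that “indexed beforehand” step, here is a minimal sketch, again assuming the hypothetical company-wiki index from the previous example: documents are loaded into Elasticsearch once, ahead of time, with both a text field for keyword relevance and a vector field for embeddings, so retrieval at question time stays fast.

```python
# Indexing ahead of query time, using the official Elasticsearch
# Python client. The index name, mapping, and documents are
# illustrative assumptions.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

# A text field for keyword relevance plus a dense_vector field
# for semantic (embedding-based) retrieval.
es.indices.create(
    index="company-wiki",
    mappings={
        "properties": {
            "body": {"type": "text"},
            "embedding": {
                "type": "dense_vector",
                "dims": 384,          # must match your embedding model
                "index": True,
                "similarity": "cosine",
            },
        }
    },
)

docs = [
    {"body": "To update your tax elections, open the HR portal...",
     "embedding": [0.0] * 384},       # placeholder; use a real model
]
helpers.bulk(es, ({"_index": "company-wiki", **d} for d in docs))
```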

Steven Dickens: I think as we look at the AI space, I heard the other day there’s over 500 startups that have already been funded with AI in their pitch deck.

Ken Exner: Probably more.

Steven Dickens: Probably more than that this week have received some type of funding. But I think for me the piece that’s most interesting is if this is the new Gold Rush, what are those picks and shovels type companies? There’s going to be lots of specific models. You mentioned the legal profession, I think that’s going to be one that I see. There’s going to be ones for cancer. There’s going to be ones for particular drugs. There’s going to be ones for retail. There’s going to be a lot of industry use cases and organizations that get started that we haven’t heard of that are those cool apps that we’ll all be using two years from now.

But as I look at the sector, I’m more interested in the Levi Strauss type company from the Gold Rush rather than that person and that mine that stood up for a couple of years to do the panning for gold. Where do you see yourselves as Elastic? In that, I see you as a picks and shovels type company that’s going to ride this wave regardless of what that flashy app is that we don’t know we’re going to be using in two years’ time, but we know we’re going to be using it. I see you guys a little bit further down the stack as one of those picks and shovels type companies. Where do you see yourselves?

Ken Exner: I think we’re both, but I think the announcements we have this week are really about the picks and shovels. It’s about the foundational capabilities, but we do both. We make sure that we invest in the foundational building blocks that allow developers to build unique solutions on top of us, but we also use those things ourselves in our solutions, in our observability platform, and in our security platform. We consume these things ourselves in order to do things like runbook automation or anomaly detection.

We are consuming some of the foundational capabilities ourselves, but it does start with the investments in the foundational capabilities, the primitives that allow people to build unique things on top of us. In terms of Elastic and our relationship with AI, our investments in AI, this goes back a while, this goes back to our acquisition of Prelert a number of years ago and a bunch of capabilities that we’ve invested as building blocks in the platform. We’ve invested in making sure that Elasticsearch is essentially a vector store, a vector database that can store embeddings data, which has similarities to data generated and used by LLMs.

We’ve invested in the relevance capabilities. We’ve invested in transformer model integration. Well before ChatGPT, we had integrated with transformers so you could integrate whatever transformer model you were hosting or finding in Hugging Face together with Elasticsearch. We’ve been investing in these capabilities for several years now that have allowed us to now be at the bridge between public LLMs and enterprises. This is not something that we suddenly started working on last week.

This is something we’ve been working on for a couple years now. It’s allowed us to sort of be in this unique place where we can help shape the context of the answers in an LLM using a business’s unique data.
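
Ken mentions Elasticsearch acting as a vector database that stores embeddings, and integrating transformer models from Hugging Face. Here is a minimal sketch of what that looks like from the client side, assuming the hypothetical index above and using a sentence-transformers model as a stand-in for whatever transformer model you actually deploy:

```python
# Semantic retrieval over stored embeddings: embed the question with
# a transformer model, then run a kNN search. Index and field names
# follow the hypothetical sketches above.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")
model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

question = "How do I sign up for benefits?"
query_vector = model.encode(question).tolist()

results = es.search(
    index="company-wiki",
    knn={
        "field": "embedding",
        "query_vector": query_vector,
        "k": 5,                 # the five most similar documents
        "num_candidates": 50,   # candidates considered per shard
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["body"][:80])
```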

Steven Dickens: Ken, as we start to wrap up, I’m going to do a little experiment. You didn’t see this as a prepped question. I’m going to put it almost as a prompt into ChatGPT. Summarize the last 15 minutes of our conversation into two or three sentences.

Ken Exner: Easy.

Steven Dickens: You didn’t know that one was coming, so I staged that one on you.

Ken Exner: No. Elastic has been investing in AI foundational capabilities for several years, including making Elasticsearch a first-class vector database, integrations with transformer models, and proprietary AI models for relevance and ranking. These capabilities that we’ve been investing in for a couple of years now allow us to create a bridge between LLMs, large language models, and companies and their private data.

We are a bridge that allows companies, e-commerce companies, law firms, anyone with private data, to leverage the power of a large language model like ChatGPT with their private data. We provide the bridge that helps give context to the large language model that allows it to give a relevant answer specific to your business.

Steven Dickens: Ken, that was a fantastic summary. Really appreciate the conversation. You’ve been listening to the Futurum Tech Webcast. Please like and subscribe. Join us on the next episode and we’ll see you next time. Thank you very much.

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
