Reframing AI: From Cloud-based to the Edge, AI is Expanding its Reach – Futurum Tech Webcast

In this episode of the Futurum Tech Webcast, host Daniel Newman and I discuss the state and direction of the ever-expanding world of artificial intelligence (AI), and how, from the cloud to the edge, AI is expanding its reach. We talked about how chipmakers are driving the future of AI, shifting from more traditional compute thinking to making bigger, longer-term bets on core AI technologies, as well as the expansion of AI from cloud-based models to more distributed, edge- and device-based models. We explored the business, societal, and other broad implications of this expansion, as well as how these different tiers of AI technology will likely work together.

Some of the specifics we touched on in this conversation included:

  • The evolution of AI from a cloud and data center focus to on-device AI, and the importance of that evolution to the types of experiences that it creates for users.
  • The challenges of cloud-based AI, including latency issues, connectivity issues, overburdened networks, and outages, all of which result in performance issues and frustrations for users.
  • The binary model of cloud-based AI and device-based AI, and what a third layer at the edge can do.
  • How AI is increasingly finding its way into smartphones (in ways that users might not even notice), some interesting use case examples, and where that is particularly useful.
  • The role of chipsets like Qualcomm’s Snapdragon 865, some of the impressive functionality they are serving up, and what that means for the future of on-device communication.
  • How AI is finding its way into factories, cities, and utilities and what that means for the future.
  • AI’s role in healthcare, especially during an epidemic, when speed is critical.
  • The role of AI in autonomous vehicles that will someday become the norm.
  • How AI is transforming enterprise-class analytics and democratizing enterprise software by making it not only more effective, but also more accessible and user-friendly.

We also indulged in a shameless plug of our latest book (Human/Machine: The Future of our Partnership with Machines) to briefly discuss how bots, voice interfaces, automatic scheduling, and autofill are already transforming the way we work, from speeding up the process of drafting an email to streamlining document creation and project management.

You can watch our conversation here:

or grab the audio version here:

As we concluded our discussion, we explored the ethics of AI, and the importance of thinking through some of AI’s limitations when it comes to making ethical decisions. In some cases, where “right and wrong” are simply matters of law, regulation, widely accepted societal norms, terms of service, or logic, ethical questions don’t pose much of a challenge for artificial intelligence. The rules are already codified. In other cases, however, where a decision may lead to loss of life, loss of liberty, loss of privacy, or other outcomes we deem serious, humans may have to step in and either guide AI in the process of deciding the best course of action, however imperfect it may be, or override AI by default. We will likely revisit the issue of AI ethics in a future webcast, as it is a timely, important, and fascinating topic that demands continued attention.

Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Other insights from the Futurum Research team:

CISO’s Playbook for Leading Security During COVID-19 – Futurum Tech Podcast Interview Series

Qualcomm’s New Snapdragon XR2 5G Reference Design Opens The Door To Truly Wireless On-Demand 8K XR Experiences

Qualcomm’s Virtual MWC Event Reveals Big Wins For Snapdragon 865

Transcript:

Olivier Blanchard: Welcome to the Futurum Tech Podcast, now also a webcast and videocast. I’m Olivier Blanchard, Senior Analyst at Futurum Research, and I am joined today by Futurum Research Principal Analyst, Daniel Newman. And today’s topic is kind of a dual topic. So, on the one hand, I want to talk about how chip makers, silicon companies are driving the future of AI, and essentially moving from traditional compute to making bigger long-term bets on AI.

And on the other hand, I also want to talk about how that is transforming the AI landscape. It used to be more of a cloud-based AI model, or at least that was the perception; people remember how IBM Watson seemed to embody that idea for a while, where the compute power behind AI functionality was thought to live mostly in the cloud. Now, and moving forward, it’s a more distributed AI model, where a lot of that compute power is embedded in every device as opposed to just being in the data centers. And more specifically, what are the business, societal, and other broad implications of that shift?

And I also want to talk about the AI edge cloud. It might be a little too soon to talk about it, I don’t know, but hold onto that thought, we’ll circle back to it. So first, I want to open this up with something easy, and I think a good way to reframe the conversation about AI, because we tend to think about AI in these big terms of how it can make a lot of decisions for us, how it can control major systems. But AI is also found in everyday objects and is now a major component of the functionality of our smartphones. And I think starting here, starting as small as we can get, helps give us a glimpse into how AI can easily be embedded into pretty much everything, from phone cameras and audio to even simultaneous translation during phone calls.

Daniel, now that I’ve talked for five minutes, I just wanted to get your first impressions or first set of comments on what I just said, and maybe drill down a little bit on some of the reality versus the myths of this AI discussion. Where we were and where we’re headed.

Daniel Newman: Yeah. AI is a really interesting topic that’s found its way into the limelight over the last few years. I think Watson winning Jeopardy was one of the big moments where people said, holy crow, look at what AI can do. It can defeat the champion of the world pretty easily just by ingesting Wikipedia. But it’s really a much bigger discussion with a much longer history than just some of the recent applications. What we’ve seen is a history spanning 20, 25 years, and by the way, it goes back further than that. We’ve been thinking about artificial intelligence for a long time; everybody remembers 2001: A Space Odyssey, and we remember Orwell’s 1984. Those were all AI-driven discussions. We’ve been going through an evolution.

Chip makers and compute: CPUs, GPUs, VPUs, NPUs, these different kinds of silicon have been built to enable compute to process more and learn more, and that creates functionality like machine learning, deep learning, and neural networks. These technologies are what’s powering things we use every day. Smart speakers, chatbots, language translation, these are just a few examples. And of course, data enrichment inside big data platforms, all based on ML and deep learning algorithms.

But we’ve also seen this evolution, to your point, where we started doing this in the data center with specialty compute devices and GPUs built by chip makers and specialty companies like Nvidia, and that data center technology has evolved into the cloud. So over the last few years, we’re seeing the likes of AWS, Google Cloud, IBM, and of course Microsoft Azure, and others playing in this space, enhancing their cloud offerings to support AI/ML, making it simpler, and adding frameworks. That democratized it: you didn’t need a room full of specialized compute horsepower, you could basically lease compute power specifically for AI and ML workloads.

And now what we’re starting to see is that data is proliferating most rapidly not in the data center. It’s actually proliferating most quickly, Olivier, at the edge. So you hear about IoT, you hear about edge data, on-device data. And so the silicon makers’ next evolution is making silicon that works on our mobile devices, our portable computers, PCs, always-connected PCs (ACPCs), silicon that can rapidly process data in real time and use AI.

We’re kind of seeing this evolutionary journey being simplified by companies building frameworks, tools, and software that, like I said, democratize applications where AI can be layered into everything. Whether that’s the recommender engines we use to get movie and product recommendations on our Netflix and Amazon accounts, respectively, or the AI in our CRM tool that’s helping us understand what our next best action might be with a buyer or seller in a B2B business, or the smart speaker that can hear us and translate, or just a chatbot we might talk to when we’re trying to get customer service.

So we’re seeing a lot of enhancements, and AI has come a long way. It went from something that was really expensive and a big, heavy lift to something that almost any business on the planet can benefit from today.

Olivier Blanchard: Right. I’ve been especially fascinated by something that’s kind of subtle: sometimes it’s hard to tell where the AI actually lives, where the compute power actually lives. For instance, when you’re talking to your smart speaker, it’s difficult to tell where that artificial intelligence and language processing actually takes place, because the speaker is connected. If it’s connected to the internet, is your smart speaker hearing your words and then connecting to some server somewhere that does some kind of translation or analysis and then comes back with an answer, or is it all done directly on the speaker?

And one of the most fascinating and impressive examples of AI actually living on a device, as opposed to being on the edge or in the edge cloud, was a demonstration I saw at the Snapdragon Tech Summit last year by Qualcomm, where they demonstrated a live conversation between two people using their phones. One person on one end would speak English, the other person on the other end was speaking Chinese. And the process was actually fairly simple when you map it out. It’s voice to text. Somebody would speak in their language and their voice would be converted on the device to text. And then that text would be translated from English to Chinese or from Chinese to English, depending on which direction the conversation was going.

And then the same process would turn that translated text into translated voice. And this wasn’t done through a hybrid model of sending data packets to the cloud somewhere to be translated and then waiting for them to come back. It was done directly on the device. To me, that seemed like a fundamental change in the way we think about AI, but also in the way we start to embed AI functionality in our devices, where you don’t necessarily need a connection to the internet, you don’t necessarily need a super-fast 5G connection or Wi-Fi 6 connection for this to work. The devices can actually do that computing and essentially manage the AI functionality on their own.
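For readers who want to picture that flow, here is a minimal sketch in Python. Everything in it is hypothetical: the three model objects stand in for whatever on-device speech recognition, translation, and speech synthesis engines a given chipset might ship with. The point is simply that the three steps chain together with no network call.

```python
# Minimal sketch of the on-device translation pipeline described above.
# The asr/mt/tts objects are hypothetical stand-ins for on-device models;
# no step reaches out to a server.

class OnDeviceTranslator:
    def __init__(self, asr_model, translation_model, tts_model):
        self.asr = asr_model         # speech -> text, runs locally
        self.mt = translation_model  # text -> text, e.g. en <-> zh, runs locally
        self.tts = tts_model         # text -> speech, runs locally

    def translate_speech(self, audio_in, src_lang, dst_lang):
        text = self.asr.transcribe(audio_in, lang=src_lang)       # 1. voice to text
        translated = self.mt.translate(text, src_lang, dst_lang)  # 2. text to translated text
        return self.tts.synthesize(translated, lang=dst_lang)     # 3. translated text to voice
```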

I just wondered what your thoughts were on that, because in a more rudimentary way, that’s already what was happening with phone cameras. Your phone camera is already making adjustments to optimize the quality of the image: in low light, zooming out, zooming in. And now we’re able to do near real-time language processing and translation without having to connect to a server somewhere. So where do we go next?

Daniel Newman: I think these technologies have to be made generally available and highly usable. Forever, we’ve seen demonstrations of technologies that could change the way we live, but accessibility is the key. And for these types of technologies to be accessible, the silicon, the development, the AI on chip needs to continue to advance, and it needs to be rolled out as a standard feature in devices. And then adoption becomes critical. You need this to be adopted by many different devices, because the work has to be done on all ends. If the devices don’t have it on device, then you’re depending on a latency-ridden edge or cloud. And of course, the edge delivers lower latency than the cloud, and the cloud often has lower latency than trying to make some sort of remote connection into a data center. So that’s why we’ve seen things move that way.

But on-device AI is going to add a number of functions and capabilities. That’s been the proliferation of tech that allows for tiny chips and circuits on these small portable computers that live in our pockets, computers exponentially more powerful than the ones old guys like us used even just five or ten years ago, and now able to process AI workloads on device. These kinds of translations and such, they’re important. It’s important that they’re made available.

Now, there are cloud-based frameworks also being developed to provide almost zero latency. So on-device will be important, edge will be important, and cloud will be important. Where the data lives will always impact the ability to do certain things and create certain types of solutions. But in the end, mostly it’s about the experience. And you and I have talked about this; we’ve written books about it. If the experience is seamless, it’s not so much a matter of whether the AI is on device, on the phone, at the edge, or in the cloud. Can we literally have a conversation with someone, to your example, in a foreign language, with no delay, no latency, no interference, where it feels like you’re just talking to someone?
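One way to picture that latency hierarchy is as a routing decision for each workload. The sketch below is purely illustrative, with made-up tiers and millisecond figures, assuming that more distant tiers offer more compute at the cost of latency:

```python
# Illustrative only: pick where to run an AI workload given a latency budget.
# The tiers and millisecond figures are assumptions, not measurements.
TIER_LATENCY_MS = {"device": 5, "edge": 20, "cloud": 80, "remote_dc": 200}

def pick_tier(latency_budget_ms, available_tiers):
    """Return the most capable tier that still fits the latency budget.

    Tiers with higher latency are assumed to offer more compute,
    so we prefer the slowest tier that still meets the budget.
    """
    viable = [t for t in available_tiers if TIER_LATENCY_MS[t] <= latency_budget_ms]
    if not viable:
        return "device"  # fall back to local best-effort processing
    return max(viable, key=lambda t: TIER_LATENCY_MS[t])

print(pick_tier(50, ["device", "edge", "cloud"]))  # 'edge'
```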

We all know what it’s like when you chat into your remote control and then it’s trying to process. I can deal with that for trying to find a show I want to watch. But when you’re trying to talk to someone, just think about how frustrating it is when your computer has latency or you have a bad connection. The fact is, it has to work right. And where we’re going, like I said, it’s all scale. Everything that we’re doing right now, the next thing is going to be scale. There’s going to be more data, meaning more scale. You’re going to have more demand for use, you’re going to need greater levels of accuracy, reduced levels of latency, more speed, and more features and functionality. So that’s the path I think we’re taking forward.

Olivier Blanchard: If it works well, you can’t tell where the computing is taking place, whether it’s on-device, in the cloud, somewhere else, or all of the above combined, and that’s pretty nice. I think it’s interesting, because I’ve been looking at the role of AI in things like collaboration, productivity, even analytics. And that doesn’t need to be device-based AI functionality. It can live in the cloud, it can live anywhere. But it’s interesting to see the evolution of bots, for instance. And we’ve talked about this, so I’m going to go ahead and plug our book, Human/Machine, which explores the relationship between humans and machines and whether AI and smart automation will be a job killer or a job enhancer. So if you haven’t bought our book, you should definitely give it a shot.

To come back to our main topic: bots, and also automatic scheduling becoming kind of smart scheduling, automating repetitive, low-value tasks to liberate workers so they can work on more valuable tasks. I look at these kinds of major threads and trends in the use of AI. And when you were talking about speaking into your remote control and letting your TV or your service try to find a movie for you to watch, it seems to me that where we’re going, especially with AI (and it’s not just machine learning, it’s artificial intelligence, emphasis on intelligence), is toward more predictive models, where right now AI seems to be reactive.

You task it with something and then it performs a function. I think in the next decade, what we’re going to start to see is AI actually trying to anticipate our needs and meet them before we even ask, which means that your scheduling won’t just be you having to input everything. It will prompt you and give you options that seem logical based on the context and also on your patterns of behavior. Same with a lot of your work tasks. If you have a habit of always being on the phone on Monday mornings, your Outlook or whatever system you use is probably going to start getting a little smarter about that and automatically block off that time if somebody requests you to be available for something.
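As a rough sketch of how that kind of anticipation could work, a scheduling assistant could mine your call history for recurring busy windows. The function below is illustrative only, assuming a simple list of past call timestamps:

```python
from collections import Counter
from datetime import datetime

def recurring_busy_slots(call_log, min_occurrences=4):
    """Infer recurring (weekday, hour) busy slots from past calls.

    call_log: list of datetime objects for past calls.
    Any slot seen at least min_occurrences times is one a scheduling
    assistant could auto-block before accepting new meeting requests.
    """
    counts = Counter((dt.weekday(), dt.hour) for dt in call_log)
    return {slot for slot, n in counts.items() if n >= min_occurrences}

# Example: a month of Monday 9am calls yields {(0, 9)}, so the assistant
# would decline or flag meeting requests for Monday mornings automatically.
log = [datetime(2020, 5, d, 9) for d in (4, 11, 18, 25)]  # four Mondays
print(recurring_busy_slots(log))  # {(0, 9)}
```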

We can see that with analytics as well. We used to do analytics by looking at the past and trying to predict what’s going to happen today and tomorrow; now we’re moving more and more toward more effective predictive analytics, whether it’s business performance or logistics or even playing the stock market. What do you think about that? Am I dreaming? Is this science fiction, or do you feel that we’re getting pretty close to a more proactive era of AI functionality as opposed to a reactive one?

Daniel Newman: I think there’s a lot to unpack there. I think we’re already seeing some of this. For instance, if you’re using Google’s productivity tools and you’re sending a mail in Gmail, you’re seeing it start to be able to finish sentences for you. It’s interpreting, it’s analyzing, it’s learning, it’s looking at your behavior. So think about how a recommendation engine works. Essentially, it’s based on a lot of things in the algorithm, but there are two major factors. How does Amazon recommend products to you? One, it literally monitors your behavior. What have you looked at in the past? What else did you click on? What have you bought? When did you buy it? What price did you pay for it? And these factors become a profile, a data profile.

Two, it uses collaborative filtering: it looks for other people like you, people who fit certain molds, for comparison. And that creates a series of data that ends up providing a catalyst for a recommendation. So those recommendations aren’t accidental. And as we’re all seeing over time, they’re getting better. The filtering is improving at identifying people like you, people who share your values. Some of that is your behavior as a consumer, but there are also other values it’s able to determine about you, maybe based on the books you buy, maybe your political stances, other data from social networks. And it’s able to get to know you, learn about you, and provide you recommendations.
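A toy version of that “people like you” mechanism, user-based collaborative filtering, fits in a few lines. The ratings matrix below is made up, and a production recommender is far more elaborate, but this shows the mechanics: score the items you haven’t seen by the votes of users similar to you.

```python
import numpy as np

def recommend(ratings, user, top_k=2):
    """User-based collaborative filtering on a tiny user x item matrix.

    ratings: 2D array, rows = users, cols = items, 0 = not interacted.
    Scores a user's unseen items by the ratings of similar users,
    where similarity is cosine similarity between rating rows.
    """
    norms = np.linalg.norm(ratings, axis=1, keepdims=True)
    sims = (ratings @ ratings.T) / (norms @ norms.T + 1e-9)  # user-user similarity
    weights = sims[user].copy()
    weights[user] = 0.0                  # ignore self-similarity
    scores = weights @ ratings           # weighted votes from similar users
    scores[ratings[user] > 0] = -np.inf  # don't re-recommend seen items
    return np.argsort(scores)[::-1][:top_k]

ratings = np.array([[5, 4, 0, 0],
                    [4, 5, 1, 0],
                    [0, 0, 4, 5]], dtype=float)
print(recommend(ratings, user=0))  # items favored by the similar user 1
```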

So you’re seeing it happen in productivity tools, you’re seeing it happen in shopping tools. It’s the same thing you see happen when you look at media you might want to consume. How does Spotify know what songs you’re going to want to hear? Well, you train it. You train it and then it becomes predictive. How does a CRM system get to know who may be a likely buyer in a given period of time? It’s able to extract a lot of information from a wide swath of data.

And then it starts to be able to interpret behavior. It’s not actually someday; we’re here. It’s about refinement now, it’s about improvement. You’ve got to feed the beast, and feeding the beast means more data. And by the way, we are very willing participants in this experiment. Society is giving up that data. We can learn boundless amounts about people, and we’re doing that every day through what we search on Google, what we post on Facebook, what we read online, what we put on Twitter. And then of course, outside of social media consumption, it’s also B2B communications. It’s emails that systems are able to intercept and interpret.

And then of course, think about things like vehicles being able to make predictive decisions about your next move. How is autonomous driving going to work? That’s a whole other sector of AI we haven’t even touched on yet. But computer vision is going to interpret its surroundings. It’s going to take thousands and thousands, and ultimately millions, of miles that get driven, and it learns and improves, and it creates the algorithm that’s going to optimize its decision making. And of course, there’s human participation in all of this, in every single part of it. Humans are going to participate; they’re going to provide the data, the information, the examples for the machine. And then the machine is going to optimize it. It’s going to enrich it until it becomes something better than what any individual human can deliver.

Olivier Blanchard: It’s interesting, I wanted to end with automated vehicles, but we can jump ahead and go back to other topics later. Where we are right now with vehicles is essentially this: on the one hand, we have connected vehicles, and that’s just the connection piece of new functionality for cars, which is entertainment, data, all sorts of things. We’ve also reached a point where vehicles can be semi-autonomous, so you have some driver-assist functionality like automatic braking or self-parking, that sort of thing. And to some extent, we have very advanced, almost fully autonomous functionality as well, where vehicles could theoretically drive from point A to point B without a driver on board and without running anybody over or crashing into a tree.

I don’t feel that we’re 100% there yet, although some Tesla drivers might disagree with me. For safety reasons, I think we’re very close but not quite there, or at least not where we ought to be in terms of true safety, real performance, and reliability at or near 100%. But we’re getting there.

But it seems to me that there are two unanswered questions about where we’re going with fully autonomous vehicles. On the one hand, there’s the question of what percentage of the artificial intelligence that will allow vehicles to be fully autonomous will live on the vehicle versus on the edge cloud. In essence: how much of the AI powering the autonomy lives solely encased in the vehicle, versus additional data, processing, and compute happening around the vehicle, through 5G connections, Wi-Fi 6 connections, or any other kind of network connection? Smart devices helping control, guide, and provide information to the vehicle when there might be visibility issues in bad weather, or when circumstances might negatively impact the onboard computer’s ability to drive the car itself. So that’s one point.

And the other is that a lot of decisions that have to be made by a driver, whether that driver is an AI or a human being, happen in a split second and require some sort of baked-in ethical value base. It’s the classic dilemma: if you can’t brake and you can either go right or left, and if you go left you’re going to potentially run over 10 people, but if you go right you’re going to run over one child, which direction do you take? Do you turn the wheel left or right? That’s already a difficult decision for a human to make, especially in a split second, but it’s an exceedingly difficult decision for an AI to make. And I don’t think it’s something that can be purely logic-based.

And so I’m wondering, and I’m just thinking out loud here, if in some way we have to preprogram some of these ethical decisions into a vehicle based on driver preferences. In other words, if the decision is between saving 10 adults or one child, or between saving a few pedestrians or my family inside my vehicle, should the vehicle decide that in the moment? Or should the driver, when he or she is setting up their driver profile after first buying the car, be allowed to input that into the vehicle so that the AI knows what to do when that problem happens? So feel free to tackle either one or both of those points and see where it takes us.

Daniel Newman: I think when you talk about the ethics of AI, this is going to be a societal debate that goes on for a long time, because first of all, you have standards by country, then you’ve got standards for the world. You’ve got technologies that are developed for a global economy, but rules that are based on different geographies. The vehicle example has been used umpteen times, and it’s a good example because it’s something everybody can relate to. Will we come up with an algorithm for vehicle decisioning in an emergency scenario that’s going to solve all the problems? It’s unlikely. Humans will drive this. This is not going to be decided by the tech. This is going to be decided by humans, and there will be a set of preferences and a set of inputs that will be utilized to optimize the decision tree. These things don’t actually have a conscience.

So when you say it’s hard for the AI, it’s not hard for the AI. The AI will run us all over; it doesn’t care. It’s hard for humanity to actually have a discussion about this and say, is a two-year-old baby’s life more valuable than 10 adults? How many children do those 10 adults have who will be raised without parents if the car goes left instead of right? But doesn’t that child deserve its life too? On the other side of this, if AI is optimized and done correctly and crazy drivers like me are taken off the road indefinitely, how many lives might we save? Because outside of quarantine, motor vehicle accidents are one of the most common causes of death, a significant contributor to death counts in most developed societies with advanced highways and road systems, and of course everywhere else.

But I think the moral of this story is, these are decisions that will need to be made. We are already facing a bit of a controversy right now where AI capabilities are far ahead of governments’ ability to govern them, because the technology companies building them are far more advanced than those that regulate them. And so we are going to have to play catch-up there, and for now we have to expect a group of companies, companies like Google and Amazon and Microsoft and IBM and Oracle and Cisco and Huawei and Alibaba, to basically govern this, because we cannot expect governments to be able to understand what’s possible, or to manage it, anytime soon.

So that’s another whole story for the world to interpret. We’re going to need to address this, and I don’t know what the plan is. I’m not hearing anything that’s really compelling; I’m hearing that some of these companies are trying to create consortiums for AI and for ethics in AI. But boy, given the state of our world right now, Olivier, I’m not sure whose hands I would want to put these kinds of decisions in, and I certainly don’t want it to be unilateral.

Olivier Blanchard: There’s that, and there’s also public adoption. If we’ve learned anything, and especially for me, living in the South in the United States during this coronavirus phase of 2020, it’s that bad information or conflicting information prevents people from making good decisions. And it leads to a lot of confusion. If everyone is not aligned, if the governments and the media and the technology companies and the consumers and the OEMs, the entire ecosystem, is not aligned on where it wants to go and how it wants to get there, then it creates a lot of friction.

It’s fascinating to me that we now have contact tracing apps starting to populate the app stores. They’re free to use, people can start using them, and it’s a voluntary process of essentially just entering your positive or negative COVID status and then allowing other people in your network to anonymously know that they’ve been in contact with someone who tested positive.
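The anonymity mechanism behind these apps is worth a quick illustration. The sketch below loosely mirrors decentralized exposure-notification designs; all names are hypothetical. Devices swap random tokens, and matching happens locally, so nobody’s identity ever leaves the phone.

```python
import secrets

class TracingApp:
    """Toy model of anonymous, voluntary contact tracing."""

    def __init__(self):
        self.my_tokens = set()     # random tokens this device has broadcast
        self.heard_tokens = set()  # tokens overheard from nearby devices

    def broadcast(self):
        # Broadcast a fresh random token; it carries no identity.
        token = secrets.token_hex(16)
        self.my_tokens.add(token)
        return token

    def report_positive(self):
        # Voluntarily publish this device's tokens to a shared list.
        return set(self.my_tokens)

    def check_exposure(self, published_tokens):
        # Matching happens on-device, so contacts stay anonymous.
        return bool(self.heard_tokens & published_tokens)

# Two phones near each other exchange tokens:
a, b = TracingApp(), TracingApp()
b.heard_tokens.add(a.broadcast())
# Later, A reports positive; B learns of the exposure without learning who A is.
print(b.check_exposure(a.report_positive()))  # True
```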

And the downloads for these apps, even though they’re fairly new, are negligible. We have a technology here that could help save lives, and people are not downloading it. There are behaviors that we know would help save lives and shorten the time frame to recovery, and people are not observing those best practices either, unless mandated by law, and unless there’s a punishment at the end. It’s either carrot or stick; those are your two options. There are two doors. Either you walk through one door and get a reward, or you walk through the other door, do something you shouldn’t be doing, and get the stick.

And it just seems to me that just building it, just creating the technology, is not enough. Having this kind of wild West, with one car company offering these features and another car company offering those other features over there, and leaving the ethics and best practices of how to use AI, with your vehicles or with anything else, to sort themselves out, is not going to work. It seems that we need more consensus, and more proactive efforts to build consensus, in order for these technologies not to go haywire and ultimately fail to accomplish what we want them to accomplish.

And I’m like you, I don’t know, I think we need better policies, and I think that we need to rethink the role of governments in being more proactive in the development of standards that technology companies and innovators can then kind of steer their research and their products towards. But we’re not quite there yet, and I don’t know who leads this. Because I don’t think it needs to be the Googles and Facebooks and Amazons of the world. But somehow, we’re not addressing this on a policy level the way we should. I’m not sure how to change that.

Daniel Newman: I don’t think we have that type of control. I think it’s a discussion that needs to continue to be had. I think folks like us need to write about it, talk about it, continue to use the channels and the communities and listeners we have and educate people on it, and we need to look to policymakers. This is going to need to be developed a lot like other policies. You’ll need your UNs and G-20s and CDCs and WHOs, and different policymakers will clearly decide which ones they feel are important. But we’re going to need these bodies to sort of come together and we’re going to need expertise from both the industry itself and from government. And this is going to be a real example of where these two bodies are going to need to collaborate closely.

Again, AI has a lot of promise, and a lot of people are scared of it, but by and large, it’s going to make our lives better. It’s going to give us advancements in capabilities and information and accessibility and safety and health. We barely touched on healthcare, but my gosh, for the next coronavirus, you take supercomputing, you take AI, you take quantum, these advancements in technologies, and you’ll be able to isolate molecular compounds that can potentially serve as treatments and cures, driving vaccines.

This is epic, this is amazing. We’re going to learn a lot from what we’ve just been through with COVID-19, and it’s going to get better, it’s going to continue to get better. So there’s a lot to be excited about, Olivier, and AI has come a long way. It’s not new, and I feel like I constantly have to remind people of that; the smart speaker, or Siri, or Cortana, whichever you’re using, was just your first entree. AI has long been about taking data, identifying patterns, enriching it, testing algorithms, refining them with more data, improving the algorithms, and testing them again. Now it has more data, in more real time. And as compute becomes more powerful, latency becomes less problematic, and storage and data become more available, these things are all a perfect storm for creating a world where AI enhances our lives and experiences.

Olivier Blanchard: Yes, it is. I almost want to close on that, because that was a really nice close. However, I just want to ask you one last question. We started with this notion that there’s AI living in the cloud, there’s AI living on the edge cloud, and then there’s AI living on device. And obviously, all of these things can be tied together, and I think 5G, which brings very fast communications with very low latency, can be the glue that brings it all together. It’s sort of the connectivity layer that makes all of this work, and work better, at least in the future.

And I’m wondering what your prediction is in terms of where the focus of AI innovation will be in the next two to three years. Will it be more in the cloud? Will it be more in the AI edge cloud? Or will it be more in on-device AI that’s more independent from clouds and connectivity?

Daniel Newman: I don’t know. That’s a great question. My logical brain would say the edge, because it sort of splits the difference. Like I said, the data center has evolved into the cloud, but the type of AI those two are doing is similar; the cloud democratized it, made it available, made it more consumable for enterprises and companies. The edge is able to collect and respond to so much data, its latency is low, and it’s got more compute horsepower, more storage, more networking than a device. It can take more data concurrently. And with what happens on device, you can take a lot of the stress off the device itself. And let me be very clear: these three things are going to work harmoniously. There will be workloads in the cloud, there will be workloads at the edge, and there will be workloads on device.

Devices are still relatively small; the compute, the number of chips, circuits, and transistors you can put in a phone is less than in a small data center at the edge, which is less than in the big data centers in the cloud. We’re also seeing, I think it was with NVIDIA’s most recent announcement, 20 racks or 11 racks coming down to one rack. We’re seeing nano, in terms of things getting smaller. We’re seeing all the technology get smaller. So in time, this mobile device will be able to do more AI than the cloud does today. That might be 10 years from now, but at some point, the device will do a lot. I still think developers are looking at it harmoniously, though; the three need to work together interoperably.

But I like the edge if I had to pick one, because it can access the cloud, but so much of your environmental data, vehicular data, device data, sensor data can all be picked up at one time, get real-time processing, and be utilized for decisioning or for any sort of enrichment with AI.

Olivier Blanchard: I agree. I’m particularly fascinated by how AI can enhance the experiences and functionality of portable devices like phones, and by where it can really take autonomous driving. I enjoy driving like a madman as much as the next person, and I enjoy the freedom of it. But the prospect of having a car that I don’t have to park, that parks itself, that can drive me home if I’ve had three glasses of wine at a friend’s house, that’s going to eliminate traffic accidents, traffic stops, traffic fatalities, period. That’s amazing. And a lot of that is what’s in store if we keep going down this route.

But anyway, this is all the time that we have to devote to AI today. Unfortunately, there’s so much more that we could talk about, but I want to thank you all for joining us on this little AI 201 tour. I definitely want to thank Daniel Newman, our principal analyst at Futurum Research. And again, I invite you to subscribe, if you haven’t, to the Futurum Tech Podcast, which is again now also a webcast and a videocast. And I wish you a great week.

Author Information

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.
