On this special episode of the Futurum Tech Podcast – Interview Series, host Daniel Newman is joined by Logan Wilt, AI data scientist and Applied AI Center of Excellence Garage Leader at DXC Technology, to talk about using AI to approach problem solving and the human-machine relationship we are now encountering. They also discuss how organizations can get started with AI and data collection, a must for the time we are living in.
In our conversation, Logan and I discussed the coexistence of AI and humans. Despite the sci-fi movies, AI won’t ever fully replace humans. AI is learning from every interaction, but it still has shortcomings that humans need to compensate for, just as humans have shortcomings that AI can compensate for. Organizations need to embrace the fact that employees and AI have to work together in order to get the most out of the technology.
Ideal AI-human relationships exist already. Logan shared her thoughts on the benefits of ideal human-AI relationships that we are already seeing, even just from simple software applications in the home like a self-learning thermostat or an Alexa device. AI is learning patterns from our behavior, which eliminates the need for us to spend time programming or fiddling with the thermostat. It’s a simple idea, but it frees us to spend more time on other tasks that might require more mental energy.
Even something small like the predictive text in Google apps now can help us write an email faster. The ideal AI-human relationship allows us to spend time on things that really matter.
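As a toy illustration of how that kind of predictive text can work under the hood, here is a minimal sketch of a bigram model: it simply suggests the word that most often followed the previous one in text it has already seen. (This is an illustrative simplification; real products like Google’s Smart Compose use large neural language models, not a frequency count, and the sample corpus below is invented.)

```python
from collections import defaultdict, Counter

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, the words that followed it in the corpus."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest_next(model: dict, word: str):
    """Return the most common follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Tiny invented corpus standing in for a user's email history.
model = train_bigrams(
    "thanks for your email . thanks for your time . thanks for the update"
)
print(suggest_next(model, "thanks"))  # "for" followed "thanks" every time
```

The more of your writing a model like this sees, the better its guesses get, which is the simple version of the human-AI loop described above: you write, it learns, and over time it saves you keystrokes.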
Getting started with AI. Logan and I discussed how companies can get started on their AI journey, which includes understanding that AI and data are cyclical in nature, not linear. Logan recommended that companies start with the problem they want to solve and work backwards from there to determine the type of data that would need to be collected. Then examine it from the other direction: how the data is collected and stored determines the kind of AI that can be done. Data and AI influence each other.
AI is creating jobs, not taking them. Lastly, I shared that AI, machine learning, and data have created entire new departments in organizations with thousands of new jobs. Logan added that to fill these jobs, companies don’t need a data god. They need someone who is logical, curious, and willing to put in the time to examine the issues. Companies can benefit from upskilling existing employees who fit this description instead of hiring someone entirely new. Data and AI are here to stay, and if we want to make the most of them, we need the right talent to drive the processes.
If your company is looking at adopting AI or expanding your existing AI program, this is one podcast you don’t want to miss. Also be sure to check out what DXC Technology is doing with AI — lots of exciting things are sure to come.
Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Daniel Newman: Welcome to the Futurum Tech Podcast. I’m your host today, Daniel Newman, Principal Analyst and Founding Partner at Futurum Research. And welcome to this week’s interview series podcast. I’m excited today to be joined by Logan Wilt. Logan is the AI garage leader within the Applied AI Center of Excellence at DXC Technology. Logan, welcome to the Futurum Tech Podcast Interview Series.
Logan Wilt: Thank you so much.
Daniel Newman: It’s always good to have guests on here that are doing really interesting things. AI as a topic, it really doesn’t seem to matter what we’re doing, what we’re talking about, whether it’s shopping online, education, doing data analytics on world events. AI is top of mind in a lot of things, and I’m really looking forward to having this discussion with you. But before I do that, your title, it’s long, it sounds very important, but I think for most people that hear AI garage leader within the Applied AI Center of Excellence, they go, “Huh? What is that?”
So why don’t you go ahead and just quickly introduce yourself and tell us a little bit about what you are doing at DXC Technology day in and day out. What does that role mean?
Logan Wilt: Yeah. So I’m a data scientist, but most of what I do today is oversee the projects that are delivered via the Center of Excellence. So rather than just being a delivery lead though, we use garage leader because we’re wanting to take kind of a different method towards AI and AI projects. So this is kind of as a counter to say a factory method. So when you think of, when you work in your garage, and you’re doing projects and you’re tinkering, and you’re experimenting and you’re innovating, that’s a theme of how we approach problem solving within the Center of Excellence that we want to continue forward. So that’s why I use the title garage leader.
Daniel Newman: I like it. I’ve heard a few companies use kind of a garage or the garage. It’s a little bit of that kind of cool startup. At some point it moved from the basement to the garage; I guess in certain parts of the world there are no basements, so you have to do it in your garage. But I know it’s really cool and it sounds really interesting. Now, for everyone out there listening to this show, we record these and then we publish them usually a week or two later, as you’ve gotten to know us. It’s the middle of May. It’s still quarantine here. I’m in Chicago. And we are on a tight lockdown here. I got a really funny meme today, Logan. It was a picture and it said something along the lines of, “Whatever you’re complaining about, quit. You could be from Illinois.” But things are really strict here, but we’re coping well.
And we’re starting to come to that backside where, while this isn’t by any means over, we’re starting to talk about life again outside of just COVID-19, the Coronavirus pandemic. We’re starting to really figure out what the next normal, the new normal, those words that I think everyone’s exhausted by hearing, is going to look like. But everyone that’s listened to the show has heard it from me. They know where I live, they know what I do. Just a little fun, a little personal moment, Logan: where are you? Where are you located? And how are you handling this whole pandemic?
Logan Wilt: Yeah. So I live in California, so I live near San Luis Obispo, which is about halfway between LA and San Francisco-
Daniel Newman: SLO-town.
Logan Wilt: Yes. We like the SLO life here. So we’ve been in lockdown for quite a while. California was the first state that shut down, and my county actually shut down a day before California did. So we’ve been in it the longest, at least in this area. We’ve been strict, but not as strict as some of the bigger communities. Where I live is pretty rural, actually, but it’s been an experience. I don’t know where the proverb comes from, but, “May you live in interesting times.” And I was like, “Oh, oh, I do.” I’m feeling that. I’m feeling the threat of “may you live in interesting times.” But it’s going well. We’re heading into spring. We actually got a tiny bit of rain yesterday, which in May for us is huge.
Daniel Newman: It’s warm there. I visited there. My folks love Pismo Beach on the Central Coast. And so I know one of the days I brought my family, we met my folks there, and drove up to the farmer’s market in SLO-Town. And it was great, but it was really hot. I remember how cool it was the whole time we were on the coast, but then once we got to SLO, it was really hot. It was like leaving the beach and going into the desert. It’s a really cool little town. So while this show is not a travel podcast and it is not a real estate recommender by any means, it’s always great to know how you’re coping. It sounds like you and I both find ourselves still pretty confined throughout this, but I am hopeful.
And actually, what you work on with data, data science, and AI has become a really critical component of what we are going through and facing right now with the pandemic. A lot of the decisions that have been made by governments, by officials, by corporations and enterprises have been backed by data science, machine learning, neural networks, deep learning; all these words you’ve heard have become key. And models, I think, is a word that probably everybody has heard by now. Some people are probably very, very distraught because some of the models have been very, very wrong, and some of them have been pretty close and very interesting. And obviously, it’s a continuous adjustment to get there.
And again, not the exact topic, but very interesting, nice caveat, it’s certainly a segue, but let’s talk a little bit about what you’re doing. Let me ask this question because I kind of just gave a primer of a lot of vernacular, a lot of terms. And a lot of people I think nod and shake their head when we start talking about AI and maybe it’s because Siri has answered a question for them, or they’ve gotten an engine recommendation in their Netflix. And they’re like, “I know AI, it’s Netflix. I know AI.” But AI, I think it really does need an introduction. It needs an explanation for most people. How do you do that?
Logan Wilt: First off, it isn’t any one set of technology. Sometimes it’s spoken of as if deep learning and neural networks were synonymous with AI, but AI is really more an approach to problem solving. And it uses lots of different technologies and methods to get there. Neural networks, deep learning, and machine learning are commonly used tools in that process, but they aren’t the whole of it. The other side of it is, you mentioned recommenders. Netflix has their secret sauce, so they don’t really discuss in great detail all the things that are going into their recommender. But there are other ones … like I was thinking, for this, about Pandora. Pandora is my major music app. They’ve got an amazing content-based recommender.
It’s not about whether or not somebody who’s similar to me likes the music. It’s all about the music itself and the Music Genome Project and finding things that sound alike. They’ve mapped and tagged all of that content, and then they essentially link it together to allow you to find new things that are similar to the things you’ve told them you like. Those are both recommenders. Even AI like Alexa or Siri, these are great examples of narrow AI, very task-specific AI. And that’s what most of us interact with. But when most people talk about AI from a kind of emotional standpoint, they flip to generalized AI. That’s what we see in fiction so much, and it’s often portrayed as something evil like HAL or Skynet. And I think that’s because there’s a lot of anxiety around what it would mean to have generalized AI, but it’s also something that’s not here yet.
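A content-based recommender of the kind described here can be sketched in a few lines: each item is represented as a vector of tagged attribute scores, and the system recommends whatever item is closest to one you liked, measured by cosine similarity. (This is a minimal sketch; the track names and tags below are invented, and Pandora’s actual Music Genome attributes and scoring are proprietary.)

```python
import math

# Hypothetical tracks, each tagged with attribute scores from 0 to 1.
tracks = {
    "track_a": {"acoustic": 0.9, "vocals": 0.8, "tempo_fast": 0.1},
    "track_b": {"acoustic": 0.8, "vocals": 0.7, "tempo_fast": 0.2},
    "track_c": {"acoustic": 0.1, "vocals": 0.2, "tempo_fast": 0.9},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse attribute vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def most_similar(liked: str) -> str:
    """Recommend the track whose tags are closest to one you liked."""
    others = {t: v for t in tracks if t != liked for v in [tracks[t]]}
    return max(others, key=lambda t: cosine(tracks[liked], others[t]))

print(most_similar("track_a"))  # track_b shares the acoustic/vocal profile
```

Note that no other user’s taste appears anywhere in this computation, which is exactly the distinction being drawn: it is the content’s own attributes, not the behavior of similar listeners, that drives the recommendation.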
And so when we flip back and forth between talking around AI and whether or not the computers are going to take over versus what we can do with computers to make our lives better, it’s really more that narrow AI. That’s where we are.
Daniel Newman: It has been very application specific, but we are seeing it at scale. I think we’re seeing it at scale quickly. You mentioned a few examples, I mentioned a few examples, and there’s a lot of technology in the background that really has to be put into place. I attended an event, NVIDIA’s GTC, a week ago, and that’s the big GPU conference. A lot of people think it’s just data or it’s modeling, but there’s a ton of horsepower: it’s compute, it’s speed, it’s latency, it’s networking. And NVIDIA’s CEO, Jensen Huang, was actually talking about this. I’m sure DXC Technology works with NVIDIA GPUs, because you guys are building stuff, and they’re at least in the conversation along with other technology you’re certainly using. But he was talking about recommender engines being the filter of the internet, basically the internet of the future, because there’s too much content.
The only way to really make the internet consumable in the future is not going to just be search; it’s really going to be about AI. It can really look at our behaviors and filter them. Right now it might be filtering your movies to help you find movies you like, but eventually it’s going to be everything: all the content, all the information. And it’s going to use different filters. One might be your own behaviors, and that’s going to be perhaps the most optimal, but another is matching personalities. Who are people like you? How can it determine others that are like you and the behaviors they have, where there’s richer and more data it can use to help you understand you? So again, it becomes a process. We participate in making AI good, but then there’s a whole bunch of technology that has to be built, in data centers, in the cloud, applications, and services, and chips, and all this stuff together really makes it happen.
That’s a great introduction. You mentioned something though, Logan, about human-centered, sort of human-machine. This is really interesting. So I wrote a book called Human Machine; actually, the title is Human/Machine, so you’ll probably like that. And one of the big things we were exploring is: what is the role of humans? In the future, what role do humans take? A lot of people are kind of asking, “Is AI going to create a human-centered alternative?” And as we see machines do more, you might think it goes that way. But I think part of your work, at least from what I had a chance to learn getting to know you before this podcast, is sort of helping people understand that narrative, the alternative to people being replaced by machines. So talk about this. How does that become the future? How do we coexist together, in your eyes?
Logan Wilt: This coexistence comes along with the idea of the AI centaur, the half-human, half-AI machine, where the goal is really to work together, to leverage our strengths and compensate for each other’s weaknesses. So when you talk about recommenders becoming the filters of the internet, that’s actually a really great example of what that can be. As humans, we’re really good at generalizing information. We’re good at taking what we’ve learned in one domain that, on a superficial level, doesn’t look like it has anything to do with another domain, and pairing them together. An example I like is the epidemiology of violence: that violence begets violence and you can track it like a disease. That’s taking two areas that otherwise don’t seem like they have anything to do with each other.
And that’s something very human that we’re very good at. But in the same vein, when you talk about the internet and there’s so much information out there, we get overwhelmed with massive amounts of information. Biologically, our brains filter out tons and tons of passing data all day long; otherwise, we kind of start to fritz out. But that’s not a problem for computers. They don’t care. You can throw everything at them and they can churn through all the little details, find the weak signals, and amplify them in a way that’s meaningful for people to work with. Same with the recommenders: who knows how many thousands of things are on Netflix, but they’re able to filter it so that you aren’t overwhelmed. One of the applications that we’ve worked on in the CoE is a healthy eating app.
And it’s about finding recipes and such that you want to cook, and then also finding other similar recipes that really make the most of a weekly food budget. At its core, what it really is, is decision support. If you were trying to map that out for yourself every single week, the amount of information you have to go through, from finding recipes that you like to finding other recipes that pair well with those, that’s tiring. But with you putting in your preferences and your interests, along with the computer, you can work really, really well together. And that’s the human center: the idea that the human should always be a part of the equation. That’s what we’re going for with the CoE.
Daniel Newman: That’s kind of a nice way to think about it, Logan. There is a relationship, and it’s symbiotic in a lot of ways. In our relationship with our devices right now, we’ve become very interdependent. There’s a dependence on one another that’s mostly positive. I wish my kids would spend a little less time on the devices and such, but the overall idea is that we use it for positive things. It fills gaps in things that we aren’t necessarily wired most efficiently and effectively to do. It’s great that you can multiply four- and five-digit numbers without using a calculator, but for most people, it’s like, “Why?” And that’s a very simple example. Why would you use all that compute power in our brains for something a machine with that very specific application is well suited to handle?
There are other more ephemeral and empathic things that we do better than the machine. Raw processing isn’t one of them; the machine will most likely always be better than us at that, kind of like Jeopardy! Remember when the AI machine beat the champion? The guy was really smart, but the machines can just process a lot more data a lot more quickly. The other thing, though, that you didn’t mention, but I think is probably worth mentioning, is really the fact that jobs are not going to just go away. Your job actually is a great example. The data scientist role itself 20 years ago was barely a job. If you were not in some sort of actuarial business or part of a ginormous organization … I have always wanted to get ginormous into a podcast … there was no data scientist role. There might’ve been some analyst roles, some people who knew how to use a computer to process large math and large numbers.
But the point is, as we’ve recognized an opportunity for the value of data, AI, and ML, it’s created entire new departments in organizations with hundreds, if not thousands, and sometimes even more jobs. And in fact, right now there’s not enough talent for the number of opportunities for data scientists out there. A decade ago you could look at something like social media, and that wasn’t a job. There was no job for social media. People were saying traditional ad tech or marketing was going away. And those people adapted and adjusted into new human-machine partnerships. As applications came to market, they gave businesses new ways to tell their story, and a new human relationship and role was created to coincide. And so there are a lot of these kinds of examples. I know in our book we talk about the gas lamplighters. They used to walk around town before electricity, and literally, every night, they’d go up a ladder, climb the pole, and light the gas lamps.
And you could have easily said that with electricity, that person would never work again. And the truth is, that’s just not the reality of the world. With electricity came a wave of new opportunities, new jobs, new industries, new markets. And that’s, I think, one of the most important reminders. Look, if you’re not an adaptable, agile person with a thirst for knowledge and learning, your obsolescence isn’t going to be AI driven. It’s going to be habitual. It’s going to be behavioral. Your behavior will make you obsolete, not the fact that there aren’t jobs. We have to constantly adjust and make sure society has upskilling methodologies and all that stuff, Logan.
But I’m just saying, as a whole, you talked about the idea of replacing people with machines and gave some construct for it. I felt it was just important to add that for people who want to grow, this is going to create a whole new economy of jobs and opportunities. And I really hope people see that, because this is powerful and it should uplevel society. It should be the rising tide lifting all boats. Speaking of which, you mentioned there are some ideal human-AI type relationships. Talk a little bit about some of the ideal examples that we’re seeing in everyday life. You mentioned a few that even exist within simple software applications that we use every day.
Logan Wilt: The ideal situations are the ones that improve your life. And with IoT devices and smart home devices, those are things that people are becoming familiar with. My parents have an Ecobee and an Alexa. They’re getting older and they don’t move around as much, and this has been really beneficial for them: being able to use something that can essentially take your voice, turn it into code, and talk to a device in your house to adjust it, while the device itself is learning the patterns of your house, so it needs less tweaking. Now, I have a very ancient thermostat, and I’m fussing with that thing all day long. That just takes time out of my day that my parents don’t have to take out of theirs. So in an ideal situation, what this partnership does is free us as people to spend more time on things that are creative, and innovative, and beneficial to ourselves, either just mentally in our own mental health or to society as a whole, instead of fussing with a thermostat.
And that’s a really simple situation. I don’t know what the timeline is, but think about self-driving cars. That’s a huge endeavor, and we’re getting closer to it becoming a reality. I have a five year old and she talks about, “Oh, I’m going to drive this, and I’m going to do that when I drive, mommy.” And in the back of my head I’m like, “I don’t know how much driving is going to be a thing.” Cars that you drive yourself might become the luxury that horses are today by the time she’s old enough to seriously drive. It’s coming in both small ways and big ways. AI is really filtering into our lives.
Daniel Newman: Yeah. You make some great points. I have a three year old son and I’m a car guy. He sees me and he talks about cars all the time, and I always promise him what kind of car I’m going to buy him. But I do, in the back of my mind, think he probably will never drive. If he does, like you said, it’s going to be because I refuse, like Demolition Man or whatever. I’m going to be the underground guy with the old high-horsepower car who won’t enter the world of robotic, self-driving vehicles. But yeah, we’re getting there. Frankly, it’s not even so much the technology of the vehicles; it’s really that the civil engineering isn’t ready for self-driving vehicles yet. Society isn’t ready yet.
But the technology in terms of computer vision, sensors, the ability for continuous learning, all that stuff works. It works pretty well. And we’ve seen it right now. There’s no question that the fatality and accident rates of a self-driving car would outperform almost any human over a long period of time. The problem is, humans aren’t going to accept anything less than pretty much perfect. That’s just going to be the way, because until then, with a human, we’re culpable and we’re able to accept blame. When a driver is playing video games in his Tesla and crashes the car in self-driving mode, we don’t know whether to blame the guy or the car. And we should know, we should know better, but we sometimes don’t because of our expectation of perfection when it comes to AI. AI is only as good as the data.
It’s only as good as the continuous learning, information, and improvement of models and algorithms. But I love that example. And there are so many little ways our cars are already using AI to enhance experiences, whether that’s learning our favorite positions, our driving behaviors, braking patterns, stuff like that. The cars are learning, and the data is not only improving our own experience in real time but also flowing back to manufacturers to continuously improve the vehicles. Those are some great examples of AI and humans being paired together. Even something as little as an app in our Google productivity tools starting to finish my sentences for me, those are real, solid, ideal human-AI relationships. It knows what I’m going to say because it’s listened to me say things enough that it’s got a pretty good idea.
Sometimes it even finishes and I’m like, “That’s not what I was going to say. That’s better than what I was going to say. I’m going to use that.” There are occasional little wins that I don’t even think we’re necessarily chalking up or expecting. So let’s wrap up on this question. Someone who does what you do, who’s paying a lot of attention to the continuous change and improvement, part of it is helping companies figure out how to get started. How do they get started with AI? What do you recommend for enterprises and organizations that are either in their early phases or even later in the journey, but just haven’t really optimized?
Logan Wilt: One of the things that we come up against, and it’s something we advise on, is you want to start with your problem, start with the thing that you want to solve, and work backwards from there. The flip of that is, “Let’s get all of our data perfect and spend all of our time and energy building the best data lake or re-hauling all of our source systems, and then we’ll talk about AI.” It’s not a linear relationship; it’s much more circular. So if you want to start with AI, then you start with a problem that you want some outside help with, to basically make it easier for you to solve.
And then from there you start thinking, “Well, what data do I need?” And you build them together. How you store your data, how you curate your data, that determines what kind of AI you can do. And so what you want to do is kind of start from both sides because they’re going to influence each other. And you may find that you don’t have enough data to solve your problems. And now you have to go out and get more data. But if you pick your data lake or you pick all of these structural choices too early, you might pigeonhole yourself into the kinds of things that you can do.
The other one is, you were talking about how there are so many open positions for data scientists, and I’ve seen this in reqs and such. Sometimes they are looking for gods of data and analytics, and you don’t need a data god to get started. You need someone who is logical, and thoughtful, and curious, and willing to put in the time to figure these things out. It doesn’t have to be this perfect PhD candidate who codes in six languages and speaks four more. It’s not that high of a pedestal to achieve. So the way to get started is to be a lot more realistic about what you want to solve, and then look at the people that you have. For a lot of enterprises, there are so many … what I like to call proto data scientists. They have the right mindsets to solve these problems in an AI way. Invest in them, invest in them with the tools and the skills, and build up your people. Don’t always look at trying to fill all these open reqs with scarce outside resources.
Daniel Newman: Data gods, as you like to call them. That’s going to be the episode title: Identifying the Data Gods with Logan Wilt. No, I’m just kidding. I think that’s some really sage advice, though. We talked earlier about the upskilling; this is an opportunity. A lot of great coders, for instance, didn’t go to school to code, and I think there’s a parallel there with data science. There are a lot of people who have a knack or a propensity for seeing interesting patterns, or for being open-minded about experimenting and adjusting. And data science is, in so many ways, rooted in the scientific method. It’s a lot of hypothesizing, a lot of experimenting and refining, and then sharing observations, trying again, making it better and improving. That’s how great models are created. They’re very rarely created on the first try.
So I think that’s tremendous. Logan, I just want to say, thanks so much for sitting in on this episode of the Futurum Tech podcast interview series, you were a great guest, very interesting insights.
Logan Wilt: Excellent. Thank you so much for having me.
Daniel Newman: And of course, thank you very much to DXC Technology for being a partner. You’ve probably heard by now a series of podcasts with the team at DXC Technology, some great discussions on everything from data science to security and transformation, and more. Definitely check out the rest of our Futurum Tech Podcast Series. We’ve got our weekly show where we talk about all kinds of interesting things. Then we have the interview series where we’ve had guests like Logan and so many more, so many interesting discussions.
So hit that subscribe button, tune in, stick with us. For now, for this episode, I’ve got to say goodbye, but we’ll see you later until then. Bye. Bye.