
The Issue of Bias in AI – Do we have Digital Trust or Not? – Futurum Tech Podcast Episode 016

On this week’s edition of the Futurum Tech podcast, we discuss how to solve AI’s bias problem, how to steal a Tesla, shouldn’t the president be using a secure phone, why did Italy fine Apple and Samsung, and much, much more.

Our Main Dive

The main issue we’re focusing on today is bias in AI. As artificial intelligence takes on more of a role in our world—even driving autonomous cars—it’s more important than ever to identify and solve any biases it might have. And right now, it has a lot, with three types being a concern.

First, there’s developer bias, meaning the people developing the AI systems have biases that end up baked into the technology. On top of that, there’s bias within the way AI learns from data over time, meaning even if it starts out with little bias, it could develop more. And then there’s the fact that users could use AI tools in a biased way. Thus, you end up with big issues, like facial recognition systems that only work well in some parts of the world because they favor certain ethnicities. Or you have problems with voice search, like Alexa, because it doesn’t understand some accents.

The result is that not everyone can use AI systems, and also that we can’t trust some aspects of the AI systems we’re making. And when AI systems are making lots of decisions for you—like which Netflix show to watch or which Google results you’re looking for—that’s a problem!

Bottom line: The good news is that we’re aware of bias in AI systems now, and some companies are trying to fix it. IBM just released a tool that can detect AI bias. Google has a similar tool, and Facebook is testing one, as well. So we may be on our way to detecting and eliminating AI system bias. We’re just not sure how long that’s going to take.
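To make the idea concrete, here is a minimal, hypothetical sketch of the kind of check these bias-detection tools perform (it is not IBM’s, Google’s, or Facebook’s actual code): compare a model’s selection rates across two groups and flag the result when the ratio falls below the commonly cited four-fifths threshold. The group data and the threshold below are illustrative assumptions.

```python
# Minimal illustrative sketch of a bias check (hypothetical, not any vendor's tool).
# It compares how often a model selects people from two groups and warns when the
# ratio of selection rates falls below the commonly cited "four-fifths" rule of thumb.

def selection_rate(outcomes):
    """Fraction of positive (selected) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a_outcomes, group_b_outcomes):
    """Ratio of the lower selection rate to the higher one (1.0 means parity)."""
    rate_a = selection_rate(group_a_outcomes)
    rate_b = selection_rate(group_b_outcomes)
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else 1.0

# Hypothetical model decisions (1 = selected, 0 = rejected) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths threshold, used here purely for illustration
    print("Warning: possible bias; review the model and its training data.")
```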

Our Fast Five

We dig into this week’s interesting and noteworthy news:

  • Until this week, Tesla had a full “Level 4 Autonomous” option on its website…but that was just pulled. Yeah, it could be back soon, but we’re not sure if it will, because it doesn’t seem like we’ll be ready for a fully autonomous car in the next year or two—unless we’re talking about a small, controlled environment, like a campus or airport.
  • Speaking of Tesla, we saw a video where some thieves stole a Model S in the UK. They were able to do this by capturing the passive signal coming from the owner’s key fob, and then replicating it to the car so they could unlock it and drive away. Apparently the owner hadn’t enabled the “PIN to Drive” feature, which requires entering a PIN before the car can be driven. Plus, they didn’t keep the key fob in the recommended pouch that would keep the signal from being intercepted. This was a tough lesson for the owner to learn.
  • Apparently, President Trump has been using an unsecured phone, which is a problem, because Russia and China managed to hack it. Supposedly he’s mostly used the phone to gossip with friends, but even letting the Chinese and Russians listen to those conversations has given them more than enough leverage to figure out how to affect the administration.
  • YouTube just made it even easier to multitask as you watch videos. Now if you’re watching on YouTube and try to search for other content on the site, the player window gets smaller and drops down to the bottom of the screen. While this ability has been available on mobile, it’s new on the desktop version, and it’s great if you can’t just focus on one task at a time.
  • People have been disappointed with Apple—and rightly so—for throttling older iPhones when a new iPhone is released. But it turns out Apple isn’t the only company that does this. Italy just discovered that Samsung does the same thing when releasing a new phone. Now Italy’s government has fined both Samsung and Apple for this misleading practice, so hopefully it comes to an end.

Tech Bites

For this week’s “tech that bites” award, let’s discuss Google…again. This isn’t the first time Google has appeared in this section, and it probably won’t be the last, because the company seems to have a transparency issue. This time, it was discovered that a handful of senior executives were fired for sexual harassment. That wasn’t the problem, though. The problem was that Google essentially protected these men, keeping the cases quiet and even paying them tens of millions of dollars in their severance packages. That’s not exactly teaching those guys a lesson, Google!

Crystal Ball: Future-um Predictions and Guesses

Now for our crystal ball prediction! So, AT&T just announced it will be releasing its 5G mobile network very soon. And that sounds impressive and exciting and all that…but the issue is that very few phones will be 5G ready any time soon. None of Apple’s are, for sure, and that’s a huge share of the market. So which phones will be 5G ready first, and when? We think within 1 year, only about 5 percent of phones will be 5G ready, if that. The phones that will likely be among the first to fully handle 5G include the S10, Note10, and maybe the Pixel or an HTC. But that’s really wishful thinking at this point, because we’re pretty sure very few will be ready by the end of 2019. And there you have it, this week’s Futurum Tech Podcast.

Transcript:

On this week’s edition of the Futurum Tech podcast, how to solve AI’s bias problem, how to steal a Tesla, shouldn’t the president be using a secure phone, why did Italy fine Apple and Samsung, and much, much more.

Fred McClimans: Hello and welcome to this edition of FTP, the Futurum Tech podcast. I’m Fred McClimans, your host for this week’s edition. Joining me today is my fellow analyst and colleague, Olivier Blanchard. Olivier, welcome to FTP.

Olivier Blanchard: It’s good to be here.

Fred McClimans: For our regular listeners, Dan Newman, our other cohort here at Futurum Research, is not with us today, but the good news is Olivier is in France today, so we are naming this our international edition. So Olivier, how’s France?

Olivier Blanchard: France is great. It’s funny, because Dan and I were in Slovenia earlier this week, so it’s been kind of an around-the-earth-in-80-days stretch in airports lately.

Fred McClimans: That’s great. Well, I’m glad you are landed, grounded, and have a good internet connection coming back for the recording here today.

Olivier Blanchard: Yeah. I’m as surprised as you are.

Fred McClimans: Before we begin, I do want to take a moment and remind everybody that while we talk about a lot of companies, and about the financial aspects of those companies, the Futurum Tech podcast is for information and entertainment purposes only. We hope that you find it informative and entertaining, but we are not offering, or implying, any type of investment advice. With that out of the way, Olivier, our main dive today is a broad topic. Typically, we’ve focused on a specific announcement, or some type of action in the industry, because we tend to cover things pretty closely. But today, we’re going to step back a little bit and talk about a larger issue.

That issue is bias in AI, or bias in artificial intelligence. This is something that’s been bothering me for a while: just the ongoing issues that we have with bias in technology in general. The fact that bias from a development perspective, or from a corporate perspective, is making its way into AI is not that surprising, but it is a reason for concern. Now, when we talk about bias in AI, or in any technology, I’ll broaden it to include bias in data and analytics and so forth, sort of guilt by association here, and that would include machine learning and other systems. But when we talk about bias, there are three areas of bias that I think we can identify as broad categories and should focus on. The first is the bias of the developer, or the developers, as they’re building an AI system.

It’s almost impossible to separate yourself from your own experiences, and those experiences do shape the way you design a system: the functions that you build into the system, the approach that you take. So, it’s understandable that that comes into play a bit, but it is an issue, because what we’re building today, we’re building for global markets, you know? Technology is no longer necessarily limited in application to one particular market sector, or one particular geographic region. It’s increasingly important that when we develop products within the industry, they’re able to hit that broad-based global market, the global consumer, whether it’s an individual or a B2B type of application.

We know it’s an issue, because there have been a number of cases in the past where we’re relying upon artificial intelligence, and machine learning, and analytics to understand what we’re doing, and to inform a decision, or provide feedback to an individual or a company. If the system doesn’t work right, you get things like facial recognition applications that don’t work globally. They may favor one particular ethnicity over another. You have language bias that comes into play, where something like Alexa may have a much better chance of understanding a particular language, or a regional dialect, than others. But the issue there is a bit larger, because if we make decisions based on data that is captured and processed by an AI system, typically machine learning today, we have to question the data.

That’s important in an AI model because, the way AI works, we don’t always understand how it works inside. You know? We put data in, the machine learns, the artificial intelligence grows, and it adapts. Tracking the actual algorithmic process from input to output isn’t something that we can always keep a steady hand on. More often we have to rely on asking, did we get the result we thought we were going to get, to verify that the system is working?

Olivier Blanchard: Yeah.

Fred McClimans: So, I think that first category, understanding the developer and how their bias might influence the development, is important. The second area is bias that actually gets woven into the ongoing use and the learning process of the artificial intelligence system itself, and that is a problem because AI is like a child. When you develop it, you’re not developing it with the full set of capabilities that it ultimately will have. It has to learn, and the way it learns is through data. That data could be text input. It could be visual data. It could be audio data. But it’s important that we recognize that, even if you don’t inject bias into the underlying technology, bias can creep into the system through the way that artificial intelligence learns.

Then, the third area that I look at here is the way the AI tools are being used by the end user or the consumer, again an individual or a business. I think there’s an issue of using these systems in particular areas where they may not fit, or where there may be other technologies that are more applicable. So, with these three broad areas of bias coming into systems, it’s very important that we understand them and figure out how to remove as much bias as possible.

It’s not just an issue of being able to serve different target audiences; I think it really comes down to having trust in the AI tools that we’re developing and implementing across all industries. You know, certainly AI is important. It’s making decisions for us. It’s behind the back end of your iPhone or your Samsung phone. It’s behind your Google search results. It’s behind the suggested videos in Netflix. So, it’s definitely something that I think we need to keep an eye on, you know? So Olivier, let’s just toss this out there. Give me your thoughts. How much of an issue is this? Where do you think it might be more of an issue than elsewhere?

Olivier Blanchard: This is, for me, mostly a machine learning issue at this point, or at least that’s where most of the emphasis and the investment in technology and solutions is going: the machine learning side. I know that IBM very recently launched a tool aimed at detecting AI bias. I think it’s called the Fairness 360 kit or something. So, that’s IBM, and it works really well within the context of Watson and IBM’s kind of dominance in the AI and machine learning space. I know that Google also has a tool. It’s not in real time, and it’s not as robust, but it’s called the What-If Tool, which helps, for instance, split datasets between A and B, and identifies bias, or at least alleges that it identifies bias, in machine learning and algorithms.

Facebook also says it’s testing a tool right now to determine whether or not an algorithm is biased, so there are definitely concerted efforts in the tech industry to try to solve this problem, at least from a machine learning standpoint. Because as you mentioned earlier, we’re teaching these machines to learn from their own datasets and their own repetitive behaviors. It’s kind of like these cycles of learning: if you do something over and over again, you’re going to learn how to do it better. The problem is that there isn’t necessarily a human component in the model that helps the machine learn the proper way. A machine might think it’s learning something properly when it isn’t.

We’ve seen this with HR and machine learning, where suddenly there’s this bias for white males: because there’s a preponderance of white males being hired, the algorithm starts to believe that white males are better. It’s a vicious cycle of reinforcing its own bias, and that’s a problem. It’s pretty bad if you’re looking for a job and you’ve missed a fantastic career opportunity because of machine learning bias at Amazon or somewhere.
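As a rough illustration of that reinforcement cycle, here is a small hypothetical simulation, with every number invented: a “model” scores candidates using the historical hire rate of their group, fills the open slots with the top scores, and then feeds its own hires back into its history, so an initial gap between the groups widens round after round.

```python
# Hypothetical simulation of a bias feedback loop in automated hiring.
# All data is invented; the point is only to show how a small historical gap
# can grow when a model's own selections become its future training data.
import numpy as np

rng = np.random.default_rng(42)

# Historical hires per group: group A starts slightly ahead of group B.
history = {"A": {"applied": 100, "hired": 55}, "B": {"applied": 100, "hired": 45}}

def hire_rate(group):
    return history[group]["hired"] / history[group]["applied"]

for round_num in range(1, 6):
    # Each round, 100 candidates from each group apply for 50 openings.
    # The "model" scores each candidate around their group's historical hire rate.
    candidates = (
        [("A", rng.normal(hire_rate("A"), 0.05)) for _ in range(100)]
        + [("B", rng.normal(hire_rate("B"), 0.05)) for _ in range(100)]
    )
    # Fill the 50 openings with the top-scoring candidates.
    hired = sorted(candidates, key=lambda c: c[1], reverse=True)[:50]

    # The new hires flow back into the history the model learns from next round.
    for group in ("A", "B"):
        history[group]["applied"] += 100
        history[group]["hired"] += sum(1 for g, _ in hired if g == group)

    print(f"Round {round_num}: hire rate A={hire_rate('A'):.2f}, B={hire_rate('B'):.2f}")
```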

But there are some worse types of problems. For instance, I know that the police in the UK, I think it was a year ago, were warned by Liberty, which is a human rights group, about relying on algorithms to decide whether offenders should get paroled or not. It was kind of like, “Wait. You’re letting a machine, machine learning and AI, gauge whether or not somebody should be eligible for parole based on their ethnicity, their zip code, their age, their rate of recidivism?” If you get that wrong, people end up staying in prison when they shouldn’t. It’s pretty bad.

More in the public view, what we usually hear about, though, are the less important … I don’t want to say less important. Hang on, let me backtrack, because I’m going to get in trouble if I say less important. The types of errors that AIs and machine learning algorithms make that are embarrassing, that are damaging, and that are harmful, but don’t necessarily deprive someone of their liberty or their life. An example was back in 2014 or 2015. Google, if you recall, had a bit of an embarrassing moment when its photo algorithm was mistaking black people for gorillas. That’s obviously a huge problem that’s going to make a lot of noise, because people are going to notice, and it’s pretty damn awful.

Fred McClimans: That’s similar to Microsoft.

Olivier Blanchard: Yeah.

Fred McClimans: With their Twitter bot. It was very clear something was wrong, because-

Olivier Blanchard: Yeah, exactly.

Fred McClimans: … the tweets coming out of the bot were inappropriate. I think that’s the polite way to put it. But you know, this is an interesting area in looking at bias, because there are some examples where it’s readily apparent that the system isn’t working properly. You’re getting results that are so far out of the norm that you very quickly recognize the system has learned incorrectly. That can be an issue because, in some instances, that learning issue could be the result of the bias that was built into the system itself, the way it learns. But then there’s also the question: are we putting the wrong data into the system, or, perhaps more appropriately, are we not giving the system enough data? This is one of those areas where less is not more when it comes to learning. You know? Just like humans: when I research a particular area, or I’m curious about something, I try to read as many different sources as I can across a very broad spectrum.

I look for different perspectives. I look for geographic perspectives. I look at different individual perspectives. Sometimes reading a piece that has been written by three different people, in three different locations around the world, of different ages and sexes, can make a difference. It provides a lot of context and data. There may be differences in the data, but those differences are important. It’s how we learn. I guess the question here is: how do we spot the non-obvious areas where bias might creep in? Is there an approach you think we can take to spot things a little better? Or is this really something where, especially from an enterprise perspective, if you’re a brand or a company out there, you should make sure that you’re continuously, on a daily basis, going through and reviewing, trying to spot inconsistencies or areas that might be problematic?

Olivier Blanchard: Yeah. I think we’ll get around to it. It’s just that we’re only now, not really only now discovering that this is a problem, but only now realizing how prevalent it is. Even really basic, small algorithms are turning up bias. What’s most interesting to me is that a lot of the focus right now is aimed at machine learning and images, facial recognition. So it’s not so much, necessarily, some of the examples that we gave earlier, but if you can teach computers to recognize faces better, then there’s kind of a natural trickle-down, or trickle-outwards, to a lot of other applications. Being able to do facial recognition really well, and skin tone sequencing, solves a lot of adjacent problems.

Fred McClimans: You know, I would think understanding language and really understanding, you know, the sentiment and context around language would be another one of those areas where, you know, if you can solve that problem, you can probably apply that technique to other areas, you know?

But even today, after a million plus years of evolution, humans often still have an issue determining, “Was somebody being sarcastic or not?”

Olivier Blanchard: Oh, yeah! No, we’re not even that good at detecting our own bias. If we have any political conversation on Facebook with anybody, we’ll quickly realize that.

But I think one of the cool applications, especially with regards to machine learning and cameras, or at least imaging, in real time, is that machine learning and AIs are going to be driving our cars eventually. The question of-

Fred McClimans: And flying our planes.

Olivier Blanchard: And flying our planes.

And the question of bias there, especially with image analysis, is that, you know, cars are going to have to decide not only whether a pedestrian is walking towards the vehicle, walking across the vehicle’s path, or walking away from it, but they’re also going to have to figure out if there’s no way to avoid a collision, should the vehicle crash into a solid object and potentially harm all the occupants of the vehicle? Or should it crash into a crowd and save the occupants but kill the pedestrians?

Fred McClimans: Right.

Olivier Blanchard: So there are biases that, for us, may be baked-in, but manifest themselves in a fraction of a second in the decision that we make as a driver, that machines are going to have to make in a half second as well that they’re not necessarily equipped to make.

Fred McClimans: Yeah, you-

Olivier Blanchard: That have serious legal ramifications.

Fred McClimans: Yeah, and that’s where I think the discussion around bias in AI shifts from being bias-focused to ethics-focused.

Olivier Blanchard: Yeah.

Fred McClimans: You know, because we’re literally creating a technology here that offers the, not just the possibility, but the requirement that we really think through, you know, “What does it mean to be ethical in behavior?”

Olivier Blanchard: Right.

Fred McClimans: What does it mean to be ethical in business?

Olivier Blanchard: Well, we have a framework to go on for AI: we have Asimov’s three laws. Which, actually, there are four; there’s a fourth law that’s mentioned in one of the “Foundation” books, the fifth one, I think, that deals with bigger realms than just robots and humans. Honestly, it’s a good model if you’re trying to extend beyond just the bias of machines and get into ethics. You can’t really teach ethics to machines, but you can give them rules, and so if you can codify ethics into rules, I think that you can solve the problem from an engineering standpoint.
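As a toy sketch of what codifying rules might look like from an engineering standpoint, loosely in the spirit of Asimov’s laws, here is a hypothetical example in which every proposed action is checked against an ordered list of hard rules before it is allowed to execute. The rule names and action fields are invented purely for illustration.

```python
# Toy sketch of rules-as-constraints, loosely in the spirit of Asimov's laws.
# Rule names and action fields are invented for illustration only.

RULES = [
    ("do not harm a human", lambda action: not action.get("harms_human", False)),
    ("obey the operator",   lambda action: not action.get("ignores_operator", False)),
    ("protect the system",  lambda action: not action.get("self_destructive", False)),
]

def permitted(action):
    """Return (allowed, reason). The first violated rule vetoes the action."""
    for name, check in RULES:
        if not check(action):
            return False, f"blocked by rule: {name}"
    return True, "allowed"

# Hypothetical proposed actions, checked before execution.
print(permitted({"description": "reroute around pedestrian", "harms_human": False}))
print(permitted({"description": "swerve into crowd", "harms_human": True}))
```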

Fred McClimans: Well, you know, and I think that’s going to be important because the third area of bias that I laid out there, the actual application or use of an AI tool, I think that’s going to ultimately get, probably long-term, a lot more attention because there are areas where we may simply say, “It doesn’t make sense to rely on an AI system to uncover the data and then make a decision and then execute on that decision.” And I think for businesses, this becomes very important because I think right now the best use of AI tools, today and maybe even moving forward, is as a decision assistant, not necessarily to actually make and execute decisions.

Olivier Blanchard: That’s right. Yes, we don’t want Skynet.

Fred McClimans: No, no, no, we don’t. So I think, stepping back from this and taking the broad perspective here, what should people and businesses do about this today? I mean, obviously, you’ve got to have some system in place, if you’re developing these tools, to make sure that you’re broadening the dataset and the developer set, both during the initial creation or development process and during the learning process, so that you’re including as much data as you can, even if it doesn’t seem necessarily appropriate. If you’ve got a system that’s trying to make scheduling decisions, that’s looking at predictive maintenance applications, or that’s trying to decide, “How do I solve this new issue that I just encountered for the first time?”, more data is going to be better data, I think, at this point.

So I guess the key takeaway here is: tread carefully. Understand that bias exists in these systems, and start to think about how to take preventive steps to make sure that, during the learning process, the system is learning from as much data as possible. And then, from an ethical perspective, in the actual application, don’t let systems, don’t build systems, don’t rely upon systems to make the decisions for you.

With that, I know we’re going to come back to bias in AI again in further conversations, because this is ongoing.

Olivier Blanchard: Oh, yeah.

Fred McClimans: Olivier, let’s trek over to our “Fast Fives” today. We have five interesting topics to toss at you, and we’re going to do this quickly. We’ve had a tendency in the past to turn “Fast Fives” into more moderately paced interesting fives, but today we’re going to churn right through it. So Olivier, first up … your first “Fast Five”: Tesla. What’s going on with Tesla?

Olivier Blanchard: Yeah, so I was trying to subtly transition from our AI discussion into autonomous vehicles and Tesla, because my first “Fast Five” is about Tesla. So, Tesla, a few days ago, pulled its full “Level 4 Autonomous” option from its website, and I understand the marketing behind it and I understand their excuse. I also understand that it’s just a question of scheduling and that it’ll be back soon.

In my personal opinion, I don’t think that we have a fully autonomous vehicle in 2019 or even in 2020 that is entirely safe and reliable. I don’t care what chips come out. We’re not quite there yet, despite what Elon Musk may say or suggest. So, we’ll see when it actually comes back on the website. I don’t think it comes back.

Fred McClimans: Well, you know, I think I agree with your take there. Full autonomy, Level 4, where you enter a vehicle, tell it where to go, and then exit the vehicle when you arrive: that is a way off, especially in that wide-open, uncontrolled type of environment. Where I do think there’s obviously some opportunity to move quicker is in much more controlled environments, a limited campus environment, an airport, a machine shop floor.

Olivier Blanchard: Correct.

Fred McClimans: Those areas, I think we can get to much sooner. But I’m going to continue this theme with Tesla, shifting from the AI side back to Tesla, and talk about something that’s not really AI. It’s more human error, I suppose.

We had word the other day that somebody had managed to steal a Tesla Model S. In fact, there’s some great video out there on the web.

Olivier Blanchard: There’s actual video?

Fred McClimans: There’s actually video, yeah. There was a security cam that captured the entire event.

Olivier Blanchard: Oh, no! Keystone cops.

Fred McClimans: Yeah, yeah. So, a couple of thieves managed to steal a Tesla by capturing the passive signal from the owner’s key fob, and using the data from that signal, they were able to replicate the signal to the Model S, open the vehicle up, and drive away. However, it took them a while, because the more difficult challenge that they hadn’t thought about was how to actually unplug the power cord from the car. So, smart enough to figure out how to intercept the key fob signal, but not quite smart enough to figure out in advance, “What if it’s plugged in? What do we do?”

The other interesting thing here is that there is a feature on Teslas called “PIN to Drive.” Essentially, you have to enter a PIN, like you would to unlock your mobile device, before the car will drive. This particular owner had not activated that, nor had they stored the key fob in an EMI-shielded Faraday pouch so that the signal could not be intercepted. So I think there are little lessons to learn there.

But now that we’re talking about mobile phones because we made that connection from key fobs and pins to phones, you’ve got a “Fast Five” here involving President Trump’s mobile phone.

Olivier Blanchard: I do! It’s also a poor-security issue … or a failure to secure a technology device. So, apparently, Russia and China allegedly hacked Trump’s Twitter and phone, but I think the greater story here is that, for whatever reason, the President of the United States appears to be using an unsecured phone, at least to get on Twitter and possibly to make phone calls to some of his friends and confidants, which is probably not the best idea for the President of the United States, or for any important individual with secrets and very important conversations to have. I’m not sure why this is happening in 2018, but if it is indeed happening, as it appears to be, it probably needs to stop.

Fred McClimans: Yeah.

Olivier Blanchard: Negligent, I would say.

Fred McClimans: You know, that’s interesting because I know, over the past couple of years, I have seen a number of reports coming out about increased use of the Stingray … I think it’s called a “Stingray”, the fake mobile phone tower, or fake cellular towers.

Olivier Blanchard: Yeah! Yeah, yeah, yeah.

Fred McClimans: And I think at one point, they were saying that there were a couple dozen of these around the greater Washington, D.C. area, so that as you’re moving from the White House, or your home, to your car, to wherever, your mobile phone is continuously connecting to different cell towers, and we have these fake cell towers that people are popping up. Literally, it could just be a briefcase-sized device that sits there, but it acts like a real tower, sort of like the fake Wi-Fi in the airport.

Olivier Blanchard: Yeah, there are so many different ways of hacking into a smartphone, and this is definitely a more involved but pretty effective one. I’m assuming that the White House and the Secret Service have some pretty solid countermeasures, even at Mar-a-Lago and other places the president visits on a regular basis, but still … And especially, not to make a political statement, but especially after all the hoopla about Hillary’s emails, you know, and the server not being secure, the fact that we now have another issue with another high-level politician and federal employee who is not using a secured device is …

Fred McClimans: Well, you know, I’ll chalk this up to the “Do as I say, not as I do” category, perhaps.

Olivier Blanchard: Yeah, kind of, but how hard is it to really get a secure phone? I mean, it doesn’t mean you have to stop using Twitter, it’s just … Get a secure phone! You’re … It’s possible to get one, I think, in this day and age.

Fred McClimans: Yes. I’ve got one sitting on my desk right now.

Olivier Blanchard: Yeah.

Fred McClimans: So, our fourth “Fast Five”: YouTube. We don’t necessarily talk a lot about YouTube, or about consumer apps, perhaps as much as we should, and this one just caught my eye because it portends something a bit larger.

YouTube has a feature on the mobile phone where, if you’re watching a YouTube video and you search for other content on the site, the player window is minimized and dropped down to the lower portion of the screen, but they hadn’t had this on the desktop. Now, they do. And this may seem like a trivial aspect of YouTube: “Yes, now I can search for other videos while I’m doing something else.” But in the larger sense, I think this further positions YouTube as more of that Netflix-type content provider, where I’m watching a video but I’m also doing something else. You know, it kind of gets us into that multitasking in the desktop environment, and for me, that was kind of interesting, just that they’re making that move there. They’re adding a relatively small feature that, from a usage and user behavior perspective, I think could have some good impact.

Olivier Blanchard: That reminds me of what Facebook’s been doing for some time. It’s almost like YouTube kind of stole that capability from Facebook, liked it, and just made it happen. I think it’s one of those things that I really like when I want it to happen and really hate when I don’t want it to happen: I’m trying to move away from watching a video, and the next thing I know, it’s still playing, and now it’s in the corner and I have to find a way to swipe it off my screen. But it definitely makes sense, I think.

Fred McClimans: Yeah. All right, so Olivier, our last “Fast Five” … Quick one here on … Since we’re talking about mobile phones …

Olivier Blanchard: Yep.

Fred McClimans: Apple and Samsung devices, what’s going on?

Olivier Blanchard: Yeah, so I’m always hard on Apple, and this is something that we knew Apple was doing for some time, but it turns out that Apple is not the only company that throttles phones to nudge users into buying new ones. Basically, what happens is that all of a sudden, when a new iPhone comes out, you may notice that your current iPhone starts slowing down and doesn’t work as well. There have always been rumors that Apple was throttling them. Now, as it turns out, Italy has found that Samsung is also doing the same thing, or at least was doing the same thing. The government of Italy has fined both Apple and Samsung for throttling older phones. That’s good. I wish more countries would start doing the same. Kudos to Italy for doing that.

Fred McClimans: You’re not buying the whole argument from Apple and Samsung that this is just to preserve battery life.

Olivier Blanchard: No. I’m sure it’s also to preserve battery life, but no, I don’t.

Fred McClimans: All right, very good. Let’s move on to our Tech Bites Award of the Week. There’s probably a lot that we could say about this, but I’m just going to try and narrow it down as much as possible. Our winner this week is Google. Now, they are a prior winner. They’ve done a number of things, a couple of gaffes in the past, that we just kind of shook our heads at. This time, I think it’s a bit more serious and worth pointing out.

There was an interesting investigative piece in the New York Times earlier this week that brought to light the fact that Google has, or had, a bit of an issue with several of its key employees and their behavior, behavior that carried directly over into clear misconduct, in the #MeToo category here, which they kept very quiet. Not only did they keep it quiet, and the individuals … I’m not going to get into who they were here on the podcast, but do you know who-

Olivier Blanchard: Some of them were pretty senior, though, right?

Fred McClimans: Yes, very senior.

Olivier Blanchard: Very, very senior.

Fred McClimans: Developer-of-Android senior. These individuals, while they ultimately left Google and were kind of pushed out, also received substantial packages, in the tens, bordering on hundreds, of millions of dollars, to leave. You’ve got a situation here where employee misconduct is never allowed, especially when it moves into that #MeToo behavior aspect. It cannot be tolerated. They did take action on it, but they kind of covered it up. They kept it quiet. Just from a perception perspective, paying somebody tens of millions of dollars on the way out the door after being let go for this type of misconduct, it just doesn’t sit right. I think that’s why I tossed them up there as this week’s Tech Bites Award winner.

Olivier Blanchard: Yeah. Yeah. It’s a half measure, and on top of that, you’re kind of rewarding people for the awful behavior they were terminated for. If you’re going to fire people, don’t give them that kind of package.

Fred McClimans: Yeah. Like I said-

Olivier Blanchard: It’s like, you want to fight us in the public space? Go ahead and fight us in the public space, if you really want your name attached to this kind of scandal. Go ahead. I think it’s an HR failure there.

Fred McClimans: Clearly an HR failure. I think also, from a corporate perspective, this just reinforces in my mind that Google, perhaps more so than other big tech firms or big social firms out there, has a transparency issue. I’ll use China as an example here: they pulled out of China as a search engine because of the restrictions that China was trying to place around them. Transparency is great when you’re marketing yourself, saying that you’re leaving China because the government is oppressing free speech and setting up the great Chinese firewall to block communications in and out of the country. That’s great that you pulled out. But then we found out recently that they’re actually working on a slightly censored version of Google search that would work within China. It’s these kinds of issues, especially in the area we’re in today with information, the information era, the information economy, where you’ve got to be transparent. Otherwise, that sense of digital trust is just lost.

Olivier Blanchard: I’m just going to say one thing and then we can move on. Typically, any company that’s engineering-based, right, whether it’s automotive or technology, is never really truly going to be transparent. Anything that has engineers has secrets. Any company that has secrets is going to be built around a culture of secrecy and confidentiality. That permeates out of the engineering departments and into others. So it’s not really that surprising for me to find out that Google is unable to be as transparent as it probably tries to be. It’s very difficult to do that when you’re a company that’s basically run by engineers.

Fred McClimans: Very good. Before we wrap up here, Olivier, our Crystal Ball of the Week. What would the week be without us making some type of prediction about what’s going to happen next?

Olivier Blanchard: We are always right.

Fred McClimans: We are. We are, in fact. Careful cultivation of our topics, perhaps. A little bias coming into play here.

Olivier Blanchard: Nah.

Fred McClimans: This week, we’re staying in the mobile realm: AT&T. During their most recent earnings call, they announced, in fact the CEO, John Donovan, announced, that AT&T is on track to release its 5G mobile network within the next few weeks, which they say will be, to quote him here, “standards-based 5G.” The question I have for you: we’ve talked about 5G in the past. It is a complicated issue. It’s not something that you simply turn on.

Olivier Blanchard: That’s right.

Fred McClimans: There are a number of different things that need to happen first, such as all of our devices being 5G ready.

Olivier Blanchard: You also have to build a lot of infrastructure before you can turn it on.

Fred McClimans: Yes, you do indeed. Today we have AT&T rolling out 5G in several weeks. I’d love your take: 12 months from now, what percent of the mobile devices out there, and I’ll just limit it to the U.S. to make it a bit easier, will actually be using true standards-based 5G?

Olivier Blanchard: You know, well, apparently none of the Apple devices will be, per Apple. That cuts out about, what, 20 percent of the market. I don’t even know, really, truly, whether any of the Samsungs or Pixels released in the next year, or in fall of 2019, will be fully 5G compliant; on the modem speed side they’ll be able to handle 5G, but I don’t know if they’ll actually be equipped with a 5G modem. That’s the first part of my answer.

The second part is, like I said, there’s a huge amount of infrastructure that has to go into place. 5G is essentially, on one hand, all the G’s, right? It’s five, four, three, so all the 4G LTE stuff that we’re using now, and 4G LTE Plus, is part of 5G. Those deployments don’t negate the 4G, 3G, 2G that already exists. They just build on top of that with the other frequencies. 5G in and of itself is really millimeter wave. Millimeter wave is very short range, very intense. Beam tracking, it follows your phone around, so it’s a completely different technology. 5G is not super well designed for big open spaces like the country. You’re not going to have 5G on the highway or on Route 66 in the middle of the desert. It’s not going to happen. You can have 5G in the middle of New York City. You can have 5G in airports. You can have 5G in areas where it makes sense to have 5G.

I’m very cautious about predictions with regard to the deployment of 5G. For AT&T and Verizon and anybody else who says they’re releasing their 5G and launching their 5G program, it’s going to be very, very limited. It’s going to take years to scale across the United States. We’re talking four to five years before it starts being noticeable, not a year and a half.

Fred McClimans: What we really have here is marketing.

Olivier Blanchard: To an extent. It’s marketing, but we have to start showing use cases for 5G. The 5G phones are coming. I think it’s going to be really important for handset makers to be among the first ones to have 5G phones, even if most people will not be using 5G even 5 percent of the time, unless they already live in a fairly large city where those deployments are happening faster than anywhere else. It’s going to be a big minus, a big black mark, for handset makers if their phone isn’t 5G ready, just because it’ll be substandard, essentially.

Fred McClimans: Yeah, so what percent? I’m saying 12 months out from now, you know, end of 2019, I don’t even think we’re at 5 percent.

Olivier Blanchard: Yeah, I don’t know. I think Samsung’s … I don’t know, they might have announced this. If any phones are 5G compliant a year from now, it will be, I guess, the S10 and the Note10 and perhaps the Pixel, maybe an HTC. The big handset makers’ flagship phones may be 5G compliant, but none of the other ones will.

Fred McClimans: Very good. Very good. With that, Olivier, we have come to the end of this edition of The Futurum Tech Podcast. Thank you very much for sharing your take and your perspective on the topics here today. I want to thank our listeners and say, if you enjoy the podcast, go ahead and subscribe. We’re on SoundCloud. We’re on iTunes. We may even actually be on Spotify shortly. Subscribe, share the podcast, and if you have topics or feedback, we’d love to hear that as well. Just drop us a line. On behalf of everyone here at Futurum Research and the Futurum Tech Podcast team, thank you for listening, and we’ll see you next week.

Outro: There will be plenty more tech topics and tech conversations right here on the Futurum Tech Podcast, FTP. Hit that subscribe button. Join us, become part of our community. We would love to hear from you. Check us out: futurumresearch.com or Futurum Tech Podcast. Daniel Newman, Fred McClimans, Olivier Blanchard. We’ll see you later.

Author Information

Fred is an experienced analyst and advisor, bringing over 30 years of knowledge and expertise in the digital and technology markets.

Prior to joining The Futurum Group, Fred worked with Samadhi Partners, launching the Digital Trust practice at HfS Research, Current Analysis, Decisys, and the Aurelian Group. He has also worked at Gartner, E&Y, Newbridge Networks’ Advanced Technology Group (now Alcatel), and DTECH LABS (now part of Cubic Corporation).

Fred studied engineering and music at Syracuse University. A frequent author and speaker, Fred has served as a guest lecturer at the George Mason University School of Business (Porter: Information Systems and Operations Management), keynoted the Colombian Asociación Nacional de Empresarios Sourcing Summit, served as an executive committee member of the Intellifest International Conference on Reasoning (AI) Technologies, and has spoken at #SxSW on trust in the digital economy.

His analysis and commentary have appeared through venues such as Cheddar TV, Adotas, CNN, Social Media Today, Seeking Alpha, Talk Markets, and Network World (IDG).

