For this week’s episode of the Futurum Tech Podcast, I was joined by my colleague and fellow analyst, Ron Westfall. We discussed what organizations need to know about the EU’s newly proposed guidelines for artificial intelligence and Apple’s hit to the bottom line as a result of the coronavirus outbreak. We also covered the T-Mobile Sprint merger and what we think is ahead in the telecoms market and explored our thoughts about the role automation will play in 5G operations. Lastly, we covered news that Google is cracking down on Google Play apps that track location in the background, along with the FTC’s probe into high tech acquisitions over the course of the last decade. Let’s dive in.
Our Main Dive
An exploration of the European Union’s newly proposed guidelines for artificial intelligence. The European Union is taking a firm stance on regulating digital technologies, including the use of artificial intelligence in Europe, with a view toward requirements and a framework for the development of what it is calling “trustworthy artificial intelligence.” The guidelines are intended to create an environment, and specifically a regulatory environment, where technology works for people, in ways that promote a fair, competitive economy and economic environment, and an open, democratic, sustainable society.
And all of that? No small undertaking. Our discussion on this topic was an interesting one, and of special interest to me personally, as I am responsible for coverage of all things related to regulations and compliance within the tech industry. Beyond the podcast discussion, I covered this topic in greater detail in an article published last week: Quick Take: What you need to know about the EU’s newly proposed guidelines for Artificial Intelligence and invite you to check it out.
Our Fast Five
We dig into this week’s interesting and noteworthy news:
- Apple warns revenue will be lower than expected because of coronavirus impact. It is rare indeed for Apple to reach out to the investment community in advance of earnings to report on issues or concerns, which makes last Monday’s update of particular interest. Apple said that the global effects of the coronavirus outbreak (do I call it a pandemic yet?) are expected to have a significant impact on the tech giant’s bottom line. The company warned that the worldwide iPhone supply will be constrained because of the disruption the outbreak is causing in its supply chain.
- The T-Mobile Sprint merger is a go — here’s a look ahead at what’s next. The T-Mobile Sprint acquisition, almost two years in the making, is wrapping up, with the $26.5 billion acquisition of Sprint and the formation of what will be the third largest mobile operator in the United States. Using a recently published article on this topic, Ron walked us through what he thinks are the key things ahead. These include: Expanded access that will have a definitive impact on the largely underserved rural America, better service and lower prices (for a variety of reasons), an alternative to in-home broadband (bringing competition to cable operators, especially in rural areas), and expanded retail locations and customer experience centers, which means jobs. Beyond the discussion here, check out Ron’s article: T-Mobile Sprint Deal Wins Merger Approval – Wreaking Competitive Havoc is Next Up.
- Google cracks down on Android apps tracking location in the background. The news that apps offered in the Google Play Store will face a new review process before being permitted to track users’ location in the background is, I think, very good news. Google says its review process will look at whether an app’s core functionality really needs (and justifies) background location access. These changes follow changes made by Apple in its iOS 13 updates, and are part of a wider focus on location tracking and limiting unnecessary tracking in Android 11.
- Automation is the key to successful 5G operations. Ron walked us through the recent news of Nokia’s launch of the Nokia Network Operations Master, intended to provide comms service providers with software that is both highly automated and easily scalable, which will help tremendously with the management of their 5G networks. This discussion is a look at why 5G automation is critically important, and what is likely ahead for Nokia and rivals Huawei, ZTE, Cisco, and Ericsson as it relates to delivering automated management for 5G-IoT environments.
- Big Tech braces for sprawling FTC acquisitions review. In what seems like a daunting undertaking, the FTC has announced the intention to review the past decade of acquisitions by tech giants Apple, Amazon, Alphabet (Google’s parent company), Microsoft, and Facebook. The review is not limited to only large acquisitions by these companies, but includes some of the smaller acquisitions as well. I thought Axios covered it nicely when it reported: “This could become a massive headache for the five target companies, but more ‘take some aspirin and a nap’ than ‘go to the ER.’” For a deeper dive on this, beyond our discussion here, be sure to check out coverage by Axios, which is always insightful.
Tech Bites
Facebook. Political Advertising. Continues to be best described as “nightmarish.” Nearly four years after its controversial role in influencing the 2016 US Presidential elections, Facebook still struggles to figure out how to deal with paid political content. In fact, it seems a little like Facebook can’t figure out what it really wants on the topic of politics, transparency, and advertising, which is why this is featured in the Tech Bites section of our podcast. This week’s ridiculousness from Facebook centers on a purported concern about the lack of transparency around Bloomberg campaign employees whose Facebook posts supporting Bloomberg for president aren’t clearly identified as such. Excuse us while we go bang our collective heads against the wall. Again. Facebook, get it together. More on that story from CNBC.
Crystal Ball: Future-um Predictions and Guesses
For the crystal ball section of our show, we explore when and how the U.S. will develop or adopt guardrail guidelines for artificial intelligence. What do you think? Want to know what we think, well, you’ll need to listen to the podcast. And if you’ve not yet taken a moment to subscribe, do it — you won’t be sorry. Here’s a link for you to do that: Subscribe Here.
Olivier Blanchard: Welcome to this week’s edition of FTP, The Futurum Tech Podcast. I’m Olivier Blanchard, Senior Analyst with Futurum Research. Joining me today is fellow Analyst Ron Westfall. We’re going to start today’s show with a discussion about the EU’s new AI Guidelines, then we’ll share some of our favorite news stories of the week in our Fast Five segments, followed by Tech Bites, in which we highlight one of the biggest tech-related fails of the week. We will end the show with our Crystal Ball. As always, it goes without saying that this show is intended for informational purposes only, and no advice or insights provided here today, no matter how great it sounds, should be taken as investment advice.
Let’s dive into our topic, artificial intelligence, the European Union, and the setting of guidelines. For those of you listening who aren’t aware, Europe has been pretty aggressively trying to define the vision and the framework it wants to build for digital technologies over the next 20 years, maybe one to two generations because the European Union as an entity realizes that digital technologies of the future, they’re very powerful as economic forces but also as potentially political forces.
All these digital technologies from AI to the IoT to 5G, 6G, 7G, whatever comes next, robotics, self-driving cars, smart cities, all of these things are going to radically change the world that we live in. They’re going to have a huge impact on voting, on education, on jobs, on the economy, and so Europe is being pretty serious about becoming proactive and establishing guidelines so that it can sort of aim these technologies in the right direction and not end up in a weird situation where the technologies and the companies behind them have too much power and interfere with the type of society that the European Union wants to build for itself.
This week, we saw a little chapter of this particular giant digital strategy emerge in essentially what turned out to be the first iteration of the European Commission’s guidelines for artificial intelligence. Essentially, they just kind of outlined three different things. The first one is kind of an economic series of principles where the EC is prioritizing job creation, investments in innovation, a huge amount of hopefully partnerships between the public sector and the private sector. I think also the establishment of a track for European companies to start becoming leaders in artificial intelligence, as opposed to mostly China and the United States.
The other two elements of this, the other two dimensions, are kind of a split, like this binary equation between high-risk artificial intelligence applications and low-risk artificial intelligence applications. The way that they kind of define them is essentially that high-risk artificial intelligence applications are the types that need to have some kind of human involvement. Whether it is making decisions about an autonomous or generally autonomous vehicle plowing into a crowd of pedestrians as opposed to just one pedestrian, who makes that decision? Should there be a human involved? There’s also a slew of other applications, from healthcare to law enforcement and civil liberties and things like that.
Then, again, the other kind of dimension to these is the low-risk applications, where artificial intelligence kind of runs, manages, and analyzes data and devices or networks, and where there’s not a really high risk for humans to be discriminated against or harmed in any way. Those will be treated differently, even though guidelines will exist for those as well. Now that I’ve monologued for 20 minutes, Ron, what are your first impressions? If you’ve read up at all about this, does anything strike you as good? As bad? As kind of missing? What are your first impressions on this?
Ron Westfall: Excellent topic, Olivier, and yes, I think to start off, it’s a fair point to note that when it comes to the U.S. and China, there has been an AI arms race going on. That’s driven by government prerogatives to ensure competitive leadership, to ensure that AI does indeed augment national security, military, and other industrial capabilities that are key to any overall competitive strategy. It’s important to also observe that with the EU now attaching even more scrutiny on how to regulate it, countries such as France and the UK are also funding their own AI R&D efforts. This also has to be played out on an individual country basis in addition to wherever the EU has jurisdiction.
The positive, I think you’ve hit it on already. For example, when it comes to applications such as Black Box AI technology, that is technology that’s inscrutable for many humans to understand, or it’s difficult to trace the data sources for how that AI engine is running. Then, that’s an unknown unknown and it does warrant more governmental oversight. It’s actually good to see the EU take leadership on this. There can be some valuable takeaways for the U.S., for the UK, for China, and other countries that are also obviously interested in AI technologies.
Also, as we know, there are privacy concerns. For example, AI can power facial recognition technologies and we’ve had this debate before. How far do you take that? Yes, there are legitimate national security and police applications for it, but it has to be carefully balanced with respecting the civil rights and the privacy of lawful citizens. In terms of concerns, there’s always the issue of, “Okay, is the regulation going to do more harm than good?”
Obviously, this still has a frontier aspect to it. There are still unknown unknowns, but also unknown knowns. For example, AI has very practical applications for alleviating manual repetitive tasks within the workforce and, as a result, improving the workforce experience, which is a positive aspect. When you’re talking about some of the high-risk applications, how do we know that the low-risk applications also don’t have high risks because there’s less human oversight involved? How do we categorize these areas correctly? There will probably be more divisions needed, mid-risks and so forth.
The bottom line is this effort needs to be undertaken. I think the next three months are going to be important because that’s where the input and the commentary will be processed. We can at least have a template on, how do we better handle AI technology that has a greater impact throughout society and the global economy?
Olivier Blanchard: Right, and it’s interesting because it’s sort of trying to define an entire framework of law because basically this is going to end up as hard law with regard to AI and other technologies. It’s going to be exceedingly difficult for the European Commission to do this because they’re kind of working with a moving target. It’s very hard to define, as you suggested, the types of high-risk and low-risk applications that AI is capable of handling now, let alone the ones that we’ll be dealing with two years, three years, five years, 10 years down the road.
In some way, I’m a little bit curious about how the European Commission is going to tackle this because it’s going to have to be a very… Either the laws themselves are going to have to be very flexible, or they’re going to have to be constantly changing. I don’t know that Europe, or any country for that matter, has a process in place, a legislative process, that can be as fluid as… or fluid enough, anyway, to match the velocity with which technologies evolve year over year. That’s going to be a huge challenge for them because the laws that they enact two years from now may not reflect the reality of what really is happening with technology in three years or four years or five.
Ron Westfall: True, and this is another instance of regulation chasing technology and innovation, and only so much can be done. However, I think more resource allocation, more laying the groundwork is required. It’s helpful that the EU, in addition to obviously prioritizing R&D in this vital area, is also following up with, “Okay, we really need to think this through in terms of its across-the-board impact.” The U.S. and China are two prominent examples. They can benefit from also taking that next step, as something that can actually aid their competitive race to who can get the biggest, fastest, bestest AI out there in the marketplace, or to enhance existing government initiatives.
Olivier Blanchard: It’s interesting, though, also because we can kind of guess where not so much the pain points but the focus points are going to be for the Commission, because they will probably reflect the larger, more overall objectives of its digital strategy for all technology. On the one hand, it’s obviously focusing on the risk of discrimination, and the text of essentially all of the communications that have come out of the EC in the last few days regarding this focuses a lot on unbiased data, or on the importance of using unbiased data to train so-called “high-risk” artificial intelligence systems so that they’ll avoid discrimination. There’s a huge emphasis on limiting if not completely eliminating the risk of bias, which, good luck with that. I’d love to see that happen, but we know that’s exceedingly difficult to do.
Aside from that and civil liberties and discrimination, the commission is extremely focused on data protection. We’ve already seen what they’ve kind of arrived at with GDPR, which is I think a really good first step in protecting consumers and voters, essentially individuals’ privacy and data rights. They’re also very focused on providing better access to online goods for consumers and businesses. They really are trying to build a true digital economy. Also, they’re trying to create the right regulatory environment for digital networks and services. Obviously, they’re trying to take a leadership role, which I think is probably the most important thing in data and technology strategy.
I don’t really see any movement like this in the United States or in Canada. We’re not really having these discussions yet. I’m noticing that, if nothing else, Europe may not find the perfect answer or the perfect solutions to some of these challenges, but at least Europe is providing us with a template that other countries can then look at and sort of pick and choose the items or the elements that they like, the ones that they want to adjust a little bit. It’s really good to have actually Europe kind of become the test case for some of these challenges.
Ron Westfall: I agree, and one thing that I think is encouraging that would help to establish that template that other countries could potentially emulate is the idea of a trustworthy AI certification process. This gives you that assurance, like, “Okay, this has been vetted, it’s been scrutinized, and this AI technology will not cause major headaches in terms of violating privacy”, for example.
Moreover, it has also been proven in many other technological areas that a seal of approval can actually boost confidence in buying the technology and testing the technology. It can actually help the business case. As we all know, the other major internet players out there are obviously fully invested in this technology. It’s just a question of, how are they going to reap the benefits of using AI technology within their respective areas?
Olivier Blanchard: I think what the European Commission… Commission, I’m sorry, I can’t talk today, is trying to get at is essentially create the exact opposite of what China is building with its use of technologies. This is something… it’s a topic that comes up pretty often whenever Dan and I are involved because we… There’s a shameless plug ahead. I wish there was like a little sign on the side of the road there, but we just did publish a book called Human/Machine that talks about human/machine partnerships. One of the main themes of the book that we start with is this kind of divergence between the three types of uses of these same technologies. It’s basically technology-agnostic.
There are three types of uses. One is Big Brother, which is essentially the surveillance state that exploits the technologies to control people. There’s Big Mother, which is the exact same thing but with a sense of benevolence, so basically it’s for the benefits of the users but they don’t necessarily have any control over how the technology’s used.
Essentially, it could be a very loving sort of, “I’m going to protect you and provide you with the best choices”, but it can also be overbearing and intrusive in ways that we don’t necessarily appreciate as users. Then, there’s the third category, which is Big Butler, which is entirely opt-in and entirely in the service of people.
I think that while China seems to be leaning very hard towards the Big Brother model, Europe appears to be kind of trying to steer itself towards Big Butler, with a little bit of overlap into the more benevolent versions of Big Mother.
At least the intent is good. I just hope that the EC’s tendency towards overregulation and sometimes overbearing, punitive treatment of U.S. companies, especially U.S. technology companies, won’t interfere with its objectives there. I think that the friction point that will be really interesting is seeing where these regulations land and how companies like Google, Amazon, Apple, Microsoft, et cetera, sort of adjust themselves to this. Or, if their reluctance to do so gives rise to a new generation of digital and especially AI-driven companies in Europe, which I think is either plan A or plan B for the European Commission.
One last point before we move on, because we have to get to our Fast Five, thanks to Brexit, none of the benefits of whatever the European Commission comes up with in terms of these regulations will be necessarily applicable in the UK, so Britain will either copy the EU or do its own thing, but just a reminder that the UK is no longer part of Europe in the way that it used to be. That’s it for that, but we’ll definitely be talking about this some more because, on the one hand, artificial intelligence is everywhere now and distributed AI is going to be a big topic in the next few years. Also, there will be updates probably starting in May when the European Union starts to really take action on this particular topic.
All right, so now our Fast Five. Ron, what is your first big tech news item of the week that caught your eye?
Ron Westfall: Well, I think it’s important to note that the T-Mobile-Sprint merger is advancing. Obviously, the federal court decision that came down courtesy of Judge Marrero is fundamentally the greenlight for that merger to be completed, if not by April, then certainly within 2020. That’s good news for the new T-Mobile, as they’re dubbing themselves, but more importantly, it’s good news for the ecosystem, because we’ve seen major suppliers like Ericsson cite the delays in the approval of that merger as giving them headaches in their earnings reports for calendar year, fiscal year 2019, for example. Other suppliers like Nokia have had to address that same factor.
After we saw the greenlight, then lo and behold, the stock performance of the mobile communications ecosystem, suppliers such as Ericsson and Nokia, got a clear bounce up.
In addition, it’s also evident that the deal will become even more attractive on the T-Mobile side because Sprint, through its parent company SoftBank, agreed to adjust the stock ratio distribution so that T-Mobile, through its parent company Deutsche Telekom, owns more shares in the near term, but should they execute and meet certain targets, then SoftBank can regain the previously negotiated ratios. With that, this is actually positive news in the context of the coronavirus impact on the global ecosystem, especially within the mobile communications sector.
Olivier Blanchard: We’re going to talk about that some more, right?
Ron Westfall: Naturally, yes.
Olivier Blanchard: Yes we are, we are.
Ron Westfall: Stay tuned.
Olivier Blanchard: Okay. All right, so in a minute. In the meantime, I’m going to stick to AI for a minute. I was kind of interested by this article I found in The Financial Times that focused on AI’s uses in discovering new antibiotics that treat drug-resistant diseases. As you may be aware, we’re increasingly having challenges in the medical… Well, not we. We as humans, I’m not part of the medical community, have a challenge with a rise of antibiotic-resistant infections because people have been using antibiotics incorrectly for a long time. They don’t go through the entire course of antibiotics. We’ve sort of negligently accelerated the development of antibiotic-resistant bacteria and other pathogens. AI is now being used to accelerate the process of trying to counter that by coming out with new, more advanced antibiotics.
Here, I guess, a paper was published on Thursday, so just a few days ago, in the journal Cell. It was published by researchers at MIT, the Massachusetts Institute of Technology, who reported the discovery of a new antibiotic called halicin, I think is how it’s pronounced, and it is able to kill 35 powerful bacteria, including Clostridium difficile, tuberculosis, and Acinetobacter baumannii, which was up until now pretty much untreatable. It’s something that U.S. veterans have had to deal with because it’s an infection that usually enters wounds. This is really good, and it’s just an example of how, even though we talk about the risks of AI in terms of how they can impact our lives or civil liberties and so on, AI used for the good of mankind is a really powerful tool. I thought that was pretty great and it needed to be mentioned.
Ron Westfall: It’s good to see that not only is AI improving our health, but it’s also expanding our lexicon and-
Olivier Blanchard: Yeah, I just need to learn how to pronounce some of these things. I need a refresher course. My Latin’s a little rusty.
Ron Westfall: Well, the few, the proud that are fluent in Latin today. Along those health lines, yes, we’re returning to the headline issue of coronavirus. I think the impact it’s having is pretty well known; it obviously resulted in the cancellation of next week’s Mobile World Congress, the hugest show in the mobile industry, attracting over 109,000 people last year. That was major, and it just doesn’t stop there. For example, Apple had to come out and say that the impact of coronavirus in China is causing supply chain delays and disruption. As a result, their Q2 expectations have had to be dialed back, specifically on the revenue guidance side. It’s a combination of factors. Yes, there were delays, and yes, the workforce that assembles and manufactures the Apple iPhones is back in place, but they’re slow in getting back up to speed and having everybody get back onboard and so forth.
It’s also impacting, obviously, the Apple retail outlets and workforce throughout the entirety of China. It’s not unique to them. It’s impacting plenty of other players within the mobile industry. This is something that we’ll definitely be watching carefully, so stay tuned, and hopefully the coronavirus will be less of a factor and issue, say, by the time Apple reports its actual Q2 earnings. We can’t bet on that yet.
Olivier Blanchard: Yeah. I was supposed to be on a plane today. I think we probably .
Ron Westfall: I was going to ask when you were leaving.
Olivier Blanchard: Today or tomorrow, anyway, yeah.
Ron Westfall: If you were to hop over there to visit relatives, but I guess you’ll have to wait a little longer.
Olivier Blanchard: Yeah, yeah. I always go a few days early and stay a few days late. Okay, so thanks for that. My second, which is our third-
Ron Westfall: Fourth. Fourth.
Olivier Blanchard: Fast Five, fourth, yeah, is about Google. The Google Play Store and Google apps sometimes get a bad reputation, especially compared to the Apple ecosystem, because it lets in a lot of apps that might be infected with malware or aren’t necessarily super on the up and up. Google has been cracking down a little bit and trying to improve its ecosystem of apps and make it safer and better for its users. I have some good news. The Google App Store, or Play Store, is now cracking down on location tracking, and it’s announced a new policy that essentially establishes new guidelines.
What’s going to happen is that all apps on the Google Play Store, so basically all Android apps, will be required to show why they need to track users’ location through GPS at all. Essentially, some apps may require or need to track a user at all times, others may only need to track them when the app is open or when they’re in particular locations or doing particular things. It’s going to start becoming a little bit more specific about that.
The changes are going to take place over two waves, so essentially starting August 3rd, all new Google Play apps that ask for background GPS access will need to pass that review. Then, all existing apps that are not new to the app store will have until November 3rd to answer those questions and become compliant. It’s just a nice little thing that made me feel a little bit better, especially when we talk about the tracking of our data and our location and our movements. I think this is a step in the right direction.
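For context on what “background” access means here: since Android 10, background location has been a separate permission from foreground location in an app’s manifest, and that separate grant is the thing Google’s new Play review hinges on. A minimal sketch of the distinction (permission names per the Android SDK):

```xml
<!-- Foreground location: usable only while the app is actually in use -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

<!-- Background location (Android 10+): the separate, stricter grant that
     Google's new Play Store review process now scrutinizes -->
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
```

On newer Android versions, users grant that background permission separately in system settings rather than through the normal in-app prompt, which is part of the tighter control referenced above.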
Ron Westfall: I agree. Going back to the cancellation of MWC, that obviously robs basically the entire industry of the opportunity to see the latest, greatest technologies. One area I wanted to comment on is, again, AI enabling advancements in areas such as network operations. In fact, before the show, Nokia did unveil its Network Operations Master technology, which is designed to use AI to basically turbocharge 5G capabilities that are going to be essential for the 5G ecosystem. That includes network slicing, real-time workload distribution, multi-cloud administration. Really, the nuts and bolts that are required of any operator to make 5G a success, whether it’s being used by the enterprise or by individual consumers.
This is an example of AI making a clear difference in terms of improving the operational efficiency, improving the operational expenditure savings, and having a positive effect in offloading manual workloads and, in essence, falling in that low-risk category of technology that is going to basically be good for the entire 5G industry. Hopefully, we’ll get an opportunity to scrutinize these capabilities more in other upcoming shows, assuming the coronavirus issue has been resolved. In the meantime, at least we’ve had the benefit of previews getting some good information leading up to when the show was going to occur. We’ll just have to be patient to see how these real-world applications will play out during the course of 2020.
Olivier Blanchard: Yeah. No. No, it’s going to be interesting. I hope this doesn’t happen again next year, but we’ll cross our fingers.
Ron Westfall: Amen.
Olivier Blanchard: Prepared. All right, and now we come to my favorite part of the show, which is Tech Bites. We haven’t talked about Facebook in a while, I don’t think. It used to be that Facebook was almost always the recipient of the Tech Bites Award every week because Facebook is often in kind of our crosshairs when it comes to bad behaviors, poor best practices. We could call them worst practices, and they kind of caught my eye today. Facebook is actually trying to do something good here, and it has something to do with political ads and Democratic candidate Bloomberg, who is asking some of his campaign officials to post content for and about his campaign. Apparently, this has kind of slipped through the cracks of Facebook’s treatment of political ads. Facebook has been trying to label political ads to create a little bit of transparency and awareness so that when users of Facebook see a political ad or political content from a campaign, they’re made aware that what they’re seeing is an ad or sponsored content or something of that sort. Mike Bloomberg’s strategy here sort of doesn’t really run afoul of Facebook’s terms of service, but it does circumvent them a little bit. It kind of slips through a crack that Facebook had left open, and so Facebook is trying to figure out how to label those types of pieces of content that are coming from staffers and aren’t necessarily being posted through normal advertising channels.
Why this is a Tech Bite for me is that Facebook has known since 2016 that it had very serious vulnerabilities and problems when it comes to transparency and disclosure with regard to political content, especially around elections. It’s had a few years now to focus on this. I remember both Mark Zuckerberg, and actually not just both, but Mark Zuckerberg and other executives at Facebook talking about how this was a huge priority for Facebook and how they were going to fix this. By 2020, they would have this thing sorted out, and evidently, what we’re seeing here is not a lot of work has been done at Facebook to really solve these problems and these challenges.
On the eve of the 2020 election, when we’re pretty much hitting our stride with primary season now and we’re less than a year away from the national election, for Facebook to still be scratching its head and trying to figure out what it needs to do and how is just… I don’t want to use the term “deplorable”, because it’s a loaded political term now, but it’s really disappointing and annoying that Facebook has not taken this as seriously as it should. Comments?
Ron Westfall: I agree. In fact, this is what is driving, for example, the EU and the EC’s interest in regulating AI. It’s due to private sector bad actors like Facebook, which has racked up record fines for privacy violations in the EU region. Imagine this issue being squared or cubed with an AI engine enabling the circumvention of norms or expectations. This is, again, I think, a prime example of, okay, if the private sector is not going to come up with better solutions, then it’ll quite simply have to be done by the governments, and then the companies can’t complain about it because they had their opportunity and they just dropped the ball. Now, they’re going to have to deal with much more oversight and scrutiny because of situations like this.
Olivier Blanchard: Yep, this is true. That’s it for our Tech Bites. I’m sure Facebook will be back in our crosshairs sooner rather than later, although it had been a while, I’ll give them that. They haven’t even been mentioned in Tech Bites for quite a few weeks, so good for them as long as it lasted. Now, we’re moving to our Crystal Ball, and as usual we’ll circle back to our main topic. I thought it’d be interesting to… It’s too bad that it’s only the two of us this time, because usually there are three of us, and so we get a broader range of opinions, although we do tend to all agree with each other because we’re 99.9% right anyway, so we’re usually all right together and typically aligned in our answers.
Circling back, again, to our main topic of AI regulations, or at least guidelines, in Europe and the effort by the European Union to develop this kind of ecosystem of digital technologies that don’t go haywire, that are aligned with the general betterment of the economy and of the society that Europe is trying to build, where companies and consumers and users exist in a state of symbiosis, no one is being put down upon, no one is being exploited, and everyone gets a mutually beneficial series of benefits and shared experiences from all of these technologies.
When do you think that the United States will follow suit and become serious about not just GDPR-type regulations and data protection, which we desperately need, but just in general as a holistic approach to technology? When do you think the United States will start having a real discussion like the one taking place in Europe currently? Do you think we’ll see it by 2025? 2030? Ever?
Ron Westfall: Yeah. Well, I think the seeds for it will be planted as soon as 2021, simply because there won’t be a Presidential election taking up the bandwidth of the decision-makers in this regard. Now, the other reason why I’m saying that is because, again, there can be commercial benefits from having AI certification in place, and from having more consumer confidence, or more business confidence, that the AI technology they’re adopting and applying passes the scrutiny of the government agency involved in oversight. That can’t be underestimated. Nobody’s going to buy, for example, a car that hasn’t been approved by the regulators, et cetera. I think it’s actually going to be sooner than we realize. It’s not going to be a decade out, for those two main reasons.
Olivier Blanchard: Yeah. I mean, I’m not as optimistic as you are on this point. It depends on the technology. Sometimes I’m a little ahead of you guys, sometimes I’m a little bit behind, but in this particular case, I think it’s going to take action from pretty big states, states like California or Texas or New York, passing laws similar to GDPR that include consumer protections or limitations on what technology companies can do, before the federal government really starts thinking about following suit.
My question was kind of a trick question because I was talking about discussions, not necessarily legislative action, but I think that we may see some states start to lead the way before the federal government really starts having serious hearings and serious discussions about it. I’m looking at you, California, so I’m thinking 2022, 2023. I also think that Andrew Yang’s candidacy, which was short-lived but interesting nonetheless, raised some interesting issues. He was the only candidate, whether you like him or not, whether he was ever viable or not, who started talking about technology and who brought those discussions into the political discourse.
I think we’ll be seeing much more of that as more Millennials begin to run for office and as the job ecosystem in the United States becomes more and more… not necessarily threatened, but affected by automation, by AI, by new technologies. I’m thinking that in the next election cycle we’ll be seeing a lot more discussions about this. I think that’s when we’ll really start to see the volume get turned up.
Ron Westfall: Yeah, it’s no coincidence that Yang had Elon Musk’s endorsement, because he was tackling these technology issues as a major campaign theme. It wasn’t relegated to the back pages of his campaign.
Olivier Blanchard: Yep, exactly. All right, cool. Well, thanks a lot. That was actually pretty decent and not too long, so we stayed within our three-hour broadcasting goals. Thanks a lot for listening, everybody. That does it for this week’s edition of FTP, The Futurum Tech Podcast. Hit that subscribe button if you haven’t already, and catch us next week for another round of news and analysis at the intersection of tech and business. In the meantime, you can catch us on Twitter, on our blog, on LinkedIn, pretty much everywhere you get your news and your social media content. Have a great week, everybody.
Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.