
What Do We Do About Deepfakes?


If you’ve had enough of fake news, brace yourself. An eruption of deepfake technology is spreading across the internet, and soon digitally doctored videos might be the new norm on our social media pages. How do we handle this new technology that has the power to compromise everything from personal privacy to national security? Do we ban it? Monitor it? Find the good aspects? Or simply see where it leads us?

What is a Deepfake?

A deepfake is, generally, an AI-driven manipulation of content made by splicing two or more people together. For instance, a user might splice footage of a famous actress into a porn video and present it as a legitimate “sex tape” of the star. The University of Washington distributed a video of Barack Obama in which researchers were able to make him appear to say literally anything they wanted.

Recently, however, companies have even figured out how to create videos from a single photo, meaning someone could use a photo from your social media feed to create a video of you saying or doing any number of things, without your permission. An app called DeepNude gave users the ability to upload a clothed photo of a woman and create their own nonconsensual porn. Clearly, deepfakes are getting out of control and pose a real threat. But so far, we haven’t found a way to manage or prevent them.

Managing Digital Representations in Rapid Digital Transformation

It’s believed that within the next 12 months, deepfakes will be visually undetectable, ironically forcing us to rely on AI to help determine what is fake and what is real by looking for inconsistencies, time stamps, and other methods of verification. These issues raise a number of ethical questions. Yes, legitimate use cases for deepfakes exist. But does the value of those use cases outweigh the privacy risks any of us could face thanks to deepfake technology?
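One of the simplest verification methods alluded to above can be sketched with cryptographic hashing: if a publisher releases a fingerprint of the authentic file alongside it, any altered copy fails the check. Here is a minimal Python illustration; the byte strings are hypothetical stand-ins for real video data, and this approach assumes an authentic fingerprint is available in the first place.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying the exact content."""
    return hashlib.sha256(data).hexdigest()

# A publisher releases a clip along with its fingerprint.
original = b"frame data of the authentic video"
published_hash = fingerprint(original)

# Any later copy can be checked against the published fingerprint.
assert fingerprint(original) == published_hash   # an untouched copy passes

tampered = b"frame data of the Authentic video"  # one character altered
assert fingerprint(tampered) != published_hash   # a doctored copy fails
```

Hashing proves only that bytes were changed, not how; spotting a convincing fake with no known original is the much harder problem AI-based detectors are trying to solve.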

For instance, the most viable use cases for deepfake technology currently belong to retail. The sector is using the technology to place consumers into digital branding campaigns and virtual dressing rooms, and to let them explore products in the virtual spatial web. Conceivably, that’s good news for retail companies, which stand to gain customers drawn to the chance to see themselves walking the runway in designer clothes or driving through town in a luxury car. Another potential use: a movie producer could recreate a better version of a scene from past footage without the actor needing to be present. But is that gain valuable enough to legitimize the risks?

Actress Jameela Jamil has been on a campaign to eliminate photo retouching in acting and modeling to prevent young people from developing a distorted view of what they are supposed to look like. Deepfakes could make that problem so much worse, as the line between real and digital grows ever more confusing. Where does one’s digital self begin and end? Who owns it? Do experiences in the spatial web mean as much as those in real life? Do we have to pay for them? If we hand over our personal image to companies, how will they manage and protect it, rather than sell or circulate it?

There are no easy answers when it comes to deepfakes, and for the most part there is no clear plan to eliminate them, even on the part of social media giants like Facebook. Facebook famously refused to delete a deepfake video of House Speaker Nancy Pelosi, even after it was determined to be fake. Consistent in its willingness to tolerate deepfakes online, the company also refused to remove a deepfake video of CEO Mark Zuckerberg uploaded to Instagram, in which he appears to claim he’ll be able to control the future thanks to data stolen from his social media throne.

Find the Opportunities

While there are scary uses of deepfake technology, as with most technologies, I think we need to focus on the opportunities. Medical researchers are starting to use deepfakes to train AI to identify certain diseases. Adobe is working on an AI that can identify deepfakes, and the technology could push us to develop others. We should focus on the developers pushing the boundaries of this tech in a positive way before we grab our pitchforks and torches to shut it down. Yes, we clearly need to start drawing ethical boundaries around the creation of deepfake content, and around the definition of “entertainment” itself, but I think it’s still too early to eliminate the technology completely.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice. 

Check out some of my other articles:

VMware and NVIDIA Partnership Accelerates AI From On-Prem To The Cloud 

Cisco’s CloudCherry Acquisition Will Revitalize Its Contact Center Offering

Huawei Ascend AI Processors Show Its Ambition Despite Tensions

Photo Credit: CBS News

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

