
CAIDP Files Complaint with FTC Against OpenAI’s GPT-4 for Violating Consumer Protection Rules

The News: On March 30, the Center for AI and Digital Policy (CAIDP), an artificial intelligence-focused tech ethics group, filed a complaint asking the Federal Trade Commission (FTC) to investigate OpenAI for violating consumer protection rules, arguing that the organization’s rollout of AI text generation tools has been “biased, deceptive,” and a risk to public safety. Read more from Engadget on the CAIDP complaint.


Analyst Take: While the public has largely embraced OpenAI’s ChatGPT, AI researchers and others have expressed trepidation about the speed at which AI technologies are being developed. In a high-profile open letter, tech leaders and prominent AI researchers called for AI labs and companies to “immediately pause” their work. Steve Wozniak, OpenAI co-founder Elon Musk, and the other experts who signed the letter believe the risks of AI technology warrant a pause of at least six months on developing systems more powerful than GPT-4. The letter argues that care and forethought are necessary to ensure the safety of AI systems, but expresses concern that both are being ignored in the race to deploy the most advanced AI technology first. I agree. Moving fast, especially in the tech space, can be a huge advantage. Moving too quickly with technology that can be biased and inaccurate, and that raises significant ethical concerns, can be incredibly dangerous. Exciting and transformative? Absolutely. But excitement alone can’t, and shouldn’t, assuage legitimate concerns.

The CAIDP’s FTC Complaint

Following the letter’s publication, CAIDP filed its complaint with the FTC, asking the agency to investigate OpenAI for violating consumer protection rules. CAIDP argues OpenAI is violating the FTC Act through its releases of large language AI models like GPT-4. According to the complaint, the OpenAI model is “biased, deceptive,” and threatens both privacy and public safety. CAIDP president Marc Rotenberg was one of the open letter’s signatories, and like the letter, the complaint calls for slowing the development of generative AI models and implementing stricter government oversight.


The CAIDP complaint claims GPT-4, which was released earlier this month, was launched without any independent assessment and without any way for outsiders to replicate OpenAI’s results. According to CAIDP, the GPT-4 system could be used to spread disinformation, contribute to cybersecurity threats, and potentially worsen or lock in many of the biases already well documented in AI models.

The complaint goes on to present several scenarios, including ones in which AI models failed to recognize or act on potential hazards to children, facilitated corporate espionage, allowed cybercriminals with limited technical skills to develop malware such as ransomware, and lowered the knowledge barriers to mounting successful cyberattacks.

The CAIDP complaint against OpenAI’s GPT-4 requests that the FTC:

  • Halt further commercial deployment of any GPT by OpenAI
  • Establish independent assessment of GPT products prior to future deployment
  • Ensure that future deployment of GPT is in alignment with FTC AI guidance
  • Require constant independent assessment throughout the GPT AI’s lifecycle
  • Establish a publicly accessible reporting mechanism for incidents
  • Initiate rule-making that would establish baseline standards for products in the AI market sector

A full copy of the CAIDP complaint is available here: https://www.caidp.org/cases/openai/.

While the public has eagerly embraced ChatGPT and GPT-4, and companies remain locked in a race to deploy the technologies, tech leaders and AI researchers want to put the brakes on further development to allow time to better understand the potential impact and to establish government oversight. And make no mistake: I’m a gen AI fan and see broad implications for its use. It’s exciting, and we are at the beginning stages of seeing what’s possible. The technology’s potential for good is, without question, significant, but that doesn’t mean there shouldn’t be awareness of, and concern about, the potential negative impacts it can have on the public without sufficient guardrails in place.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of Futurum Research as a whole.

Other insights from Futurum Research:

Italy DPA Announces Ban of OpenAI’s ChatGPT, Will Other EU Countries Follow Suit?

Google Invests $300mn in Artificial Intelligence Start-Up Anthropic, Taking on ChatGPT

Google Bard Takes on Microsoft’s Bing ChatGPT Integration


Author Information

Shelly Kramer is a Principal Analyst and Founding Partner at Futurum Research. A serial entrepreneur with a technology-centric focus, she has worked alongside some of the world’s largest brands to embrace disruption and spur innovation, understand and address the realities of the connected customer, and help navigate the process of digital transformation. She brings 20 years’ experience as a brand strategist to her work at Futurum, and has deep experience helping global companies with marketing challenges, GTM strategies, messaging development, and driving strategy and digital transformation for B2B brands across multiple verticals. Shelly’s coverage areas include Collaboration/CX/SaaS, platforms, ESG, and Cybersecurity, as well as topics and trends related to the Future of Work, the transformation of the workplace, and how people and technology are driving that transformation. A transplanted New Yorker, she has learned to love life in the Midwest, and has firsthand experience that some of the most innovative minds and most successful companies in the world also happen to live in “flyover country.”

