FTC’s Authority to Investigate and Enforce AI-Driven Consumer Harms
The Federal Trade Commission (FTC) has substantial authority to crack down on AI-driven consumer harms under existing law, and throughout its history the agency has had to adapt its enforcement to changing technology.
FTC’s Expectations for AI Companies
Commissioner Rebecca Slaughter stated that companies cannot escape liability simply by claiming their algorithms are a black box. Commissioner Alvaro Bedoya stressed that companies will need to abide by existing laws and regulations, noting that agency staff have said the FTC’s unfair and deceptive practices authority applies, as do civil rights laws, fair credit laws, and the Equal Credit Opportunity Act.
FTC Warns of Potential Risks from AI Tools
The Federal Trade Commission (FTC) has warned that artificial intelligence tools, such as ChatGPT, could lead to a “turbocharging” of consumer harms. FTC chair Lina Khan said the potential for these tools to enable fraud and scams is a serious concern.
Background on AI Tools
In recent months, new AI tools have gained attention for their ability to generate convincing emails, stories, essays, images, audio, and videos. While these tools have the potential to change the way people work and create, some have raised concerns about how they could be used to deceive by impersonating individuals.
FTC’s Stance on AI-Driven Consumer Harms
Khan and fellow FTC commissioners emphasized that companies cannot escape liability simply by claiming their algorithms are a black box. The agency has consistently applied existing law, including its unfair and deceptive practices authority, civil rights laws, fair credit laws, and the Equal Credit Opportunity Act, to new technologies.
Previous Guidance from the FTC
The FTC has previously issued extensive public guidance to AI companies, and last month it received a request to investigate OpenAI over claims that the company misled consumers about ChatGPT’s capabilities and limitations.
Key Takeaways
- The FTC is concerned that AI tools could lead to a “turbocharging” of consumer harms, including fraud and scams.
- Companies cannot escape liability by claiming their algorithms are a black box.
- The FTC has consistently applied its existing laws to new technologies, including AI.
- The agency has previously issued public guidance to AI companies and is taking steps to address the potential risks from these tools.