FTC Warns That Deceptive AI Content Ownership Claims Violate the FTC Act
The buzz around generative AI has raised many IP-related questions, such as the legality of using IP to train AI algorithms or ownership of AI-generated content. But the FTC warns that claims about content ownership don’t just give rise to IP concerns – they could also constitute FTC Act violations if they meet the unfair or deceptive standard in Section 5. (Click here and here for our take on other recent AI-related guidance from the FTC.)
In a recent business blog post, the Agency lays out several practices that could trigger scrutiny and enforcement:
- Promising full ownership but delivering a limited-use license. Telling consumers that they’re buying full rights to a digital product when in fact they’re just getting a limited-use license or being enrolled in a subscription service is likely to violate Section 5. The FTC warns companies against unilaterally changing their terms or undermining reasonable ownership expectations post-purchase, including in cases where the primary purchaser is deceased and survivors’ rights to the digital property are affected. This principle is hardly AI-specific; after all, the FTC has been bringing cases about deceptive offer terms and hidden negative options for decades. But it could be increasingly relevant today, when consumers’ digital purchases live largely in the cloud and companies retain more control over post-purchase access and use.
- Failing to disclose use of IP in training data. Companies offering generative AI products trained on copyrighted or otherwise protected content should disclose that their outputs may include IP; failing to do so may be a deceptive practice under the FTC Act. Clear disclosures about the use of IP help consumers and companies make informed choices about which AI products to use. Such information could be particularly important for companies using generative AI tools for commercial purposes, as they may be held liable for improperly including IP in their products.
- Passing off AI content as human-generated content. Advertising a digital product as created by a person when it was in fact generated by AI would be a clear example of false advertising, one that aligns with decades of FTC enforcement activity. The prohibition stands even if the platform assures users that the generated content “belongs” to them.
- Misleading creators about content ownership or use. When inviting content creators to upload content, platforms must be clear about ownership and access rights, as well as how the content will be used. If the platform will use the content to train AI algorithms or generate new content, this information must be clearly communicated up front.
Although these practices generally fall within well-established principles of unfairness and deception under Section 5, this blog post highlights the FTC’s continued focus on every aspect and angle of the generative AI space. In short, expect extra scrutiny of any claims about the capabilities, features, ownership, and uses of AI tools and content. The summer may finally be cooling off, but regulators’ interest in AI is just heating up.