This summer has been hot all around, but perhaps the hottest topic on the minds of state attorneys general (AGs) continues to be artificial intelligence (AI). As we recently heard from Colorado Attorney General Phil Weiser, AI is a big concern for regulators, who are trying to understand all the ways AI permeates our daily lives in order to effectively regulate the algorithms that power it.

While the benefits of AI are clear and constantly expanding into new sectors, the AG community believes the potential harms to consumers cannot be overstated. In addition to calling for transparency in the use of AI, AGs are grappling with AI’s varied outputs and are looking at tools they can use to address consumer concerns involving privacy, discrimination, and data security. At both the recent 2023 AG Alliance Annual Meeting and the NAAG Eastern Region Meeting, AGs heard from AI experts and stakeholders on the state of play for AI and the potential tools they can use to curb consumer harms.

AG Alliance Annual Meeting

At the 2023 AG Alliance Annual Meeting, AGs focused on how to enhance and refine their approaches to consumer data and privacy to include AI. Attendees heard from two panels: (1) “The Evolving World of Consumers’ Data & Privacy,” which addressed the regulatory landscape of AI; and (2) “AI and the AG,” which was geared toward the role an AG can play in preventing misconduct and maximizing the benefits of AI and its technologies.

AI requires substantial data. Therefore, according to panelists, we cannot have ethical and responsible AI without rules about data. Some uses of AI can be regulated by existing laws (a recurring theme throughout the panels). For example, health insurance providers, regardless of whether they rely on AI, are bound by HIPAA and must follow detailed privacy and security provisions to protect data, including data breach notification requirements. State UDAP laws have already been used to address AI. In 2020, then-Vermont Attorney General T.J. Donovan filed a lawsuit against Clearview AI for allegedly violating the Vermont Consumer Protection Act by using facial recognition technology to map the faces of Vermont residents (including children) and selling the data to private businesses, individuals, and law enforcement.

Additionally, New York City adopted Local Law 144, which prohibits employers or employment agencies from using an automated employment decision tool (AEDT) to make an employment decision unless the tool undergoes an annual bias audit, the employer publishes a summary of the audit, and the employer provides notice to applicants and employees who are subject to screening by the AEDT.

AGs were asked to hear from stakeholders on how each sector relies on AI and to refrain from adopting a “one size fits all” policy solution for AI. Using AI to recommend a movie or song would require a different approach from using AI to make decisions in the lending or education sectors. Additionally, AGs were asked to pursue collaboration and consistency in policymaking to reduce duplicative or disjointed rules between states. Finally, AGs heard that laws and regulations should be responsive to outcomes rather than to specific types of technology, given technology’s ever-evolving nature.

NAAG Eastern Region Meeting

At the NAAG Eastern Region Meeting, attendees heard about the role AI is playing in antitrust and consumer protection – as well as the all-important “Tong Tasting” of oysters. After illustrating the dangers of AI with a fake audio recording of General Tong, General James and General Tong touched on the ways AI impacts markets, particularly how AI can lead to market dominance by large firms. Increased industry concentration can create a “big firm advantage,” as data is often proprietary and training costs are high, essentially creating a barrier to entry for smaller players.

On the consumer protection side, the panel noted that possible consumer safeguards include: (1) applying general state consumer protection laws to AI, such as state UDAP laws analogous to the FTC Act; (2) using state privacy laws and their rights to opt out of AI use; and (3) drafting state and federal AI-specific legislation.

At the NAAG meeting, panelists noted a shift based on recent FTC guidance focused on generative AI. Echoing what we previously reported, the panelists stated that the FTC can enforce company pledges to manage the risks posed by AI. The FTC has emphasized that claims about AI should not mislead consumers, and that AI should not be used for “bad things” such as fraud and scams, especially schemes that prey on vulnerable populations like the elderly. Similar to the sentiment expressed at the AG Alliance Meeting, businesses using AI have called for clear and consistent regulations. Businesses have also expressed concern about how AI interacts with current regulatory schemes, such as private rights of action under state wiretapping laws.

In addition to being transparent about their AI practices, businesses can and should address the risks AI creates by:

  • Reviewing claims to ensure they are accurate and not exaggerated.
  • Determining who is responsible at each stage of the AI chain.
  • Building compliance mechanisms into AI.

Kelley Drye will continue monitoring the AI regulatory landscape.