The National Association of Attorneys General (NAAG) closed out the year with its 2023 Capital Forum in early December. This year’s Forum focused heavily on AI questions and concerns, as well as past and future NAAG Presidential Initiatives. In this first post, we will cover the highlights of the AI panels.

Enterprise AI Strategy for Government

Attorneys General David Yost of Ohio and Brenna Bird of Iowa kicked off the first panel, which focused on how government agencies can implement AI. Panelists reminded the AGs that AI has been around for many years, though generative AI was only released last year. They likened AI to a legal intern: users should check and review its output to make sure it is doing a good job. Attorney General Platkin of New Jersey stated that while AI is not always a negative thing, it can produce “hallucinations.” Platkin gave an example in which a bio written with AI described him as the AG of Pennsylvania. However, AG Platkin later described the benefits his state has seen using AI to review police body camera footage more efficiently. During the session, panelists described considerations for how governments can roll out AI in a thoughtful way, including upgrading technology generally (e.g., moving to the cloud) and designating individuals responsible for AI within the organization.

Protecting the Public in the Age of AI – What Tools Are Right for the Job?

Attorneys General Andrea Campbell of Massachusetts and John Formella of New Hampshire moderated the next panel, which shifted to enforcement and regulation of AI. AG Formella began by emphasizing that regulators do not want to get in the way of new technologies but rather should explore practical ways to mitigate harm. Panelists described the different types of AI and the importance of monitoring inputs to help prevent errors in the end use case. When describing the AI landscape, the panelists agreed that there is a historical parallel to the dot-com bubble, when regulation of the internet was avoided; the industry is now subject to greater scrutiny.

Panelists discussed an example of using AI for restaurant recommendations; while AI in that situation may be low risk, even low-risk use cases should be transparent and ultimately not become deceptive. The panel emphasized that consumers need to understand when AI is being used. AG Formella asked which immediate harms AGs should focus on. The panelists pointed to less well-known risks, including the “poisoning” of inputs, where inaccurate data can get reused and amplified as misinformation. In addition, high-risk AI uses should be transparent and have robust monitoring. Discriminatory bias is already documented, but the panelists said AGs should keep in mind that design choices are often embedded, and the end user should perhaps not be held accountable if the designer was at fault. When AG Campbell asked about bias and discrimination considerations, panelists pointed to existing laws already in the AG tool kit, including those protecting against discrimination and requiring fair lending. Regarding unlicensed practices, such as the use of AI for medical or legal advice, AG Formella said there are already enforcement tools such as New Hampshire’s consumer protection statute, though those laws could certainly still be beefed up. Other panelists pointed out that lawyers already have ethical obligations, and the state of California even has guidance for lawyers on the use of AI.

AG Campbell asked for more information on how consumer protection and antitrust law apply to AI. The panelists described how AI may have become the most efficient spam generator yet, causing more mundane yet insidious problems for society. Fake information and doctored content may erode public trust. Some panelists also raised the concern that AI may ultimately create an environment that concentrates power and influence. It can also be used to generate thousands of comments on rulemakings, or thousands of complaints, in a single day. Regarding future enforcement, panelists questioned what remedies could be applied, including requiring companies to provide data to universities or nonprofits.

The Role of States in Internet Policy

Attorney General Phil Weiser of Colorado moderated this panel, which consisted of current and former FTC officials (Samuel Levine, Director of the Bureau of Consumer Protection, and Maureen Ohlhausen, former Chair), academia (Prof. Danielle Citron), and industry/former FCC (Michael Powell, President & CEO of the NCTA and former FCC Chair). While the topic covered regulation of the internet generally, it also specifically covered AI. Levine echoed the previous panel’s concerns regarding the “history lesson” of the internet and the desire to be more proactive with AI by coming up with principles, stating that the FTC has made clear it believes Section 5 applies to AI use and deployment. Levine cautioned not to let the perfect be the enemy of the good in terms of taking steps now to protect against fraud and inaccuracy and to protect data security and privacy.

AG Weiser agreed with other panelists that legacy institutions often think about how they used to do things, when they should continue to look at bringing in new tools for new technologies. Levine said that while the FTC has 15 technologists, that is not enough. However, in defense of institutions, he also said that UDAP provisions have been incredibly versatile over the years, adapting to radio, TV, the internet, and even AI, and that this flexibility was by design. Ohlhausen pushed back somewhat, explaining that Congress did put guardrails on unfairness, that courts are currently more skeptical of regulatory agencies, and that she would hate to see the FTC lose the authority it has.

AI and Child Exploitation

Finally, there was a brief session with South Carolina Attorney General Alan Wilson and New Mexico Attorney General Raul Torrez. AG Wilson began by pointing to his office’s leadership of a letter to Congress, joined by 54 attorneys general, asking Congress to look at how AI may impact child exploitation and sex abuse laws at the federal and state levels. AG Wilson summarized the letter, explaining how AI can use a child’s ordinary photo to create child sexual abuse material (CSAM), or can wholly create CSAM using generative abilities. AG Wilson said the letter urges Congress to create a federal commission to be proactive on these issues and to study how the laws on AI should evolve. He also asked colleagues in the states to consider using the letter to Congress as a template for letters to their own state legislatures.

AG Torrez discussed his background as an internet-crimes-against-children prosecutor at the office he now leads. He said he expects that a company that enables a depiction of CSAM can be held legally responsible, and he wants to work with federal and state prosecutors to make sure they have the tools they need. AG Torrez suggested that corporate leaders need to be committed to solving the problem and getting in front of the issue. [Note that the same day, AG Torrez’s office announced a lawsuit focused on similar issues.]

Bottom line? AGs remain incredibly focused on AI and will continue to look for opportunities to develop policy and enforcement initiatives around AI in 2024.