In the final installment of our NAAG 2023 Consumer Protection Fall Conference debriefing (click here for parts one and two), fake reviews and generative AI, unsurprisingly, were the big topics that closed out the conference.

Fake Online Reviews

This panel was moderated by Victoria Butler, Consumer Chief of the Florida Attorney General’s Office, and Mike Wertheimer, Consumer Chief of the Connecticut Attorney General’s Office. Panelists included John D. Breyault, Vice President, Public Policy, Telecommunications and Fraud at the National Consumers League, Monica Hernandez, Senior Corporate Counsel at Amazon, Michael Ostheimer, Senior Attorney at the Federal Trade Commission, and Morgan Stevens, Research Assistant at the Center for Data Innovation.

To jumpstart the discussion, Stevens outlined different types of review concerns (some of which we have previously reported):

  • Purchasing Reviews through Non-Customer Third Party Services – paying for positive reviews, or for negative reviews for competitors
  • Incentivizing Reviews – providing some kind of benefit in exchange for a review (e.g., revenue sharing)
  • Obtaining Reviews from Family/Friends – asking close connections to post positive representations
  • Using fraudulent reviews for social activism reactions
  • Paying individual customers for positive reviews or to post negative reviews on competitor sites
  • Suppressing or unnecessarily flagging reviews
  • Relying on review baiting by only allowing or encouraging positive feedback
  • Threatening to use the legal system to attack reviewers
  • Harassing reviewers into deleting negative reviews

Stevens cited a 2016 University of Central Florida and Case Western study showing that customers are more likely to find extremely negative reviews useful than positive ones. Regulators are therefore concerned that businesses view paying customers to remove negative reviews as a worthwhile investment.

Breyault asserted that platforms have a role to play and have already invested heavily in protecting review integrity, but that a lasting solution will require coordination among all stakeholders. Consumers need to learn to recognize the warning signs of fake reviews and vote with their wallets. Finally, the AG community and the FTC should have the resources necessary to go after bad actors.

Breyault also recommended that platforms maintain clear policies that prohibit inauthentic reviews, require that all reviews reflect honest opinions, and allow users to report abuse. The policies should outline clear consequences for violations, such as removing related products, terminating accounts, and/or withholding payment. Later, Hernandez echoed the message of working together to combat the harms of fake reviews, and stated that Amazon has made significant investments and created policies to address the issue.

Ostheimer referenced the updated FTC Endorsement Guides, which cover fake and incentivized reviews. The Guides also provide new specific examples on how and when reviews should include a clear and conspicuous disclosure. In addition to the Guides, Ostheimer emphasized the importance of appropriately training employees and monitoring reviews to ensure compliance.

Understanding the Consumer Impacts of Generative AI

In the final panel for the conference moderated by Rashida Richardson, Assistant Professor of Law and Political Science at Northeastern University School of Law, panelists tackled the role state consumer regulators must play to balance business innovation and consumer safety. This panel included Dr. Solon Barocas, Principal Researcher at Microsoft Research, Sayash Kapoor, a PhD Candidate at the Center for Information Technology Policy, Princeton University, and Ben Rossen, Associate General Counsel for AI Policy and Regulation at OpenAI.

Panelists discussed the concern that generative AI models are not created to be task-specific, which can introduce additional risks if the models are not built and used carefully. For instance, questions can arise as to who owns the data used for training and how people are using generative AI in practice. Panelists also discussed the desire for transparency and for aligning consumer expectations.

Rossen noted that platforms have already taken a number of steps to mitigate potential harms like hate speech and fraud, and urged companies to closely monitor how people are actually using their tools. Rossen referenced President Biden’s recent Executive Order on AI, which calls for agencies and platforms to evaluate and reduce risks associated with generative AI.

Several panelists noted that the FTC should be able to regulate businesses that falsely claim their generative AI can do something, and that general UDAP authority and FTC Act Section 5 can be used as tools to combat discrimination resulting from generative AI where appropriate. Barocas said that AG authority would likely be insulated from a challenge like the one facing the CFPB, so the AGs have more room to maneuver. Richardson agreed, noting that UDAP is a broader tool for states.

Bottom line

For best practices, remember:

  • Fake reviews can take many forms, including not disclosing incentivized reviews, purchasing positive reviews, or suppressing negative reviews.
  • Generative AI is here to stay and can provide real benefits to consumers. However, consumer protection laws apply to generative AI, and companies should be transparent and honest about how they obtain the data for their models and how they train those models for potential general use.