On December 6, 2023, Google introduced its new Gemini AI model with a video that showcased some of its capabilities. The video demonstrated how Gemini could identify hand-drawn images, make sense of gestures, create games, make connections between objects, identify a sleight-of-hand trick, and perform other impressive tasks. Most people who watched the video were left in awe, but some who dug a little deeper felt that Google had performed a sleight-of-hand trick on them.

A disclosure at the start of the video states: "Sequences shortened throughout." In the description below the video – and below the "more" button – Google also disclosed: "For the purposes of this demo, latency has been reduced and Gemini outputs have been shortened for brevity." And a link next to that disclosure led to a blog post explaining how the video was made. After getting a glimpse behind the scenes, some commentators (like this one) felt that the video exaggerated Gemini's capabilities.

Yesterday, NAD announced that it had investigated the video as part of its "routine, ongoing monitoring program" in order to determine whether the video accurately depicted Gemini's performance in "(1) responding to user voice prompts and (2) responding to video prompts, and the timing or pace of Gemini's responses to prompts." NAD explained that it wants to ensure that consumers and businesses receive truthful and accurate information about what AI products can do.

During the course of the proceeding, Google voluntarily agreed to unlist the standalone video, so that it no longer appears in search results. Instead, Google stated that it would only display the video in conjunction with the blog post that explained how the video was made. NAD was "not concerned that the video viewed in conjunction with the blog post would mislead consumers." Thus, NAD closed the case without any further analysis.

Although we don't have specific guidance from NAD here, we can certainly make some educated guesses about what NAD was thinking. When companies create demonstrations to show how their products work, they need to ensure those demonstrations accurately reflect how the products perform in real life. Be careful about editing or enhancing those demonstrations in a way that could exaggerate performance. At minimum, any material edits or enhancements need to be clearly disclosed.

Although this principle generally applies across all industries, we expect that NAD will continue to take a close look at demonstrations related to the capabilities of AI products to ensure that consumers aren't misled. They aren't the only ones. As we noted last year, the FTC also issued a warning to advertisers reminding them that they need to ensure that their claims about AI are factually accurate, that they can support those claims, and that they disclose any material limitations.

Tags: NAD, AI