Ad Law Access
Updates on advertising law and privacy law trends, issues, and developments

Neora Prevails In Landmark Decision For Direct Selling Industry
Fri, 29 Sep 2023 09:41:00 -0400

Big, BIG win for the direct selling industry, as Judge Barbara Lynn (N.D. Texas) grants judgment for Neora, LLC (formerly Nerium) on all of the FTC’s claims, including that the company was operating an illegal pyramid scheme and made deceptive income and product claims (both directly and through its distributors). Expect the FTC to gather itself and explain that this is one district court case before one judge. But make no mistake about it, Judge Lynn (Clinton appointee) is a respected jurist, with a reputation of being thorough and well-prepared. This decision leaves a mark.

Illegal Pyramid Scheme?

As expected, the pyramiding claim turned on the second element of the Koscot test—the right to receive rewards that are unrelated to the sale of product to ultimate users, in exchange for recruiting other participants into the program. The court rejected the FTC’s expansive interpretation, which relied on the assumption of its expert witness and outspoken critic of the direct selling industry, Dr. Stacie Bosley. Dr. Bosley contended that purchases by Brand Partners (Neora distributors) are not sales to ultimate users. Dr. Bosley’s assumption was almost entirely based on the face of Neora’s compensation plan, which required (among other things) recruitment in some form to be eligible for many bonuses.

The court distinguished Neora from Vemma and BurnLounge, where that assumption (embraced by Dr. Bosley in Vemma as well) was more persuasive, because Neora presented evidence that sales revenue drove the business. Among this evidence, the court highlighted sales data, a survey of Brand Partners, and evidence showing that many Brand Partners enrolled for product discounts. Although not enough to establish the motivations of all Brand Partners, it was enough to rebut Dr. Bosley’s testimony—which the court framed as asking it to “slavishly look only to the compensation plan in isolation, with blinders on to the actual operational data and internal structure of [the] business.”

[Industry observers have nodded in agreement as Neora advanced this argument, while scratching their heads at the FTC’s narrow focus on the wording of the compensation plan.]

As the court put it in criticizing the FTC’s evidence and emphasizing what it perceived as a significant hole in the FTC’s case: “The FTC provided no evidence from actual BPs or participants, and made no effort to show that Dr. Bosley’s rigid theoretical opinions regarding BP purchasing motivations based on the Compensation Plan are borne out in reality.” Judge Lynn was unpersuaded by the FTC’s reliance on profitability data in a vacuum, or by its contention that distributors who do not earn compensation are failed distributors, emphasizing that people may enroll as savings seekers looking to enjoy product discounts with no intention of pursuing the business opportunity: “Put differently, we may ‘walk away poorer than we started’ after a trip to the grocery store, but because we obtained valuable goods or services in return for our money, that exchange is not characterized as a loss.”

Income and Product Claims

The FTC has long asserted that its authority to prevent and remedy unfair and deceptive acts or practices encompasses actions against companies for failure to sufficiently monitor affiliates, agents, or other entities that used a company’s services or otherwise had a relationship with the company.

The FTC has relied on this authority as it pursued enforcement against companies under similar third party liability theories in a number of contexts, including: (1) claims made by independent distributors or influencers on behalf of a company when the company knew or should have known about the claim; (2) payment processors and money facilitators; (3) debt collection agencies; (4) franchise operators; (5) affiliate marketers; and (6) providers of telemarketing services and equipment. And while the FTC’s authority to bring such cases has not been extensively challenged, courts have upheld the agency’s capacity to bring these actions on the grounds that unfairness under the FTC Act includes instances where an entity facilitates or provides substantial assistance to another’s deceptive or unfair act or practice.

Well, Judge Lynn was not convinced that the line between distributor and company had been crossed. Once again, the court noted that the FTC failed to provide evidence that customers understood that Brand Partners were agents of Neora. She also credited Neora’s “rigorous compliance program” while quoting FTC guidance acknowledging that direct selling companies cannot possibly monitor every claim that is made by independent distributors in the field. This part of the decision is especially noteworthy given its focus on the importance of compliance programs—with an emphasis on training, monitoring, and enforcement—as well as continuing efforts to align company practices with the law as it develops and guidance provided through industry self-regulation. (Nicely done, Direct Selling Self-Regulatory Council.)

With regard to income claims, specifically, the court noted that Neora did not “guarantee any level of income for any Brand Partner,” disclosed typical earnings in its Income Disclosure Statement, and included the qualification that “the actual income of Neora Brand Partners varies.” With regard to product claims, the court ruled that there is no evidence that Neora is currently making claims that their products cure, treat, or prevent human disease. As a result, the court concluded that “enjoining Defendants and Neora’s BPs from making misleading income and product claims would have no effect beyond which is already being achieved through Defendants’ robust and reasonable compliance program, and thus an injunction is not warranted.”

What’s Next?

There will be a big sigh of relief (and some well-deserved crowing) by the direct selling industry, and of course, Neora, given what was at stake. Had the FTC won, the bar would have been raised to a dizzying height. But we have seen this before, most recently following the Supreme Court ruling in AMG Capital Management. After what seemed like a mortal blow to the FTC’s authority, the agency picked itself up off the mat and responded: Notice of Penalty Offense Authority, the stretching of existing statutes, Section 19, collaboration with States, and on and on. What’s more, the court here emphasized the FTC’s failure of proof—that is, holes in the Agency’s case. Do not expect the agency to make this mistake twice.

Having said that, however, and anticipating the familiar one-judge, one-court refrain, the magnitude of this decision cannot be overstated. It reinforces that sales data remains significant in any pyramiding analysis, permits compensation tied to product use (whether by a distributor or a preferred customer), recognizes the value “savings seekers” receive without earning any profit, and emphasizes that proactive compliance measures and a recommitment to best practices really do matter.

Retailer to Pay $10 Million to Settle Pricing Claims
Tue, 26 Sep 2023 08:00:00 -0400

Lawsuits challenging how companies advertise sales are on the rise. In this year alone, we’ve posted about a lawsuit over a grocer’s BOGO offers, a lawsuit over a major retailer’s frequent sales, and a large settlement over another retailer’s sale practices. This week brought news of a new $10 million settlement in a lawsuit alleging that SelectBlinds’ sale practices violated California law.

The class action complaint focused on two types of practices:

  • False Sense of Urgency: The plaintiffs alleged that SelectBlinds used countdown timers and other tactics to suggest that sales were about to end. When the timers hit zero, though, the retailer would launch a new sale “with a different name but a comparable discount, with an updated timer stating that the sale would expire at midnight the next day.” Allegedly, that cycle continued for over a year.
  • Misleading Regular Prices and Discounts: The plaintiffs alleged that SelectBlinds often advertised a “regular” price with a strikethrough, followed by the sale price. The retailer also often advertised the percentage of the discount, comparing the sale price to the regular price. According to the plaintiffs, though, SelectBlinds rarely sold the items at the regular prices – they were almost always on sale.

It’s likely that these types of lawsuits will continue, so retailers need to pay close attention to these cases and to pricing laws, particularly when they advertise discounts, sales, or other price reductions.

CFPB Previews Proposals that Could Fundamentally Shift Data Broker Business
Mon, 25 Sep 2023 12:00:00 -0400

In connection with its convening of a panel of small businesses to provide input on potential regulatory actions, the CFPB released an outline of its proposals to use its rulemaking authority under the Fair Credit Reporting Act (FCRA) to cover data brokers and prohibit the use of medical debt collection data in making credit decisions. While the outline does not include any specific language, it evidences the Bureau’s desire to fundamentally alter the data broker business model by expanding the definition of “consumer reporting agency” (CRA) to cover more data brokers, and limiting their ability to share consumer information without a permissible purpose. The CFPB also seeks to prevent CRAs from providing credit header data to third parties for purposes beyond the scope of the FCRA. In effect, the Bureau intends to significantly curtail the sale of certain personal data for marketing purposes.

This is just the latest development showing an increased, nationwide focus on the practices of data brokers, which we have detailed in this blog, and which recently led to a strict new data broker regulation in the state of California. Depending on how the CFPB’s proposals play out, they could transform how data brokers are regulated in this country.

Background on the FCRA

The FCRA covers “consumer reports” and imposes restrictions on CRAs that create and sell these reports, furnishers that provide data to CRAs, and users that consider consumer reports when making eligibility determinations about consumers. The famously circular statute (“famous” being an admittedly relative term when discussing a federal statute) defines a consumer report to be the communication of any information by a CRA bearing on a consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living which is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer’s eligibility for (A) credit or insurance to be used primarily for personal, family, or household purposes; (B) employment purposes; or (C) any other permissible purpose authorized under FCRA section 604.

Meanwhile, CRA is defined as any person that regularly engages in whole or in part in the practice of assembling or evaluating consumer credit information or other information on consumers for the purpose of furnishing consumer reports to third parties.

At the risk of oversimplifying things, in general, CRAs are those entities that assemble information about consumers for the purpose of providing reports to third parties for use in making determinations about consumers’ eligibility for credit, employment, or housing. Data brokers, on the other hand, are entities that collect information about consumers to be provided to third parties for non-FCRA purposes such as fraud prevention and marketing. Sometimes enforcers have alleged that companies purporting to be data brokers were, in fact, CRAs because they were selling consumers’ information to third parties such as background screeners and employers (see, e.g., the FTC’s cases against Spokeo and TruthFinder). More often, though, data brokers sell consumer information for marketing purposes without triggering the FCRA. There has been widespread concern that these data brokers can amass incredibly sensitive information about consumers without their knowledge, and that consumers have no control over how the data is shared.

The CFPB’s Proposal

The CFPB’s proposal would classify any report that includes data such as payment history, income, or criminal records as a consumer report. That would mean that any data broker selling this information would be a CRA and would only be able to share it for a permissible purpose – that is, for use in eligibility determinations. [The outline does not include a proposed definition of “sell” but, depending on how it is defined, the scope of the provision’s reach could be quite expansive.] So, for example, a data broker could no longer provide information about a consumer that includes her individual or household income (more on this later) to a retailer for marketing purposes. Data brokers could no longer sell criminal records to individuals that want to vet their dates.

The CFPB is also considering whether it should define “assembling and evaluating” to cover intermediaries or vendors that facilitate the transfer of consumer report information. Traditionally, companies that were mere conduits of information have not been considered to be assembling and evaluating information — and, hence, were not viewed as CRAs (see FTC’s 40 Years Report at 29). It is unclear if the Bureau intends to include “dumb pipes” in its definition of CRA, or just those vendors that clean or organize data before providing it to their clients.

While CRAs are prohibited from providing consumer reports without a permissible purpose, there has been a longstanding exception for the provision of credit header information. In particular, reports limited to identifying information such as name, address, previous address, SSN, and phone number have been considered exempt from the definition of consumer report if they do not bear on one of the seven factors and are not used to make an eligibility determination (see 40 Years Report at 21). Relying on this exemption, CRAs have provided credit header data to purchasers for use in marketing and fraud detection. The CFPB’s proposal would consider credit header data to be a consumer report and would eliminate a CRA’s ability to provide this information for fraud prevention or marketing.

The outline also includes discussion of the following topics:

Target Marketing

The Bureau is considering clarifying that CRAs cannot use any consumer report information for targeted marketing. The CFPB is concerned that CRAs may be using consumer report data to help customers target marketing, in violation of the FCRA. Per the Bureau, these CRAs may incorrectly believe that this use of data is outside the scope of the FCRA if they do not furnish the information directly to clients, but rather provide the marketing to the consumers themselves.

Aggregated and Household Data

Significantly, the CFPB is also contemplating whether aggregated and household level data should be considered a consumer report. This would be a major change. A prohibition on the use of aggregated and household level data, such as the average income in a geographic area, for marketing purposes would reverberate across the marketplace.

Consumer Consent

Consumers can permit CRAs to share their consumer reports by providing written consent. The CFPB’s outline notes that it is considering placing limitations on how (and by whom) the consent may be collected, as well as on the scope of the consent, presumably to ensure that the consent is informed and meaningful. It is also mulling mechanisms through which consent may be revoked.

Legitimate Business Need

Another aspect of a potential proposal would be to limit the scope of the permissible purpose allowing a user to obtain a consumer report when it has a legitimate business need in connection with a business transaction initiated by the consumer. The CFPB may specify that this permissible business purpose must be for a personal, family, or household purpose. A legitimate business purpose related to account review would require that the consumer report be necessary to make a determination about a consumer’s continued eligibility for the account.

Data Security

Regulators have long made clear that they see the privacy provisions of the FCRA (limiting the use of consumer reports to certain permissible purposes) as requiring CRAs to take reasonable measures to protect those reports (see, e.g., the FTC’s case against SettlementOne and the statement of Commissioners Brill, Leibowitz, Rosch, and Ramirez). The Bureau’s outline notes that its proposal may address CRAs’ data security obligations under the FCRA. In addition, the CFPB is considering whether it should hold CRAs strictly liable for data breaches by considering the unauthorized release of any consumer report to be a violation of Section 604, which prohibits furnishing a consumer report to anyone without a permissible purpose.


Disputes

Under the FCRA, consumers have the ability to dispute inaccurate information contained in their consumer reports with the CRA or directly with the furnisher of the information. Some private litigation has focused on whether CRAs and furnishers have a duty to investigate so-called legal disputes. The CFPB’s proposal would make clear that the FCRA requires investigation of both legal and factual disputes. Simply put, a legal dispute is a dispute that hinges on an interpretation of a law. The Bureau’s outline uses the example of a state foreclosure law. If a consumer disputes the accuracy of a report that lists him as having mortgage debt, the CFPB would require that the CRA investigate whether the state’s anti-deficiency statute required the debt to be extinguished.

In addition, the CFPB says it wants to tackle what it considers to be systemic issues that affect the completeness and accuracy of consumer reports – for example, outdated software or deficiencies in a furnisher’s policies and procedures to assure data accuracy. The outline notes that the CFPB is thinking about ways that CRAs and furnishers could be notified of potential systemic issues, which they would have to investigate and, if necessary, address. Among the proposals under consideration is a requirement for a mechanism through which consumers could report suspected systemic issues. The CFPB is also considering whether consumers should be notified of any systemic issues that affected their reports, even if the issue was identified in response to a complaint from another consumer. This could potentially result in consumers receiving notices about issues at CRAs that may have affected their reports, but did not have a negative impact on them because the inaccurate reports were not shared.

Medical Debt Collection Information

Finally, the CFPB is considering revising Reg V which, among other things, covers medical debt collection information. The potential revisions would prohibit creditors from using this information to make credit eligibility determinations, and prohibit CRAs from including this information on consumer reports for credit eligibility. Medical debt collection information has long been a source of concern for consumers, legislators, and regulators, since it can prevent consumers from obtaining credit following a medical emergency or be used to coerce consumers into paying spurious or false unpaid medical bills. In addition, the CFPB believes there is compelling evidence that this information does not have predictive value for credit decisions. The Big 3 CRAs ceased reporting paid medical collection debt, medical collection debt under $500, and any medical collection debt that is less than one year past due. The Bureau’s proposal would further limit the ability of medical debt collection tradelines to affect a consumer’s ability to obtain credit.

Next Steps

The CFPB is accepting comments on this outline until October 30, 2023 and is especially interested in feedback from small businesses that would be affected by the rule. Once the Bureau completes this process, which is required under the Small Business Regulatory Enforcement Fairness Act of 1996 (SBREFA), it can issue a more formal rulemaking proposal which will be put out for public comment. It seems unlikely that any proposal would be announced before 2024. However, the CFPB is clear that it envisions a sea change in the scope of the FCRA, and businesses should be ready to provide input and comment.

FTC Confirmation Hearings Promise a Return to Bipartisanship
Thu, 21 Sep 2023 00:00:00 -0400

Those who were looking for big fireworks at yesterday’s confirmation hearings for three FTC Commissioners were likely disappointed by the relatively business-as-usual nature of the proceedings. Yes, certain Senators decried the low morale at the agency, alleged ethical breaches, continuing telework, and loss of the FTC’s characteristic bipartisanship. However, those remarks were not unexpected and were generally directed at Chair Lina Khan, who was in the audience but not in the witness chair.

For the most part, the hearing was about restoring bipartisanship, and the FTC nominees (current Democratic Commissioner Rebecca Kelly Slaughter and Republican nominees Andrew Ferguson and Melissa Holyoak) all pledged earnestly to do so. They also appeared to agree on most issues, although the devil will be in the details if and when they are confirmed.

The Nominees

Rebecca Kelly Slaughter is being considered for a second term, having served at the FTC since 2018. In her opening statement, she stressed the benefits of bipartisanship and of considering different viewpoints, and said she would welcome Holyoak and Ferguson to the FTC line-up. Although Slaughter took fire on some issues, the fire was mostly directed at Khan, as noted, and Slaughter was well-prepared to handle it.

Melissa Holyoak is currently the Solicitor General (SG) of Utah. In her opening statement, she echoed the bipartisanship theme, as well as the important role of the FTC on issues that impact consumers’ daily lives. She zeroed in on the privacy and safety of children as a particular area of focus if she is confirmed. She also cited her work to date on cases to rein in the big tech companies, as did Ferguson.

Andrew Ferguson is the SG of Virginia. In his opening statement, he discussed the importance of the free enterprise system, but said that it only works if it’s protected from fraud and monopolies. He stressed bipartisanship but also stated that agencies like the FTC need to “enforce the laws as Congress has written them, respect the separation of powers, and obey the constraints Congress imposes on their authority.”

All of the nominees brought their families to the hearing, and Sens. McConnell (R-KY) and Lee (R-UT), who are not Committee members, made guest appearances to introduce Ferguson (who had worked for McConnell) and Holyoak (who collaborated with Lee on issues affecting Utahns). Lee mentioned that Holyoak would bring litigation experience to the FTC that its current members lack.

If the nominees are confirmed, the FTC would have a full complement of five Commissioners (three Democrats and two Republicans) after operating for six months with just three Democrats (following the fiery resignation of former Commissioner Christine Wilson).

Opening Statements from the Chair and Ranking Member

Chair Cantwell (D-WA) invited Ranking Member Cruz (R-TX) to start the hearing, and start he did by railing against the FTC – saying that its current political approach departs from its bipartisan traditions; that it has weaponized law enforcement, undermined mergers, and engaged in unprecedented regulation; that it has destroyed documents and conspired with the EU on regulations; and that it has interfered with free speech and innovation. He also discussed the dramatic drop in morale under Khan’s leadership. Cruz concluded his remarks by stating that the FTC must restore bipartisanship. He also said later that all of the nominees are well-qualified, suggesting that all of them will ultimately get his vote.

Chair Cantwell emphasized the important role of the FTC in fighting fraud, dealing with the harmful effects of consolidation (including high prices and supply chain issues), protecting privacy, and tackling concerns related to AI and PBMs. Not unexpectedly, she made clear that she would vote for Slaughter.


Highlights from the Q&A

The Q&A suggested surprising agreement among the nominees on most issues, though there were differences in emphasis and they spoke in generalities, as nominees typically do. Here’s a quick run-down of the highlights, with the Senator(s) who did the main questioning on each issue noted in brackets. Note that not every nominee answered every question.

  • Restoring the FTC’s Section 13(b) authority. [Sen. Cantwell] All nominees said they supported it, and generally indicated they would do so without limits, though Ferguson left himself a little wiggle room.
  • Action against abuses by PBMs. [Sen. Cantwell] Slaughter said she supported such action. Ferguson and Holyoak said they want to see the results of a PBM study now underway, but generally support action here.
  • Disregard of ethical advice. [Sens. Cruz and Sullivan (R-AK)] The underlying facts here are fairly detailed but, in short, there’s a dispute about whether Khan disregarded advice from an FTC ethics official to recuse herself in the Meta-Within matter, and whether Slaughter and Bedoya voted to shield the facts from the public. While the Senators directed their questions to Slaughter, their ire (again) seemed directed at Khan. Cruz also said that the FTC had not responded to his letters and had deleted emails sought by Congress.
  • Role of FTC in fighting robocalls. [Sen. Tester (D-MT)] Holyoak and Ferguson agreed this is an important problem. Slaughter said the FTC needs to focus on the “pipes” – e.g., by targeting VOIP services that enable robocalls.
  • Regulation of AI. [Sens. Thune (R-SD) and Blackburn (R-TN)] Questioned about the FTC’s authority to regulate AI, Slaughter said that existing laws apply to new technology, but that additional legislation would be needed to go beyond those laws. Ferguson and Holyoak agreed, emphasizing that any “grand scheme” to regulate AI should be left to Congress. All said they would stay within the bounds of the law in addressing AI.
  • Section 230. [Sen. Thune] All nominees said they supported removing Section 230 immunity from suits by the government.
  • FTC’s role in privacy [Sen. Hickenlooper (D-CO)] All nominees said they supported federal privacy legislation, though Slaughter emphasized the need to use existing tools until that happens, and Holyoak and Ferguson emphasized the need for Congress to set the standards here.
  • Merger review workload. [Sen. Klobuchar (D-MN)] Ferguson and Holyoak agreed that the volume is concerning.
  • Whether the FTC should table the FTC’s rulemaking until Congress provides clear legislative authority on privacy. [Sen. Blackburn] Holyoak and Ferguson both said yes. Slaughter said no.
  • Privacy protection for kids and teens. [Sens. Markey (D-MA) and Schmitt (R-MO)] All nominees said they supported strong protections in this area, and agreed, in response to Markey, that targeted advertising “can be” inherently manipulative for kids and teens. Holyoak mentioned that she has particular concerns about Ed Tech companies that force children to give up their privacy.
  • Data broker regulation. [Sen. Peters (D-MI)] Asked about the value of a federal data broker registry, all nominees said that greater transparency in the industry would be a good idea. Slaughter mentioned the FTC’s case against data broker Kochava, as did Holyoak, with apparent approval.
  • Considerations of race, sexual orientation, viewpoints, and other “progressive priorities” in antitrust enforcement. [Sen. Vance (R-OH)] As to mergers, Slaughter said that all Americans deserve equal protection under the laws and that just focusing on the highest dollar mergers, for example, would protect cities and not rural areas. She said she would think about the concerns that Vance raised about “organized advertiser boycotts.”
  • Remote work policy. [Sens. Capito (R-WV) and Cruz] Asked about the FTC’s policy, Slaughter said FTC employees must come to the office two days every two weeks. Capito said that may account for the low morale and reduced sense of camaraderie at the FTC, and noted that the private sector is now returning to work. Cruz pressed further, stating that the “Biden Administration has decided that federal employees don’t need to show up for work.” Slaughter was left to defend the FTC, while stating that the Chair sets the policy in conjunction with OPM.
  • Staying within the bounds of the FTC’s legal authority. [Sen. Sullivan (R-AK)] This theme came up multiple times but Sullivan was especially focused on it. He emphasized that the FTC’s power comes from the laws passed by Congress, and asked the nominees to commit to beginning every FTC opinion with a detailed discussion of the legal basis for the FTC’s action. All committed to following the law but not to crafting FTC opinions in a particular way.
  • Committing to bipartisanship. [Sen. Fischer (R-NE) and many others] Asked multiple times about this issue, the nominees said yes, yes, and yes.


Based on the hearing, one would expect these nominees to be confirmed fairly handily. Whether the FTC will truly return to its bipartisan traditions, and whether Khan will get 5-0 votes on controversial matters pending at the agency, are another matter. Still, having two Republican Commissioners on board should provide a diversity of views and greater visibility into agency decision-making (through their speeches, votes, and written statements). We will see soon enough.

State AGs and CFPB Stop Tempoe’s Rhythm
Tue, 19 Sep 2023 00:00:00 -0400

A bipartisan coalition of 42 Attorneys General, led by Iowa, Nebraska, and Tennessee, and the Consumer Financial Protection Bureau (CFPB) announced a $35 million settlement with Tempoe LLC, a specialty consumer finance company. The multistate settlement resolves claims that the company’s marketing and sales practices misled consumers into believing they had signed up for an installment plan or credit sale to purchase personal goods and services, when in fact they ended up leasing the items.

As part of its business model, Tempoe leased a variety of goods to consumers, like furniture and appliances, but also goods and services that one might not normally think of in a lease – such as car parts and repairs, as well as toys. The company offered its lease options after a consumer applied and was rejected for conventional financing through the retailer, and as a result, the states alleged, consumers were misled into believing they were entering into installment contracts or credit sales. States also alleged Tempoe’s leasing agreements had a complicated structure. Consumers made periodic payments for an initial term of five months. If the consumer did not affirmatively tell the company they wanted to purchase or return the product, Tempoe would auto-debit the consumer for the full month-to-month term of the contract, which was typically 18 to 36 months. Confused consumers ultimately ended up having to pay double or triple the typical purchase price of the product or service, State AGs claimed.

Tempoe’s leasing agreements also allegedly lacked disclosures required by Regulation M, which implements the Consumer Leasing Act. The company did not provide some consumers with a copy of their lease agreement until after the transaction. Consequently, some consumers had to rely on oral descriptions from employees. Tempoe also trained its employees to avoid calling the product a “lease.”

States also claimed the company had complex return requirements. If a consumer wanted to cancel their lease agreement after the first 30 days, but within the initial five-month term, the consumer had to return the product to Tempoe. However, the company did not accept the returns of many items, including products that were less than $300. Instead of going through the hassle of returning, consumers would exercise the “purchase option” on the lease even though they ended up paying more than the original price due to the high leasing fees.

Tempoe entered into over 1.8 million financial agreements and generated approximately $192 million in revenue from about 325,000 consumers between 2015 and 2022. In addition to the injunctive relief described below, Tempoe must pay $1 million to the states participating in the settlement and another $1 million to the CFPB.

The settlement also includes injunctive provisions that:

  • Permanently ban Tempoe from engaging in future consumer leasing activities and cancel all existing leases.
  • Permit consumers to keep any leased merchandise regardless of the status of their account with Tempoe, and without having to make any further payments (in-kind financial relief estimated at $33 million nationwide).
  • Prohibit the company from providing any negative information about consumers to any consumer reporting agency.

Biggest Takeaways: State AGs and the CFPB continue to collaborate on multistate actions with restitution provisions uniquely tailored to the alleged consumer injuries. This collaboration also broadens the range of potential allegations, as the CFPB can enforce laws the states cannot, such as the Consumer Leasing Act.

Additionally, it’s important to consider consumer expectations – the more a product or service departs from the ordinary, the more disclosure is needed. In Tempoe’s case, its simple disclosures were not enough to correct the expectations of consumers, who were unlikely to expect that some of the goods they were “purchasing” were actually only leased. We’re seeing this theme throughout regulatory enforcement, particularly in the AI space, where State AGs have emphasized the importance of businesses being clear and transparent about the complexities of how they use AI in consumer products and services.

To learn more about enforcement priorities throughout the state attorneys general landscape, join us for our upcoming webinar with the New Hampshire Attorney General on October 19.

FTC Recommends Sharpening Blurred Lines When Advertising to Kids Mon, 18 Sep 2023 00:00:00 -0400 In October, we posted about the FTC’s day-long workshop on “Protecting Kids from Stealth Advertising in Digital Media” and wondered what the FTC might be planning. Last week, we got an answer when the FTC released a “Staff Perspective” on Protecting Kids from Stealth Advertising in Digital Media. With a foreword written by Sam Levine, Director of the FTC’s Bureau of Consumer Protection, the staff recommends advertisers implement five key practices to protect kids from the harms of “stealth advertising.”

Here’s the summary:

  • Don’t blur advertising. FTC staff starts with the obvious, stating that “the best way to prevent harms stemming from blurred advertising is to not blur advertising.” Instead, there should be a clear separation between entertainment and educational content and advertising. For example, advertisers can use visual and verbal cues to signal to kids that they are about to see an ad.
  • Provide prominent just-in-time disclosures. These disclosures should appear when a sponsored product is introduced into content and in language that kids are likely to understand. For example, the staff suggests that “Company ABC paid me to show you this so you will think about buying it” is better than “paid promotion” or “sponsored content.”
  • Create icons to flag advertising. FTC staff encourages members of the digital ecosystem to work together to create and use an easy-to-understand icon to signal to kids that an advertiser provided money or other benefits to a content creator to advertise a product. Notably, the staff suggests the icon be used in conjunction with disclosures, not in place of them.
  • Educate kids, parents, and teachers. Stakeholders should work together to find ways to educate kids, parents, and teachers about how digital advertising works and to help kids recognize and evaluate ads. This education could also play an important role in helping promote and support the use of an icon suggested above.
  • Platforms should consider policies, tools, and controls to address blurred advertising. Platforms should consider (a) requiring content creators to self-identify content that includes advertising through policies and tools and (b) offering controls that allow parents to limit or block their children from seeing ads.

The guidance echoes the broad principles that the FTC outlined in 2016 in its Enforcement Policy Statement on Deceptively Formatted Advertisements and related business guide on Native Advertising – see our report here – but encourages stronger measures because techniques that may be sufficient to help adults differentiate content from ads may not be sufficient for kids.

A post on the FTC site warns that “the FTC will be monitoring this area and may take law enforcement action when blurred advertising is an unfair or deceptive practice prohibited by the FTC Act.” We’ll continue to watch what the FTC does in this space, and will keep you posted.

California Just Passed SB 362: Whatever You Think About the Merits of the Law, It’s a Big Deal Fri, 15 Sep 2023 00:00:00 -0400 As we’ve discussed here, data brokers have been in the hot seat lately, with the enactment of new state data broker registry laws, aggressive enforcement by the FTC, a looming rulemaking by the CFPB to extend the FCRA’s reach to a broader class of data brokers, multiple federal bills to restrict data broker sales, and a recent meeting at the White House to discuss “harmful data broker practices” and provide further impetus for regulation.

Among the most significant of these developments is California’s SB 362 – a data broker bill that goes well beyond the registration requirements contained in California’s existing data broker law. Proposed earlier this year, SB 362 met with various twists and turns all summer, including strenuous opposition from industry members. However, yesterday (on the last day of the legislative session), the California Senate gave the bill final approval, concurring in the version passed by the California Assembly.

Now the law is on its way to Governor Newsom for signature, and there have been no signs that he’ll veto it. Indeed, the bill’s chief sponsor, state Senator Josh Becker, has said that, while he hasn’t reached out to the governor, he expects the governor to sign. Others have surmised that Newsom will sign in light of the prominence of privacy in the Golden State, as well as concerns about data brokers’ collection and sale of reproductive health care data (an issue referenced in Section 3 of the bill).

What Does SB 362 Require?

Although the bill was amended throughout the legislative process, the core requirements remain largely the same. In brief, SB 362 expands California’s current data broker law by providing a centralized place where consumers can delete their data and limit the further sale or sharing of it, and requiring data brokers to undertake new disclosure, recordkeeping, and audit requirements. Some provisions will take effect in 2024 but most will be delayed until 2026 or even 2028. Specifically, SB 362:

  • Requires data brokers to register with the California Privacy Protection Agency (CPPA) (instead of the California AG’s office, as required by the current law), pay a fee, submit detailed information, provide detailed disclosure to consumers, and comply with new recordkeeping requirements (expanded requirements phased in during 2024);
  • Requires the CPPA to create an “accessible deletion mechanism” where consumers can at no cost direct some or all data brokers to delete all of their information, subject to the same deletion and other exceptions available under CCPA (beginning in 2026);
  • Requires data brokers to continue to delete any new information received about the consumer every 45 days (2026);
  • Requires any data broker that receives a deletion request not to sell or share any new personal information about the consumer unless the consumer requests it (2026);
  • Requires any data broker that receives a request to direct their service providers and contractors to delete the information (2026);
  • Requires a data broker that denies a request to delete because the request cannot be verified to process the request as an opt-out of sale/sharing and to direct its service providers and contractors to do the same (2026);
  • Allows “authorized agents” to assist consumers in making deletion requests (2026);
  • Requires data brokers to undergo independent compliance audits every three years (beginning in 2028);
  • Authorizes penalties and administrative costs for noncompliance, including $200 for each day a data broker fails to register and $200 “for each deletion request for each day the data broker fails to delete information” as required. (These sanctions kick in as each of the above requirements become effective.); and
  • Gives the CPPA discretionary rulemaking authority to implement the new law.

Of significance, the term “data broker” is defined broadly as “a business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship” (though it excludes entities covered by the Fair Credit Reporting Act (FCRA), the Gramm-Leach-Bliley Act, the Health Insurance Portability and Accountability Act and similar California laws, and a California insurance law). As a result of this broad definition, the bill extends not just to data brokers as they are commonly understood, but also to many members of the advertising industry that collect and sell data but do not have a consumer-facing relationship.

What Did Opponents Argue?

On a website created to oppose SB 362, industry members pointed to the many beneficial support services they provide – such as stopping fraud targeting companies and the government; verifying identities for the administration of unemployment and nutrition programs; identifying potential donors for political and charitable campaigns; and allowing small businesses to compete and reach a larger customer base. They also argued that the California Consumer Privacy Act already covers data brokers and provides a full set of transparency and deletion rights to consumers as to these entities. These arguments didn’t carry the day, although the bill garnered a chunk of “no” votes in the California Assembly.

Why is this Significant?

As discussed in our prior posts on this subject, policymakers at the federal and state levels have debated for years whether to impose new statutory and/or regulatory requirements on data brokers, citing the sensitive nature of the information and profiles that they sell, the use of this data in making consequential decisions about consumers, and the invisibility of most data brokers to the public. However, to date, data broker-specific legislation has largely been limited to the FCRA and to the data broker registry requirements now in effect in four states (though data brokers fall within many privacy laws of general applicability, of course).

The new requirements in SB 362 raise the potential that large numbers of consumers might opt out of the collection and sale by data brokers (broadly defined), whether on their own or through “authorized agents.” Thus, while the law confers significant new privacy rights on consumers, it also could substantially impact the data broker and advertising industries and the many businesses and services that rely on them. In addition, because California typically leads the states on privacy issues, it’s possible that other states will follow suit, amplifying these effects considerably.

Stay tuned as we continue to monitor this important topic.

Texas Court Puts Hold on CFPB’s Use of Unfairness Authority to Include “Discrimination” Tue, 12 Sep 2023 00:00:00 -0400 As the Supreme Court deliberates over the Fifth Circuit’s ruling that the CFPB’s funding method is unconstitutional, another court in the Fifth Circuit dealt a blow to the CFPB’s aggressive agenda. On Friday, the District Court for the Eastern District of Texas invalidated the Bureau’s March 2022 updates to its examination manual that instructed CFPB examiners to determine whether financial institutions and service providers adequately protect against discrimination, including disparate impact. If it holds in likely appeals, the decision could have a far-reaching impact on both the CFPB’s and the FTC’s attempts to use their unfairness authority to bring enforcement to remedy perceived discriminatory practices, as well as other attempts by both agencies to broadly interpret statutory grants of authority and use them in novel and untested ways.

Examination Manual and Chamber of Commerce Challenge

In March 2022, the Bureau announced “changes to its supervisory operations to better protect families and communities from illegal discrimination, including in situations where fair lending laws may not apply.” As explained in that press release, the Bureau updated its UDAAP examination manual, which instructs CFPB examiners how to evaluate whether covered entities are engaged in unfair, deceptive, or abusive acts and practices (UDAAP), to “require supervised companies to show their processes for assessing risks and discriminatory outcomes, including documentation of customer demographics and the impact of products and fees on different demographic groups.” The updates instruct examiners to evaluate potential discrimination in all consumer finance markets, including credit, servicing, collections, consumer reporting, payments, remittances, and deposits, and irrespective of whether specific discrimination statutes such as the Equal Credit Opportunity Act (ECOA) apply.

The Chamber of Commerce brought suit to challenge the examination manual on a number of constitutional and statutory grounds, including that the Bureau exceeded its authority by unilaterally attempting to sweep in discriminatory conduct as “unfair” without Congressional authorization.

District Court Decision and Key Takeaways

In Friday’s decision, Judge Barker first acknowledged that the Fifth Circuit has already found the CFPB’s funding mechanism to be unconstitutional, but noted that that decision is pending Supreme Court review. The court nonetheless reached the issue of statutory authority because of a “compelling reason to reach at least one alternative ground for the same relief sought on Appropriations Clause grounds.” The court also found that the examination manual itself was final agency action warranting review under the Administrative Procedure Act because “it obligates agency personnel to act on a particular understanding of unfair act or practice in examining and supervising companies,” namely that they must also consider whether financial institutions have adequately considered potential discriminatory effects in advertising, pricing, and offering financial products and services.

On the crux of the court’s decision on statutory authority, the court applied the “major-questions canon” to hold that the Bureau could not sweep in discrimination under its unfairness authority because:

  • “whether the CFPB has authority to police the financial-services industry for discrimination against any group that the agency deems protected, or for lack of introspection about statistical disparities concerning any such group, is a question of major economic and political significance”; and
  • state and federal statutes, including the Consumer Financial Protection Act itself, at times authorize regulation of discrimination in clear and precise terms, so the lack of reference to discrimination in connection with the CFPB’s UDAAP authority suggests that it should not be impliedly included.

In so doing, the court also rejected the Bureau’s arguments that relied, in part, on the FTC’s historical and recent use of unfairness to police discriminatory practices. The CFPB had argued, among other things, that the plain language of unfairness clearly covered discrimination (“There has never been an unstated, atextual exception to the prohibition on unfairness for discrimination, just as there is not an unstated exception to unfairness for conduct that happens on Leap Day.”). While the court agreed that the unfairness language in Section 5 of the FTC Act (on which the relevant portion of the Dodd-Frank Act was based) could plausibly be construed to cover discrimination, including disparate impact, the language is not “exceedingly clear” and its history “does not refute its ambiguity.” Without that clear language, the court could not find that Congress intended for unfairness to encompass discrimination.

This decision suggests that the CFPB and FTC may face an uphill battle if they bring similar actions premised on discriminatory impact under their unfairness authority, as well as in other enforcement actions attempting to broadly construe longstanding statutory authority in new ways. As we discussed here, the FTC last year alleged that Passport Automotive violated the FTC Act by, among other things, engaging in unfair conduct that had a disparate impact on Black and Latino customers. The FTC’s settlement with Passport Automotive required the company to establish a fair lending program with written guidelines specifying the reasons for assessing or not assessing any fee or other charge and the objective factors that may be considered in doing so. Nonetheless, the Bureau is likely to appeal the court’s decision, and the FTC is similarly unlikely to accept the ruling’s underlying rationale anytime soon. So while the decision marks a substantial setback in the CFPB’s attempts to take expansive views of its statutory authority, the battle over whether the Bureau – and the FTC – can use unfairness principles to combat practices with discriminatory effects will likely continue.

Can’t Lie About Your AI: The FTC’s Most Recent Case with AI Allegations Fri, 01 Sep 2023 00:00:00 -0400 The FTC is not holding its breath on whether Congress will enact AI legislation. Instead, as we have previously reported, the FTC is relying on its own toolkit and has warned businesses that false or unsubstantiated claims related to AI could run afoul of the FTC Act.

A recent example came earlier this month, in a lawsuit the FTC filed against Automators, Inc., three principals, and several related entities. The FTC alleged that the defendants made baseless claims that consumers could make significant income by investing in ecommerce stores – promising “4k-6k consistently monthly net profits,” soliciting false endorsements, and touting non-existent venture capital backing. In fact, the FTC alleged, most consumers didn’t make the promised earnings or even recoup their investments. According to the complaint, defendants took in at least $22 million from consumers in connection with these unlawful practices.

Among their false and unsubstantiated claims, says the FTC, defendants said they used AI tools to maximize revenues, offered an “artificial intelligence-integrated” business opportunity to help consumers find top-selling products to sell, and promoted an AI-powered coaching program. As part of the coaching program, consumers were told to use ChatGPT to write customer service scripts and heard claims like:

  • “We’ve recently discovered how to use AI tools for our 1 on 1 Amazon coaching program, helping students achieve over 10,000/month in sales!”
  • “That is how you make $6000 net profit and that is how you find a product in 5 minutes using AI, Grabbly, Priceblink.”

Atop at least one of Defendants’ advertisements was a similar AI claim in bold.

Soon after the FTC filed its case, a federal court granted the agency’s request for a temporary restraining order (TRO) with a hearing for preliminary injunction set for mid-September. Among other things, the TRO prohibits the company from misrepresenting that its products “will use Artificial Intelligence (AI) machine-learning to maximize revenues.”

Big takeaway: While the AI claims were only part of the alleged deception here, this case shows that AI deception is gaining steam, and that the FTC is unafraid of calling it out. Although some aspects of AI are uncharted territory, ensuring that claims are truthful and substantiated is a trail already blazed.

Practical Privacy: Lessons from the Front Lines Thu, 31 Aug 2023 00:00:00 -0400 With the continuing onslaught of state privacy laws, it’s easy to become overwhelmed by the number of new legal obligations while also trying to stay focused on identifying and mitigating the most pressing legal and business risks. Over the past couple of months, we’ve had the opportunity to meet with privacy professionals to hear about their top challenges and offer some practical perspectives of our own.

Three of the topics that stood out during these discussions – there were several others – were: understanding and managing data protection impact assessments (DPIAs), assessing sensitive personal information (SPI) risks, and implementing data deletion obligations. This post shares some of the tips that emerged from these sessions.

Data Protection Impact Assessments

Four states require DPIAs today for certain processing activities, and laws that go into effect in five additional states beginning in 2024 will require them, too. Across most of these states, the activities that trigger the need to conduct a DPIA include targeted advertising, data sales, and sensitive data processing. Beyond these clearly defined starting points, however, practical challenges abound. What form should a DPIA take? Who should be responsible for drafting the assessment? What are the best practices for keeping DPIAs up to date?

One way to look at these questions is that DPIAs tell the story about how a company uses personal data. Regulators will be one audience for these stories. Some states’ laws allow the attorneys general to request production of DPIAs from organizations. California law requires businesses to submit their DPIAs to the CPPA on a “regular basis” (with details now set forth in draft regulations). Regulators will expect to see (in the words of the Colorado Privacy Act’s implementing regulations) a “genuine, thoughtful analysis” of the benefits, potential harms, and mitigations in a company’s data practices.

At the same time, although some state privacy laws provide protection for attorney-client privilege and confidentiality, we expect DPIAs to generate investigations and to have their privilege and confidentiality protections challenged. Carefully planning the diligence and drafting stages of a DPIA – and taking care to maintain safeguards for communications that involve legal advice – is critical to ensuring that DPIAs are accurate and comprehensive while minimizing additional risk to the company.

Finding internal champions and identifying key stakeholders are also critical steps. DPIAs take time away from IT, engineering, business, legal, compliance, and privacy teams who have day jobs. In most cases, their contributions are essential to assemble an accurate picture of the activity that’s at the center of a given DPIA.

The message that spurs these teams to participate meaningfully in the DPIA process will vary. In some cases, buy-in might arise from a shared understanding that the company needs to align on whether its current practices are sufficient to protect against known risks. In other instances, a clear message of support from the top of the organization might be necessary.

In short, there isn’t a single format or process that will work for everyone. However, recognizing that the stakes involved in DPIAs are significant and planning accordingly are first steps toward identifying which processing activities to tackle first and how to go about it.

Sensitive Personal Information

In addition to triggering a DPIA obligation, SPI processing under state laws and emerging enforcement precedent may require opt-in consent. Identifying SPI collection and use is therefore a growing priority for many privacy professionals.

But the expansive definition of SPI under state privacy laws is only part of the equation. Sector-specific laws, such as the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule and the Illinois Biometric Information Privacy Act, expand the range of sensitive data that receives heightened protections. These laws have become an increasing focus for regulators and the plaintiffs’ bar at the same time that data from these sectors is becoming more valuable for new services and, in some instances, advertising.

Where SPI is used in marketing and advertising, companies face compliance challenges and potential exposure to private suits. Using alternatives to SPI can mitigate these risks. For example, in lieu of SPI, some companies are exploring the use of aggregated demographic data to power insights or target advertising based on non-sensitive purchasing behavior.

Practical approach to SPI

  • Cataloging data starts with thoughtful DPIAs and a robust understanding of the business use cases with SPI.
  • Consent is the baseline expectation for SPI processing. Consider building a consent management infrastructure that accounts for both direct collection and sourced (or inferred) data.
  • Explore emerging alternatives to SPI and implement mitigation measures in the meantime.
  • Think for today and for tomorrow. Short-term and long-term plans are crucial for developing a comprehensive and durable risk-management strategy. Set a cadence to revisit the plans.

Data Deletion Obligations

All comprehensive state privacy laws that have been enacted so far give consumers the right to request deletion. State laws vary in the level of detail they provide about deletion – regulations in California and Colorado are quite specific in their procedures – but all provide significant leeway to retain data for internal purposes that are reasonably aligned with consumers’ expectations. In practice, it is not uncommon to preserve some data to meet operational needs or comply with legal obligations.

This leads to two challenges. First, companies need to communicate clearly with consumers, service providers, and third parties about how they’re fulfilling deletion requests. Second, companies need ways to ensure that data they keep under an exemption is not used for other purposes.

A few practical steps can help:

  • Prior to developing a process for responding to deletion requests, map out your data to understand what personal information you have, where it is located, and with whom you share it. Identify any legal obligations surrounding how long you must keep it, including any minimum retention periods.
  • Develop and maintain systems to notify service providers and third parties about data deletion requests. Methods for sending deletion requests to partners vary widely, from self-serve, automated interfaces to ad hoc requests that are handled case-by-case, so be prepared to work with a wide range of processes.
  • Communicate clearly with consumers about deletion requests, including whether the request will be approved in whole, in part, or not at all.

If Kelley Drye can help your organization develop a practical approach to building and maintaining a robust privacy program, please contact any member of our Privacy Group.

FDA’s Draft Guidance on Cosmetics Registration and Listing: A Good Reminder of End-of-Year Deadlines Tue, 29 Aug 2023 00:00:00 -0400 Earlier this month, in a step toward implementing the Modernization of Cosmetics Regulation Act of 2022 (MoCRA), FDA issued a draft guidance document titled “Registration and Listing of Cosmetic Product Facilities and Products: Guidance for Industry.” For those who may be new to MoCRA, this legislation significantly overhauled FDA oversight of the cosmetics industry by requiring manufacturers, packers, and distributors to comply with a range of new standards, including mandatory product and facility registration, creation of safety substantiation information, and adverse event reporting, subject to a December 2023 deadline (per the one-year timeframe set by statute). Additional compliance measures, including issuance of good manufacturing practices regulations, updated fragrance allergen disclosures, and domestic contact labeling, are expected in 2024 and thereafter.

Regarding product and facility registration and listing, in question-and-answer format, the draft guidance addresses a range of topics, including:

  • Definitions
  • Who registers and submits product listings
  • What information is submitted to register and list
  • Will the information submitted be public
  • How and when stakeholders register and list
  • Treatment of cosmetic products that are also drugs
  • Fees

The draft guidance is open for comment until September 7th. Previous voluntary facility registrations will not be considered adequate to comply with the mandatory registration and listing requirement. FDA expects to have an electronic portal available in October 2023 along with a paper registration option.

Other December 2023 Deadlines

Stakeholders will recall that several provisions in MoCRA have one-year deadlines, making December 29, 2023 a key date. In addition to establishment registration and product listing (§607), these include:

  • Adverse event recordkeeping and reporting (§605) – Every responsible person must maintain records of all adverse events associated with any of its cosmetic products for six years. Serious adverse event reports must be submitted to FDA (along with retail packaging of the product at-issue) within 15 business days after receipt of same.
  • Safety substantiation (§608) – Responsible persons must ensure, and maintain documentation of, “adequate substantiation of safety” for each cosmetic product they distribute or manufacture.

Additional information regarding MoCRA’s requirements is available here.

FTC and Six States Announce Settlement Over Fake Reviews and Claims Tue, 29 Aug 2023 00:00:00 -0400 A year ago, the FTC and six states filed a lawsuit against Roomster and its owners, alleging that they had posted “tens of thousands of fake positive reviews to bolster their false claims that properties listed on their Roomster platform are real, available, and verified.” At the same time, the regulators announced a settlement with an individual (doing business as AppWinn) who allegedly sold Roomster many of the fake reviews. This week, the FTC and states announced a settlement with Roomster and its owners that includes some notable provisions.

As we noted in our original post, Roomster advertised that its platform had “millions of verified listings” in a “safe community with real members worldwide.” To test how well Roomster verified the listings, the regulators listed a room for rent on the platform at an attractive price. Had Roomster attempted to verify the listing, it would have learned that the address was actually a U.S. Postal Service commercial facility. According to the complaint, though, Roomster never asked any questions.

Roomster also allegedly bolstered its false claims by flooding the internet and app stores with tens of thousands of 4- and 5-star reviews, many of which were fake. The complaint alleged that the company bought over 20,000 reviews from AppWinn alone. Emails between Roomster and AppWinn showed a detailed plan addressing how and when the reviews should be posted. According to the complaint, the volume of fake positive reviews diluted the real reviews, many of which warned of scams on the platform.

The settlement includes some novel requirements. Among other things:

  • The defendants are permanently banned from buying or incentivizing consumer reviews and from disseminating reviews where they have a relationship with the reviewer that might affect the review’s weight or credibility. The outright ban on incentivized reviews is surprising since companies are generally allowed to incentivize reviews as long as those incentives are clearly disclosed.
  • The defendants are prohibited from misrepresenting any material fact, including that any review is truthful or represents a real user, or that any rental listing is verified, authentic, or available, when that isn’t the case. No surprises there.
  • The defendants are required to monitor affiliates to assess their compliance. Notably, the settlement provides a lot of detail about this obligation. For example, the defendants are required to take extensive steps to monitor “all marketing materials” used by affiliate marketers “on at least a monthly basis” and to promptly take steps to address issues that they discover during monitoring.
  • The settlement imposes a monetary judgment of $36.2 million and civil penalties totaling $10.9 million payable to the states. These amounts will be suspended after the defendants pay $1.6 million to the states, based upon their inability to pay the full amount. (If the defendants are found to have misrepresented their financial status or to have violated the terms of the order, the full amounts would immediately become due.)

This case demonstrates that the FTC and state AGs continue to focus on the integrity of reviews and that they will work together to seek monetary penalties for misleading practices. The novel provisions in the settlement agreement – such as the ban on incentivized reviews and the detailed monitoring requirements – also suggest that regulators are getting more creative and restrictive in settlement agreements, especially when complaints involve particularly egregious conduct.

In Your Face: Connecticut District Court Denies Motion To Dismiss in Coppertone “FACE” Sunscreen False Ad Case Mon, 28 Aug 2023 00:00:00 -0400

NAD Addresses Apples and Oranges in Price Claims Fri, 25 Aug 2023 00:00:00 -0400 Google recently ran two commercials for its YouTube TV service, each of which ended with the following tagline: “More than cable. For $600 less than cable.” A disclosure at the bottom of the screen explained: “Annual average savings based on a study by SmithGeiger of the published cost of comparable standalone cable in the top 50 Nielsen DMAs, including all fees, taxes, promotion pricing, DVR box rental and service fees, and a 2nd cable box.”

Charter argued that consumers would reasonably interpret the two commercials to convey a claim that they could expect to pay $600 less for YouTube TV than they would pay for all cable providers, including Spectrum TV. It introduced evidence demonstrating that one year of Spectrum TV Select Service in Los Angeles cost only about $219 more than one year of YouTube TV, rendering the “for $600 less than cable” claim false.

Google argued that the claim was narrower because the disclosure clearly limited the comparison to “comparable standalone cable,” and that any comparisons to pricing of streaming services (such as Spectrum TV Select) weren’t relevant. It also argued that the claim was substantiated by the SmithGeiger study, which compiled cable plan costs from the top 50 Nielsen DMAs over the first two years of service and calculated an average of the annual cost.

NAD had several concerns with Google’s approach. For example, it found that consumers could interpret the generic reference to “cable” in the tagline to suggest that “YouTube TV represents a savings of $600 over any service offered by a cable provider,” which wasn’t the case. NAD also noted that it was difficult to identify what constitutes a “comparable” service when different services offer different channel lineups. In this case, the lineups used in the comparisons didn’t match exactly, which called into question whether the services were actually “comparable.”

There’s more to this case and it’s worth a read if you work in the cable or streaming industries. More broadly, the case illustrates the challenges of making comparative price claims. Those claims can be easier to substantiate when making a narrow apples-to-apples comparison between two things, but they can be much harder to substantiate when making a broader comparison against multiple things, especially when it’s hard to determine whether the other things are apples or oranges.

Google promised to appeal the decision, so we haven’t heard the last word on this yet. In the meantime, click here for additional analysis on apples-to-oranges claims.

Mounting Focus on Data Brokers: Is More Regulation Coming? Thu, 24 Aug 2023 00:00:00 -0400 During the past year, there’s been a flurry of regulatory activity related to data brokers. Whether in Congress or state legislatures, at federal agencies or the White House, many policymakers are pushing in the direction of increased regulation. For those not following this issue closely, here’s a snapshot of some key developments, starting with some history:

Background on Data Broker Regulation

The debate surrounding data brokers and regulation isn’t new. For decades, policymakers and enforcers have raised concerns about the collection and sale of consumer data by these entities, citing the sensitive nature of the information and profiles that they sell, the use of this data in making consequential decisions about consumers, and the invisibility of most data brokers to the public. (See, e.g., here, here, and here.)

In the 1970s, Congress passed the Fair Credit Reporting Act (the nation’s first commercial privacy law) to regulate consumer reporting agencies (CRAs), an important subset of these entities. The FCRA sets forth data privacy and accuracy requirements when CRAs sell (and companies furnish and use) consumer data for decisions affecting people’s eligibility for credit, jobs, and insurance. The FCRA didn’t end the debate, however. Since then, some policymakers have pressed for broader regulation of data brokers, especially with the advent of mobile devices and other technological advances that enable data brokers to collect more detailed data about consumers, make more granular inferences and predictions, and sell this information widely. In response, data brokers have pointed to the beneficial services they provide, and have argued that existing laws (including the FCRA, the Gramm Leach Bliley Act, the FTC Act, and now numerous state privacy laws) are adequate to address any harms that occur.

Recently, this debate has accelerated, as shown by the increased regulatory activity we are seeing today. For some policymakers, the overturning of Roe v. Wade and its implications for reproductive privacy have added an important new dimension to the debate. On April 15, the White House convened a roundtable of government officials, academics, advocates, and other experts to discuss “harmful data broker practices” and provide further impetus for regulation.


So, what specific proposals are we seeing? Not surprisingly, some of them are coming from Congress. In July, we blogged about two bipartisan efforts to stop the government from purchasing consumers’ location and web browsing and search history from data brokers, absent a warrant or other due process measures. One of these proposals (an amendment to the House version of the National Defense Authorization Act) would restrict such purchases by DOD. Another (the Fourth Amendment is Not for Sale Act, now introduced in both the House and the Senate) would restrict such purchases more broadly across the federal government. Both bills are pending, with Congress now in recess.

Readers also may recall that the leading federal privacy bill (the bipartisan American Data Privacy and Protection Act) contains strict data broker provisions requiring online registration and a one-stop mechanism allowing consumers to delete data held by data brokers and prevent further collection by these entities. Other recent federal bills (e.g., the bipartisan DELETE Act) contain even stricter data broker requirements.

Federal Trade Commission

The FTC is also very active in this area. In a 2022 blogpost, an FTC official warned that the FTC will use the “full scope of its authorities” to stop the “illegal use and sharing” of consumers’ location, health, and other sensitive data. Soon after, the FTC filed a lawsuit against data broker Kochava, alleging that its sale of location data obtained from mobile devices harms consumers and is legally “unfair” because the data can reveal sensitive locations that consumers visit, such as reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities. In addition, the ANPR in the FTC’s Commercial Surveillance and Data Security Rulemaking is replete with references to data brokers and data sales, suggesting that this could be a focus of any rule it proposes.

Like Congressional efforts, the FTC’s actions here are pending. In Kochava, the court dismissed the FTC’s initial complaint due to what it viewed as the hypothetical nature of the FTC’s injury allegations, but the FTC has filed a new complaint (under seal). In the FTC’s rulemaking, the comment period for the ANPR closed last November, so the FTC could release a proposed rule any day now. We await news on both fronts.

California – SB 362

No privacy discussion would be complete without California. And sure enough, the California legislature is currently considering new data broker legislation. In brief, SB 362 would amend the state’s existing data broker law by establishing an “accessible deletion mechanism” where consumers can direct data brokers to delete their information. This would in turn trigger a ban on further data collection by these entities, unless consumers opt in. The law also would allow an “authorized agent” to request deletion for the consumer, require independent compliance audits every three years, and mandate regular reports to the public and to the California Privacy Protection Agency. Due to the broad definition of “data broker,” the bill would cover a wide array of entities, including members of the advertising industry.

If passed, this law would substantially up the ante for data brokers operating in California, and could spread to other states. Currently, eleven states have enacted comprehensive baseline privacy laws, but only a few have data broker laws, with mostly modest requirements. Not surprisingly, opposition to the bill is strong in the data broker and ad industries, which (according to news reports) argue that it will hurt anti-fraud efforts and the economy, and have launched an effort to defeat the bill. Because California’s legislature adjourns September 14, the window for action is closing soon.

Consumer Financial Protection Bureau

Finally, in what could be the most consequential data broker regulation of all, CFPB Director Rohit Chopra just announced (on the day of the White House roundtable) that the CFPB will soon launch a rulemaking to “modernize” the FCRA so that it reflects how today’s data brokers “build even more complex profiles about our searches, our clicks, our payments, and our locations” and “impermissibly disclose sensitive contact information” of people who don’t want to be contacted, such as domestic violence survivors.

Among other things, per Director Chopra, the CFPB is considering proposals to bring within the FCRA (1) a data broker’s sale of certain types of data (e.g., payment history, income, criminal records) because the data is “typically” used to make credit, employment, or certain other eligibility determinations and (2) credit header information, a major source of information for data brokers that has long been considered to fall outside the FCRA. Such proposals would dramatically extend the FCRA’s reach to a broader class of data brokers than are currently covered. According to Director Chopra, the CFPB will publish an outline of proposals and alternatives next month.


All of the above proposals are now pending, so it’s not clear whether they will reach fruition or what shape they will ultimately take. However, the sheer volume of activity shows that data brokers are in the spotlight and are likely to remain there for a while.

The FTC Collaboration Act: Benefits for the Business Community Wed, 23 Aug 2023 00:00:00 -0400 On October 10, 2022, the FTC Collaboration Act of 2021 became law. The Act’s stated purpose is to enhance cooperation between the Federal Trade Commission (FTC) and state attorneys general (AGs) in combatting unfair and deceptive practices. The Act requires the FTC to complete a study and issue a public report based on that study. Throughout the summer, the FTC accepted comments from interested stakeholders on a series of questions related to the roles and responsibilities of the FTC and AGs that best advance collaboration and consumer protection, and how to dedicate resources and implement accountability mechanisms to fulfill those goals.

As former state enforcers with approximately 40 years working in key consumer protection roles at AG offices, we shared our unique perspectives with the FTC in written comments. The potential benefits to the business community when enforcers with overlapping authority work together may not be apparent. Indeed, one might think that such collaboration would lead to increased enforcement and oversight of businesses’ activities. However, it is our experience that such collaboration can actually help eliminate duplicative enforcement and allow for more streamlined communication between the business and enforcement communities. To that end, we recommend increased transparency between the FTC, AGs, and business community including greater insight into the complaints stored in the FTC’s Consumer Sentinel database. While politics is increasingly causing a split in the AG community, the AGs and FTC remain united on key consumer protection enforcement priorities, which we expect to continue. As such, we believe greater collaboration will help ensure a fair marketplace and level playing field for the legitimate business community.

Led by the attorneys general of Connecticut, Illinois, New Hampshire, and Tennessee, a bipartisan group of 30 AGs also filed comments. The AGs note that the states, FTC, and other federal partners have worked together for decades to benefit businesses and individual consumers, and that the states benefit significantly from the FTC’s expertise, resources, and national reach, which facilitates cross-border enforcement. Regarding the Supreme Court’s AMG Capital Management, LLC v. FTC decision, some (including the authors of this blog) view the case as a nexus for collaboration between the states and the FTC. The AGs, however, expressed concerns, pointing out that the limitation on the FTC’s authority risks depriving victims of restitution in certain matters, and that states could lose the benefit of the FTC’s resources in delivering restitution to their residents.

Read our comments here:

FTC Warns That Deceptive AI Content Ownership Claims Violate the FTC Act Tue, 22 Aug 2023 00:00:00 -0400 The buzz around generative AI has raised many IP-related questions, such as the legality of using IP to train AI algorithms or ownership of AI-generated content. But the FTC warns that claims about content ownership don’t just give rise to IP concerns – they could also constitute FTC Act violations if they meet the unfair or deceptive standard in Section 5. (Click here and here for our take on other recent AI-related guidance from the FTC.)

In a recent business blog, the Agency lays out several practices that could trigger scrutiny and enforcement:

  • Promising full ownership but delivering a limited-use license. Telling consumers that they’re buying full rights to a digital product when in fact they’re just getting a limited-use license or being enrolled in a subscription service is likely to violate Section 5. The FTC warns companies against unilaterally changing their terms or undermining reasonable ownership expectations post-purchase, including in cases where the primary purchaser is deceased and survivors’ rights to the digital property are affected. This principle is hardly AI-specific. After all, the FTC has been bringing cases about deceptive offer terms and hidden negative options for decades – but it could be increasingly relevant today, in a context where consumers’ digital purchases live largely in the cloud and companies have more control over post-purchase access and use.
  • Failing to disclose use of IP in training data. Generative AI products that are trained on copyrighted or otherwise protected content should disclose that their outputs may include IP; failing to do so may be a deceptive practice under the FTC Act. Clear disclosures about the use of IP will help consumers and companies make informed choices about which AI products to use. For companies using generative AI tools for commercial purposes, such information could be particularly important, as they may be held liable for improperly including IP in their products.
  • Passing off AI content as human-generated content. Advertising a digital product as created by a person when it was generated through AI would be a clear example of false advertising and, again, aligns with decades of FTC enforcement activity. The prohibition stands even though some platforms may assure users that the generated content “belongs” to them.
  • Misleading creators about content ownership or use. When inviting content creators to upload content, platforms must be clear about ownership and access rights, as well as how the content will be used. If the platform will use the content to train AI algorithms or generate new content, this information must be clearly communicated up front.

Although these practices generally fall within well-established principles of unfairness and deception under Section 5, this blogpost highlights the FTC’s continued focus on all aspects and angles of the generative AI space. In short, expect extra scrutiny of any claims surrounding capabilities, features, ownership, and uses of AI tools and content. The summer may be finally cooling off, but regulators’ interest in AI is just heating up.

This Summer’s Hot Topic: AGs and AI Thu, 17 Aug 2023 00:00:00 -0400 This summer has been hot all around, but perhaps the hottest topic on the minds of state attorneys general (AGs) continues to be artificial intelligence (AI). As we recently heard from Colorado Attorney General Phil Weiser, AI is a big concern for regulators trying to understand all the ways in which AI permeates our daily lives in order to effectively regulate the algorithms that create the AI.

While the benefits of AI are clear and constantly expanding to different sectors, the AG community believes potential harms to consumers cannot be overstated. In addition to calling for transparency with the use of AI, AGs are grappling with the varied outputs of AI and are looking at tools they can use to address consumer concerns that deal with privacy, discrimination, and data security. At both the recent 2023 AG Alliance Annual Meeting and the NAAG Eastern Region Meeting, AGs heard from AI experts and stakeholders on the state of play for AI and potential tools they can use to curb consumer harms.

AG Alliance Annual Meeting

At the 2023 AG Alliance Annual Meeting, AGs focused on how they enhance and refine their approaches to consumer data and privacy to include AI. Attendees heard from two panels: (1) “The Evolving World of Consumers’ Data & Privacy,” which addressed the regulatory landscape of AI; and (2) “AI and the AG,” which was geared towards the role that an AG could play in preventing misconduct and maximizing the benefits of AI and its technologies.

AI requires substantial data. Therefore, according to panelists, we cannot have ethical and responsible AI without rules about data. Some uses of AI can be regulated by existing laws (a recurring theme throughout the panels). For example, health insurance providers, regardless of whether they rely on AI, are bound by HIPAA and must follow detailed privacy and security provisions to protect data, including data breach notifications. State UDAP laws have already been used to address AI. In 2020, then Vermont Attorney General T.J. Donovan filed a lawsuit against Clearview AI, alleging that it violated the Vermont Consumer Protection Act by using facial recognition technology to map the faces of Vermont residents (including children) and selling the data to private businesses, individuals, and law enforcement.

Additionally, New York City adopted Local Law 144, prohibiting employers or agencies from using an automated employment decision tool (AEDT) to make an employment decision unless the tool is audited for bias on an annual basis, the employer publishes summaries of the audit, and the employer provides notice to the applicants and employees who are subject to screening by the AEDT.

AGs were asked to hear from stakeholders on how each sector relies on AI and to refrain from relying on a “one size fits all” policy solution for AI. Using AI to make recommendations for a movie or song would require a different approach from using AI to make decisions in the lending or education sectors. Additionally, AGs were asked to consider collaboration and consistency with policymaking to reduce duplicative or disjointed rules between states. Finally, AGs heard that laws and regulations should be responsive to outcomes rather than the specific type of technology due to the ever-evolving nature of technology.

NAAG Eastern Region Meeting

At the NAAG Eastern Region Meeting, attendees heard about the role AI is playing in antitrust and consumer protection – as well as the all-important “Tong Tasting” of oysters. In addition to illustrating the dangers of AI with a fake audio recording of General Tong, General James and General Tong touched on the ways AI impacts markets, particularly how AI can lead to market dominance by large firms in antitrust. The increased concentration of industries can create a “big firm advantage,” as data is often proprietary and training costs are large, essentially creating a barrier to entry for smaller players.

On the consumer protection side, the panel noted that possible consumer safeguards include: (1) applying general state consumer protection laws to AI such as state UDAP laws analogous to the FTC Act; (2) using state privacy laws and opting out of AI use; and (3) drafting state/federal AI-specific legislation.

In the NAAG meeting, panelists noted that we are seeing a shift based on recent FTC guidance which focuses on generative AI. Echoing what we previously reported, the panelists stated that the FTC can enforce company pledges to manage the risks posed by AI. As such, the FTC emphasized that claims about AI should not mislead consumers. AI should also not be used for “bad things” such as fraud and scams, especially when they prey on vulnerable populations like the elderly. Similar to the sentiment expressed in the AG Alliance Meeting, businesses using AI have called for clear and consistent regulations. Businesses have expressed concerns about the interplay between AI and current regulatory schemes, such as private rights of action under state wiretapping laws.

In addition to being transparent about their AI practices, businesses can and should address the risks AI creates by:

  • Reviewing claims to ensure they are accurate and not exaggerated.
  • Figuring out who is responsible at each link in the AI chain.
  • Building compliance mechanisms into AI.

Kelley Drye will continue monitoring the AI regulatory landscape.

FTC Assesses Primary Purpose of Emails in CAN SPAM Enforcement Wed, 16 Aug 2023 00:00:00 -0400 As most people know – either from professional or personal experience – the CAN SPAM Act requires companies who send “commercial” email messages to give consumers an opportunity to opt out of receiving those messages in the future. The opt-out requirement does not apply to “transactional” messages, which generally facilitate an already agreed-upon transaction or update a customer about an ongoing transaction.

If a message contains more than one type of content, the “primary purpose” of the email is the deciding factor. That’s a fuzzy concept and it depends on how a consumer views an email, but the FTC has stated that it considers two key factors in this analysis. A message will likely be deemed commercial if (1) the subject line would lead the recipient to think it’s a commercial message, or (2) the bulk of the transactional or relationship content doesn’t appear at the beginning of the message.

Using these factors, marketers will often come up with subject lines that appear transactional and play around with how different types of content are presented in the email in order to avoid triggering the opt-out requirement. This week, the FTC announced a settlement with Experian over its email campaigns that demonstrates that the agency will look closely at emails that are purportedly transactional to make sure they aren’t actually commercial messages in disguise.

Some of the company’s emails told recipients that “this email was sent because it contains important information about your account” and others reassured them that “this is not a marketing email – you’re receiving this message to notify you of a recent change to your account.” Despite these statements, the FTC alleged that the emails were primarily designed to pitch new products or services – making the messages commercial – and that the emails did not include an unsubscribe link.

Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said: “You always have the right to unsubscribe from marketing messages, and the FTC takes enforcing that right seriously.” In addition to requiring Experian to comply with CAN SPAM, the proposed order requires the company to pay a $650,000 penalty.

If you haven’t looked at your email marketing practices recently, now may be a good time to do that. In particular, make sure you’re careful about how you interpret what constitutes a “transactional” message and how you determine the “primary purpose” of an email. Regardless of how you present the email, if the primary purpose is commercial, you need to make sure that recipients have the ability to opt out of receiving those messages.

NAD Reads into Emojis Fri, 11 Aug 2023 00:00:00 -0400 Earlier this year, Coca-Cola reformulated its Powerade beverage to include more electrolytes. In some ads, it boasted that the beverage now contained “50% more electrolytes vs. Gatorade Thirst Quencher.” One social media post featured a headline “Powerade vs. Gatorade Thirst Quencher” above a side-by-side comparison of the electrolyte and vitamin content of the two beverages. The caption read: “Don’t Underestimate our Electrolytes” followed by a flexed arm emoji. 💪

Stokely-Van Camp, the maker of Gatorade, wasn’t happy. 😤 Although the claims about electrolyte content were literally true, SVC argued that the ads implied that Powerade “delivers superior health, hydration, and performance benefits and through the flexed arm emoji, that drinking Powerade will make consumers stronger than Gatorade.” Despite this implication, SVC argued that Coca-Cola hadn’t provided evidence that the increase in electrolytes actually provides additional benefits for consumers.

NAD determined that “50% more” is a “powerful term” that “flags a significant difference” and could lead to high consumer expectations. In this case, the increase in Powerade’s electrolyte content represents only a 3% increase and a 2% increase in the daily values for sodium and potassium, respectively. “As such, although literally true, NAD determined that Coca-Cola’s ‘50% More’ claims mischaracterize the significance of the electrolyte increase and the nutrient difference” and could confuse consumers. 😕

What about the flexed arm emoji? 💪 SVC argued that it “conveyed the additional message that consumers who drink Powerade will be stronger than if they drink Gatorade.” Coca-Cola countered that the flexed arm emoji is “a common symbol for determination and persistence and that Gatorade employs the same emoji in its advertising for similar purpose.” As such, it reasoned that consumers aren’t likely to see that and think that they’re going to get stronger.

NAD noted that emojis “are a powerful source of messaging” that can add meaning to an ad. NAD acknowledged that the flexed arm emoji could stand for “determination and persistence,” but noted emojis can often convey multiple meanings. Consulting Emojipedia, NAD wrote that “a flexed arm emoji evoking the arm of a body builder can reasonably convey a message of strength and fortitude.” Thus, it could suggest Powerade helps make you stronger than Gatorade does. 🤔

This decision has broad implications for many ad campaigns. The ideas that even literally true claims can be challenged as misleading and that common emojis can convey claims that require substantiation may surprise many advertisers. 😵 Coca-Cola plans to appeal the decision, so we’ll need to wait to see whether the NARB reads as much into Powerade’s claims and emojis as NAD did. In the meantime, you can read about other cases dealing with emojis here and here.
