NTIA’s Listening Sessions on Privacy and Civil Rights: What’s the Significance?
In case you missed it, last week (on November 30), the National Telecommunications and Information Administration (NTIA) announced that it would convene a series of virtual listening sessions on privacy, equity, and civil rights. According to NTIA, the sessions (scheduled for December 14, 15, and 16) will provide data for a report on “the ways in which commercial data flows of personal information can lead to disparate impact and outcomes for marginalized or disadvantaged communities.”
NTIA cites the following examples to illustrate how data collection, “even for legitimate purposes,” leads to disparate impacts:
- Digital advertising offers content and opportunities based on proxy indicators of race, gender, disability and other characteristics, perpetuating historical patterns of discrimination.
- Insurance companies use information such as neighborhood safety, bankruptcy, and gun ownership to infer who will need expensive health care, warranting higher premiums.
- Universities predict which students will struggle academically based on factors that include race.
Why is this News?
As our readers may have noticed, NTIA is hardly the first agency or constituency to draw the link between data collection and discrimination. In 2013, Harvard Professor Latanya Sweeney published a groundbreaking study showing racial discrimination and stereotyping in online search and ad delivery. In 2014, the FTC hosted a workshop, followed by a report (Big Data: A Tool for Inclusion or Exclusion?) detailing the problem and making recommendations for companies and researchers. In recent years, scores of studies and conferences have examined the discriminatory assumptions embedded in algorithms and artificial intelligence (AI). And civil rights groups have raised concerns for years and, in 2019, obtained a historic settlement with Facebook to stop discrimination on its online advertising platform.
NTIA’s announcement is nevertheless significant for two reasons. First, by its own description, NTIA is the President’s principal advisor on information policy issues, responsible for evaluating the impact of technology on privacy and the sufficiency of existing privacy laws. Further, its announcement states that the listening sessions are designed to “build the factual record for further policy development in this area.” For these reasons, the notice has been heralded as the Administration’s “first move” on privacy and a possible attempt to revive stalled efforts in Congress to enact a federal privacy law.
Second, in case there was any doubt, NTIA’s announcement affirms that the link between privacy and civil rights is now a widely accepted policy position, and will remain front-and-center in any debate about whether to enact a comprehensive federal privacy law. Whereas once there were questions about whether civil rights provisions should be “added” to a privacy law, now they’re essential building blocks.
This is true not only among Democrats, but among Republicans too. For example, provisions related to discrimination and/or algorithmic decision-making appear in recent privacy legislative proposals not just from Representative Eshoo and Senator Cantwell, but also from Senator Wicker and the Republican members of the House Energy and Commerce (E&C) Committee. The Republican E&C bill is especially notable for how much it leans into the issue – prohibiting data practices that “discriminate against or make an economic opportunity unavailable on the basis of race, color, religion, national origin, sex, age, political ideology, or disability or class of persons.”
But What Does this Mean for Companies Today?
You may be wondering – what does this mean for companies now, with Congress still (endlessly) debating whether to pass federal privacy legislation? It means that:
- Data discrimination is on everyone’s radar, regardless of whether Congress finally decides to pass a federal privacy law.
- Companies should expect more enforcement – even now, under existing laws – challenging data practices that lead to discriminatory outcomes. Such laws include the FTC Act (recently used to challenge racial profiling by an auto dealer), state UDAP laws, the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and (of course) the civil rights laws.
- To steer clear of discrimination (and any allegations of discrimination), companies should test their data systems and use of algorithms and AI for accuracy and fairness before using them in the real world.
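For readers wondering what “testing for fairness” can look like in practice, one common screening heuristic is the “four-fifths rule” used in disparate-impact analysis: flag any group whose favorable-outcome rate falls below 80% of the highest group’s rate. The sketch below is purely illustrative – the group names, decision data, and threshold are hypothetical, and this is one heuristic among many, not a substitute for legal or statistical review.

```python
# Illustrative sketch of a four-fifths-rule screening check.
# All data and names below are hypothetical.

def selection_rates(outcomes):
    """Favorable-outcome rate per group.

    outcomes: dict mapping group name -> list of 0/1 decisions
              (1 = favorable outcome, e.g. approved).
    """
    return {group: sum(ys) / len(ys) for group, ys in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """True for each group whose rate is at least `threshold` times
    the highest group's rate; False flags possible disparate impact."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

# Hypothetical approval decisions by group:
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 40% approved
}
print(four_fifths_check(decisions))
# group_b's rate (0.40) is 50% of group_a's (0.80), below the 0.8 threshold
```

A check like this is a pre-deployment screen, not a compliance determination – a flagged disparity calls for deeper statistical and legal analysis, not automatic rejection of the system.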
We will continue to monitor developments on this issue and post updates as they occur.