NJ Employers On Notice: Use of AI Tools Must Comply With New Jersey’s Law Against Discrimination
Unlike some other jurisdictions, New Jersey has not yet enacted legislation directly governing the use of AI tools in employment practices. But earlier this month, New Jersey Attorney General Matthew J. Platkin announced that the Division on Civil Rights (“DCR”) had launched a new Civil Rights and Technology Initiative to address the risks of discrimination created by the use of artificial intelligence (“AI”) and automated decision-making tools in the workplace.
In conjunction with this initiative, the New Jersey Office of the Attorney General and the DCR jointly published a Guidance Memorandum to explain how the New Jersey Law Against Discrimination (“LAD”) applies in the face of an advancing technological landscape.
The message is clear: the LAD applies to the use of AI tools to the same degree as it does to human decisions. While acknowledging that AI and automated decision-making tools can serve an important function in reducing bias, the Guidance Memorandum explains that bias can nonetheless be introduced into automated decision-making tools if systemic racism, sexism, or other inequalities are not accounted for when designing, training, and deploying these tools.
Whether intended or not, implementation of AI in the workplace can result in discriminatory outcomes for employees. Critically, an employer cannot be shielded from liability by pointing to the use of an outside or third-party automated decision-making tool.
Practically, what does all of this mean for employers?
There is no question that automated tools can produce a variety of beneficial outcomes when implemented appropriately by employers. However, employers should be aware that bias can be introduced into automated decision-making tools, with or without intent, and that, as a result, unchecked use of automated tools will be subject to the same level of scrutiny under the LAD as a human decision-maker.
To avoid liability, employers should carefully vet their technology vendors and familiarize themselves with the tools they have implemented. Asking questions and maintaining human oversight over these tools will be critical to avoiding discriminatory outcomes and warding off potential claims.
The Kelley Drye team will continue to monitor this important topic for any additional guidance from the state, enforcement efforts, or private litigation that sheds light on how best to navigate the use of AI and automated decision-making tools in the workplace. In the interim, please contact your Kelley Drye attorney with any questions.
Tags: NJLAD