Deepfake Best Practices Amid Developing Legal Practices
Deepfake technology, the use of synthetic image, video, or audio content, has improved significantly over the past few years, opening the door to mainstream commercial applications. The technology has beneficial uses, such as protecting the identities of whistleblowers or victims, and harmful ones, such as non-consensual pornography and elder fraud. Meanwhile, the advertising industry is already demonstrating the great potential of synthetic media as a marketing tool.
From celebrities licensing their likenesses to brands personalizing online shopping experiences, innovative use cases are emerging. The legal framework governing deepfake technology is still taking shape. Several states, including California, Virginia, and Texas, have laws that protect against certain uses of the technology (primarily non-consensual pornography and election interference), and legislation is pending in other states. As the legal and regulatory framework continues to develop, we ought to be proactive in setting the standard for how this technology is used.
Law360 published the article “Deepfake Best Practices Amid Developing Legal Practices,” co-authored by partner John Villafranco. The article analyzes deepfake use cases, describes the legal tools available to protect against harmful uses of the technology, and suggests best practices for its responsible use.
To read the article, please click here.