Invasion of Privacy Suit Filed Against Facial Recognition Photo Harvester

Clearview AI facial recognition software logo on the glowing screen and blurred faces from social media on the background. Photo Source: Timon - stock.adobe.com

Did you post a picture of your trip to Paris on Twitter? How about one that shows your whole family smiling at your niece’s first birthday party? Did you put that one on Facebook and tag all your relatives? If so, the faces of those smiling people could be among the three billion photos that are now stored in an enormous facial-recognition database. The Orwellian possibilities of this technology have prompted an invasion of privacy lawsuit from civil liberties groups.

The complaint, filed March 9 in Alameda County Superior Court by two civil rights organizations and four individual privacy activists, seeks an injunction to prevent Clearview AI from collecting any more images in California, arguing that its existing trove was secretly amassed and/or used without consent. The lawsuit also asks the court to order Clearview AI to press "delete" on all existing face scans in its files. It targets the New York-based company whose website describes its product as "a new research tool used by law enforcement agencies to identify perpetrators and victims of crimes."

Over 2,000 law enforcement agencies already use Clearview AI's app, even as concerns grow over the scope and propriety of its use. The company website states it "has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers…and victims of crimes, including child sex abuse and financial fraud." But others see darker potential as national alarm mounts over inappropriate and frightening uses of the algorithm.

Sejal Zota, legal director of Just Futures Law and a lawyer for the plaintiffs, told the San Francisco television station KPIX that Clearview AI's algorithm technology is "a terrifying leap toward a mass surveillance state where people's movements are tracked the moment they leave their homes."

One of the individual plaintiffs, Steven Renderos, executive director of MediaJustice, told the Los Angeles Times, “While I can leave my cellphone at home (and) I can leave my computer at home if I wanted to, one of the things that I can’t really leave at home is my face.”

Clearview AI, however, explains on its website that it only makes faceprints. It says that the company, founded in 2017, is an “after-the-fact research tool,” and “not a surveillance system.” It just “uploads images from crime scenes and compare(s) them to publicly available images.”

But exactly what is a crime scene? Will Clearview AI’s scrapbooks help identify protesters who are exercising their First Amendment rights? Will it target those whose activities are critical of US Immigration and Customs Enforcement (ICE) and invade local communities where “over-policing” is common? Will profiling abilities lead to disproportionate arrests of those from certain demographics? Might its photos cause false arrests due to misidentification, a serious problem with persons of color? And what about hacks? Could stalkers get into its database? How will it handle protected political speech?

Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, adds an even more chilling assessment. He is quoted in a New York class action complaint as saying, "The weaponization possibilities of this are endless. Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail."

The complaint also alleges that the massive database not only violates the individual privacy rights of Californians, but that its "mass surveillance technology disproportionately harms immigrants and communities of color." There is data behind the plaintiffs' claims. Clearview AI's database far exceeds even the FBI's, which held only 641 million pictures of US citizens last year.

Prominent First Amendment lawyer Floyd Abrams, who represents Clearview AI, issued a statement that said, “Clearview AI complies with all applicable law and its conduct is fully protected by the First Amendment.”

Currently, no federal laws govern the use of controversial face-scraping software. But several cities, police departments, and major technology companies have adopted laws or policies to protect the privacy of their populations. San Francisco was the first city to ban its use. In February 2020, YouTube sent a cease-and-desist notice, and Facebook demanded that Clearview stop scraping photos because the practice violates the company's policies.

A New York Times investigation in January 2020 led to the introduction of a bipartisan bill to limit how federal law enforcement agencies can use technology such as Clearview AI's. "The Facial Recognition Technology Warrant Act," introduced by Sen. Chris Coons (D-DE) and Sen. Mike Lee (R-UT), would require agencies such as the FBI and ICE, which regularly scans millions of drivers' licenses, to get a warrant before using the software for ongoing surveillance. The website Congress.gov does not list the bill as being reintroduced in the current 117th Congress.

Local law enforcement would not be covered by the legislation, and several cities, including San Francisco, Oakland, Portland, and Somerville, Massachusetts, have already prohibited their governments from using the technology. In addition, Illinois, California, and Washington have enacted legislation that limits its use. The technology is now illegal in Canada, and the company was ordered to remove all Canadian faces from its database. In January, the European Union said Clearview AI’s data processing violated the General Data Protection Regulation.

Clearview AI was sued four times last year. An Illinois complaint, filed by the American Civil Liberties Union in 2020, argues that the company's actions violated the state's Biometric Information Privacy Act. A similar invasion-of-privacy class action, filed against Facebook in 2015, resulted in a $650 million settlement for users who alleged the company created and stored their face scans from tags without obtaining permission. Additional lawsuits are pending in New York and now California.

Recent Clearview AI activities raise additional concerns. A news leak in February showed that its clients were not limited to law enforcement as stated on its website. Customer records showed retailers such as Macy’s and Best Buy, along with universities and charities, used its services as well.

If the case goes before a jury, how will juror expectations play into their decision-making? Will they have so completely adapted to the digital age that Clearview AI’s algorithm is just another app, or will they think that 2021 is a new, more frightening version of 1984?

Maureen Rubin
Maureen is a graduate of Catholic University Law School and holds a Master's degree from USC. She is a licensed attorney in California and was an Emeritus Professor of Journalism at California State University, Northridge specializing in media law and writing. With a background in both the Carter White House and the U.S. Congress, Maureen enriches her scholarly work with an extensive foundation of real-world knowledge.