Clearview AI rose to prominence earlier this year thanks to media reports, including a high-profile investigation from The New York Times in January, revealing that its technology is highly accurate and in widespread use among both law enforcement agencies and private sector companies. The company’s tool is effective because Clearview scraped photographs and other data from social media sites in violation of those platforms’ rules, building a database of more than 3 billion images that are matched against an uploaded photo using algorithms trained with machine learning. Tech companies have sent numerous cease-and-desist orders to Clearview over the past five months, but it’s unclear whether the company has ever complied with demands to remove the data it siphoned from public social network pages and posts. Clearview has responded to criticism by claiming it would never offer its tool to the public for individual use, and more recently said it would stop selling its technology to private companies and focus only on law enforcement. However, numerous media reports have found that Clearview provided its product to investors, high-profile executives, and other individuals as part of promotional trial periods, and critics fear its facial recognition system lays the foundation for rampant civil rights violations.
“By building a mass database of billions of faceprints without our knowledge or consent, Clearview has created the nightmare scenario that we’ve long feared, and has crossed the ethical bounds that many companies have refused to even attempt. Neither the United States government nor any American company is known to have ever compiled such a massive trove of biometrics,” Wessler explains. “Adding fuel to the fire, Clearview sells access to a smartphone app that allows its customers — and even those using the app on a trial basis — to upload a photo of an unknown person and instantaneously receive a set of matching photos.”
One avenue to combat Clearview’s conduct that may prove effective is BIPA, the Illinois privacy law and one of the only pieces of US legislation that protects facial recognition data from misuse. It’s the same law that earlier this year pushed Facebook to a $550 million settlement over its unauthorized use of facial recognition on photos uploaded to its social network. The ACLU says “Clearview’s actions clearly violate BIPA,” which requires a company to inform citizens and obtain written consent whenever any biometric identifier, be it a faceprint or a fingerprint, is collected and stored for any reason.

Clearview tried to sidestep BIPA earlier this month when it announced it would no longer sell its technology to private companies. Clearview made the announcement as part of a separate BIPA-based lawsuit in Illinois, framing the decision as a voluntary action alongside its choice to no longer provide its product to any organization in the state, whether a private company or a law enforcement agency. The company also said it would no longer collect data from Illinois-based IP addresses, would take additional measures to prevent data collection on Illinois residents, and would build an opt-out tool, but it’s not clear what steps, if any, Clearview has actually taken in the weeks since.
By continuing to store information on Illinois residents, Clearview may still be subject to BIPA, giving the ACLU an opportunity to file another suit. The ACLU says it’s teaming up with its local Illinois chapter and the law firm Edelson PC, and it’s asking a court to order Clearview to delete all biometric data it has stored on Illinois residents and to cease collecting any new data until it can comply with BIPA’s consent rules. “If allowed, Clearview will destroy our rights to anonymity and privacy — and the safety and security that both bring,” Wessler writes. “People can change their names and addresses to shield their whereabouts and identities from individuals who seek to harm them, but they can’t change their faces.”