Walk around any city and your face will be caught on camera, and it may even be added to a facial-recognition database. That data can now be processed in real time, yet the regulations governing how it can be used are minimal and generally weak.
The military, law-enforcement agencies, and commercial corporations are exploiting facial recognition and artificial intelligence (AI) to collect personal data, yet the legal frameworks controlling how that data can be used have not kept pace with the technology.
In May 2019, San Francisco became the first US city to ban the use of facial recognition by its authorities. However, the city ordinance did not prevent private companies from using facial ID in ways that people find objectionable.
In July 2019, the first independent evaluation of the use of facial recognition by London’s Metropolitan Police warned that it is “highly possible” the system would be ruled unlawful if challenged in court.
As facial recognition becomes increasingly common, concerns are also growing about the gender and racial bias embedded in many systems. Writing in “FaceApp Makes Today’s Privacy Laws Look Antiquated” (The Atlantic, 20 July 2019), Tiffany C. Li, a Fellow at Yale Law School’s Information Society Project, puts the onus on the tech companies themselves:
“Developers need to go further and build actual privacy protections into their apps. These can include notifications on how data (or photos) are being used, clear internal policies on data retention and deletion, and easy workflows for users to request data correction and deletion. Additionally, app providers and platforms such as Apple, Microsoft, and Facebook should build in more safeguards for third-party apps.”
All well and good, but misuse, misappropriation, and mistaken identity demand legislation and regulation, including stronger privacy laws that address the harms inherent in these technologies. In Li’s opinion:
“To deal with privacy risks in the larger data ecosystem, we need to regulate how data brokers can use the personal information they obtain. We need safeguards against the practical harms that invasions of privacy can cause; that could mean, for example, limiting the use of facial-recognition algorithms for predictive policing. We also need laws that give individuals power over data they have not voluntarily submitted.”
In short, global corporations play by their own rules and require oversight. The problem is how to guarantee compliance.