Researchers at the University of Chicago were not happy with the creeping erosion of privacy posed by facial recognition apps. So they did something about it.
They developed a tool that helps individuals fend off facial recognition systems that could appropriate their images without permission and identify them in massive databases.
Fawkes, named for the Guy Fawkes mask worn by the anarchist antihero of the “V for Vendetta” comics and film, makes subtle pixel-level alterations to images that, while invisible to the naked eye, distort the underlying features enough that the photos cannot be used by online image scrapers to train accurate recognition models.
“What we are doing is using the cloaked photo in essence like a Trojan Horse, to corrupt unauthorized models to learn the wrong thing about what makes you look like you and not someone else,” Fawkes co-creator Ben Zhao, a computer science professor at the University of Chicago, said. “Once the corruption happens, you are continuously protected no matter where you go or are seen.”
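The actual Fawkes system relies on face-specific feature extractors and perceptual constraints, but the core idea of “cloaking” can be sketched in a few lines of code: nudge an image, within an imperceptible pixel budget, so that its feature embedding drifts toward a different identity. The sketch below is illustrative only, not the project's code; the `cloak` function, the use of an off-the-shelf ResNet-18 as a stand-in feature extractor, and every parameter value are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Stand-in feature extractor. Fawkes uses face-recognition networks,
# but any embedding model illustrates the principle.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
extractor = torch.nn.Sequential(*list(model.children())[:-1])  # drop classifier head

def cloak(image, target_image, epsilon=0.03, steps=100, lr=0.01):
    """Add a small perturbation to `image` that pushes its feature
    embedding toward `target_image`'s embedding -- the 'wrong thing'
    a scraper's model would then learn about this face.
    Both tensors are shaped (1, 3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        target_feat = extractor(target_image).flatten(1)

    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        cloaked = (image + delta).clamp(0, 1)
        feat = extractor(cloaked).flatten(1)
        # Minimize the distance to the target identity's features.
        loss = F.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the change invisible: bound each pixel shift to +/- epsilon.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (image + delta).detach().clamp(0, 1)

# Hypothetical usage: cloaked = cloak(my_photo, someone_elses_photo)
```

The key design point is the `epsilon` bound: the optimization is free to corrupt what a model “sees,” but never to change any pixel by more than a few percent, which is why the cloaked photo looks identical to a human viewer.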
The advent of facial recognition technology carried with it the promise of great benefits for society. It helps us protect our data and unlock our phones, organizes our massive photo collections by matching names with faces, makes air travel more tolerable by cutting waits at ticket and baggage check-ins, and helps the visually impaired recognize facial reactions in social situations.
There are obvious advantages for law enforcement agencies, which use facial recognition to detect and catch bad actors, monitor transactions at ATMs, and find missing children.
It is also helping businesses crack down on theft, schools track student attendance, and, in China, restaurant customers leave their credit cards behind and pay for meals with just a smile. The National Human Genome Research Institute is even using facial recognition, with a near 100 percent success rate, to identify symptoms of a rare disease that reveals itself in facial changes.
But concerns abound as well. With few federal regulations guiding the use of such an invasive technology, abuse is inevitable. The FBI has compiled a facial recognition database of more than 412 million images. Some of the people pictured, to be sure, are criminals. But not all. The notion of an increasingly surveilled population suggests to many the slow erosion of our privacy and, along with it, possibly our freedoms and rights. A society scrutinized under the watchful eye of Big Brother evokes images of totalitarian states, imagined, as in “1984,” and real, as in North Korea.
Concerns have been raised about the consequences of misidentification, especially in cases involving serious crime, and about the capacity for abuse when corrupt governments or rogue police agents have such tools at hand. The programs are also simply wrong at times: troubling recent studies have found that recognition systems have particular difficulty correctly identifying women of color.
Earlier this year, The New York Times reported on the controversial activity of Clearview AI, an app that claims to have compiled a database of more than 3 billion images from sources such as Facebook, YouTube, and Venmo. All of this was done without the permission of the subjects. Linked to a pair of augmented-reality eyeglasses, the app could let members of law enforcement and security agencies walk down the street and identify anyone they see, along with their names, addresses, and other vital information.
The tool certainly can be used for good. Federal and state law enforcement officers, according to the Times, say the app has helped solve cases of murder, shoplifting, identity theft, credit card fraud, and child sexual exploitation.