
Planned Expansion of Facial Recognition by US Agencies Called ‘Disturbing’

By Julia Conley | Common Dreams

Digital rights advocates reacted harshly Thursday to a new internal U.S. government report detailing how ten federal agencies have plans to greatly expand their reliance on facial recognition in the years ahead.

The Government Accountability Office surveyed federal agencies and found that ten have specific plans to increase their use of the technology by 2023—surveilling people for numerous reasons, including identifying criminal suspects, tracking government employees’ levels of alertness, and matching the faces of people on government property against names on watch lists.

The report (pdf) was released as lawmakers face pressure to pass legislation to limit the use of facial recognition technology by the government and law enforcement agencies.

Sens. Ron Wyden (D-Ore.) and Rand Paul (R-Ky.) introduced the Fourth Amendment Is Not for Sale Act in April to prevent agencies from using “illegitimately obtained” biometric data, such as photos from the software company Clearview AI. The company has scraped billions of photos from social media platforms without approval, and its technology is currently used by hundreds of police departments across the United States.

The bill has not received a vote in either chamber of Congress yet.

The plans described in the GAO report, tweeted law professor Andrew Ferguson, author of “The Rise of Big Data Policing,” are “what happens when Congress fails to act.”

Six agencies including the Departments of Homeland Security (DHS), Justice (DOJ), Defense (DOD), Health and Human Services (HHS), Interior, and Treasury plan to expand their use of facial recognition technology to “generate leads in criminal investigations, such as identifying a person of interest, by comparing their image against mugshots,” the GAO reported.
DHS, DOJ, HHS, and the Interior all reported using Clearview AI to compare images with “publicly available images” from social media.
The DOJ, DOD, HHS, Department of Commerce, and Department of Energy said they plan to use the technology to maintain what the report calls “physical security,” by monitoring their facilities to determine whether an individual on a government watchlist is present.
“For example, HHS reported that it used [a facial recognition technology] system (AnyVision) to monitor its facilities by searching live camera feeds in real-time for individuals on watchlists or suspected of criminal activity, which reduces the need for security guards to memorize these individuals’ faces,” the report reads. “This system automatically alerts personnel when an individual on a watchlist is present.”
The Electronic Frontier Foundation said the government’s expanded use of the technology for law enforcement purposes is one of the “most disturbing” aspects of the GAO report.
“Face surveillance is so invasive of privacy, so discriminatory against people of color, and so likely to trigger false arrests, that the government should not be using face surveillance at all,” the organization told MIT Technology Review.
According to the Washington Post, three lawsuits have been filed in the last year by people who say they were wrongly accused of crimes after being mistakenly identified by law enforcement agencies using facial recognition technology. All three of the plaintiffs are Black men.
A federal study in 2019 showed that Asian and Black people were up to 100 times more likely to be misidentified by the technology than white men. Native Americans had the highest false identification rate.
Maine, Virginia, and Massachusetts have banned or sharply curtailed the use of facial recognition systems by government entities, and cities across the country including San Francisco, Portland, and New Orleans have passed strong ordinances blocking their use.
But many of the federal government’s planned uses for the technology, Jake Laperruque of the Project on Government Oversight told the Post, “present a really big surveillance threat that only Congress can solve.”

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.



As U.S. Government Report Reveals Facial Recognition Tech Widely Used, WEF-Linked Israeli Facial Recognition Firm Raises $235 Million

By Derrick Broze

In June the U.S. Government Accountability Office released a report detailing the widespread use of facial recognition technology, including law enforcement using databases of faceprints from government agencies and private firms. Privacy and civil rights organizations have been warning for the last few years that the use of facial recognition technology was a digital Wild West with little to no regulation determining the limits of the tech.

Now, the GAO’s new report shows that at least twenty of the forty-two U.S. government agencies surveyed have used the technology. These departments include those associated with law enforcement – the FBI, Secret Service, US Immigration and Customs Enforcement, US Capitol Police, Federal Bureau of Prisons, and the Drug Enforcement Administration – as well as less obvious departments such as the U.S. Postal Service, the Fish and Wildlife Service, and NASA.

Six U.S. agencies admitted to using facial recognition on people who attended the protests after the killing of George Floyd in May 2020. The report states that the agencies claim they only used the tech on people accused of breaking the law.

“Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees,” the report said. “These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy.”

The GAO calls for increased training for law enforcement, stating that such training could “reduce risks associated with analyst error and decision-making; understand and interpret the results they receive; raise awareness of cognitive bias and improve objectivity; and increase consistency across agencies.” The GAO also calls for agencies to implement controls to better track what systems their employees are using.

While some of the U.S. government agencies have their own databases, the FBI’s database of faceprints is likely the most extensive, with some estimates at over 100 million faceprints. The U.S. government’s top law enforcement agency has been fighting to keep the database a secret since at least 2013.

Agencies have also used facial recognition databases from Amazon Rekognition, BI SmartLink, Giant Oak Social Technology, Clearview AI, and Vigilant Solutions. By far, government agencies used technology from Clearview and Vigilant the most. The report provides further insight:

“Moreover, federal law enforcement can use non-government facial recognition service providers, such as Vigilant Solutions and Clearview AI. For example, law enforcement officers with a Clearview AI account can use a computer or smartphone to upload a photo of an unknown individual to Clearview AI’s facial recognition system. The system can return search results that show potential photos of the unknown individual, as well as links to the site where the photos were obtained (e.g., Facebook). According to Clearview AI, its system is only used to investigate crimes that have already occurred and not for real-time surveillance.”

The US Postal Inspection Service said it has used Clearview AI’s software to help track down people suspected of stealing and opening mail and stealing from Postal Service buildings. Altogether, ten agencies used Clearview AI between April 2018 and March 2020. The U.S. Capitol Police used the company’s tech to investigate suspects from the event at the Capitol on January 6th.

TLAV has previously reported on the dangers associated with facial recognition technology, and specifically, how Clearview AI’s technology was being used to target so-called domestic extremists.

In 2020, the NY Times described Clearview’s efforts to gather, store, and sell faceprint data as “the end of privacy as we know it,” and they are not wrong. This company has been capturing billions of faceprints from online photos and now claims to have the world’s largest facial recognition database. This gives Clearview the opportunity to sell customers access to all our faces to secretly target, identify, and track any of us. This could be for marketing and advertising purposes, but it could also be for government and law enforcement surveillance of activists, journalists, and organizers who are engaged in constitutionally protected activity. As the Mind Unleashed reported, Clearview is collecting data from unsuspecting social media users, and the Chicago Police Department (CPD) is using the controversial facial recognition tool to pinpoint the identity of unknown suspects.

Clearview said in May it would stop selling its technology to private companies and instead provide it for use by law enforcement only – they have thus far made their technology available to some 2,400 law enforcement agencies across the United States. The American Civil Liberties Union has filed a lawsuit against Clearview, alleging that the company violated Illinois’ Biometric Information Privacy Act (BIPA), a state law that prohibits capturing individuals’ biometric identifiers without notice and consent.

While Vigilant Solutions is less well-known than Clearview, they are an essential part of the growing surveillance apparatus operated by private industry and shared with U.S. government agencies. The company is listed in the GAO report for their role in facial recognition technology, but they are widely known for their database of license plate records. In 2018 it was revealed that Vigilant Solutions signed a contract with U.S. Immigration and Customs Enforcement (ICE) making the controversial agency the latest of several federal agencies that have access to billions of license plate records that can be used for real-time location tracking.

Vigilant Solutions has more than 2 billion license plate photos in its database thanks to partnerships with vehicle repossession firms and local law enforcement agencies whose vehicles are equipped with cameras. Local law enforcement agencies typically use some version of an Automatic License Plate Reader (ALPR). ALPRs gather license plates, times, dates, and locations that can be used to create a detailed map of what individuals are doing. The devices can be attached to light poles or toll booths, as well as mounted on top of or inside law enforcement vehicles.
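To make concrete why a database of plate reads doubles as a location tracker, here is a minimal sketch of the kind of record an ALPR emits and how a movement timeline falls out of it. The field names are illustrative assumptions, not Vigilant Solutions’ actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str             # normalized plate text, e.g. "ABC1234" (hypothetical)
    captured_at: datetime  # timestamp of the read
    latitude: float
    longitude: float
    camera_id: str         # light pole, toll booth, or patrol-car unit

def location_history(reads: list[PlateRead], plate: str) -> list[PlateRead]:
    # Filtering one vehicle's reads and sorting them by time yields
    # exactly the "detailed map of what individuals are doing"
    # described above.
    return sorted(
        (r for r in reads if r.plate == plate),
        key=lambda r: r.captured_at,
    )
```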

AnyVision, Softbank, and the Push Towards a Technocratic Surveillance Grid

While the GAO report stands as a warning to anyone paying attention, the reality is that facial recognition technology is already ubiquitous. Despite the warnings of privacy organizations, the public has blindly walked into an era of facial recognition for unlocking smartphones, purchasing groceries, and playing video games. Generally speaking, the public seems downright ignorant of the attacks on privacy taking place every single day.

The GAO report notes that places like San Francisco and Portland, Oregon have banned police from using facial recognition technology, and Amazon currently has a moratorium on selling their Rekognition program to law enforcement. Most recently, Maine has passed what is being called the strongest law against facial recognition in the country. (The law does allow law enforcement to make use of the federal databases mentioned in the GAO report.)

Will these steps be enough to stem the tide of facial recognition cameras intruding into every aspect of your life? Not likely.

In the month since the release of the GAO report, we have seen Israeli facial recognition firm AnyVision raise $235 million in startup funding. AnyVision uses Artificial Intelligence techniques to identify people based on their faces. TechCrunch notes that “AnyVision said the funding will be used to continue developing its SDKs (software development kits), specifically to work in edge computing devices — smart cameras, body cameras, and chips that will be used in other devices — to increase the performance and speed of its systems.”

AnyVision has not been without controversy. A report in 2019 alleged that AnyVision’s technology was being secretly used by the Israeli government to run surveillance on Palestinians in the West Bank. AnyVision denied the claims. Another report published in The Markup examined public records for AnyVision, including a user guidebook from 2019, which showed the company is collecting vast amounts of data. One report involved tracking children in a school district in Texas. AnyVision collected 5,000 student photos in just seven days.

An April report from Reuters detailed how many companies are using AnyVision’s technology today, including hospitals like Cedars Sinai in Los Angeles, retailers like Macy’s, and energy giant BP. AnyVision was also the subject of a New York Times report in 2020 which highlighted how the company was partnering with Israel’s Defense Ministry to use its facial recognition technology to “detect COVID-19 cells”.

As further evidence that companies offering facial recognition technology are here to stay – and play a vital role in the Great Reset agenda – we need look no further than the institutions investing in AnyVision. The latest round of fundraising is being co-led by SoftBank’s Vision Fund 2 and Eldridge. Interestingly, AnyVision’s CEO Avi Golan is a former operating partner at SoftBank’s investment arm. SoftBank is also a partner organization of the World Economic Forum, the international public-private organization pushing for The Great Reset.

This small detail matters because the technocratic agenda being promoted by the fine folks at the WEF will absolutely involve a world of AI and facial recognition. The technology is ostensibly being used to catch criminals for the moment, but it is also ripe for abuse by law enforcement agencies. Not to mention the larger role the technology will play in implementing future “social credit score” schemes, as seen in China.

The U.S. GAO is right to warn about the widespread use of this dangerous technology, but the fact is that it is already pervasive. It has become extremely difficult to avoid having your faceprint stolen and stored by both governmental agencies and private organizations. If we are to regain any semblance of privacy we must find a way to put an end to this technology before it is too late.

Source: The Last American Vagabond

Visit TheLastAmericanVagabond.com. Subscribe to TLAV’s independent news broadcast on iTunes. Follow on Facebook, Twitter, and Minds. Support at PayPal or with Bitcoin.




Image Cloaking Tool Thwarts Facial Recognition Programs

Credit: University of Chicago

Researchers at the University of Chicago were not happy with the creeping erosion of privacy posed by facial recognition apps. So they did something about it.

They developed a program that helps individuals fend off programs that could appropriate their images without their permission and identify them in massive database pools.

Fawkes, named after the fictional anarchist who wore a mask in the “V for Vendetta” comics and film, makes subtle pixel-level alterations to images that, while invisible to the human eye, distort the image enough that it cannot be utilized by online image scrapers.

“What we are doing is using the cloaked photo in essence like a Trojan Horse, to corrupt unauthorized models to learn the wrong thing about what makes you look like you and not someone else,” Fawkes co-creator Ben Zhao, a computer science professor at the University of Chicago, said. “Once the corruption happens, you are continuously protected no matter where you go or are seen.”
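Fawkes’ actual optimization is more involved, but the core idea, a bounded pixel perturbation that is imperceptible to people yet shifts what a model learns about a face, can be sketched in a few lines. This is an illustrative adversarial-style nudge under stated assumptions, not the University of Chicago team’s real algorithm:

```python
import numpy as np

def cloak(image: np.ndarray, target_direction: np.ndarray,
          epsilon: float = 0.03) -> np.ndarray:
    """Apply a tiny, bounded perturbation to an image in [0, 1].

    image: H x W x 3 array of pixel values scaled to [0, 1].
    target_direction: gradient-like array (same shape) pointing toward
        a different identity in a recognizer's feature space; computing
        it requires a feature extractor and is omitted here.
    epsilon: per-pixel budget; a few percent is typically below what
        eyes notice, but enough to mislead a model trained on the photo.
    """
    perturbation = epsilon * np.sign(target_direction)
    return np.clip(image + perturbation, 0.0, 1.0)
```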

The advent of facial recognition technology carried with it the promise of great benefits for society. It helps us protect our data and unlock our phones, organizes our massive photo collections by matching names with faces, makes air travel more tolerable by cutting the wait at ticket and baggage check-ins, and is helping the visually impaired recognize facial reactions in social situations with others.

There are obvious advantages for law enforcement who use facial recognition to detect and catch bad actors, track transactions at ATMs, and find missing children.

It is also helping businesses crack down on theft, tracking student attendance in schools, and, in China, allowing customers to leave their credit cards behind and pay for meals with just a smile. And the National Human Genome Research Institute is even using facial recognition, with a near 100-percent success rate, to identify symptoms of a rare disease that reveals itself in facial changes.

But concerns abound as well. With few federal regulations guiding the use of such an invasive technology, abuse is inevitable. The FBI has compiled a database exceeding 412 million images. Some of the people pictured, to be sure, are criminals. But not all. The notion of an increasingly surveilled population suggests to many the slow erosion of our privacy and, along with it, possibly our freedoms and rights. A society increasingly scrutinized under the watchful eye of Big Brother evokes images of totalitarian societies, imagined, as in “1984,” and real, as in North Korea.

Concerns have been raised about the consequences of misidentification, especially in situations involving serious crime, as well as the capacity for abuse when corrupt governments or rogue police agents have such tools at hand. Facial recognition programs can also simply be wrong: recent troubling studies have found they have particular problems with correctly identifying women of color.

Earlier this year, The New York Times reported on the controversial activity of Clearview AI, an app that claims to have compiled a database of more than 3 billion images from sources such as Facebook, YouTube, and Venmo. All of this was done without the permission of the subjects. Linked to a pair of augmented-reality eyeglasses, Clearview AI-equipped members of law enforcement and security agencies could walk down the street and identify anyone they see, along with their names, addresses, and other vital information.


The tool certainly can be used for good. Federal and state law enforcement officers, according to the Times, say the app helped solve murders, shoplifting crimes, identity theft, credit card fraud, and child sexual exploitation cases.

READ THE REST OF THIS ARTICLE…




Boston City Council Bans Facial Recognition Surveillance

The technology has been criticized for racial bias in identifying subjects. (Photo: Fight for the Future)

By Eoin Higgins | Common Dreams

The Boston City Council on Wednesday unanimously voted to ban the city from using facial recognition technology, making the Massachusetts capital the ninth city in the country, and the second largest, to do so.

“This is a historic day for Boston where the budget conversation is really focused on shifting resources away from police and towards the community,” civil liberties advocacy group Muslim Justice League tweeted. “And the conversation will not end here! We want community control over budgeting.”

The council cited the technology’s inaccuracy in identifying people of color as the main reason for rejecting its use by the city.

“Boston should not use racially discriminatory technology that threatens the privacy and basic rights of our residents,” council member Michelle Wu said in a statement.

On Tuesday, the Santa Cruz, California city council voted unanimously to end the use of facial recognition technology—and the use of predictive policing—by the city. Civil liberties community organization Oakland Privacy welcomed the decision in a statement from advocacy director Tracy Rosenberg.

“All over the country, people are calling for transformational changes in how policing is done,” said Rosenberg. “Walking away from intrusive and invasive surveillance technologies that have never been free of racial bias is one big step on that road.”

A false facial recognition match in Michigan resulted in the arrest of innocent Michigan resident Robert Williams, who is Black, in January. The Michigan ACLU filed a complaint about the arrest on Wednesday just before the Boston vote.

“What happened to Robert Williams and his family should be a wake-up call for lawmakers,” said Evan Greer, deputy director of technology with the advocacy group Fight for the Future. “Facial recognition is doing harm right now.”

Massachusetts ACLU executive director Carol Rose referred to Williams’ case in a statement Wednesday on the vote.

“This is a crucial victory for our privacy rights and for people like Robert Williams, who have been arrested for crimes they didn’t commit because of a technology law enforcement shouldn’t be using,” said Rose.

Rose added that leaders around the country should follow the example from Boston and other municipalities that have banned facial recognition.

“Lawmakers nationwide should follow suit and immediately stop law enforcement use of this technology,” said Rose. “This surveillance technology is dangerous when right, and dangerous when wrong.”

The group’s Kade Crockford told WBUR that allowing the city to use the technology could have dangerous ramifications down the line.

“Let’s just ensure that we put the policy horse before the technology cart and lead with our values so we don’t accidentally wake up someday in a dystopian surveillance state,” said Crockford.

“Behind the scenes,” Crockford added, “police departments and technology companies have created an architecture of oppression that is very difficult to dismantle.”

Our work is licensed under a Creative Commons Attribution-Share Alike 3.0 License. Feel free to republish and share it widely.




New Facial Recognition Software Predicts You’re a Criminal Based on your Face

Image by succo from Pixabay

By MintPress News / Creative Commons

(MintPress News) – A team from the University of Harrisburg, PA, has developed automated computer facial recognition software that they claim can predict with 80 percent accuracy and “no racial bias” whether a person is likely going to be a criminal, purely by looking at a picture of them. “By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications,” they said, declaring that they were looking for “strategic partners” to work with to implement their product.

In a worrying use of words, the team, in its own press release, moves from referring to those the software recognizes as “likely criminals” to “criminals” in the space of just one sentence, suggesting they are confident in the discredited racist pseudoscience of phrenology that they appear to have updated for the 21st century.

Public reaction to the project was less than enthusiastic, judging by comments left on Facebook, which included “Societies have been trying to push the idea of ‘born criminals’ for centuries,” “and this isn’t profiling because……?” and “20 percent getting tailed by police constantly because they have the ‘crime face.’” Indeed, the response was so negative that the university pulled the press release from the internet. However, it is still visible using the Internet Archive’s Wayback Machine.

While the research team claims to be removing bias and racism from decision making, leaving it up to a faceless algorithm, those who write the code, and those who get to decide what constitutes a crime in the first place, certainly do have their own biases. Why are the homeless or people of color who “loiter” on sidewalks criminalized, but senators and congresspersons who vote for wars and regime change operations not? And who is more likely to be arrested? Wall Street executives doing cocaine in their offices or working-class people smoking marijuana or crack? The higher the level of a person in society, the more serious and harmful their crimes become, but the likelihood of an arrest and a custodial sentence decreases. Black people are more likely to be arrested for the same crime as white people and are sentenced to longer stays in prison, too. Furthermore, facial recognition software is notorious for being unable to tell people of color apart, raising further concerns.

Crime figures are greatly swayed by whom the police choose to follow and what they decide to prioritize. For example, a recent study found 97.5 percent of Brooklyn residents arrested for breaking social distancing laws were people of color. Meanwhile, an analysis of 95 million traffic stops found that police officers were far more likely to stop black people during the daytime when their race could be determined from afar. As soon as dusk hit, the disparity greatly diminished, as a “veil of darkness” saved them from undue harassment, according to researchers. Thus, the population of people convicted of crimes does not necessarily correspond to the population that commits them.

The 2002 hit movie Minority Report is set in a future world where the government’s pre-crime division stops all murders well before they happen, with future criminals locked up preemptively. Even if accurate, is an 80 percent accuracy rate worth risking the creation of a dystopian Minority Report-style society where people are monitored and arrested for pre-crimes?

Phrenology, the long-abandoned study of the size and shape of the head, has a long and sordid history of dangerous racist and elitist pseudoscience. For instance, Cesare Lombroso’s 1876 book, Criminal Man, told students that large jaws and high cheekbones were a feature of “criminals, savages and apes,” and were a sure sign of the “love of orgies and the irresistible craving for evil for its own sake, the desire not only to extinguish life in the victim, but to mutilate the corpse, tear its flesh, and drink its blood.” Rapists, according to Lombroso, nearly always had jug ears, delicate features, swollen lips, and hunchbacks. Lombroso himself was a professor of psychiatry and criminal anthropology, and his book was taught in universities for decades. To Lombroso, it was almost impossible for a good-looking person to commit a serious crime.

The latest technological development from the University of Harrisburg appears to be an updated, “algorithmic phrenology,” repackaging a dangerous idea for the 21st century, all the more noteworthy because they are trying to sell it to law enforcement as an unbiased tool helping society.




Just as Digital Privacy Advocates Warned, Bezos Admits Amazon Writing Its Own Laws on Facial Recognition

Amazon CEO Jeff Bezos announced Wednesday that the company is developing its own facial recognition laws. (Photo: Fight for the Future)

By Julia Conley | Common Dreams

A casual announcement made Wednesday by Amazon CEO Jeff Bezos that his company is writing facial recognition regulations for legislators to enact is exactly what “digital rights activists have been warning” would emerge from Silicon Valley unless lawmakers pass a full ban on facial recognition surveillance. 

Bezos told reporters at a product launch event that the company’s “public policy team is actually working on facial recognition regulations.”

“It makes a lot of sense to regulate that,” Bezos said. “It’s a perfect example of something that has really positive uses so you don’t want to put the brakes on it. At the same time, there’s lots of potential for abuses with that kind of technology and so you do want regulations.”

For a form of technology that digital rights advocates call “uniquely dangerous,” regulations—especially those that Amazon lobbyists have a hand in developing—are not sufficient to keep Americans safe from the privacy violations facial recognition can cause, said Fight for the Future.

“This is why we need to ban facial recognition,” the group tweeted.

“Amazon wants to write the laws governing facial recognition to make sure they’re friendly to their surveillance-driven business model,” said Evan Greer, deputy director of Fight for the Future, in a statement. “But this type of technology…poses a profound threat to the future of human liberty that can’t be mitigated by industry-friendly regulations. We need to draw a line in the sand and ban governments from using this technology before it’s too late.”

“Silicon Valley’s calls to ‘regulate’ facial recognition are a trap, designed to hasten the widespread adoption of this invasive and harmful technology by implementing weak regulations that assuage public concern without putting a dent in corporate profits.”
—Fight for the Future

Fight for the Future launched a campaign in July aimed at pushing Congress to pass a full ban on facial recognition, following the lead of Somerville, Massachusetts; San Francisco; and Oakland, California, which have barred government use of the technology in recent months.

Fight for the Future and other civil liberties advocates warn that the use of facial recognition technology by federal, state, and local agencies increases the risk of discrimination, police harassment, and false arrests and deportations. Women and people of color are particularly likely to be misidentified by the programs, U.K. government data showed last year.

“Silicon Valley’s calls to ‘regulate’ facial recognition are a trap, designed to hasten the widespread adoption of this invasive and harmful technology by implementing weak regulations that assuage public concern without putting a dent in corporate profits,” Fight for the Future said Wednesday.

Matt Cagle, a civil liberties attorney at the ACLU of Northern California, tweeted that the organization would be on alert for “weak corporate proposals seeking to undermine” the efforts of cities which have passed facial recognition bans and lawmakers in states including New York, Michigan, and California who are pushing for statewide bans.

In the United Kingdom, Labour politician Darren Jones said statements like that of Bezos should push his members of Parliament to fight for a ban on facial recognition surveillance.

“We can’t outsource thought leadership and now even the drafting of our laws to private companies,” tweeted Jones.


“We know that members of Congress are currently drafting legislation related to facial recognition,” said Greer, “and we hope they know that the public will not accept trojan horse regulations that line Jeff Bezos’ pockets at the expense of all of our basic human rights.”

Our work is licensed under a Creative Commons Attribution-Share Alike 3.0 License. Feel free to republish and share widely.

Read more great articles at Common Dreams.




Amazon’s Facial Recognition Technology Can Now Detect Fear in People

By Vandita | We Are Anonymous

(CD) — Privacy advocates are responding with alarm to Amazon’s claim this week that the controversial cloud-based facial recognition system the company markets to law enforcement agencies can now detect “fear” in the people it targets.

“Amazon is going to get someone killed by recklessly marketing this dangerous and invasive surveillance technology to governments,” warned Evan Greer, deputy director of the digital rights group Fight for the Future, in a statement Wednesday.

Amazon Web Services detailed new updates to its system—called Rekognition—in an announcement Monday:

With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’, and ‘Confused’) and added a new emotion: ‘Fear’. Lastly, we have improved age range estimation accuracy; you also get narrower age ranges across most age groups.
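For reference, the emotion attributes described in that announcement are exposed through Rekognition’s DetectFaces API. A minimal sketch using the boto3 SDK, assuming AWS credentials are configured and using a placeholder image file name:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:  # placeholder file name
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # "ALL" requests emotions, age range, and more
)

for face in response["FaceDetails"]:
    # Each detected emotion comes back with a confidence score
    for emotion in face["Emotions"]:
        print(emotion["Type"], round(emotion["Confidence"], 1))
    print("Estimated age range:", face["AgeRange"])
```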

Pointing to research on the technology conducted by the ACLU and others, Fight for the Future’s Greer said that “facial recognition already automates and exacerbates police abuse, profiling, and discrimination.”

“Now Amazon is setting us on a path where armed government agents could make split-second judgments based on a flawed algorithm’s cold testimony. Innocent people could be detained, deported, or falsely imprisoned because a computer decided they looked afraid when being questioned by authorities,” she warned. “The dystopian surveillance state of our nightmares is being built in plain sight—by a profit-hungry corporation eager to cozy up to governments around the world.”

VICE reported that “despite Amazon’s bold claims, the efficacy of emotion recognition is in dispute. A recent study reviewing over 1,000 academic papers on emotion recognition found that the technique is deeply flawed—there just isn’t a strong enough correlation between facial expressions and actual human emotions, and common methods for training algorithms to spot emotions present a host of other problems.”

Amid mounting concerns over how police and other agencies may use and abuse facial recognition tools, Fight for the Future launched a national #BanFacialRecognition campaign last month. Highlighting that there are currently no nationwide standards for how agencies and officials can use the emerging technology, the group calls on federal lawmakers to ban the government from using it at all.

Fight for the Future reiterated their demand Wednesday, in response to Amazon’s latest claims. Although there are not yet any federal regulations for the technology, city councils—from San Francisco to Somerville, Massachusetts—have recently taken steps to outlaw government use of such systems.

Activists are especially concerned about the technology in the hands of federal agencies such as U.S. Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), whose implementation of the Trump administration’s immigration policies has spurred condemnation from human rights advocates the world over.

Civil and human rights advocates have strongly urged Amazon—as well as other developers including Google and Microsoft—to refuse to sell facial recognition technology to governments in the United States and around the world, emphasizing concerns about safety, civil liberties, and public trust.

However, documents obtained last year by the Project on Government Oversight revealed that in the summer of 2018, Amazon pitched its Rekognition system to the Department of Homeland Security—which oversees ICE and CBP—over the objection of Amazon employees. More recently, the corporation has been targeted by protesters of the Trump administration’s immigration agenda for Amazon Web Service’s cloud contracts with ICE.

In a July report on Amazon’s role in the administration’s immigration policies, Al Jazeera explained that “U.S. authorities manage their immigration caseload with Palantir software that facilitates tracking down would-be deportees. Amazon Web Services hosts these databases, while Palantir provides the computer program to organize the data.”

“Amazon provides the technological backbone for the brutal deportation and detention machine that is already terrorizing immigrant communities,” Audrey Sasson, executive director of Jews For Racial and Economic Justice, told VICE Tuesday. “[A]nd now Amazon is giving ICE tools to use the terror the agency already inflicts to help agents round people up and put them in concentration camps.”

“Just as IBM collaborated with the Nazis, Amazon and Palantir are collaborating with ICE today,” added Sasson. “They’ve chosen which side of history they want to be on.”

Read more great articles at We Are Anonymous.




The Rise of Facial Recognition Should Scare Us All

By Derrick Broze | Activist Post

In the last ten years, our world has been completely transformed thanks to the exponential growth of digital technology. Advances in computer processors and the Internet have quickly turned our world into one that resembles some of the most well-known sci-fi films and novels. Not a single day passes without a report on an emerging technology or new feature in an already existing product. The last ten years alone have seen rapid growth in information technology, encryption, the medical industry, and 3D printing technology, just to name a few.

Unfortunately, because technology is merely a tool, there are equally frightening developments taking place in the first two decades of the 21st century. Specifically, the ability of governments and private actors to monitor and spy on the activity of the average person has nearly become accepted as the norm. In fact, it has become commonplace to hear Americans respond to warnings of Orwellian futures with the timeless trope, “If you’re not doing anything wrong, there’s nothing to hide!” This is what makes it all the more surprising to see a surplus of recent reports examining the dangers and implications of a world where facial recognition technology is commonplace.

Recent headlines related to facial recognition abound. Even the Washington Post published a warning titled “Don’t smile for surveillance: Why airport face scans are a privacy trap.”

Questions surrounding the emerging technology have reached enough of a tipping point that just this week, House Democrats questioned the Department of Homeland Security over the use of facial recognition tech on U.S. citizens. The Hill reported that more than 20 House Democrats sent a letter on Friday to the DHS over Customs and Border Protection’s (CBP) use of facial recognition technology at U.S. airports. CBP claims that it is rolling out the facial recognition program at a number of airports under a congressional mandate and with an executive order from President Donald Trump. Lawmakers say the program was supposed to focus on foreign passengers, not Americans.

The group of lawmakers wrote:

We write to express concerns about reports that the U.S. Customs and Border Protection (CBP) is using facial recognition technology to scan American citizens under the Biometric Exit Program.

The letter to DHS comes shortly after a representative of the Government Accountability Office (GAO) told the House Oversight and Reform Committee that the FBI has access to hundreds of millions of photos that are used for facial recognition searches. Gretta Goodwin, a representative with the GAO, said the FBI uses expansive databases of photos—including from driver’s licenses, passports, and mugshots—to search for potential criminals. Goodwin noted that the FBI has a database of 36 million mugshots and access to more than 600 million photos, including access to 21 state driver’s license databases.

Rep. Jim Jordan of Ohio reminded Ms. Goodwin that the FBI has access to more photos than there are people in the country. “There are only 330 million people in the country,” Jordan stated.

The TSA was also questioned about its use of facial recognition at airports. Austin Gould, the TSA’s assistant administrator for Requirements and Capabilities Analysis, said the facial recognition program has been helpful for travelers. However, critics say the potential benefits of saved time and faster passenger processing should not override the greater risk to privacy. The TSA plans to have facial recognition tech at the top 20 airports for international travelers by 2021 and at all airports by 2023. The TSA has also previously expressed its desire to scan the face of every single American who enters the airport.

The pushback against facial recognition—and biometric technology in general—has moved beyond words in some areas. Most recently, San Francisco became the first city to ban government use of facial recognition. Due to the success in San Francisco, California lawmakers are considering AB 1215, a bill that would extend the ban across the entire state. The Electronic Frontier Foundation (EFF) spoke in favor of the bill, stating that the technology has been shown to have disproportionately high error rates for women, the elderly, and people of color. EFF also warned about the dangers of combining face recognition technology with police body cameras.

The editorial board of the Guardian also recently spoke out about the privacy threats, calling the technology “especially inaccurate and prone to bias.” The board noted that a recent test of Amazon’s facial recognition software by the American Civil Liberties Union falsely identified 28 members of Congress as known criminals. Although the technology is currently dangerous due to its inaccuracy, the Guardian warns:

It may be too late to stop the collection of this data. But the law must ensure that it is not stored and refined in ways that will harm the innocent and, as Liberty warns, slowly poison our public life.

It’s clear that the debate over the benefits and threats of facial recognition technology is not going anywhere anytime soon. It’s up to us as individuals to educate ourselves and inform our peers about the threats to privacy and freedom that are becoming increasingly apparent every day.

By Derrick Broze | Creative Commons | TheMindUnleashed.com

Read more great articles at Activist Post.




Walmart Has An Incredibly Creepy Cart Patent To Monitor Your Biometric Data

Image Credit: Activist Post

By Aaron Kesel | Activist Post

Walmart has a totally creepy idea to monitor your biometric data, pulse, and location from the sensors on a shopping cart handle, Motherboard reported.

Walmart recently applied for a patent that details biometric shopping handles that can track a customer’s heart rate, palm temperature, grip, and how fast the cart is being pushed.

The patent, titled “System And Method For A Biometric Feedback Cart Handle” and published August 23rd, details a cart with sensors in the handle that would send data to a server. That server would then notify store employees to check on individual customers.
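To picture what the patent describes, here is a toy sketch of a handle-telemetry record and the kind of server-side check that would page an employee. Every field name and threshold here is invented for illustration; the patent does not publish a schema:

```python
from dataclasses import dataclass

@dataclass
class HandleReading:
    cart_id: str
    heart_rate_bpm: float  # from the pulse sensor in the handle
    palm_temp_c: float     # palm temperature
    grip_force_n: float    # how hard the handle is being gripped
    speed_m_s: float       # how fast the cart is being pushed

def needs_checkin(reading: HandleReading) -> bool:
    # Hypothetical rule: flag a distressed or racing customer so the
    # server can notify a store employee, as the patent describes.
    return reading.heart_rate_bpm > 120 or reading.speed_m_s > 2.5
```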

The company has yet to clarify the use-cases of such a patented cart besides creepy privacy-invasive technology. However, it can be assumed that some of these cart features would be for customer safety and anti-theft measures.

More specifically, imagine there is a shoplifter whose description is partially known, but store associates have lost track of them. If the shoplifter were using one of these carts, the biometric and location data would let employees quickly locate the perpetrator.

Other features of the cart include a pulse oximeter and a weight-triggered assisted-push mechanism that allows the cart to move with some degree of automation.

That doesn’t mean that this high-tech cart is a good idea. The problem is that all customer data would be retained without any form of regulation denoting what Walmart can and can’t do with the data.

This news comes as hundreds of retail stores — and soon thousands — are investigating using biometric facial recognition software FaceFirst to build a database of shoplifters to aid in the fight against theft, Activist Post reported.

However, facial recognition technology currently has a lot of problems. Activist Post recently reported how Amazon’s own facial “Rekognition” software erroneously and hilariously identified 28 members of Congress as people who have been arrested for crimes.

Activist Post previously reported on another test of facial recognition technology in Britain which resulted in 35 false matches and 1 erroneous arrest. So the technology is demonstrated to be far from foolproof.

The fact that hundreds of retail stores want facial recognition technology is a scary thought. But now Walmart wanting our biometric data is an even scarier prospect.

Our rights are increasingly being eroded with the help of big corporations like Amazon and Walmart. Our privacy is disappearing at an alarming rate, traded away for convenience.

As previously written, “we are entering the Minority Report; there is no going back after this technology is public and citizens are indoctrinated that it’s ‘for their safety.’”

At that point, we are officially trading liberty and privacy for security. As Benjamin Franklin said, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”

The scariest thought is Walmart and other retail stores are consistently hacked. If these corporations are storing individuals’ biometric data, one has to wonder how secure their systems are to protect customers’ recorded biometrics.




AnyVision’s Facial Recognition Cameras Are Being Installed In “Smart Cities” Everywhere

By MassPrivateI | Activist Post

Everywhere you turn, politicians and corporations are trying to convince the public we need to convert our cities into ‘smart cities.’

Last week AnyVision and Nvidia announced that they are working together to put facial recognition cameras in cities across the globe. AnyVision is an Israel-based company that profits from spying on everyone.

NVidia has partnered with AI developer AnyVision to create facial recognition technology for “smart cities” around the world. The two companies will work to install automatic facial recognition into CCTV (closed-circuit television) surveillance cameras. (Source)

Five months ago, I warned everyone that Nvidia also wants to turn police vehicles into 360-degree facial recognition platforms.

Facial recognition cameras are being used to spy on everyone.

Facial recognition cameras identify marathon runners in real-time

AnyVision claims their facial recognition technology can detect, track and recognize any person of interest with more than 99% accuracy. Their video also claims they can identify marathon runners in real-time.

https://vimeo.com/210790220

New York Marathon from AnyVision on Vimeo.

Soon nowhere will be safe from law enforcement’s prying eyes.

AnyVision utilizes NVidia hardware to achieve high-speed, real-time face recognition from surveillance video streams. Our system is highly optimized for GPU acceleration allowing us to deliver real-time analysis of streaming data whilst achieving unprecedented accuracy. (Source)

Nearly a year ago, I warned everyone that ‘smart cities’ are being run by the CIA, DHS and the NDOT.

But it gets worse — private companies are being used to do an end-run around our Bill of Rights.

Mashable asks, “Is this technology terrifying, and possibly everything Orwell warned us about? Absolutely.”

Companies like Philips Lighting, Siemens, Intellistreets, ShotPoint, and ShotSpotter are all working together to install facial recognition cameras and microphones in every city in America.

Don’t be fooled, ‘smart cities’ are really just a euphemism for total control.

You can read more at the MassPrivateI blog, where this article first appeared.

Read more great articles at Activist Post.




Facebook’s New ‘DeepFace’ Software Can Match Faces Almost As Well As Humans

Techradar | March 18th 2014

Show almost any adult human two different photographs of another person and they’ll be able to match them with nearly 100% accuracy. These are the kind of results Facebook’s latest research is also zeroing in on.

Technology Review reported this week that Facebook has developed new facial verification software capable of matching the same person correctly nearly every time, regardless of whether the subject is actually facing the camera.

Using new artificial intelligence technology known as “deep learning,” researchers working for the social network have managed to reach 97.25% accuracy, even with images where the lighting is different.

Those results compare favorably to human beings, who are said to match two different images of the same person with a typical 97.53% degree of accuracy.

About (Deep)Face

“You normally don’t see that sort of improvement. We closely approach human performance,” explains Facebook AI team member Yaniv Taigman, noting the latest software has eliminated nearly one quarter of facial matching errors found in earlier versions.

DeepFace uses “facial verification” software to match two different images in which the same face appears, rather than focusing on the relatively easier task of recognizing a person based on their facial characteristics alone.
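A common way systems like this implement verification is to map each face to a numeric embedding and compare distances: two photos are declared the same person when their embeddings are close enough. A minimal sketch of that comparison step, with the embedding network stubbed out and the threshold chosen arbitrarily for illustration:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of two embedding vectors, in [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                threshold: float = 0.7) -> bool:
    # The threshold is a placeholder: real systems tune it on labeled
    # pairs to balance false matches against false rejections.
    return cosine_similarity(emb_a, emb_b) >= threshold

# In a full pipeline, emb_a and emb_b would come from a deep network
# (a DeepFace-style CNN) applied to aligned face crops.
```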

[read full post here]




Creepiest Smartphone App Yet Scans Crowd for People with Dating Site Profiles

Kimberly Paxton | Activistpost | Jan 17th 2014

Imagine for a moment that you are at the farmer’s market on a Saturday morning, getting your veggies and minding your own business. Suddenly, a creepy guy with a comb-over approaches you. “Hey, there. I bet you like long walks on the beach and strawberry margaritas, baby.”

What? you think. How on earth did he know that? 

Then he begins to talk to you, and it’s eerie, simply uncanny all the things Mr. Creepster has in common with you. Suddenly you realize, he is all but plagiarizing that profile you put on OKCupid last month in the hopes of meeting Mr. Right. He knows that you don’t smoke, that you have 3 children, the city in which you reside, what you do for a living, and that you go hiking alone to enjoy the solitude of a nearby mountain trail every single weekend. 

Putting the “stalk” in stalker, a new facial recognition app for Smartphones will allow a user to scan a crowd and pinpoint people with profiles on online dating sites or social media sites. NameTag, designed for Android and iOS, scans a person in whom the user is interested and looks for that person on dating sites such as PlentyOfFish, OkCupid, and Match as well as social media sites like Facebook, Twitter, and LinkedIn.

NameTag wirelessly sends the photo that the user has surreptitiously taken of the prospective date to a server, where it is then compared to millions of records. In seconds, a match is returned that has the unwitting victim’s full name, additional photos and all social media profiles.
Check out this rather disturbing blurb on the NameTag website, where they’re actually encouraging people to register their photos voluntarily:

With NameTag, Your Photo Shares You.

Why leave meeting amazing people up to chance? Don’t miss out on the opportunity to connect with others who share your passions!

Connect your info and interests with the world by simply sharing your most unique feature – your face. Nametag links your face to a single, unified online presence that includes your contact information, social media profiles, interests, hobbies and passions and anything else you want to share with the world.

Using the NameTag smartphone or Google Glass app, simply snap a pic of someone you want to connect with and see their entire public online presence in one place.

Don’t be a Stranger 

The app strongly encourages you to register yourself so that people on the street can instantly know everything about you. Who on earth would think that this is a good idea? I fear an alarming number of people might think so, sadly.

Here’s Jane, NameTag’s example profile holder.

Meet Jane – by using NameTag 

Jane has lots of different social media profiles and loves to meet new people. By using NameTag, she can link all her social networks to her face and share her information and meet new people in an instant. At work, she opts to have just her Professional Profile information visible, but when she goes out to happy hour with her friends, she changes her profile settings to Personal and displays more details, like her hobbies, interests and relationship status.

Bad idea, Jane. There’s a pervy dude that just took your picture and is now salaciously thinking about your single self doing yoga.

The techy folks think that this is just great:

NameTag’s creator Kevin Alan Tussy said: ‘I believe that this will make online dating and offline social interactions much safer and give us a far better understanding of the people around us.

‘It’s much easier to meet interesting new people when we can simply look at someone, see their Facebook, review their LinkedIn page or maybe even see their dating site profile. Often we were interacting with people blindly or not interacting at all. NameTag can change all that.’

Tom Wiggins, Deputy Editor of tech mag Stuff, thinks the app is a good idea, but that users should exercise caution.

He said: ‘It could be very handy if you’re not afraid of scaring people off with your creepy app. It’s evidently pretty clever but I think most people would find it quite invasive. And isn’t the point of dating to find out more about people? This kind of defeats the object.

‘In terms of privacy, I assume it’s only finding information that you’ve already put online, so it’s not really any more of a risk to privacy than adding photos to Facebook.’ (source)

Could it get any creepier or more invasive? I’m glad you asked. YES! It actually CAN.

The app doesn’t stop at accessing dating profiles and Facebook accounts. Oh no! Just like a Ginsu knife commercial: wait, there’s more! If you order right now, you’ll get this great bonus!

For added peace of mind, the user can also cross-reference the photos against more than 450,000 entries in the National Sex Offender Registry and other criminal databases. (source)

So this cheapo app is going to take ONE PICTURE and tell you that someone is in a criminal database. Can you imagine the potential vigilante applications of this?

First of all, we all know that the “justice” system is anything but just, and that not everyone who is registered as a “sex offender” is actually a threat to society. Think about an 18 year old who dated a 16 year old, for example. Suddenly anyone could be pinpointed as a sex offender, while they’re just going about their business at the grocery store or the mall.

Secondly, this is not a state-of-the-art facial recognition program. What if it’s wrong? What if it says that guy pushing the grocery cart full of juice boxes and animal crackers to the checkout stand is a purveyor of kiddie porn, but he’s actually just a dad with 3 kids at home?

Smartphones seem to have taken the place of SmartPeople. Not only have electronic devices taken away many interpersonal communications and experiences (see this video), now they’re taking away the mystery of getting to know somebody new, and they’re boiling the magic of attraction down to facial recognition and algorithms.

With stuff like this, the eugenicists won’t need birth control to depopulate the world. People will just connect via their smartphones. Problem solved.

Kimberly Paxton is a staff writer for the Daily Sheeple, where this first appeared. She is based out of upstate New York.

More from Activistpost