Amazon Shareholders to Vote on Sales of Facial Recognition Software to Government


When Amazon shareholders convene for their annual general meeting in Seattle later this month, they will be asked to vote on whether the company should continue to sell facial recognition software to government agencies. The vote was organized by a small group of shareholders, led by nonprofit Open MIC.

Rekognition, the software marketed by Amazon, is offered as part of the Amazon Web Services (AWS) suite at a cost of under $20 a month. It has both photo and video recognition capabilities, analyzing facial features and body movements and comparing them against databases, including Amazon’s own.
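
For a concrete sense of how the service is consumed, here is a minimal sketch of a one-to-one face comparison using the AWS SDK for Python (boto3). The file names and the 80 percent similarity threshold are illustrative assumptions, not details drawn from any deployment described in this article.

```python
# Minimal sketch: one-to-one face comparison with Amazon Rekognition via boto3.
# The image file names and the similarity threshold below are illustrative assumptions.
import boto3

client = boto3.client("rekognition")

with open("probe.jpg", "rb") as source, open("reference.jpg", "rb") as target:
    response = client.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=80,  # only report matches scored at 80 or above
    )

# Each reported match carries a similarity score and the bounding box of the matched face.
for match in response["FaceMatches"]:
    box = match["Face"]["BoundingBox"]
    print(f"Similarity: {match['Similarity']:.1f}%, bounding box: {box}")
```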

Activists raised the alarm last July after the American Civil Liberties Union (ACLU) discovered that Rekognition failed some basic accuracy tests. For example, when photos of members of the U.S. Congress were compared against a publicly available ‘mugshot’ database of 25,000 images, Rekognition incorrectly reported that 28 of them had previously been arrested. The analysis also revealed that a disproportionate number of these incorrect matches were of African-American or Latino members of Congress.
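
The ACLU’s test was a one-to-many search: each legislator’s photo was run against a collection built from the mugshot database. The sketch below shows roughly what such a search looks like with boto3; the collection name, file paths, and threshold are hypothetical and are not taken from the ACLU’s published methodology.

```python
# Rough illustration of a one-to-many search against a Rekognition face collection.
# The collection ID, file paths, and threshold are hypothetical, not the ACLU's actual setup.
import boto3

client = boto3.client("rekognition")
COLLECTION_ID = "mugshot-collection"  # hypothetical collection of arrest photos

# One-time setup: create the collection and index each mugshot into it.
client.create_collection(CollectionId=COLLECTION_ID)
for path in ["mugshots/0001.jpg", "mugshots/0002.jpg"]:  # ...through the full database
    with open(path, "rb") as image:
        client.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": image.read()},
            ExternalImageId=path.split("/")[-1],
        )

# Query: search the collection with a probe photo, such as an official portrait.
with open("portrait.jpg", "rb") as probe:
    result = client.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": probe.read()},
        FaceMatchThreshold=80,  # the commonly cited default confidence setting
        MaxFaces=5,
    )

# Every hit is a claimed prior arrest; in the ACLU's test, 28 members of Congress
# were falsely matched this way.
for match in result["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], f"{match['Similarity']:.1f}%")
```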

Nicole Ozer, the ACLU’s Technology and Civil Liberties Director, told the Washington Post that the software could be used “to track protesters, target immigrants, and spy on entire neighborhoods.” “Once powerful surveillance systems like these are built and deployed, the harm can’t be undone. We’re talking about a technology that will supercharge surveillance in our communities,” she said.

Last December, Open MIC, a nonprofit that fosters shareholder engagement at leading tech and media companies, joined forces with the Tri-State Coalition for Responsible Investment, the Sisters of St. Joseph of Brentwood and Harrington Investments to put Rekognition software sales up for a vote by Amazon shareholders. Together, the shareholders introducing the proposal represent a total of $1.32 billion in assets under management.

"Shareholders are deeply concerned by the serious chilling effect of surveillance on immigrant communities and on all of us, including the ways in which this technology can be used by government to instill fear, prevent people from accessing the services they need, and perpetuate racism," said Mary Beth Gallagher, the executive director of Tri-State Coalition for Responsible Investment in a press release announcing the action.

"I continue to be fearful that Amazon will encourage the expansion of public intimidation of people of color, immigrants, and democracy advocates throughout the world, including ordinary consumers who will also be continually monitored and subjected to privacy invasion solely intended to maximize private sector financial gain," added John Harrington, the president of Harrington Investments.

Amazon’s board of directors opposes the proposal in proxy documents mailed out to shareholders in preparation for the annual meeting. “We do not believe that the potential for customers to misuse results generated by Amazon Rekognition should prevent us from making that technology available to our customers,” the company said.

But other major companies in the industry oppose the sale of facial recognition software. Indeed, Microsoft has halted the sale of its facial recognition technology to multiple parties over concerns of abuse. “Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech,” Brad Smith, Microsoft's Chief Legal Officer, wrote on the company’s blog. “The issues become even more complicated when we add the fact that facial recognition is advancing quickly but remains far from perfect.”

“Whether or not you believe government surveillance is okay, using commercial facial recognition in law enforcement is irresponsible and dangerous,” Brian Brackeen, CEO of Kairos, a facial recognition software developer located in Miami, Florida, wrote in an article for TechCrunch. “Let’s say the wrong person is held in a murder investigation. Let’s say you’re taking someone’s liberty and freedoms away based on what the system thinks, and the system isn’t fairly viewing different races and different genders. That’s a real problem.”

And Axon, an Arizona-based technology and weapons company that is the largest supplier of body cameras to U.S. police departments, has told investors that it will not yet include facial recognition software in its cameras because the technology does not meet its “accuracy thresholds.”

“This is one where we think you don’t want to be premature and end up either where you have technical failures with disastrous outcomes or…there’s some unintended use-case where it ends up being unacceptable publicly in terms of long-term use of the technology,” Axon CEO Rick Smith said on a recent quarterly earnings call with investors.

U.S. government watchdogs have also raised the alarm. A March 2017 Government Accountability Office (GAO) report on the use of facial recognition technology by the Federal Bureau of Investigation (FBI) and the Department of Justice concluded that the agencies were not doing enough to ensure accuracy or to address privacy concerns.

The GAO report noted that the FBI’s claimed accuracy rate of 86 percent was based on a very small candidate sample of just 50 people. Furthermore, the GAO said that the FBI had failed to assess false positive rates.

Several cities and states around the U.S. are looking into ways to limit facial recognition software. While a proposed ban was recently defeated in California’s State Senate, the city of San Francisco is moving forward with a proposed ordinance titled Stop Secret Surveillance that would regulate the use of this technology within city limits. The ordinance is supported by nonprofit groups such as the ACLU and the Electronic Frontier Foundation.

Stop Secret Surveillance, which is open for public comment before it goes to the city’s Board of Supervisors for a vote, would require city agencies to explain why they are buying facial recognition software, to solicit public input before deploying or expanding surveillance technologies, to publish annual reports on use, and to undergo regular audits.

Speaking to the San Francisco Rules Committee, Tim Kingston, an investigator with the San Francisco Public Defender’s Office, said: “Current facial recognition technology is corporate, privately held, it’s proprietary. We don’t know what goes into it, and we’re not going to have access to that unless there’s oversight.”

Not surprisingly, local police departments have pushed back. The San Francisco Police Officers Association has even gone so far as to send out preformatted letters opposing a ban, with blank spaces for supporters to add their names and comments and submit them to the city. The lobbying scheme was exposed, however, when several supporters submitted the forms without bothering to fill in the blanks.

Meanwhile, activists in other countries have raised flags over similar technology. An analysis of South Wales Police’s use of facial recognition technology in the United Kingdom to surveil a crowd of 170,000 people at a soccer match in May 2017 showed that the software pulled up 2,470 potential matches, of which 2,297, or about 93 percent, were wrong.

Despite this massive inaccuracy rate, South Wales Police defended the use of the technology. “We need to use technology when we’ve got tens of thousands of people in those crowds to protect everybody, and we are getting some great results from that,” Matt Jukes, the force’s chief constable, told the BBC.

Subsequently, Big Brother Watch, a privacy watchdog based in the UK, issued a report on facial recognition technology titled “Face Off: The Lawless Growth of Facial Recognition in UK Policing.” The activists said they were concerned that the data being collected was being stored, shared, or used elsewhere, violating individuals’ rights to privacy. “We are deeply concerned that the securitisation of public spaces using biometrically identifying facial recognition unacceptably subjects law abiding citizens to hidden identity checks, eroding our fundamental rights to privacy and free expression,” the report concluded.

Regardless, facial recognition has slowly permeated everyday life, from airports to dating apps, social media sites, hotels and even schools. An Arkansas school board recently approved nearly $300,000 in funding for 200 cameras in two different schools. The Magnolia Reporter, a local newspaper, reported that the system will allow for “facial recognition and tracking, live coverage, the ability to let local law enforcement tap into the system in the event of a school situation, infrared capability and motion detection.”

“Schools are justified in thinking about safety, both in terms of gun violence and other possible hazards,” Rachel Levinson-Waldman, the senior counsel at the Brennan Center for Justice, told Gizmodo, the online technology publication. “At the same time, these technologies do not exist in a vacuum; we know, for instance, that facial recognition is less accurate for women and people of color, and also that school discipline is imposed more harshly on children of color.”

Levinson-Waldman urged operators of facial recognition technology to be “fully transparent about how information gathered is used, retained, or shared, particularly with law enforcement or school resource officers.”
