Amazon face match mishmash | The Mercury

Amazon’s facial-recognition software may have a problem with accuracy.
INTERNATIONAL – Facial-recognition software developed by Amazon and marketed to local and federal law enforcement as a powerful crime-fighting tool struggles to pass basic tests of accuracy, such as correctly identifying a person’s gender, according to new research.

Researchers with MIT Media Lab also said Amazon’s Rekognition system performed more accurately when assessing lighter-skinned faces, raising concerns about how biased results could tarnish the artificial-intelligence technology’s exploding use by police and in public venues, including airports and schools.

Amazon’s system performed flawlessly in predicting the gender of lighter-skinned men, the researchers said, but misidentified the gender of darker-skinned women in about 30% of their tests. Rival facial-recognition systems from Microsoft and other companies performed better but were also error-prone, they said.

The problem, AI researchers and engineers say, is that the vast sets of images the systems have been trained on skew heavily toward white men. The research showed, however, that some systems have rapidly grown more accurate over the last year following greater scrutiny and corporate investment into improving the results.

Amazon disputed the study’s findings, saying the research tested algorithms that work differently than the facial-recognition systems tested by the Federal Bureau of Investigation and deployed by police departments in Florida and Washington state.

Matt Wood, an Amazon Web Services executive who oversees AI and machine learning, said researchers based their study on “facial analysis” algorithms, which can detect and describe the attributes of a face in an image, like whether the person is smiling or wearing glasses.

“Facial recognition” algorithms are used to directly match images of different faces, and would be more commonly used in cases such as identifying a wanted fugitive or missing child.

Amazon said in November that it had updated its facial-analysis and facial-recognition features to improve face matching and “obtain improved age, gender and emotion attributes.”

But independent researchers said the findings raised important questions about the deployment of Amazon’s AI.

“Asking a system to do gender classification is in many ways an easier task for machine learning than identification, where the possibilities are far more than binary, and could number in the millions,” said Clare Garvie, a senior associate at Georgetown law school’s Center on Privacy & Technology who studies facial-recognition software.

The study’s co-author, Joy Buolamwini, who conducted similar research last year, said the study’s methodology was ethically sound and had been cited by companies such as IBM and Microsoft. She called Amazon’s defence “a deflection” and said facial analysis and gender classification were fundamental tools that could be used by the algorithms to help speed up a facial-recognition search. She urged Amazon to submit its models for public benchmark tests.

She also urged Amazon to alert clients of system biases and “immediately halt its use in high-stakes contexts like policing and government surveillance.”

The promise of facial-recognition technology that could identify people from afar has touched off a multimillion-dollar race among tech companies, which contend that the technology could speed up police investigations, improve public security and save lives.

But questions on the systems’ accuracy – and concerns that the technology could be used to surveil people without their knowledge or consent, stifling public protests and chilling free speech – have led civil-rights and privacy advocates to speak out.

Satya Nadella, the chief executive of Microsoft, which is developing facial-recognition software but has also called for stronger regulation, said last week that the facial-recognition business was “just terrible” and “absolutely a race to the bottom.”

A study last year by Buolamwini and co-author Timnit Gebru found gender-classification errors – and broad accuracy gaps between lighter and darker skin tones – in systems by IBM, Microsoft and the Chinese tech company Face++.

– The Washington Post
