Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.
Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.
A new report from the AI Now Institute, an influential research group based in New York, identifies facial recognition as a key challenge for society and policymakers.
The speed at which facial recognition has grown comes down to the rapid development of a type of machine learning known as deep learning. Deep learning uses large tangles of computations—very roughly analogous to the wiring in a biological brain—to recognize patterns in data. It is now able to carry out pattern recognition with jaw-dropping accuracy.
The tasks that deep learning excels at include identifying objects, or indeed individual faces, in even poor-quality images and video. Companies have rushed to adopt such tools.
The report calls for the US government to take general steps to improve the regulation of this rapidly moving technology amid much debate over the privacy implications. “The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes,” it says.
The report suggests, for instance, extending the power of existing government bodies in order to regulate AI issues, including use of facial recognition: “Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards.”
It also calls for stronger consumer protections against misleading claims regarding AI; urges companies to waive trade-secret claims when the accountability of AI systems is at stake (when algorithms are being used to make critical decisions, for example); and asks that they govern themselves more responsibly when it comes to the use of AI.
And the document suggests that the public should be warned when facial-recognition systems are being used to track them, and that they should have the right to reject the use of such technology.
Implementing such recommendations could prove challenging, however: the toothpaste is already out of the tube. Facial recognition is being adopted and deployed incredibly quickly. It’s used to unlock Apple’s latest iPhones and enable payments, while Facebook scans millions of photos every day to identify specific users. And just this week, Delta Air Lines announced a new face-scanning check-in system at Atlanta’s airport. The US Secret Service is also developing a facial-recognition security system for the White House, according to a document highlighted by the ACLU. “The role of AI in widespread surveillance has expanded immensely in the U.S., China, and many other countries worldwide,” the report says.
In fact, the technology has been adopted on an even grander scale in China. This often involves collaborations between private AI companies and government agencies. Police forces have used AI to identify criminals, and numerous reports suggest it is being used to track dissidents.
Even when it is not deployed in ethically dubious ways, the technology comes with built-in problems. For example, some facial-recognition systems have been shown to encode bias. Researchers at the ACLU demonstrated that a facial-recognition tool offered through Amazon’s cloud platform is more likely to misidentify minorities as criminals.
The report also warns about the use of emotion tracking in face-scanning and voice-detection systems. Tracking emotion this way is relatively unproven, yet it is already being used in potentially discriminatory ways—for example, to track the attention of students.
“It’s time to regulate facial recognition and affect recognition,” says Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “Claiming to ‘see’ into people’s interior states is neither scientific nor ethical.”