Scientists Create Online Games to Show Risks of AI Emotion Recognition

Technology designed to identify human emotions using machine learning algorithms is a huge industry, and its proponents claim it offers great value, from keeping roads safe to analyzing market research.

But critics say the technology not only raises privacy concerns but is also imprecise and racially biased.

A team of researchers has created a website – emojify.info – where the public can try out emotion recognition systems using their own computer cameras.

One game focuses on pulling faces to trick the technology, while another explores how such systems can struggle to read facial expressions in context.

Their hope, the researchers say, is to raise awareness of the technology and promote conversations about its use.

“It is a form of facial recognition, but it goes further because rather than just identifying people, it claims to read our emotions, our inner feelings from our faces,” said Dr. Alexa Hagerty, project lead and researcher at the University of Cambridge.

Facial recognition technology, which is often used to identify people, has come under intense scrutiny in recent years. 

Last year, the Equality and Human Rights Commission said its use for mass screening should be halted, warning it could increase police discrimination and harm freedom of expression.

But Hagerty said many people were unaware how widespread emotion recognition systems are, noting they are used in settings ranging from job hiring and customer insight work to airport security, and even in education to gauge whether students are engaged or doing their homework.

She said the technology is used all over the world, from Europe to the United States and China.

Taigusys, a company specializing in emotion recognition systems and headquartered in Shenzhen, China, says its systems have been deployed in settings ranging from care homes to prisons.

And according to reports earlier this year, the Indian city of Lucknow plans to use the technology to detect signs of distress in women resulting from harassment, a move that has drawn criticism from digital rights organizations and others.

Hagerty said that while emotion recognition technology may have some potential benefits, these must be weighed against concerns about accuracy and racial bias, as well as whether the technology is even the right tool for a particular job.

“We need to be having a much wider public conversation and deliberation about these technologies,” she said.

The new project allows users to experiment with emotion recognition technology. The site notes that “no personal data is collected and all images are stored on your device.”

In one game, users are invited to pull a series of faces to fake emotions and see whether the system is fooled.

“People developing this technology say it reads emotions … but the system is actually reading the movement of the face and then combining that with the assumption that those movements are related to emotions, for example that a smile means someone is happy,” Hagerty said.

She added that the matter is more complicated than that, as human experience shows it is possible to fake a smile. “That is what the game we created is trying to prove: to show you that you did not change your inner state and feelings six times, you just changed the way you look and the features of your face.”

Vidushi Marda, senior programme officer at the human rights organization Article 19, said it was crucial to press “pause” on the growing market for emotion recognition systems.

She added, “The use of emotion recognition technologies is deeply concerning, as not only are these systems based on discriminatory and discredited science, their use is also fundamentally inconsistent with human rights.”

Sources:

https://www.cambridgeindependent.co.uk/news/play-the-fake-smile-game-and-expose-flaws-in-ai-powered-emot-9193941/

emojify.info