Microsoft to retire controversial facial recognition tool that claims to identify emotion

Microsoft is removing public access to a number of AI-powered face-analysis tools – including one that claims to identify a subject’s emotion from videos and images.

Such “emotion recognition” tools have been criticized by experts, who say not only that facial expressions thought to be universal in fact differ across populations, but also that it is unscientific to equate external displays of emotion with internal feelings.

“Companies can say whatever they want, but the data is clear,” Lisa Feldman Barrett, a professor of psychology at Northeastern University who conducted a review of the subject of AI-powered emotion recognition, told The Verge in 2019. “They can detect a scowl, but that’s not the same thing as detecting anger.”

The decision is part of a larger review of Microsoft’s AI ethics policies. The company’s updated Responsible AI Standard (first outlined in 2019) emphasizes accountability for finding out who uses its services and greater human oversight of where these tools are applied.

In practical terms, this means that Microsoft will restrict access to certain features of its facial recognition services (known as Azure Face) and remove others altogether. Users will need to apply to use Azure Face for facial identification, for example, telling Microsoft exactly how and where they will deploy its systems. Some use cases with less potential for harm (such as automatically blurring faces in images and videos) will remain openly accessible.

In addition to removing public access to its emotion recognition tool, Microsoft is also retiring Azure Face’s ability to identify “attributes such as gender, age, smile, facial hair, hair, and makeup.”

“Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” wrote Natasha Crampton, Microsoft’s chief responsible AI officer, in a blog post announcing the news.

Microsoft says it will stop offering these features to new customers beginning June 21, while existing customers will have their access revoked on June 30, 2023.

However, while Microsoft is retiring public access to these features, it will continue to use them in at least one of its own products: an app called Seeing AI, which uses machine vision to describe the world for people with visual impairments.

In a blog post, Sarah Bird, Microsoft’s principal group product manager for Azure AI, said that tools such as emotion recognition “can be valuable when used for a set of controlled accessibility scenarios.” It is unclear whether these tools will be used in any other Microsoft products.

Microsoft is also introducing similar restrictions for its Custom Neural Voice feature, which allows customers to create AI voices based on recordings of real people (sometimes known as audio deepfakes).

The tool “has exciting potential in education, accessibility, and entertainment,” writes Bird, who notes that “it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners.” Microsoft says it will in future limit access to the feature to “managed customers and partners” and “ensure the active participation of the speaker when creating a synthetic voice.”
