A new report shows that automated “liveness tests” used by banks and other institutions to help verify users’ identities can be easily fooled by deepfakes.
Security firm Sensity, which specializes in detecting attacks that use AI-generated faces, investigated the vulnerability of identity verification tests offered by 10 top vendors. Sensity used deepfakes to copy a target’s face onto an ID card to be scanned, and then copied that same face onto the video stream of a would-be attacker in order to pass the vendors’ liveness tests.
Liveness tests generally ask someone to look into the camera on their phone or laptop, sometimes turning their head or smiling, both to prove that they are a real person and to compare their appearance against their ID using facial recognition. In the financial world, such checks are often known as KYC, or “know your customer” tests, and can be part of a broader vetting process that also includes document checks.
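The two-step check described above can be sketched in miniature. This is an illustrative toy, not any vendor’s actual pipeline: the `embed` function is a placeholder for a real face-embedding model, and the similarity threshold is an assumed value.

```python
import math

def embed(face_pixels):
    # Placeholder for a real face-embedding model (in practice a neural
    # network that maps a face image to a unit vector); here we simply
    # L2-normalize the raw pixel values.
    norm = math.sqrt(sum(p * p for p in face_pixels)) or 1.0
    return [p / norm for p in face_pixels]

def cosine_similarity(a, b):
    # Both inputs are unit vectors, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def kyc_check(id_face, selfie_frames, threshold=0.8):
    # Compare the photo extracted from the scanned ID against every
    # frame of the selfie video. A real system would also verify the
    # liveness challenge (head turn, smile); the attack Sensity
    # describes defeats that step by injecting a deepfaked stream that
    # performs the challenge with the target's face.
    id_vec = embed(id_face)
    sims = [cosine_similarity(id_vec, embed(frame)) for frame in selfie_frames]
    return min(sims) >= threshold
```

The point of the sketch is that both inputs, the ID photo and the video frames, are attacker-supplied in the scenario the report describes, so a face swapped onto both sides matches itself.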
“We tested 10 solutions and found that nine of them were extremely vulnerable to deepfake attacks,” Sensity chief operating officer Francesco Cavalli told The Verge.
“There’s a new generation of AI-powered tools that can pose serious threats to companies,” Cavalli says. “Imagine what you could do with fake accounts created using these techniques. And no one is able to detect them.”
Sensity shared the identities of the vendors it tested with The Verge, but requested that the names not be published for legal reasons. Cavalli says Sensity has signed non-disclosure agreements with some of the vendors and, in other cases, fears it may have violated companies’ terms of service by testing their software in this way.
Cavalli also says he was disappointed by the reaction from vendors, who did not seem to consider the attacks significant. “We told them, ‘Look, you’re vulnerable to this kind of attack,’ and they said, ‘We don’t care,’” he says. “We decided to publish it because we think that, at the corporate level and in general, the public should be aware of these threats.”
The vendors Sensity tested sell these liveness checks to a range of customers, including banks, dating apps, and cryptocurrency companies. One vendor’s checks were even used to verify the identity of voters in a recent national election in Africa. (There is no suggestion in Sensity’s report, however, that this process was compromised by deepfakes.)
Cavalli says such deepfake identity spoofs are primarily a danger to the banking system, where they can be used to facilitate fraud. “I can create an account; I can move illegal money into digital bank accounts or crypto wallets,” says Cavalli. “Or maybe I can apply for a mortgage, because today online lending companies are competing with each other to issue loans as quickly as possible.”
This is not the first time deepfakes have been identified as a danger to facial recognition systems. They are primarily a threat when an attacker can hijack the video stream of a phone or camera, a relatively simple task. However, facial recognition systems that use depth sensors, such as Apple’s Face ID, cannot be fooled by these kinds of attacks, since they verify identity based not only on visual appearance but also on the physical shape of a person’s face.
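The depth-sensor distinction can be made concrete with a toy check. The idea, simplified here well beyond any real implementation such as Face ID, is that a 2D deepfake injected into (or replayed in front of) a camera presents an essentially flat depth profile, while a real face has tens of millimetres of relief. The depth values and tolerance below are illustrative assumptions.

```python
from statistics import pstdev

def looks_flat(depth_samples_mm, tolerance_mm=5.0):
    # A deepfake is a 2D image: whether it is shown on a screen or
    # injected as a flat video stream, the per-pixel depth readings are
    # nearly uniform. A real face varies (nose closer, ears farther),
    # so its depth standard deviation is large.
    return pstdev(depth_samples_mm) < tolerance_mm

# Toy depth samples (millimetres from the sensor) for illustration.
real_face = [412.0, 398.0, 405.0, 430.0, 388.0, 420.0]
flat_display = [500.0, 501.0, 500.0, 499.0, 500.0, 500.0]
```

This is why the article notes that depth-sensing systems resist the attack: the spoof can match the appearance check but not the shape check.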