The popularity of deepfakes goes hand in hand with abuse by hackers, who exploit technological vulnerabilities for money.

Biometric security based on facial recognition is steadily gaining popularity. So far there is little evidence that deepfakes – an AI-based technology that can manipulate videos and images – have been used for financial fraud, thanks to the high security standards of banks and payment services.

Yet such attacks may become feasible as deepfake technology matures, security experts warn.

Deepfake technology is in constant development. Photo: SCMP.

A January survey by biometrics provider iProov found that, among 105 decision-makers at cybersecurity firms, 77% were concerned about the potential impact of deepfake videos and images, with fraud in payment and transaction authorization cited as the biggest worry. Last year, the CEO of a UK energy company was tricked with the help of AI: criminals impersonated the chief executive of the firm’s German parent company and requested a $242,000 transfer to a fake supplier, using a voice so convincing that the victim never suspected a thing. While the case did not involve deepfake video, it showed how manipulated images, audio and video could be used for fraud in the future.

“The popularity of any new technology always goes hand in hand with abuse, because hackers can exploit it for money,” said Kok Tin Gan, a technology expert and hacker.

ZAO, an AI-based face-swapping app, caused a storm on the Chinese internet when it went viral last year. Many users tried to use the app to fool payment platforms such as Ant Financial’s Alipay, without success. In response, a representative of the payment platform stated on Weibo: “There are currently many online face-swapping programs, but no matter how realistic their results, they cannot defeat our facial recognition payment system.”

In 2018, a group of five attempted to use personal information and photos leaked online to steal money through Alipay. They built 3D models from the stolen pictures to bypass Alipay’s facial recognition, only to be promptly flagged.

Deepfake video of President Barack Obama giving a speech. Photo: AP.

China is currently the world’s heaviest user of facial recognition systems. The technology can be found practically everywhere in the country, from banking apps to toilet paper dispensers. Leaks of facial photos have therefore caused serious concern.

Gan, a “grey hat” hacker, said that buying biometric data has never been easier: a pack of 5,000 faces can be purchased for just 10 yuan (VND 35,000), or even obtained for free. “Deepfakes will make people reconsider handing over their faces and other biometric data. Biometrics have certain benefits, but you cannot change your face or fingerprints the way you change your passwords,” Gan said.

However, he also believes people should not be overly concerned. Every new technology brings new forms of cyberattack, and with them, new countermeasures, such as the multi-factor authentication used by banks. “Some banks use facial recognition only as a first step. Then, to use other features, you are required to enter an authentication code sent via SMS,” he said.
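For illustration, here is a minimal sketch in Python of the kind of layered check Gan describes: a transaction is authorized only if both a face match and a one-time SMS code succeed. The function names, the 0.90 threshold and the placeholder biometric and SMS calls are assumptions made for this example, not any bank’s actual implementation.

```python
import hmac
import secrets

# Illustrative two-factor flow (assumed, not a real bank's system):
# factor 1 is a facial-recognition match, factor 2 is a one-time SMS code.

FACE_MATCH_THRESHOLD = 0.90  # assumed similarity score needed to accept a face


def face_match_score(live_image: bytes, enrolled_template: bytes) -> float:
    """Placeholder for a real biometric engine returning a score in [0, 1]."""
    raise NotImplementedError("plug in a face-recognition SDK here")


def send_otp(phone_number: str) -> str:
    """Generate a 6-digit one-time code and 'send' it (stand-in for an SMS gateway)."""
    code = f"{secrets.randbelow(10**6):06d}"
    print(f"[SMS to {phone_number}] Your authentication code is {code}")
    return code


def authorize_transaction(live_image: bytes, enrolled_template: bytes,
                          code_entered: str, code_sent: str) -> bool:
    # Factor 1: the live face must match the enrolled template closely enough.
    if face_match_score(live_image, enrolled_template) < FACE_MATCH_THRESHOLD:
        return False
    # Factor 2: the SMS code must match, compared in constant time.
    return hmac.compare_digest(code_entered, code_sent)
```

The point of the sketch is that a convincing deepfake would only defeat the first factor; the SMS code remains an independent barrier.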

So far, deepfakes have mostly been limited to face-swapped pornography and joke videos, but the risk of financial fraud remains. “I believe a major incident will disrupt the industry in due time,” Gan predicted.

Chinese authorities and banks are increasingly concerned about deepfake risks. According to the Financial Times, some fintech firms and banks, including HSBC, are already preparing for this cyber threat to payment apps and financial institutions. Others, such as Microsoft, are developing authentication tools that can detect AI-generated fake videos convincing enough to fool humans.

In China, attitudes toward biometrics are also starting to change. A 2019 survey of more than 6,000 people found that over 80% feared data leaks and 65% feared deepfake fraud. But as with other new technologies, experts believe it will take a few more years before the deepfake threat is widely recognized.

SCMP
