11.11.2022 г.

Stolen passport photos, fraudsters and facial fusion technology pose a threat to national security

Restricting operations with personal data will help prevent leaks

Not all leaks of personal data are equally dangerous. When data from online services is compromised — e.g. email address, username, phone number, delivery address — it is unpleasant. However, it is still not as scary as leaking a passport photo or a person’s biometric personal data.

Obtaining a high-quality passport photo is a stroke of luck for a criminal. From a biometric passport photo, an animated image can be created, and from that a deepfake. The deepfake is then used to build a false identity and to attack internet services, banks, and government agencies.

Image: a biometric passport (source: www.biometricupdate.com)

Facing deepfakes

Today, deepfake attacks occur every day around the world. Most of these incidents are never publicly reported, but many people have heard about the use of deepfakes in celebrity videos. One of the most recent high-profile cases occurred in May 2022, when scammers began using a synthesized image of Elon Musk to promote BitVex, a cryptocurrency trading platform.

Scientists were the first to take an interest in face synthesis, with the aim of combating fraud. Back in 2017, specialists from NVIDIA and Aalto University presented a method for training a neural network progressively, so that it generates photos of higher and higher resolution from low-resolution images.

In 2018, scientists from the Institut de Robòtica i Informàtica Industrial and The Ohio State University created a neural network that can generate facial photos from an original image, adding a smile or glasses and changing the rotation of the head.

Also in 2018, scientists at the CUHK-SenseTime Joint Lab of The Chinese University of Hong Kong and the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, presented a two-stage method for synthesizing faces: in the first stage, the neural network sketches the facial features, and in the second it generates the image. This method proved more efficient than one-stage face synthesis.

In 2020, researchers from the Autonomous University of Madrid and the University of Beira Interior presented a new face generation architecture designed to fool neural-network counterfeit detectors.

In the same year, scientists from Peking University and Microsoft Research proposed a method for spoofing faces in a photo more plausibly. Their neural network learns to extract facial features from one image and attributes from another, and to generate a new face that combines the facial features of the first person with the attributes of the second. It is very likely that this is how deepfakes are made.

In 2021, scientists at Tel-Aviv University and Penta-AI went further and proposed a neural network that generates faces in a particular style.

Moreover, in July this year, a team of scientists from Samsung AI Center – Moscow, Yandex Armenia and the Skolkovo Institute of Science and Technology unveiled a technology for creating high-resolution neural human avatars, so-called megaportraits. It brings a static single-frame image to life: avatars of the Russian writer Nikolai Gogol, the Mexican artist Frida Kahlo, actor Brad Pitt, actress Angelina Jolie, the woman from Leonardo da Vinci's Mona Lisa, and other famous personalities smile, turn their heads, look up and roll their eyes as if they were alive.

If scientists are so good at synthesizing faces, imagine what crooks can do with passport photos.

Leaks make it easier to use deepfakes for criminal purposes: the more pictures leak, the better trained the neural networks that synthesize people's faces become.

The third is not superfluous

The main danger of leaks is not that individual records are compromised, but that a sea of information accumulates in which anything can be found. If a criminal wants to change his identity, all he has to do is find someone who looks like him in a leaked database. He can then fraudulently file a lost-passport application in that person's name in order to obtain an identity card with a new photo created by morphing. The morphed image is not a photograph of either person involved, but a new photograph that resembles each of the two as closely as possible.

Morphing is one of the most dangerous counterfeiting techniques devised in the field of biometrics, and leaks are what make it possible. It is a way to fool both humans and machines. If a passport is issued with a morphed photo, facial recognition systems will match the photo not to the criminal with the new identity alone, but to the other person as well. This is how someone else's life story is appropriated: the perpetrator needs neither a twin nor plastic surgery. Without the leaks, it would be incredibly difficult for him to solve this problem.
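The core of a morph can be shown with a crude sketch. Real morphing pipelines first align the two faces by facial landmarks and warp them onto an averaged geometry; the final step, however, is essentially a weighted pixel blend. A minimal illustration in Python with NumPy, assuming two already-aligned images of equal size (the function name and toy data are illustrative, not from any real tool):

```python
import numpy as np

def morph(face_a: np.ndarray, face_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two aligned face images; alpha=0.5 resembles both subjects equally.

    Real morphing tools warp both images to averaged facial landmarks first;
    this sketch shows only the final cross-dissolve step.
    """
    if face_a.shape != face_b.shape:
        raise ValueError("images must be aligned to the same size")
    blended = alpha * face_a.astype(np.float64) + (1.0 - alpha) * face_b.astype(np.float64)
    return blended.clip(0, 255).astype(np.uint8)

# Toy example with flat synthetic "images" standing in for two face photos
a = np.full((4, 4, 3), 200, dtype=np.uint8)
b = np.full((4, 4, 3), 100, dtype=np.uint8)
m = morph(a, b, alpha=0.5)
print(m[0, 0])  # each pixel lands halfway between the sources: [150 150 150]
```

An equal-weight blend like this is exactly why a morphed photo can sit within the matching threshold of a face recognition system for both contributors at once.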

A threat to national security

In the event of such an incident, the first thing to establish is exactly what has been leaked. If biometric data such as passport photos has been leaked, national security is threatened, because biometrics is used practically everywhere, from banks to nuclear power plants. With a properly morphed photo, the wrong people can get into both. You are lucky if it ends with $5,000 stolen from a bank. But what if it ends with an accident at a nuclear power plant?

Alongside recognizing data leaks as a security threat, personal data operators should also be audited. New legal requirements are pushing this forward.

Some countries have banned businesses from refusing service to customers who decline to provide biometric personal data when the law does not require it; violators face monetary penalties. This will probably affect the practice of online services that ask users to record a video of their face while turning their head, and block the accounts of those who refuse. The services claim the recording is needed to verify that a real person, not a bot, created the account. However, to carry out this procedure, the services inevitably collect and store videos of users' faces. Are they confident that this biometric data is secure?

It is worth going further and reviewing data processing practices in the foodtech industry. After all, a service does not need to know the customer's passport details, or even their name, to deliver food ordered online. It can send the customer a QR code, which the customer shows to the courier to scan and receive the order. The order-processing system could be built on a blockchain to prevent data leaks.
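To make the hand-off idea concrete, here is a hypothetical sketch in Python: the service issues a random one-time token tied to an order ID and embeds it in the QR code, while storing only a hash server-side. The courier's scan is checked against that hash, so neither the QR label nor the server record needs the customer's name or address. All class and method names here are illustrative, not from any real foodtech API:

```python
import hashlib
import secrets

class OrderTokens:
    """Issue and verify one-time pickup tokens; only token hashes are stored."""

    def __init__(self) -> None:
        self._pending: dict[str, str] = {}  # order_id -> sha256(token)

    def issue(self, order_id: str) -> str:
        token = secrets.token_urlsafe(16)   # the value embedded in the QR code
        self._pending[order_id] = hashlib.sha256(token.encode()).hexdigest()
        return token                        # sent only to the customer

    def redeem(self, order_id: str, token: str) -> bool:
        expected = self._pending.get(order_id)
        if expected is None:
            return False
        presented = hashlib.sha256(token.encode()).hexdigest()
        ok = secrets.compare_digest(expected, presented)
        if ok:
            del self._pending[order_id]     # one-time use: a copied QR is useless later
        return ok

registry = OrderTokens()
qr_payload = registry.issue("order-42")
print(registry.redeem("order-42", qr_payload))  # True: the courier's scan matches
print(registry.redeem("order-42", qr_payload))  # False: the token is already spent
```

Storing only the hash means that even if this table leaks, the attacker gains nothing usable, which is precisely the data-minimization point the paragraph argues for.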

Unless personal data processing practices are reviewed, services will continue to collect as much personal data as they can, treating it as a treasure like gold. But if businesses really think this way, then the treasure must be protected accordingly, just as a bank or the state protects its gold.

Improve your business with Smart Engines technologies



Green AI-powered scanner SDK for ID cards, passports, driver's licenses, residence permits, visas, and other IDs: more than 1856 types in total. Provides an eco-friendly, fast and precise scanning SDK for smartphone, web, desktop or server that works fully autonomously. Extracts data from photos and scans, as well as from the video stream of a smartphone or web camera, and is robust to capture conditions. No data transfer: ID scanning is performed on-device and on-premise.


Automatic scanning of machine-readable zones (MRZ); all types of credit cards (embossed, indent-printed, and flat-printed); and barcodes (PDF417, QR code, AZTEC, DataMatrix, and others) on the fly with a smartphone camera. Provides high-quality MRZ, barcode, and credit card scanning in mobile applications on-device regardless of lighting conditions. Supports card scanning for 21 payment systems.



Automatic data extraction from business and legal documents: KYC/AML questionnaires, applications, tests, and administrative papers (accounting documents, corporate reports, business and government forms such as financial statements and insurance policies). High-quality Green AI-powered OCR on scans and photographs taken in real conditions. Total security: on-premise installation only. Automatically extracts document data in 2 seconds on a modern smartphone.


Green AI for tomographic reconstruction and visualization. Image reconstruction algorithms run directly during the X-ray tomographic scanning process. We aim to reduce the radiation dose received during exposure by finding the optimal point at which to stop scanning.


Send Request

Please fill out the form to get more information about the products, pricing and trial SDK for Android, iOS, Linux, Windows.
