In this article, we give a brief description of the design of our identity document optical character recognition system and the algorithms that are used at different stages of the process. If you’d like to know how the Smart ID Engine system works, please keep reading.
The idea of using a mobile device for document scanning, digitization, and recognition has been around since the appearance of the first camera phones. But for quite a long time, the poor quality of mobile cameras and the low performance of mobile processors made it impossible to build optical character recognition systems precise enough for practical use. Today smartphones and tablets are considered among the best data entry options, both in terms of camera quality and processing power. The only thing left to do is to design a decent recognition system.
First of all, let’s outline the requirements we had in mind when we were designing the system. Four main principles behind Smart ID Engine are:
When using a mobile device as a tool for taking pictures, we face a few problems that are uncommon for "traditional" optical character recognition systems working with flat document scans. First of all, flatbed scans usually have a uniform, monochromatic background, whereas a photo or a video frame can be taken against any random background, which can make locating and identifying a document considerably harder. Secondly, when working with photos and videos, we have to deal with varying lighting conditions: there might not be enough light, or too much of it, or there could be glare on the document. Finally, the images produced by a mobile device can be defocused or motion-blurred.
Perhaps the key difference of a mobile recognition system is the range of possible geometric positions of the document. On flat scans, a small document can be shifted or rotated (geometric transformations of the document are thus limited to the motion group). But when taking a picture with a mobile phone camera, the document can be rotated relative to the optical system about any of the three Euler angles. Sometimes the user positions the camera this way on purpose, for example, to avoid glare. And that expands the group of expected geometric distortions to the projective one.
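To make the projective model concrete, here is a minimal sketch (not the production code) of estimating a homography from four detected document corners using the standard DLT construction, with a tiny hand-rolled linear solver so the example stays dependency-free; the corner coordinates are hypothetical.

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (8x8 is enough here)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def homography_from_corners(src, dst):
    """Estimate the 3x3 projective transform mapping the four src corners
    to the four dst corners (the classic DLT construction, H[2][2] = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def project(H, pt):
    """Apply the projective transform to a single point."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Corners of the ideal template (a unit square) and the quadrilateral where
# the document landed in the photo (made-up, perspective-distorted).
template = [(0, 0), (1, 0), (1, 1), (0, 1)]
detected = [(100, 50), (400, 80), (380, 420), (90, 380)]
H = homography_from_corners(template, detected)
print(project(H, (0.5, 0.5)))  # center of the template in image coordinates
```

With the inverse of such a transform, a projectively distorted document can be rectified back to its known physical rectangle, which is exactly why the projective group (rather than just rotations and shifts) has to be modeled.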
One of the ways to improve the ID document recognition quality on mobile devices (taking into account the above-mentioned problems) is to use not a single image as an input for the system, but a sequence of images: if we have algorithms fast enough to process one image, then after accumulating data from a set of images, we can not only filter out some "bad" input images but also improve the final OCR accuracy.
So let’s keep in mind the following: everything has to be performed accurately and fast at the same time, using a mobile device only, while taking into account random unpredictable circumstances when taking a picture. But we can use more than one image.
Today the Smart ID Engine system supports a large number of various documents, but when we were designing it, we mostly worked with documents with a fixed structure. Roughly speaking, a document with a fixed structure (or, to be exact, a page of one) is a document where all the static elements are always in the same places – if we were to remove all the personal details from two different copies of the document, the produced blanks would be identical.
During the first stage of image processing, the Smart ID Engine system detects the pages of a document, which we call "templates". The document templates we are looking for are defined beforehand: originally they are flat rectangles with known physical dimensions, but in the image they can be projectively distorted. Besides, one document can have several templates: the front and back sides of an ID card, or two pages of a passport.
We use three key approaches when searching for templates in an image:
Now that we know the coordinates of the template in the image and its precise type (which means we know its aspect ratio, its actual physical dimensions, etc.), one would think we could simply cut out each text field by its coordinates. That would be possible if text fields in documents with a fixed structure were always located in exactly the same spots. Unfortunately, the presence of static printed text (indicating where a certain field is supposed to be), or even of guiding underlines, doesn't mean that the fields themselves are printed identically. Both in Russia and in other countries, the printing quality of documents with a fixed structure is far from perfect. Text fields can be printed at an angle, over the static text, over the guiding underlines, or even in places they are not supposed to be. This means we have to use special algorithms designed to search for text fields in a given document.
During the text field search stage, we refer to the concept of a “document zone” (an ill-defined concept) – a region of the template that contains a set of the text fields we need, and these text fields have a very similar structure. For example, the fields are rotated at a certain angle due to the printing defects, but all the fields within a document zone are rotated at the same angle. In order to detect the fields, a document zone image is projectively cut out from the original document image with a minimum resolution that is adequate for the performance of the subsequent processing algorithms.
The text field search is carried out in two stages. First, the noise is removed by means of morphological transformations of the image, and the text fields are turned into nearly monochrome rectangular blobs.
After that, the template of the text field geometric layout is "pulled over" the document zone image using dynamic programming (remember that we know which document we are looking at, which text fields are supposed to be there, and approximately where they are located).
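As an illustration of the morphological step, here is a toy sketch (pure Python, not the actual algorithm): a 1D horizontal closing, dilation followed by erosion, that fills small inter-character gaps in an ink-presence profile, so a text field collapses into a solid blob while the large gaps between fields survive.

```python
def dilate(row, r=1):
    # A pixel becomes 1 if any pixel within radius r is 1.
    n = len(row)
    return [1 if any(row[j] for j in range(max(0, i - r), min(n, i + r + 1)))
            else 0 for i in range(n)]

def erode(row, r=1):
    # A pixel stays 1 only if all in-range pixels within radius r are 1.
    n = len(row)
    return [1 if all(row[j] for j in range(max(0, i - r), min(n, i + r + 1)))
            else 0 for i in range(n)]

def close(row, r=1):
    # Morphological closing: fills gaps narrower than roughly 2r + 1.
    return erode(dilate(row, r), r)

# Ink profile of a text line: two characters separated by a 1-pixel gap,
# then a wide inter-field gap, then another field.
profile = [1, 1, 0, 1, 1, 0, 0, 0, 1]
print(close(profile))  # → [1, 1, 1, 1, 1, 0, 0, 0, 1]
```

In 2D the same idea, applied with elongated horizontal structuring elements, merges characters into field-sized connected components that the layout-matching step can then work with.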
You can read more about this text field localization method, for a zone with an approximately known field layout template, in this article.
Now that we have found all the text fields, it's time to recognize them, right? Not quite yet. First, we can make the recognizer's task easier by pre-processing the images of these text fields based on a priori knowledge about the document and the structure of the recognized fields. For example, if we know that a specific text field sits inside a grid, or that it is underlined, we can remove the straight lines using morphological operations. Or if we know that the text fields are printed in italics, or are handwritten, we can correct the slant of the lettering using slant estimation based on the Fast Hough Transform (you can read about it here). During text field pre-processing, a group of several fields is usually analyzed at once, since the fields within one zone are usually slanted at the same angle or written in the same handwriting.
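As a toy stand-in for slant correction (the article's actual method relies on the Fast Hough Transform; this sketch only conveys the intuition), we can shear the glyph pixels by candidate slants and keep the one that makes the column histogram most concentrated, i.e. the one that makes the strokes upright.

```python
from collections import Counter

def slant_score(pixels, shear):
    # Shear each ink pixel horizontally by `shear` per row, then measure
    # how concentrated the column histogram is (sum of squared counts):
    # upright strokes pile into few columns and score high.
    cols = Counter(round(x - shear * y) for x, y in pixels)
    return sum(c * c for c in cols.values())

def estimate_slant(pixels, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    # Pick the shear that best "un-slants" the writing.
    return max(candidates, key=lambda s: slant_score(pixels, s))

# A synthetic stroke written with slant 1 (x grows together with y).
stroke = [(y, y) for y in range(10)]
print(estimate_slant(stroke))  # → 1.0
```

Once the slant is estimated for the whole zone, the same inverse shear is applied to every field in it, which matches the article's point that fields within one zone share the slant angle.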
By the time we are ready to perform the text field recognition, we already know what kind of text field it is, and which document it belongs to, which alphabet or non-alphabet script is used in the text field, and we possibly know some font characteristics. We use the two-step framework for text line recognition, and you can find a detailed explanation of it in this article.
First, we search for the baselines in the text field image using vertical projection analysis. After that, we build a horizontal projection, not just over the image pixels, but with the help of a special network trained to find the cuts between adjacent characters. Then, taking into account the allowed alphabet (if we have that kind of information), we form all the character candidates, and each candidate is recognized by a second, classifying network. Finally, using dynamic programming, we determine the optimal path through the cuts, which corresponds to the text field recognition result.
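The last step can be sketched as a shortest-path dynamic program over cut positions. The data below is made up: each candidate segment between two cuts carries the classifier's best character and its log-probability, and we pick the segmentation that maximizes the total score.

```python
def best_reading(n_cuts, segments):
    """segments: {(i, j): (char, log_prob)} for candidate spans between
    cut i and cut j. Returns the highest-scoring string spanning the field
    from cut 0 to cut n_cuts - 1.
    """
    NEG = float("-inf")
    best = [NEG] * n_cuts
    back = [None] * n_cuts   # (previous cut, char) for path reconstruction
    best[0] = 0.0
    for j in range(1, n_cuts):
        for (i, k), (ch, lp) in segments.items():
            if k == j and best[i] > NEG and best[i] + lp > best[j]:
                best[j], back[j] = best[i] + lp, (i, ch)
    # Walk back from the last cut to recover the characters.
    out, j = [], n_cuts - 1
    while j > 0:
        i, ch = back[j]
        out.append(ch)
        j = i
    return "".join(reversed(out))

# Hypothetical classifier output over 4 cut positions: reading the first
# two segments separately ("a" then "b") beats merging them into "x".
segs = {(0, 1): ("a", -1.0), (1, 2): ("b", -1.0),
        (0, 2): ("x", -3.0), (2, 3): ("c", -1.0)}
print(best_reading(4, segs))  # → "abc"
```

In the real system the per-segment scores come from the classifying network, and the alphabet constraints prune which (cut, cut) spans are even considered.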
The networks themselves are trained on artificially synthesized data (you can read about this in the article about the two-step framework). This means that if we need to add recognition support for a new document and don't yet have suitable networks, we simply need to specify a set of fonts, similar backgrounds, and the script type.
We can apply statistical post-processing algorithms to the recognition results of some text fields, using a priori knowledge about their syntax and semantics. These algorithms allow us to increase the expected accuracy of the result. We've already covered post-processing algorithms in this article.
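As a flavor of what such post-processing can look like (a simplified sketch, not the published algorithm): for a date field we can expand each recognized character into its alternatives from the recognizer, keep only the candidates that match the field's syntax, and return the most confident one. The alternatives below are hypothetical.

```python
import itertools
import re

# Per-position alternatives from the recognizer: (character, confidence).
# A made-up misread of "01.02.1990" where zeros came out as the letter "O".
field = [[("O", 0.6), ("0", 0.4)], [("1", 0.9)], [(".", 1.0)],
         [("O", 0.7), ("0", 0.3)], [("2", 0.9)], [(".", 1.0)],
         [("1", 0.9)], [("9", 0.9)], [("9", 0.9)], [("0", 0.8), ("O", 0.2)]]

DATE = re.compile(r"\d{2}\.\d{2}\.\d{4}$")

def best_valid(alternatives):
    # Enumerate combinations (fine for short fields), keep syntactically
    # valid dates, and pick the one with the highest confidence product.
    best, best_score = None, 0.0
    for combo in itertools.product(*alternatives):
        s = "".join(ch for ch, _ in combo)
        if DATE.match(s):
            score = 1.0
            for _, p in combo:
                score *= p
            if score > best_score:
                best, best_score = s, score
    return best

print(best_valid(field))  # → "01.02.1990"
```

A production system would use a language or grammar model rather than brute-force enumeration, but the principle is the same: syntax rules turn low-confidence alternatives into the correct reading.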
Now that we have the recognition results of the text fields in the document image, we can make up the result structure that we need to produce for the user. However, it will be just a recognition result for one image frame.
When using a sequence of images, Smart ID Engine assumes that it works with the same document in the different images within one recognition session (however, it doesn’t mean that it’s the same template – for example, when taking pictures of an ID-card within one session, the document might be flipped to show its backside). The accumulation of information from a few different images could be helpful at virtually any stage of the process of document analysis – from the search of a document (for additional filtering of the set of possible candidates and for the adjustment of geometric positioning) to the specific text field recognition.
The Smart ID Engine system uses a method that combines text field recognition results from multiple frames; it allows us to considerably improve recognition accuracy over a sequence of images and to make use of even partial recognition data (if, for example, a text field was covered by glare in some frames). You can read about such combination methods here, in this article.
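A drastically simplified sketch of the idea (the actual method in the linked article is more elaborate): confidence-weighted voting per character position across frames, where frames that failed to read the field, e.g. because of glare, simply abstain.

```python
from collections import defaultdict

def combine_frames(frames):
    """frames: per-frame recognition of one fixed-length field, each a list
    of (char, confidence) pairs, or None when the field was unreadable in
    that frame. Returns the confidence-weighted consensus string.
    """
    votes = defaultdict(lambda: defaultdict(float))
    for frame in frames:
        if frame is None:          # glare, blur, field out of view
            continue
        for pos, (ch, conf) in enumerate(frame):
            votes[pos][ch] += conf
    return "".join(max(votes[pos], key=votes[pos].get)
                   for pos in sorted(votes))

frames = [
    [("1", 0.9), ("2", 0.8), ("3", 0.7), ("4", 0.9)],
    None,                                  # frame lost to glare
    [("1", 0.8), ("2", 0.9), ("8", 0.4), ("4", 0.8)],  # one misread char
    [("1", 0.9), ("2", 0.9), ("3", 0.9), ("4", 0.9)],
]
print(combine_frames(frames))  # → "1234"
```

Note how the single low-confidence misread ("8") is outvoted by the other frames, and the glare-lost frame costs nothing: this is the sense in which even partial recognition data contributes.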
One more major aspect of working with a sequence of images is deciding when to stop. Generally speaking, there is no natural stopping point for video stream recognition: a stopping criterion has to be designed and implemented. It is essential to understand that besides the obvious purpose of the stopping rule (to finish image capture and provide the user with the final result), it also serves as an important mechanism for reducing document recognition time. If one text field has "stopped", we don't need to waste time recognizing it in the next frame, and if all the text fields of a zone have "stopped", we don't need to process the whole zone at all (no time is wasted searching for its text fields).
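One simple instance of a stopping rule, sketched here only for illustration (a production rule would also weigh confidences and the expected gain from one more frame), is to declare a field "stopped" once its combined result has stayed unchanged for N consecutive frames.

```python
class StabilityStoppingRule:
    """Stop once the combined field result is unchanged N frames in a row."""

    def __init__(self, n_stable=3):
        self.n_stable = n_stable
        self.last = None
        self.streak = 0

    def update(self, combined_result):
        # Feed the current inter-frame combined result after each frame;
        # returns True when the field can stop being recognized.
        if combined_result == self.last:
            self.streak += 1
        else:
            self.last, self.streak = combined_result, 1
        return self.streak >= self.n_stable

rule = StabilityStoppingRule(n_stable=3)
for result in ["IVANOV", "IVAN0V", "IVANOV", "IVANOV", "IVANOV"]:
    if rule.update(result):
        print("stopped on:", result)  # fires on the 5th frame
        break
```

The trade-off is explicit: a larger N buys more confidence at the cost of more frames, which is exactly the accuracy-versus-time balance the stopping rule is meant to manage.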
Last but not least among the aspects of working with a video stream is the analysis of holograms and other OVDs (Optically Variable Devices: security features of a document that change their appearance depending on the lighting and the viewing angle). In order to detect them and verify their presence, a sequence of images is required by definition: it is impossible to verify the presence of such an element from a single picture.
In this article, we have tried to describe the core elements and key principles of our optical character recognition system that we focused on during its development. It goes without saying that we cannot cover all the components and subtleties of an identity document recognition system in this format. For this reason, such components as the copy detector, key object detection (coats of arms, photos, signatures, stamps), fast detection of the machine-readable zone (MRZ) and its use for aiding document location and classification, text field formatting before inter-frame combination, estimation of recognition confidence, and many others were left out. Even a rough description of the system required dozens of links to scientific papers, and the solution to a problem that might seem easy to some is achieved only through serious scientific work by a large group of people. We publish about 40 scientific papers on document recognition every year and regularly file patent applications.
We hope that the reasons behind our choice of recognition approaches have become clear after reading this article. To achieve maximum accuracy and speed of optical character recognition, we use not only machine learning methods but classic computer vision as well; we start the recognition process only when we have ample information about the recognized object, and we do our best not to recognize anything unnecessary. We are constantly improving our methods and algorithms, increasing document processing accuracy and speed, and adding new features for our esteemed clients.