
October 20, 2020 | Sci-Dev

A few facts about cascade classifiers in optical recognition that are rarely talked about in scientific papers

Today we’d like to talk about optical recognition again. The main subject of this article is a simple recognizer model: the cascade classifier, used in the popular Viola-Jones framework. The sad part is that, despite the vast number of publications, cascade classifiers have rarely been studied thoroughly. Although the cascade classifier seems simple, it has many pitfalls that need to be avoided and some interesting features that are worth mentioning. We can’t wait to share our knowledge with you. If you’d like to learn more, keep reading!

The cascade classifier has a very simple model. It consists of several consecutive stages, each of which is a binary classifier. The object enters the first stage and then “travels” through the consecutive stages. If at some stage the classifier discards the object (classifies it as a negative example), the object is not analyzed by the remaining stages of the cascade. This simple model became widely popular after the Viola-Jones object detection framework was published [1], where, as the authors claimed, it was used to achieve high performance. Was that the only reason for its use, though? Is the cascade classifier really that simple? Let’s figure it out!
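To make the model concrete, here is a minimal Python sketch of how such a cascade evaluates an object. The stage objects are hypothetical placeholders for trained binary classifiers; the only point is the early-rejection logic.

    from typing import Callable, Sequence

    # A "stage" is any callable that maps a feature vector to True (accept) / False (reject).
    Stage = Callable[[Sequence[float]], bool]

    def cascade_classify(x: Sequence[float], stages: Sequence[Stage]) -> bool:
        """Pass the object x through the stages; the first rejection is final."""
        for stage in stages:
            if not stage(x):
                return False  # discarded: the remaining stages are never evaluated
        return True  # accepted by every stage of the cascade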

The cascade classifier in the Viola-Jones method is used simply to speed up the object detector

On the first page of the original article [1] there is the following line:

“The third major contribution of this paper is a method for combining successively more complex classifiers in a cascade structure which dramatically increases the speed of the detector by focussing attention on promising regions of the image.”

Indeed, the original Viola-Jones framework was designed for face detection in images, and the detection problem is solved with the sliding-window technique using a binary classifier: the classifier is applied at every position of the image at several scales. The imbalanced nature of the data at the detection stage (in every image there are millions or even billions of times more “empty” regions without the object in question than regions containing it) led to the use of a cascade, an approach that quickly discards the “empty” regions.
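In practice you rarely write the sliding-window loop yourself. For example, OpenCV ships Haar cascades trained with the Viola-Jones framework together with a multi-scale detector; a minimal sketch (assuming opencv-python is installed and a file photo.jpg exists as a placeholder input) might look like this:

    import cv2

    # Load one of the Haar cascades bundled with OpenCV (trained with the Viola-Jones framework).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    image = cv2.imread("photo.jpg")  # hypothetical input image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detectMultiScale slides the cascade over the image at several scales;
    # most windows are rejected by the first stages almost immediately.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces.jpg", image)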

That’s what happens once we already have a trained classifier. Now let’s talk about how the classifier is trained. It turns out that the same data imbalance problem appears here: the number of negative samples is millions or even billions of times higher than the number of positive samples. Thanks to the cascade architecture, each new stage is trained with AdaBoost not on all the negative samples, but only on a limited set of false positives of the already trained stages of the cascade! This makes it possible to run the AdaBoost algorithm on a balanced and limited sample!
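Schematically, the negative set for every new stage is bootstrapped from the false positives of the cascade built so far. A rough sketch of this idea (train_adaboost_stage is a hypothetical AdaBoost trainer; cascade_classify is the evaluation function sketched above):

    def mine_hard_negatives(current_stages, negative_pool, limit):
        """Collect negatives that the current cascade still (wrongly) accepts."""
        hard = []
        for sample in negative_pool:
            if cascade_classify(sample, current_stages):  # a false positive of the current cascade
                hard.append(sample)
                if len(hard) == limit:
                    break
        return hard

    def train_cascade(positives, negative_pool, n_stages, negatives_per_stage):
        stages = []
        for _ in range(n_stages):
            # Only negatives that slipped through the previous stages are used,
            # so each AdaBoost run sees a limited, roughly balanced sample.
            hard_negatives = mine_hard_negatives(stages, negative_pool, negatives_per_stage)
            if not hard_negatives:
                break  # the current cascade already rejects everything in the pool
            stages.append(train_adaboost_stage(positives, hard_negatives))  # hypothetical trainer
        return stages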

As you can see, the use of a cascade classifier in the Viola-Jones method has two major advantages:

    1. It makes it possible to train the classifier easily without having to deal with the problem of an “infinite” training dataset.
    2. It ensures rapid discarding of the “empty” areas during the detection process, which leads to a higher performance rate, on average.

Let’s continue reviewing the classic cascade and address the question of performance.

 

All of the above shows that the cascade classifier is simply a tool for boosting performance

Let’s return to the purpose of the cascade, but this time from a different angle. If we look at the cascade classifier from the mathematical point of view, we can see that a cascade is a conjunctive form of strong classifiers, each of which is a linear combination of features:

    \[Cascade\left(x\right)=\prod_{i=1}^{N}\left[S_i\left(x\right)>0\right],\qquad S_i\left(x\right)=\sum_{t=1}^{T}{\alpha_t\cdot h_t\left(x\right)},\]

where \left[\bullet\right] is the indicator function, h_t are the features (weak classifiers), and \alpha_t are their weights.

When the number of available features is limited (a normal situation in practice when we aim for high performance), the conjunctive form of strong classifiers has greater expressive power than a single linear classifier. This is easy to see with a simple example: take a feature space consisting of two features, with the positive and negative objects located in this coordinate space as shown in the image below (the green ones are the positive objects, the red ones are the negative objects). Obviously, no single linear classifier can divide this sample correctly, yet a four-stage cascade classifier is guaranteed to solve the problem.
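The same effect is easy to reproduce with a toy example: take positives inside an axis-aligned square and negatives around it. No single line separates the two classes, while a four-stage cascade of half-plane classifiers (one per side of the square) classifies the sample perfectly. A small numpy sketch with made-up data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2D sample: positives inside a square centred at the origin, negatives clearly outside it.
    positives = rng.uniform(-1.0, 1.0, size=(200, 2))
    negatives = rng.uniform(-3.0, 3.0, size=(1000, 2))
    negatives = negatives[np.max(np.abs(negatives), axis=1) > 1.2]

    # Four "stages", each a trivial linear classifier checking one side of the square.
    stages = [
        lambda p: p[0] > -1.1,  # to the right of the left side
        lambda p: p[0] < 1.1,   # to the left of the right side
        lambda p: p[1] > -1.1,  # above the bottom side
        lambda p: p[1] < 1.1,   # below the top side
    ]

    def cascade(p):
        return all(stage(p) for stage in stages)  # conjunction of the four half-planes

    assert all(cascade(p) for p in positives)      # every positive passes all four stages
    assert not any(cascade(n) for n in negatives)  # every negative is rejected by some stage
    print("A 4-stage cascade separates a sample that no single line can.")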

So by using cascade classifiers, we not only get a higher performance, but we also get a greater expressive power than when using a single linear classifier with a limited number of features.

 

The cascade classifier provides sustained high levels of performance and can be used in real-time optical recognition systems

As we’ve mentioned earlier, the cascade framework delivers high performance by quickly discarding the “empty” regions, which vastly outnumber the regions containing the object. The processing time of an “empty” region is several times less than that of a region containing the object, for which the time is roughly proportional to the cascade length (the number of stages in the cascade).

Since the number of regions containing an object varies from image to image, the processing time per image varies as well. Because the regions containing an object are far fewer than the regions without one, the difference in total processing time between images is measured not in dozens of times but in tens of percent, which is still sizable for industrial optical recognition systems.

So the operating time of a cascade classifier can vary significantly from image to image. This means that when we assess the classifier’s performance, we have to measure the operating time both on average and in the worst case, and we always have to be prepared for these “time inconsistencies” when using cascade classifiers.
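When profiling such a detector it is therefore worth reporting both statistics explicitly. A trivial measurement sketch (detect and images are hypothetical: any detector function and any collection of test images):

    import statistics
    import time

    def measure(detect, images):
        """Run the detector on every image and report the average and the worst-case time."""
        times_ms = []
        for image in images:
            start = time.perf_counter()
            detect(image)  # e.g. a trained cascade applied in a sliding-window fashion
            times_ms.append((time.perf_counter() - start) * 1000.0)
        print(f"average: {statistics.mean(times_ms):.1f} ms, worst case: {max(times_ms):.1f} ms")
        return times_ms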

In our practice we have already run into serious issues caused by the large gap between the average-case and the worst-case operating time of a cascade classifier. As part of a toll road automation project, we were working on a vehicle type recognition problem, and one of our main subtasks was counting the vehicle’s wheel sets. Naturally, we could apply the Viola-Jones method to detect the wheels in the images. Due to the huge variety of wheels (see figure below), the trained cascade had to be quite long (20 stages). We saw the problems caused by the varying per-image processing time first-hand, and they seriously interfered with our attempts to build a real-time optical recognition system.

That’s when we generalized the concept of the classic cascade classifier into a full decision tree of strong classifiers and created a dedicated method for training such trees (remembering that the algorithm had to avoid the issue of the “infinite” training dataset). The details of this algorithm can be found in the paper [3]. We’ll just mention a few facts here:

    1. The longest path of the trained decision tree consists of 6 strong classifiers (the scheme of the trained decision tree-like classifier is shown in the image below).
    2. The trained decision tree-like classifier provided better recognition quality than the cascade classifier trained earlier. This is logical: the expressive power of a decision tree-like classifier (a conjunctive-disjunctive form) is higher than the expressive power of a cascade classifier (a conjunctive form).
    3. The trained decision tree-like classifier significantly outperformed the cascade classifier in the worst case, and was only marginally slower in the average case.

There is a quantitative comparison of a cascade classifier and a decision tree-like classifier presented in the table below.

Classifier                    | Sensitivity | Specificity | Average-case time, µs | Worst-case time, µs
Cascade classifier            | 93.55%      | 99.98%      | 58159                 | 67432
Decision tree-like classifier | 94.02%      | 99.99%      | 58717                 | 63552
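For intuition only, here is a conceptual sketch of how a decision tree of strong classifiers is evaluated (this is not the training algorithm from [3], just the evaluation model): unlike a cascade stage, a tree node can route a rejected object to an alternative branch instead of discarding it outright, and a missing child makes the decision final.

    class Node:
        """A node of a decision tree of strong classifiers (conceptual sketch)."""
        def __init__(self, strong_classifier, accept_child=None, reject_child=None):
            self.classify = strong_classifier  # a strong classifier: x -> True / False
            self.accept_child = accept_child   # followed when the stage fires
            self.reject_child = reject_child   # followed otherwise (None = final rejection)

    def tree_classify(x, node):
        """Walk the tree; a classic cascade is the special case with no reject children."""
        while True:
            if node.classify(x):
                if node.accept_child is None:
                    return True   # accepting leaf
                node = node.accept_child
            else:
                if node.reject_child is None:
                    return False  # rejecting leaf
                node = node.reject_child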

If you want to use cascade classifiers in real-time recognition systems, you have to keep in mind how differently they behave in the average case and in the worst case.

The cascade classifier training methods are so obvious that you don’t need to bother about them

It’s probably one of the most complex issues related to cascade classifiers. The thing is, in all the articles we came across, the training process of the cascade is described poorly and briefly, without a proper justification of the training algorithm’s efficiency. As a rule, the algorithm for training a cascade is presented like this (a rough code sketch of the loop is given after the list):

    1. Set the target false recognition rate (F) for the entire cascade.
    2. Set the target correct recognition rate (d) and false recognition rate (f < F) for each individual stage.
    3. Set aside a validation set for a fair evaluation of the quality indicators of the final classifier.
    4. Train each new stage of the cascade (recall that it is trained on all available positive examples and on the false positives of the current cascade) so that its indicators d_i and f_i meet the preset thresholds, i.e. d_i > d and f_i < f. By the way, the process of securing these indicators is interesting in itself.
    5. Add the newly trained stage to the cascade and evaluate the quality indicators of the cascade on the validation set. If the false recognition rate is lower than the target F, we can stop the training process. Otherwise, we return to step 4 and train another stage of the cascade.
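A rough sketch of this loop in code (train_stage, evaluate and mine_hard_negatives are hypothetical helpers; the last one was sketched earlier):

    def train_classic_cascade(positives, negative_pool, validation_set,
                              F_target, d_min, f_max):
        """Add stages until the cascade-level false recognition rate on the
        validation set drops below the target F (steps 4-5 of the recipe)."""
        stages = []
        while True:
            # Step 4: train a new stage on all positives and on the false positives
            # of the current cascade, so that d_i > d_min and f_i < f_max hold for it.
            hard_negatives = mine_hard_negatives(stages, negative_pool, limit=len(positives))
            stages.append(train_stage(positives, hard_negatives, d_min, f_max))

            # Step 5: evaluate the whole cascade on the validation set.
            _, false_recognition_rate = evaluate(stages, validation_set)
            if false_recognition_rate <= F_target:
                return stages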

If we have trained K stages of the cascade using the described algorithm, we can estimate the expected number of features computed per region, N, and the overall correct recognition rate of the cascade, D, as follows:

    \[N=n_1+\sum\limits_{i=2}^K\left( n_i\prod\limits_{j=1}^{i-1} p_j \right),\qquad D=\prod\limits_{i=1}^K d_i,\]

where n_i is the number of features (the complexity) of the i-th stage, p_j is the positive rate of the j-th stage (so the product is the probability that the i-th stage is evaluated at all), and d_i is the correct recognition rate of the i-th stage.
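As a worked example of these formulas, with made-up per-stage statistics rather than measured ones:

    # Hypothetical per-stage statistics: (n_i features, p_i positive rate, d_i detection rate).
    stage_stats = [(2, 0.5, 0.999), (10, 0.4, 0.998), (25, 0.3, 0.997), (50, 0.2, 0.995)]

    # Stage i is evaluated only if all previous stages fired, so its contribution to N
    # is weighted by the product of the previous positive rates.
    N, reach_prob = 0.0, 1.0
    for n_i, p_i, d_i in stage_stats:
        N += n_i * reach_prob
        reach_prob *= p_i

    # The overall correct recognition rate is the product of the per-stage rates.
    D = 1.0
    for _, _, d_i in stage_stats:
        D *= d_i

    print(f"expected features per region N = {N:.1f}, correct recognition rate D = {D:.4f}")
    # -> N = 15.0, D = 0.9890: a cheap first stage keeps N low even though later stages are heavy.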

As you can see, the described training algorithm does not take the cascade complexity into account at all, so it cannot be considered efficient in terms of performance. At the same time, researchers around the world agree that algorithms for effective cascade training are extremely important. To demonstrate this, here is a quote from the article by Paul Viola and Michael Jones [4]:

The overall training process involves two types of tradeoffs. In most cases classifiers with more features will achieve higher detection rates and lower false positive rates. At the same time classifiers with more features require more time to compute. In principle one could define an optimization framework in which

– the number of classifier stages,

– the number of features, n_i, of each stage,

– the threshold of each stage

are traded off in order to minimize the expected number of features N given a target for F and D. Unfortunately finding this optimum is a tremendously difficult problem.

It’s worth mentioning that our review of the literature, carried out in 2016, showed that no effective algorithm for training cascade classifiers had been proposed yet. That was when we tried to solve this problem at Smart Engines. We studied the following functional, which depends on the Type I detection error rate (E_1), the Type II detection error rate (E_2), and the average complexity of the detector (N):

    \[F(E_1, E_2, N) = \beta_1 \cdot E_1 + \beta_2 \cdot E_2 + \beta_3 \cdot N\]

The choice of the parameters \beta_1, \beta_2, and \beta_3 sets the relative cost of Type I errors, Type II errors, and the complexity of the produced detector. At each stage of the cascade training process we then used a greedy brute-force search over the control parameters in order to obtain cascades that minimize this functional. An in-depth description of the algorithm is beyond the scope of this article, but we are always ready to share a helpful bibliographic reference [5] so that you, our dear reader, can learn more.
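To illustrate the general idea only (this is a sketch of a per-stage greedy search, not the algorithm from [5]; candidate_params and train_and_measure are hypothetical):

    def greedy_stage_search(current_cascade, candidate_params, train_and_measure,
                            beta1, beta2, beta3):
        """Pick, for the next stage, the control parameters that minimize
        F = beta1*E1 + beta2*E2 + beta3*N for the resulting cascade."""
        best_stage, best_cost = None, float("inf")
        for params in candidate_params:  # brute-force enumeration of control parameters
            stage, E1, E2, N = train_and_measure(current_cascade, params)  # hypothetical helper
            cost = beta1 * E1 + beta2 * E2 + beta3 * N
            if cost < best_cost:
                best_stage, best_cost = stage, cost
        return best_stage, best_cost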

 

Conclusion

To sum up everything we talked about in this article, we want to draw the following conclusions about the cascade classifier model:

    1. There are very few scientific papers that thoroughly cover all the necessary details of the model.
    2. It proves very useful to re-read scientific papers and analyze the way they present a model’s performance, characteristics and limitations, even when everything seems to be spelled out at first glance.
    3. There is always room for quality scientific research, even if the model was proposed 20 years ago.

References

[1] Viola P., Jones M. J. Robust real-time object detection // International Journal of Computer Vision. 2001.

[2] Bourdev L., Brandt J. Robust object detection via soft cascade // 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). IEEE, 2005. Vol. 2. P. 236-243.

[3] Minkina A. et al. Generalization of the Viola-Jones method as a decision tree of strong classifiers for real-time object recognition in video stream // Seventh International Conference on Machine Vision (ICMV 2014). International Society for Optics and Photonics, 2015. Vol. 9445. P. 944517.

[4] Viola P., Jones M. J. Robust real-time face detection // International Journal of Computer Vision. 2004. Vol. 57. No. 2. P. 137-154.

[5] Polyakov I. V., Kuznetsova E. G., Usilin S. A., Nikolaev D. P. Training optimal Viola-Jones detectors using greedy algorithms for selecting control parameters with intermediate validation on each level // Sensornye Sistemy [Sensory Systems]. 2016. Vol. 30(3). P. 241-248 (in Russian).

 

 
