07.05.2021

A curious approach to unimodal signal filtering

This technique makes it possible to filter a noisy signal effectively using the a priori knowledge that the signal is unimodal.

When the expected structure of a signal is known up to a few unknown parameters, offline filtering reduces to an approximation problem. For example, if we know that the signal grows linearly over a certain interval, we should perform a linear regression, and if we additionally assume Gaussian noise, the right tool is the least squares method. We once encountered the problem of analyzing the profile of an X-ray microprobe (beam), where the only available information was that the profile is unimodal, i.e. it has exactly one maximum. It turns out that even in this case the experimental signal can be approximated optimally (for example, in the L2 metric) by a function from the known class (here, the class of unimodal functions), and with acceptable asymptotic computational complexity.


In this article we consider the problem of approximating a noisy signal Y, given as a function of a discrete argument X, knowing that the original signal is unimodal. In our case the signal also decays to zero at the edges, although the method described below does not rely on this assumption.

First, for the sake of simplicity, it makes sense to consider the same problem for a monotone function. The description below is given for a monotone non-decreasing function; of course, all of the steps can be carried out for a monotone decreasing function as well.

If we assume that the noise distribution is normal, then the problem consists in minimizing the quadratic functional:

    \[ \sum_i (\widetilde{Y}(X_i) - Y(X_i))^2 \rightarrow \min \quad \text{subject to} \quad \widetilde{Y}(X_{i+1}) \ge \widetilde{Y}(X_i). \]


This is a quadratic programming problem, and ready-to-use solvers for it are easy to find. But, first, quadratic programming is generally rather slow, and second, the unimodal approximation problem cannot be stated in this form when the position of the extremum is not known in advance.
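
For reference, here is a minimal sketch of how the monotone case could be handed to a generic solver, assuming scipy is available (the function name monotone_lsq is illustrative). Writing \widetilde{Y}_i = d_0 + d_1 + \dots + d_i with d_k \ge 0 for k \ge 1 turns the monotonicity constraints into simple bounds, so bound-constrained linear least squares applies.

    import numpy as np
    from scipy.optimize import lsq_linear

    def monotone_lsq(y):
        n = len(y)
        A = np.tril(np.ones((n, n)))          # cumulative sums: Y~ = A @ d
        lb = np.r_[-np.inf, np.zeros(n - 1)]  # d_0 is free, all increments are >= 0
        res = lsq_linear(A, np.asarray(y, dtype=float),
                         bounds=(lb, np.full(n, np.inf)))
        return A @ res.x                      # the fitted non-decreasing signal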

There is an alternative solution that does not have the disadvantages mentioned above, but it requires treating the scale of the signal \widetilde{Y} as discrete. It is worth noting right away that the problem is not very challenging when there is little noise: ordinary linear filtering will most likely handle it in practice. The interesting case is therefore strong noise. Figure 1 shows the model signal with and without such noise; the noise is drawn from the normal distribution \mathcal{N}(0, 0.8^2). Under these circumstances, discretizing the approximating signal is not a significant loss. Besides, as will be seen later, the computational complexity grows only linearly with the number of samples.
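
For concreteness, a model signal like the one in Figure 1 could be generated along the following lines; the sigmoid shape is an illustrative assumption, and only the noise level \mathcal{N}(0, 0.8^2) is taken from the text.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-5.0, 5.0, 200)
    clean = 1.0 / (1.0 + np.exp(-x))                 # a monotone model signal
    noisy = clean + rng.normal(0.0, 0.8, x.shape)    # strong additive Gaussian noise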


Figure 1. a) Model monotone signal without any noise added. b) Model monotone signal with some noise added.

It turns out that, with a discretized approximation, filtering an ordinary noisy monotone signal reduces to dynamic programming over two parameters, and from this solution a computational scheme of the same complexity can be built for the unimodal case as well.

Monotone signal filtering using the dynamic programming method

In our case the dynamic programming table is two-dimensional: its columns i hold the values of the discrete argument X of the test points, increasing from left to right, and its rows j hold the values of the approximating function \widetilde{Y}, increasing from bottom to top. The cost function being minimized is the sum of squared deviations of \widetilde{Y} from Y along paths of valid transitions. Clearly, the valid transitions are moves by 1 to the right combined with a non-negative shift upwards.

The outer loop runs over the coordinate i from left to right. For each value of X, the inner loop runs over the coordinate j from bottom to top. The minimum value of the cost function is computed by the formula:

    \[ W_{i,j}=\min\limits_{k\le j}(W_{i-1,k}) + (Y_i - \widetilde{Y}_j)^2, \]


and it is considered to be zero outside the table.

To avoid recomputing the minimum over the lower values of j every time, it is worth storing it together with the cost value while the table is being filled. And since the path will have to be reconstructed at the end, it also makes sense to store the index m at which this minimum is attained:

    \[ m_{i,j}=\left\{\begin{aligned} &m_{i,j-1}, &\quad W_{i,m_{i,j-1}} <W_{i,j}\\&j, &\quad \text{otherwise} \end{aligned} \right. \]


Then the cost function can be computed in O(1) per cell:

    \[ W_{i,j} = W_{i-1,m_{i-1,j}} + (Y_i - \widetilde{Y}_j)^2. \]


After the forward pass is complete, it remains to recover the approximation in explicit form. To do so, we find the minimum of the cost function in the rightmost column (the column with the largest coordinate i), and then perform a backward pass over i, restoring the index j^* of the filtered signal in each column:

    \[ j_i^*=\left\{ \begin{aligned} &\mathrm{argmin}_j(W_{n-1,j}), &\quad i=n-1\\ &m_{i,j^*_{i+1}},&\quad i<n-1\end{aligned} \right. \]


The final values \widetilde{Y} are determined as

    \[ \widetilde{Y}(X_i)=\widetilde{Y}_{j_i^*}. \]
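
Putting the forward and backward passes together, here is a minimal Python sketch of the monotone filter. It assumes a uniform grid of levels between the minimum and maximum of the measured signal; names such as filter_monotone and n_levels are illustrative and not taken from the original implementation.

    import numpy as np

    def filter_monotone(y, n_levels=64):
        """Least-squares fit of a non-decreasing step function over a discrete
        set of levels, using the dynamic programming scheme described above."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        levels = np.linspace(y.min(), y.max(), n_levels)  # candidate values of Y~

        # W[i, j]: minimal cost of a non-decreasing path ending at level j in column i
        # m[i, j]: index k <= j minimizing W[i, k] (running argmin within the column)
        W = np.empty((n, n_levels))
        m = np.empty((n, n_levels), dtype=int)

        for i in range(n):
            W[i] = (y[i] - levels) ** 2
            if i > 0:
                W[i] += W[i - 1, m[i - 1]]   # min over k <= j of the previous column
            m[i, 0] = 0
            for j in range(1, n_levels):     # fill the running argmin bottom-up
                prev = m[i, j - 1]
                m[i, j] = prev if W[i, prev] < W[i, j] else j

        # backward pass: restore the level index in every column
        j_star = np.empty(n, dtype=int)
        j_star[-1] = int(np.argmin(W[-1]))
        for i in range(n - 2, -1, -1):
            j_star[i] = m[i, j_star[i + 1]]
        return levels[j_star]

Applied to the noisy model signal above, filter_monotone(noisy) returns a non-decreasing approximation on a 64-level grid.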


Figure 2 shows an example of the result of the described algorithm performance for a monotone signal.


Figure 2. Filtering of a monotonically increasing signal by dynamic programming. The noisy input signal is shown in blue, the result of the algorithm in green, and the ideal noise-free signal in red.

And that concludes the auxiliary monotone problem.

Unimodal signal

Using this result, the unimodal problem can be solved by filling two dynamic programming tables: one exactly as in the monotone case, the other in the same way but traversed from right to left. This yields two cost functions, W^{lr} and W^{rl}, along with the values m^{lr} and m^{rl} used to reconstruct the paths in the tables. The next step is to find, over all i and j, the pair (i_0,j_0) for which the sum of the two cost functions is minimal; these are the indices of the maximum of the approximating function. To either side of the maximum, the values of the function are reconstructed from the corresponding table.

    \[ (i_0,j_0)=\mathrm{argmin}_{i,j}(W_{i,j}^{lr}+W_{i,j}^{rl}) \]


    \[ j_i^*=\left\{ \begin{aligned} &j_0, &\quad i=i_0\\ &m_{i,j_{i+1}^*}^{lr}, &\quad i<i_0\\ &m_{i,j_{i-1}^*}^{rl}, &\quad i>i_0 \end{aligned} \right. \]


    \[ \widetilde{Y}(X_i)=\widetilde{Y}_{j_i^*} \]
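
A minimal sketch of this scheme, under the same assumptions and illustrative naming as the monotone sketch above (the forward pass is repeated so that the snippet stays self-contained):

    import numpy as np

    def _forward_pass(y, levels):
        """Fill the cost table W and the running-argmin table m for a
        non-decreasing approximation, scanning y from left to right."""
        n, n_levels = len(y), len(levels)
        W = np.empty((n, n_levels))
        m = np.empty((n, n_levels), dtype=int)
        for i in range(n):
            W[i] = (y[i] - levels) ** 2
            if i > 0:
                W[i] += W[i - 1, m[i - 1]]
            m[i, 0] = 0
            for j in range(1, n_levels):
                prev = m[i, j - 1]
                m[i, j] = prev if W[i, prev] < W[i, j] else j
        return W, m

    def filter_unimodal(y, n_levels=64):
        y = np.asarray(y, dtype=float)
        n = len(y)
        levels = np.linspace(y.min(), y.max(), n_levels)

        # left-to-right (non-decreasing) and right-to-left (non-increasing) tables
        W_lr, m_lr = _forward_pass(y, levels)
        W_rl, m_rl = _forward_pass(y[::-1], levels)
        W_rl, m_rl = W_rl[::-1], m_rl[::-1]       # re-index to the original axis

        # the peak (i0, j0) minimizes the sum of the two cost tables
        i0, j0 = np.unravel_index(np.argmin(W_lr + W_rl), W_lr.shape)

        # reconstruct the level indices to the left and to the right of the peak
        j_star = np.empty(n, dtype=int)
        j_star[i0] = j0
        for i in range(i0 - 1, -1, -1):           # left branch, from m^{lr}
            j_star[i] = m_lr[i, j_star[i + 1]]
        for i in range(i0 + 1, n):                # right branch, from m^{rl}
            j_star[i] = m_rl[i, j_star[i - 1]]
        return levels[j_star]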


Figures 3 and 4 show an example of the original unimodal signal and the result of processing it with the proposed algorithm.


Figure 3. Noisy unimodal signal.


Figure 4. Unimodal signal filtering by means of dynamic programming. The noisy input signal is shown in blue, the result of the algorithm in green, and the ideal noise-free signal in red.

Restriction on the derivative

If we know that the signal cannot change faster than some fixed rate v (which is often the case), the accuracy of the approximation can be improved by introducing this restriction into the algorithm explicitly. It suffices to compute, for each i, the maximum possible change d j_i of the signal value index, and to forbid transitions in the dynamic programming table whose change in j exceeds this value:

    \[ d j_i = \lceil v \cdot (X_i -X_{i-1})/(\widetilde{Y}_1 - \widetilde{Y}_0) \rceil \]


    \[ W_{i,j}=\min\limits_{j-d j_i \le k \le j}(W_{i-1,k}) + (Y_i - \widetilde{Y}_j)^2 \]


However, in order not to increase the complexity, we need to be able to obtain the minimum of the cost function over the range [j - d j_i, j] in constant time.

And this value can be calculated with the help of the van Herk/Gil-Werman algorithm.
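
The article refers to van Herk/Gil-Werman; the sketch below uses a monotonic-deque sliding-window minimum instead, a common alternative with the same amortized O(1) cost per element, simply because it is shorter. The function name windowed_min is illustrative; the trailing comment shows where it would enter the restricted recurrence.

    from collections import deque
    import numpy as np

    def windowed_min(prev_col, window):
        """out[j] = min(prev_col[max(0, j - window + 1) : j + 1])."""
        out = np.empty_like(prev_col)
        dq = deque()                          # candidate indices, values increasing
        for j, w in enumerate(prev_col):
            while dq and prev_col[dq[-1]] >= w:
                dq.pop()                      # dominated entries can never be the minimum
            dq.append(j)
            if dq[0] <= j - window:
                dq.popleft()                  # drop entries that left the window
            out[j] = prev_col[dq[0]]
        return out

    # Inside the loop over columns i, the restricted recurrence
    #   W[i, j] = min_{j - dj_i <= k <= j} W[i - 1, k] + (y[i] - levels[j])**2
    # becomes a single vectorized update (the window spans dj_i + 1 levels):
    #   W[i] = windowed_min(W[i - 1], dj_i + 1) + (y[i] - levels) ** 2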

The result of applying the algorithm with the derivative restriction v = 1.2 is shown in Figure 5.


Figure 5. Unimodal signal filtering by dynamic programming with a restriction on the rate of change of the signal. The noisy input signal is shown in blue, the result of the algorithm in green, and the ideal noise-free signal in red.

Exposing “black magic”

A curious reader might suspect that the same or even better results could be achieved by simpler means. Indeed, for certain signal and noise parameters this method may be overkill. Besides, the best shortcut is a road already travelled, which is why a practitioner usually has a favorite method for every tricky situation, one they know how to tune to get the desired result.

To dispel any suspicion of cheating, here are examples of the same signal processed by Gaussian smoothing and by median filtering with downsampling. The filter parameters were chosen to give good results, though without excessive effort.
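
For reference, the two baselines could look roughly as follows, assuming scipy is available; the parameters and the exact reading of “median filtering with downsampling” are illustrative and not the ones used for Figures 6 and 7.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import medfilt

    def gaussian_baseline(noisy, sigma=5.0):
        return gaussian_filter1d(np.asarray(noisy, dtype=float), sigma=sigma)

    def median_baseline(noisy, kernel_size=9, step=4):
        # decimate, median-filter the coarse signal, then interpolate back
        coarse = medfilt(np.asarray(noisy, dtype=float)[::step], kernel_size=kernel_size)
        return np.interp(np.arange(len(noisy)),
                         np.arange(0, len(noisy), step), coarse)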


Figure 6. Unimodal signal filtering by Gaussian smoothing. The noisy input signal is shown in blue, the filtering result in green, and the ideal noise-free signal in red.


Figure 7. Median filtering of the unimodal signal. The noisy input signal is shown in blue, the filtering result in green, and the ideal noise-free signal in red.

As you can see, on this particular data the dynamic programming method gives better results than the simplest popular filters. A comprehensive comparison on model examples would not make much sense, though. The value of the described method is that it is a perfect example of a “tailored suit”: the method is built directly from the formalized problem statement, which in our case was the requirement of unimodality. In the same way, linear methods are well suited to signals that can be separated from the noise by frequency. One more instrument in the engineer’s toolbox; what could be better?
