07.05.2021

This method can be applied effectively to filtering a noisy signal when we know a priori that the signal is unimodal.

When the expected structure of a signal is known up to a few parameters, offline filtering reduces to an approximation problem. For example, if we know that the signal grows linearly over a certain interval, we perform a linear regression, and if we additionally assume Gaussian noise, the right tool is the least squares method. We once encountered the problem of analyzing the profile of an X-ray microprobe (beam), where the only available information was that the profile is unimodal, i.e. it has exactly one maximum. It turns out that even in this case it is possible to find, with acceptable asymptotic computational complexity, the best approximation (in the $L_2$ metric, say) of the experimental signal by a function from the known set, here the set of unimodal functions.

In this article we discuss the problem of approximating a noisy signal given as a function of a discrete argument, knowing that the original signal is unimodal. In our case the signal also asymptotically approaches zero, although the method described below does not rely on this assumption.

First, for the sake of simplicity, it makes sense to consider the same problem for a monotone function. The description below is given for a monotonically non-decreasing function; all of the steps carry over to a monotonically decreasing function as well.

If we assume that the noise distribution is normal, the problem consists in minimizing the quadratic functional

$$\sum_{i=1}^{n} (y_i - f_i)^2 \to \min, \qquad f_i \le f_{i+1},$$

where $y_i$ are the measured samples and $f_i$ are the values of the approximating non-decreasing function.

This is a quadratic programming problem, and ready-to-use solvers for it can be found on the Internet. But, first, general-purpose quadratic programming is rather slow, and second, the unimodal approximation problem cannot be stated in this form at all when the position of the extremum is not known in advance.
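As an aside, the purely monotone version of this least-squares problem (isotonic regression) also has an exact linear-time solution, the pool-adjacent-violators algorithm. The article does not use it, but a minimal sketch for comparison:

```python
def isotonic_l2(y):
    """Exact least-squares non-decreasing fit (pool adjacent violators).

    Maintains blocks of (sum, count); whenever the last block's mean
    exceeds the mean of the incoming block, the blocks are merged.
    """
    blocks = []  # each entry: (sum of values, number of values)
    for v in y:
        s, c = float(v), 1
        # Merge while monotonicity of the block means is violated.
        while blocks and blocks[-1][0] * c > s * blocks[-1][1]:
            ps, pc = blocks.pop()
            s, c = s + ps, c + pc
        blocks.append((s, c))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # each merged block is replaced by its mean
    return out
```

Unlike the dynamic programming scheme the article builds next, this works on a continuous range of values, but it does not extend directly to the unimodal case with an unknown extremum position.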

There is an alternative solution that does not have these disadvantages, although it requires treating the range of the signal values as discrete. It is worth noting right away that the problem is not very challenging when there is little noise: ordinary linear filtering will most likely take care of practical issues. The interesting case is strong noise. Figure 1 shows the model signal with and without such noise; the noise is drawn from a normal distribution. Under such circumstances, discretizing the approximating signal is not a considerable loss. Besides, as will be seen later, the computational complexity grows only linearly with the number of samples.

*Figure 1. a) Model monotone signal without any noise added. b) Model monotone signal with some noise added.*
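For experimentation, a model setup along these lines can be generated as follows (the logistic ramp and the noise level are my own illustrative choices, not the article's exact parameters):

```python
import math
import random

def model_signal(n=200):
    """A hypothetical smooth monotone model: a logistic ramp over n samples."""
    return [1.0 / (1.0 + math.exp(-10.0 * (i / (n - 1) - 0.5)))
            for i in range(n)]

def add_noise(signal, sigma, seed=0):
    """Additive zero-mean Gaussian noise, as in the article's model example."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in signal]
```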

It turns out that, with a discretized range of values, filtering a noisy monotone signal reduces to dynamic programming over two parameters, and this solution lets us build a computational scheme of the same complexity for the unimodal case as well.

In our case the dynamic programming table is two-dimensional: its columns, indexed by $i$, correspond to the values of the discrete argument increasing from left to right, and its rows, indexed by $j$, correspond to the discretized values of the approximating function increasing from bottom to top. The composite function being minimized is the sum of the squared deviations from the measured samples along paths with valid transitions. Clearly, the valid transitions are moves by one column to the right combined with a non-negative shift upwards.

The outer loop runs along the $i$ coordinate from left to right; the inner loop, for each column, runs along the $j$ coordinate from bottom to top. The minimum value of the composite function is calculated by the recurrence

$$F(i, j) = (y_i - v_j)^2 + \min_{m \le j} F(i-1, m),$$

where $y_i$ is the measured sample in column $i$, $v_j$ is the discretized value in row $j$, and $F$ is taken to be zero outside the table.

To avoid recomputing the minimum over the lower values of $F(i-1, m)$ every time, it is wise to store the running minimum over $m \le j$ together with the value of the composite function while the table is being filled in. And since the path will have to be reconstructed at the end, it also makes sense to store the value of $m$ at which this minimum is attained.

The composite function can then be computed in constant time for each cell of the table.

Upon completion of the forward pass, it remains to obtain the approximation in explicit form. To do so, we find the minimum among the values of the composite function in the last, rightmost column (the one with the highest $i$ coordinate), and then perform the backward pass, restoring in each column the row index, and hence the value, of the filtered signal.

The final values of the filtered signal are the discretized values $v_j$ corresponding to the row indices restored in each column.
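The forward and backward passes described above can be sketched as follows (a minimal illustration; the names `y` for the noisy samples and `levels` for the sorted grid of discretized values are my own):

```python
def monotone_dp_filter(y, levels):
    """Least-squares fit of a non-decreasing sequence to the samples y,
    with values restricted to the ascending grid `levels`.
    Dynamic programming: O(n * k) for n samples and k levels."""
    n, k = len(y), len(levels)
    prev = [(y[0] - v) ** 2 for v in levels]  # composite function, first column
    back = []  # back[i-1][j]: level index in column i-1 attaining the prefix minimum
    for i in range(1, n):
        run_min, run_arg = float("inf"), 0
        mins, args = [0.0] * k, [0] * k
        for j in range(k):  # running minimum over m <= j of the previous column
            if prev[j] < run_min:
                run_min, run_arg = prev[j], j
            mins[j], args[j] = run_min, run_arg
        back.append(args)
        prev = [(y[i] - levels[j]) ** 2 + mins[j] for j in range(k)]
    # Backward pass: restore the optimal level index in every column.
    j = min(range(k), key=lambda t: prev[t])
    idx = [0] * n
    for i in range(n - 1, 0, -1):
        idx[i] = j
        j = back[i - 1][j]
    idx[0] = j
    return [levels[t] for t in idx]
```

The running minimum and its argument are stored while filling each column, so every cell costs constant time, as in the article's scheme.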

Figure 2 shows an example of the described algorithm applied to a monotone signal.

*Figure 2. Monotonically increasing signal filtering using dynamic programming. The input noisy signal is shown in blue, the result of the algorithm in green, and the ideal noise-free signal in red.*

This concludes the auxiliary monotone problem.

Using this result, we can solve the unimodal problem by filling in two dynamic programming tables: one exactly as in the monotone case, and the second in the same way but in the opposite direction. This yields two composite functions, one accumulated from the left and one from the right, together with the stored indices used to reconstruct the paths in each table. The next step is to find, over all columns and rows, the pair of indices for which the sum of the two composite functions is minimal: these are the argument and value indices of the approximated function's maximum. The values of the function on either side of the maximum are then reconstructed from the corresponding table.
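Under the same assumptions, the two-table unimodal scheme might look like this (again a sketch with my own naming; note that the sample term of the peak column appears in both tables, so one copy is subtracted when combining them):

```python
def _dp_tables(y, levels):
    """Forward DP for a non-decreasing fit on the ascending grid `levels`.
    Returns cost[i][j] (best cost of a non-decreasing path over columns
    0..i ending at level j) and back[i][j] (predecessor level index)."""
    n, k = len(y), len(levels)
    cost = [[(y[0] - v) ** 2 for v in levels]]
    back = [list(range(k))]
    for i in range(1, n):
        run_min, run_arg = float("inf"), 0
        mins, args = [0.0] * k, [0] * k
        for j in range(k):
            if cost[i - 1][j] < run_min:
                run_min, run_arg = cost[i - 1][j], j
            mins[j], args[j] = run_min, run_arg
        cost.append([(y[i] - levels[j]) ** 2 + mins[j] for j in range(k)])
        back.append(args)
    return cost, back

def unimodal_dp_filter(y, levels):
    """Least-squares unimodal fit on the grid `levels`: a left table for the
    rising part and a mirrored table for the falling part."""
    n, k = len(y), len(levels)
    fc, fb = _dp_tables(y, levels)
    rc, rb = _dp_tables(y[::-1], levels)
    rc, rb = rc[::-1], rb[::-1]  # re-index the mirrored tables left to right
    best, pi, pj = float("inf"), 0, 0
    for i in range(n):
        for j in range(k):
            # The sample term of the peak cell enters both tables;
            # subtract one copy so it is counted once.
            s = fc[i][j] + rc[i][j] - (y[i] - levels[j]) ** 2
            if s < best:
                best, pi, pj = s, i, j
    idx = [0] * n
    j = pj
    for i in range(pi, -1, -1):  # left of the peak: forward table
        idx[i] = j
        j = fb[i][j]
    j = pj
    for i in range(pi, n):       # right of the peak: mirrored table
        idx[i] = j
        j = rb[i][j]
    return [levels[t] for t in idx]
```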

Figures 3 and 4 show an example of the original unimodal signal and the result of processing it with the suggested algorithm.

*Figure 3. Noisy unimodal signal.*

*Figure 4. Unimodal signal filtering by means of dynamic programming. The noisy input signal is shown in blue, the result of the algorithm in green, and the ideal noise-free signal in red.*

If we know that the signal cannot change faster than at some fixed rate (which is often the case in practice), the accuracy of the regression can be improved by imposing this restriction on the algorithm explicitly. All we need to do is compute the maximum admissible change of the row index per step of the argument and forbid transitions in the dynamic programming table that exceed it.

However, to keep the complexity from growing, we must be able to obtain the minimum of the composite function over a range of rows in constant time.

Such a windowed minimum can be computed with the help of the van Herk/Gil-Werman algorithm.
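A sketch of the van Herk/Gil-Werman windowed minimum (three passes over the data, independent of the window width); in the restricted scheme it would replace the running prefix minimum over the previous column's costs:

```python
def sliding_min(a, w):
    """Minimum over every length-w window of a
    (van Herk / Gil-Werman: three passes, cost independent of w)."""
    n = len(a)
    pad = (-n) % w                      # pad length to a multiple of w;
    a = list(a) + [float("inf")] * pad  # padding never wins a minimum
    m = len(a)
    pre = a[:]  # prefix minima inside each block of w
    suf = a[:]  # suffix minima inside each block of w
    for i in range(1, m):
        if i % w:
            pre[i] = min(pre[i], pre[i - 1])
    for i in range(m - 2, -1, -1):
        if (i + 1) % w:
            suf[i] = min(suf[i], suf[i + 1])
    # The window starting at i spans a block suffix and the next block's prefix.
    return [min(suf[i], pre[i + w - 1]) for i in range(n - w + 1)]
```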

The result of applying the algorithm with the restriction on the derivative is shown in Figure 5.

*Figure 5. Unimodal signal filtering by dynamic programming with a restriction on the rate of change of the signal. The noisy input signal is shown in blue, the result of the algorithm in green, and the ideal noise-free signal in red.*

A curious reader might suspect that the same or even better results could be achieved in a simpler way. Indeed, for some signal and noise parameters this method may well be overkill. And the best shortcut is often a well-trodden road: for every tricky situation, an expert tends to have a favorite method that they know how to tune to get the desired result.

To dispel any suspicion of cheating, here are examples of processing the same signal with Gaussian smoothing and with median filtering with downscaling. The filter parameters were chosen to give good results, though without excessive effort.
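For reference, the two baseline filters can be sketched like this (truncated-kernel Gaussian smoothing and a sliding median with clamped edges; the parameters below are illustrative, not the ones used for the figures):

```python
import math
import statistics

def gaussian_smooth(y, sigma):
    """Smooth y with a truncated (3-sigma) normalized Gaussian kernel;
    the signal is clamped at its edges."""
    r = max(1, round(3 * sigma))
    kernel = [math.exp(-0.5 * (t / sigma) ** 2) for t in range(-r, r + 1)]
    norm = sum(kernel)
    kernel = [w / norm for w in kernel]
    n = len(y)
    return [sum(w * y[min(max(i + t, 0), n - 1)]
                for t, w in zip(range(-r, r + 1), kernel))
            for i in range(n)]

def median_filter(y, width):
    """Sliding-window median with odd window width; edges are clamped."""
    r, n = width // 2, len(y)
    return [statistics.median(y[max(i - r, 0):i + r + 1]) for i in range(n)]
```

Neither filter knows about unimodality, which is exactly the prior knowledge the dynamic programming method exploits.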

*Figure 6. Unimodal signal filtering by Gaussian smoothing. The noisy input signal is shown in blue, the filtering result — in green, the ideal signal without noise — in red.*

*Figure 7. Median filtering of the unimodal signal. The noisy input signal is shown in blue, the filtering result — in green, the ideal signal without noise — in red.*

As you can see, on this data the dynamic programming method gives better results than the most common simple filters; a comprehensive comparison on model examples would not be meaningful anyway. The value of the described method is that it is a perfect example of a "tailored suit": the method is built directly from the formalized problem statement, which in our case was the requirement of unimodality. In the same way, linear methods are naturally evaluated on signals that can be separated from the noise in the frequency domain. One more instrument in the engineer's toolbox: what could be better?
