In-Depth Explanation: The Principles of the Kalman Filter
[Introduction] Of the many blog posts and papers about the Kalman filter on the web, some dwell on academic theory and lack intuition, while others offer intuition but lack rigorous derivation. Few manage both. Then I came across a blog post from abroad that truly impressed me — I have to admire the author's meticulous care, so I translated it to share with everyone.
I have to say that what the Kalman filter can do is simply astonishing! Surprisingly, few software engineers and scientists seem to understand it, which makes me sad, because the Kalman filter is such a powerful tool for combining information in the presence of uncertainty, and its ability to extract accurate information can seem almost magical.
What is a Kalman filter?
You can use a Kalman filter in any dynamic system that contains uncertain information, to make a well-founded prediction about what the system will do next. Even when all sorts of interference accompany the measurements, the Kalman filter can often point out what really happened.
The Kalman filter is ideal for systems that change continuously. It has the advantage of using very little memory (apart from the previous state, it needs to keep no other history data), and it is very fast, which makes it well suited to real-time problems and embedded systems.
Most of the mathematical descriptions of how to implement a Kalman filter that you find on Google look rather obscure and difficult, which is unfortunate. In fact, viewed the right way, the Kalman filter is simple and easy to understand. Below I will explain it clearly with pretty pictures and colors; all you need beforehand is a basic knowledge of probability and matrices.
What can we do with a Kalman filter?
Let's take a toy example: you have built a little robot that can wander around in the woods, and the robot needs to know exactly where it is in order to navigate.

We can say the robot has a state $\vec{x}_k$, consisting of a position and a velocity:

$$\vec{x}_k = (\vec{p}, \vec{v})$$


Note that the state is just a list of numbers about the important attributes of your system; it could be anything. Here it is position and velocity, but it could equally be the amount of liquid in a tank, the temperature of a car engine, the coordinates of a user's finger on a touchpad, or any signal you need to track.
Our robot has a GPS sensor that is accurate to about 10 meters. That's pretty good, but the robot needs to know its position to within much better than 10 meters. There are lots of gullies and cliffs in these woods, and if the robot takes one wrong step it may fall off a cliff, so GPS alone is not enough.

We may also know something about how the robot moves: for example, it knows the commands sent to its wheel motors, and it knows that if it is headed in one direction and nothing interferes, at the next instant it will probably continue in that same direction. Of course, the robot doesn't know everything about its own motion: it may be buffeted by the wind, the wheels may slip a little, or it may roll over bumpy terrain and tip. So the amount the wheels have turned cannot exactly tell the robot how far it has actually traveled, and the prediction will not be perfect.
The GPS sensor tells us something about the state, and our prediction tells us how the robot will move, but both do so only indirectly, and with some uncertainty and inaccuracy. However, if we use all the information available to us, can we get a better estimate than either source could give us on its own? The answer, of course, is YES — and that is exactly what the Kalman filter is for.
How a Kalman filter sees your problem
Let's continue with a simple state that has only two variables: position and velocity.

We don't know the actual position and velocity. There are many possible combinations of the two that might be true, but some of them are more likely than others:

The Kalman filter assumes that both variables (position and velocity, in our example) are random and Gaussian distributed. Each variable has a mean value $\mu$, which is the center of the random distribution (the most likely state), and a variance $\sigma^2$, which represents the uncertainty.


In the picture above, position and velocity are uncorrelated, which means that the state of one variable tells you nothing about what the other might be. The example below is more interesting: position and velocity are correlated, and the likelihood of observing a particular position depends on the current velocity:

This kind of situation can arise if, for example, we estimate a new position based on an old one. If our velocity was high, we have probably moved quite far; if we were moving slowly, we haven't gone very far. Keeping track of this relationship is important, because it gives us more information: one measurement tells us something about what the other variables could be. And that is the goal of the Kalman filter — to squeeze as much information as possible out of measurements that carry uncertainty!
This correlation is captured by a covariance matrix. In short, each element $\Sigma_{ij}$ of the matrix expresses the degree of correlation between the $i$-th and $j$-th state variables. (As you may have guessed, the covariance matrix is symmetric, which means $i$ and $j$ can be swapped arbitrarily.) The covariance matrix is usually written "$\Sigma$", and its elements as "$\Sigma_{ij}$".

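As a quick numerical illustration (a sketch in Python with NumPy; the sample data below are made up), a covariance matrix computed from correlated samples is symmetric, and the correlation shows up in its off-diagonal entries:

```python
import numpy as np

# A handful of made-up (position, velocity) samples where the two
# variables clearly move together.
samples = np.array([[1.0, 0.9],
                    [2.0, 2.1],
                    [3.0, 2.9],
                    [4.0, 4.2]])

# Each row is one observation; columns are the state variables.
Sigma = np.cov(samples, rowvar=False)
print(Sigma)
```

The off-diagonal entry $\Sigma_{pv}$ comes out positive here, because larger positions go with larger velocities in this data, and the matrix equals its own transpose.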



Describing the problem with matrices
We model our state variables as Gaussian distributions, so at time $k$ we need two pieces of information: the best estimate $\hat{x}_k$ (the mean, elsewhere usually written $\mu$), and its covariance matrix $P_k$:

$$\hat{x}_k = \begin{bmatrix} \text{position} \\ \text{velocity} \end{bmatrix}, \qquad P_k = \begin{bmatrix} \Sigma_{pp} & \Sigma_{pv} \\ \Sigma_{vp} & \Sigma_{vv} \end{bmatrix} \tag{1}$$



(Of course, we are using only position and velocity here, but the state can contain any number of variables and represent anything you like.) Next, we need some way to look at the current state (at time $k-1$) and predict the next state (at time $k$). Remember, we don't know which of all the possible predictions is the "real" one, but our prediction function doesn't care: it works on all of them and gives us a new Gaussian distribution.

We can represent this prediction step with a matrix $F_k$:

$$\hat{x}_k = F_k\, \hat{x}_{k-1} \tag{2}$$

It takes every point in our original estimate and moves it to a new predicted position — the position the system would move to next if the original estimate were right. So how do we use a matrix to predict the position and velocity at the next moment? We will use a basic kinematic formula:

$$\begin{aligned} p_k &= p_{k-1} + \Delta t\, v_{k-1} \\ v_k &= v_{k-1} \end{aligned}$$

In matrix form:

$$\hat{x}_k = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \hat{x}_{k-1} = F_k\, \hat{x}_{k-1} \tag{3}$$
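The prediction step above can be sketched in a few lines of Python with NumPy (the time step and state values are made up for illustration):

```python
import numpy as np

# Constant-velocity prediction x_k = F x_{k-1} for a [position, velocity] state.
dt = 1.0                      # time step (assumed 1 second)
F = np.array([[1.0, dt],
              [0.0, 1.0]])    # prediction matrix F_k

x = np.array([2.0, 3.0])      # current estimate: position 2 m, velocity 3 m/s
x_pred = F @ x                # predicted next state

print(x_pred)                 # position advances by v*dt; velocity is unchanged
```

With these numbers the position advances from 2 m to 5 m while the velocity stays at 3 m/s, exactly what the kinematic formula says.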

Now we have a prediction matrix that gives us the state at the next moment, but we still don't know how to update the covariance matrix. For that we need another formula: if we multiply every point in a distribution by a matrix $A$, what happens to its covariance matrix $\Sigma$? That's easy — the identity below gives it:

$$\begin{aligned} \mathrm{Cov}(x) &= \Sigma \\ \mathrm{Cov}(Ax) &= A \Sigma A^T \end{aligned} \tag{4}$$


Combining equations (4) and (3) gives:

$$\begin{aligned} \hat{x}_k &= F_k\, \hat{x}_{k-1} \\ P_k &= F_k P_{k-1} F_k^T \end{aligned} \tag{5}$$
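As a sketch (Python with NumPy, with a made-up starting covariance), propagating the covariance through the same prediction matrix looks like this:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Start from an uncorrelated estimate: unit variance in position and velocity.
P = np.eye(2)

# Propagate the uncertainty through the prediction: P_k = F P_{k-1} F^T.
P_pred = F @ P @ F.T
print(P_pred)
```

Notice that the off-diagonal entries become nonzero: the prediction itself introduces the position–velocity correlation discussed earlier, because the new position depends on the old velocity.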
External control
We still haven't captured everything, though. There may be external factors that act on the system and cause changes that have nothing to do with the system's own state.
Take the motion of a train as an example: the driver may open the throttle and make the train accelerate. Similarly, in our robot example, the navigation software might issue a command to turn the wheels or to stop. If we know this additional information, we can capture it in a vector $\vec{u}_k$ and add it to our prediction equation as a correction.

Suppose we know the expected acceleration $a$ from the throttle setting or the control command. Basic kinematics then gives:

$$\begin{aligned} p_k &= p_{k-1} + \Delta t\, v_{k-1} + \tfrac{1}{2} a\, \Delta t^2 \\ v_k &= v_{k-1} + a\, \Delta t \end{aligned} \tag{6}$$


In matrix form:

$$\hat{x}_k = F_k\, \hat{x}_{k-1} + \begin{bmatrix} \Delta t^2 / 2 \\ \Delta t \end{bmatrix} a = F_k\, \hat{x}_{k-1} + B_k\, \vec{u}_k$$

Here $B_k$ is called the control matrix and $\vec{u}_k$ the control vector. (For very simple systems with no external influence, these can be omitted.)


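A minimal sketch of the controlled prediction (Python with NumPy; the acceleration and state values are invented for illustration):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])   # control matrix B_k for a known acceleration
a = 2.0                           # commanded acceleration (made-up value)

x = np.array([0.0, 1.0])          # position 0 m, velocity 1 m/s
x_pred = F @ x + B * a            # prediction with the control correction
print(x_pred)
```

The acceleration adds $\tfrac{1}{2}a\Delta t^2$ to the position and $a\Delta t$ to the velocity, on top of the plain constant-velocity prediction.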
External disturbances
If the state evolves only according to the system's own properties, or according to known external controls, then nothing goes wrong.
But what if there are disturbances we don't know about? For example, suppose we are tracking a quadcopter: it may be buffeted by the wind. If we are tracking a wheeled robot, the wheels may slip, or a small slope in the road may slow it down. We can't track these effects directly, and if we don't take such external disturbances into account, our predictions will start to drift.
After every prediction step, we can add some new uncertainty to model this kind of uncertainty from the "outside world" (that is, the disturbances we are not tracking):

Every state variable in our original estimate, after moving to its new predicted position, is still Gaussian distributed. We can say that each state variable has moved into a new Gaussian-distributed region with covariance $Q_k$. In other words, we treat the untracked disturbances as noise with covariance $Q_k$.




This produces a new Gaussian distribution with a different covariance (but the same mean).

We get the expanded covariance simply by adding $Q_k$. This gives the complete expression for the prediction step:

$$\begin{aligned} \hat{x}_k &= F_k\, \hat{x}_{k-1} + B_k\, \vec{u}_k \\ P_k &= F_k P_{k-1} F_k^T + Q_k \end{aligned} \tag{7}$$

In other words, the new best estimate is a prediction made from the previous best estimate, plus a correction for known external controls. And the new uncertainty is predicted from the old uncertainty, with some additional uncertainty from the environment.
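The complete prediction step can be wrapped in a small function. This is a sketch in Python with NumPy; the process-noise value and starting state are arbitrary assumptions:

```python
import numpy as np

def predict(x, P, F, Q, B=None, u=None):
    """One Kalman prediction step (equation 7): returns the new state and covariance."""
    x_pred = F @ x
    if B is not None and u is not None:
        x_pred = x_pred + B @ u      # correction from known external control
    P_pred = F @ P @ F.T + Q         # propagate and grow the uncertainty
    return x_pred, P_pred

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.1 * np.eye(2)                  # made-up process noise
x, P = np.array([0.0, 1.0]), np.eye(2)

x_pred, P_pred = predict(x, P, F, Q)
print(x_pred, P_pred)
```

Note that even without any control input, the predicted covariance is strictly larger than before: prediction alone always loses certainty, which is why we need measurements.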
Good — we now have a fuzzy estimate of where our system might be, expressed by $\hat{x}_k$ and $P_k$. What happens when we combine it with data from our sensors?


Refining the estimate with measurements
We may have several sensors measuring the current state of the system. For now it doesn't matter which sensor measures which state variable — perhaps one measures position and another measures velocity. Each sensor tells us something indirect about the state.

Note that the units and scale of the sensor readings may differ from the units and scale of the state we are tracking, so we use a matrix $H_k$ to model the sensors.


We can work out the distribution of expected sensor readings in the same way as before:

$$\begin{aligned} \vec{\mu}_{\text{expected}} &= H_k\, \hat{x}_k \\ \Sigma_{\text{expected}} &= H_k P_k H_k^T \end{aligned} \tag{8}$$

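As a sketch (Python with NumPy, assuming a sensor that reads position only, with made-up numbers), mapping the state estimate into measurement space looks like:

```python
import numpy as np

H = np.array([[1.0, 0.0]])        # assumed sensor model: reads position only
x_pred = np.array([5.0, 3.0])     # predicted state [position, velocity]
P_pred = np.array([[2.0, 1.0],
                   [1.0, 1.0]])   # predicted covariance

mu_expected = H @ x_pred          # mean of the expected sensor reading
Sigma_expected = H @ P_pred @ H.T # covariance of the expected reading
print(mu_expected, Sigma_expected)
```

$H_k$ here simply picks out the position component, but it could equally rescale units or mix several state variables into one reading.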
One of the great strengths of the Kalman filter is that it can handle sensor noise. In other words, our sensors are at least somewhat unreliable, and every state in our original estimate may correspond to a range of sensor readings.

From every reading we observe, we can roughly guess what state the system is currently in. But because of the uncertainty, some states are more likely than others to have produced the reading we saw.

We model this uncertainty (for example, sensor noise) with a covariance $R_k$. The mean of the corresponding distribution is equal to the sensor reading we observed, which we call $\vec{z}_k$.


So now we have two Gaussian distributions: one centered around our transformed prediction, and one centered around the actual sensor reading.

We must find the best reconciliation between the predicted value (pink) and the sensor measurement (green).
So what is our most likely state? For any possible reading, we have two associated probabilities: (1) that it is what the sensor actually measured, and (2) that it is what our previous state predicts we should see. If we want to know the probability that both are true, we simply multiply the two Gaussian distributions together.


What remains is the overlap region. The mean of this overlap is the value that both estimates consider most likely — that is, the best estimate given all the information we have. And look: this overlap region looks like another Gaussian distribution.

As it turns out, when you multiply two Gaussian distributions, each with its own mean and variance, you get a new Gaussian distribution with its own independent mean and variance! Let's work this out with formulas.
Fusing Gaussian distributions
It's simplest to analyze the one-dimensional case first. A 1-D Gaussian curve with variance $\sigma^2$ and mean $\mu$ can be written as:

$$\mathcal{N}(x, \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \tag{9}$$


What do we get if we multiply two Gaussian curves together?

$$\mathcal{N}(x, \mu_0, \sigma_0) \cdot \mathcal{N}(x, \mu_1, \sigma_1) \overset{?}{=} \mathcal{N}(x, \mu', \sigma') \tag{10}$$

Substituting equation (9) into equation (10) (and renormalizing so the total probability is 1), we get:

$$\begin{aligned} \mu' &= \mu_0 + \frac{\sigma_0^2 (\mu_1 - \mu_0)}{\sigma_0^2 + \sigma_1^2} \\ \sigma'^2 &= \sigma_0^2 - \frac{\sigma_0^4}{\sigma_0^2 + \sigma_1^2} \end{aligned} \tag{11}$$

The two expressions in equation (11) share a common factor $k$, so we can rewrite them:

$$k = \frac{\sigma_0^2}{\sigma_0^2 + \sigma_1^2} \tag{12}$$

$$\begin{aligned} \mu' &= \mu_0 + k(\mu_1 - \mu_0) \\ \sigma'^2 &= \sigma_0^2 - k\, \sigma_0^2 \end{aligned} \tag{13}$$

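A tiny numeric sketch of this 1-D fusion (Python; the two distributions are made up, e.g. a prediction and a noisy measurement of the same quantity):

```python
# Fuse two 1-D Gaussians using equations (12) and (13).
mu0, var0 = 10.0, 4.0   # first estimate: mean 10, variance 4
mu1, var1 = 12.0, 4.0   # second estimate: mean 12, variance 4

k = var0 / (var0 + var1)          # scalar gain, equation (12)
mu_new = mu0 + k * (mu1 - mu0)    # fused mean, equation (13)
var_new = var0 - k * var0         # fused variance, equation (13)

print(mu_new, var_new)
```

With equal variances the fused mean lands exactly halfway between the two, and the fused variance is half of either input — combining two sources of information always leaves us less uncertain than either source alone.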
Now let's rewrite equations (12) and (13) in matrix form. If $\Sigma$ is the covariance matrix of a Gaussian distribution and $\vec{\mu}$ its mean along each dimension, then:

$$K = \Sigma_0 (\Sigma_0 + \Sigma_1)^{-1} \tag{14}$$

$$\begin{aligned} \vec{\mu}' &= \vec{\mu}_0 + K(\vec{\mu}_1 - \vec{\mu}_0) \\ \Sigma' &= \Sigma_0 - K \Sigma_0 \end{aligned} \tag{15}$$

The matrix $K$ is called the Kalman gain, and we will use it in a moment. Relax — we're almost done!

Putting it all together
We have two Gaussian distributions: the predicted measurement, with $(\mu_0, \Sigma_0) = (H_k \hat{x}_k,\ H_k P_k H_k^T)$, and the observed measurement, with $(\mu_1, \Sigma_1) = (\vec{z}_k,\ R_k)$. Plugging them into equation (15) gives their overlap:

$$\begin{aligned} H_k \hat{x}'_k &= H_k \hat{x}_k + K(\vec{z}_k - H_k \hat{x}_k) \\ H_k P'_k H_k^T &= H_k P_k H_k^T - K H_k P_k H_k^T \end{aligned} \tag{16}$$

And from equation (14), the Kalman gain is:

$$K = H_k P_k H_k^T \left( H_k P_k H_k^T + R_k \right)^{-1} \tag{17}$$

We can knock an $H_k$ off the front of every term in equations (16) and (17) (note that one $H_k$ is hiding inside $K$), and an $H_k^T$ off the end of the second equation of (16). This yields the following equations:

$$\hat{x}'_k = \hat{x}_k + K'(\vec{z}_k - H_k \hat{x}_k) \tag{18}$$

$$P'_k = P_k - K' H_k P_k \tag{19}$$

where

$$K' = P_k H_k^T \left( H_k P_k H_k^T + R_k \right)^{-1}$$
The equations above give the complete update step.
$\hat{x}'_k$ is our new best estimate. We can feed it (together with $P'_k$) into the next round of prediction and update, and keep iterating indefinitely.
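The update step can be sketched as a small function in Python with NumPy. All the numbers below are invented for illustration (a position-only sensor, predicted position 5 m, reading 6 m):

```python
import numpy as np

def update(x_pred, P_pred, z, H, R):
    """One Kalman update step (equations 17-19)."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain K'
    x_new = x_pred + K @ (z - H @ x_pred)    # equation (18): new best estimate
    P_new = P_pred - K @ H @ P_pred          # equation (19): shrunk uncertainty
    return x_new, P_new

H = np.array([[1.0, 0.0]])                   # sensor reads position only
R = np.array([[2.0]])                        # made-up sensor noise
x_pred = np.array([5.0, 3.0])                # predicted [position, velocity]
P_pred = np.array([[2.0, 1.0],
                   [1.0, 1.0]])
z = np.array([6.0])                          # observed reading

x_new, P_new = update(x_pred, P_pred, z, H, R)
print(x_new, P_new)
```

Notice two things: the position estimate moves partway toward the reading (weighted by the gain), and even the velocity gets corrected, because the covariance links it to position. Every diagonal entry of the covariance shrinks.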



Summary
Of all the formulas above, you only need equations (7), (18), and (19). (And if you ever forget them, you can re-derive everything from equations (4) and (15).)
With these formulas we can build an accurate model of any linear system. For nonlinear systems we use the extended Kalman filter (EKF); the difference is that the EKF adds an extra step that linearizes the prediction and the measurement.
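To close the loop, here is a sketch of a full predict-update cycle iterated over a few readings (Python with NumPy; the noise levels and the list of readings are all made up, simulating a target moving at roughly 1 m/s):

```python
import numpy as np

# A complete linear Kalman filter loop (equations 7, 18, 19) tracking
# position and velocity from noisy position-only readings.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)              # assumed process noise
H = np.array([[1.0, 0.0]])        # we measure position only
R = np.array([[1.0]])             # assumed sensor noise

x = np.array([0.0, 0.0])          # initial guess: at rest at the origin
P = 10.0 * np.eye(2)              # large initial uncertainty

for z in [1.1, 1.9, 3.2, 4.0]:    # made-up noisy readings
    # Predict (equation 7, no control input here).
    x = F @ x
    P = F @ P @ F.T + Q
    # Update (equations 17-19).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = P - K @ H @ P

print(x)  # estimated position and velocity after four readings
```

Even though no sensor ever measures velocity directly, the filter recovers a velocity estimate close to 1 m/s purely from the correlation the prediction step builds between position and velocity.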
This article is reprinted from 电子工程专辑 (EE Times China).