# Adaptive "Recursive Least Squares" Applications

Developing Models for Kalman Filters

This installment is more for completeness than for practicality. Do you remember the two opposite but complementary approaches for improving the performance of a dynamic model?

1. The Kalman approach. Assume that the model is perfect, but unknown random effects appearing in the state variables need to be suppressed.
2. The adaptive systems approach. Assume that the data are perfect, but unknown persistent errors in the system model parameters need corrections to calculate better predictions.

These two ideas can be applied in tandem, because incremental changes to the system generally apply over a much longer time scale than the moment-by-moment state corrections of Kalman Filters.

Constructing a system model has more in common with the adaptive systems approach. So, while we are on the topic of the RLS method for performing Linear Least Squares updates, let's take a moment to consider what could happen if model updating is done in a naive manner.

## Influence of constraints in RLS methods

Remember the discussion of the sequential Least Squares updating methods, and how each new constraint added information to the Linear Least Squares constraint matrix? The consequence is that the Normal Equations matrix terms grow larger with each new constraint, to the point that new information eventually becomes insignificant compared to all of the accumulated history from the past.

Since the original Normal Equations matrix grows larger and larger, the inverse of that matrix must shrink smaller and smaller, to maintain the relationship

```
   (M^T·M)^-1 · (M^T·M) = I
```

When this fact is applied to the formula for incremental parameter updates:

$\Delta p \;=\; \left( {M_{n+1}}^T \, M_{n+1} \right)^{-1} \, {m_{n+1}}^T \cdot \left( y_{n+1} \;-\; m_{n+1} \cdot p_n \right)$ $\;\to\; 0 \cdot {m_{n+1}}^T \cdot \left( y_{n+1} \;-\; m_{n+1} \cdot p_n \right)$ $\;\to\; 0$

Guess what: if parameter adjustments go to zero, things aren't adaptive any more. If you think your model parameters are being adjusted, and they aren't, this could mean anything from a grand waste of time to a total disaster.
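The vanishing-adjustment effect is easy to observe numerically. Here is a small sketch (plain Python, with a made-up one-parameter model `y = p·x`; the data and noise levels are invented for illustration) of the naive sequential update with no forgetting:

```python
import random

random.seed(0)

# One-parameter model y = p*x, fitted sequentially with no forgetting.
p_true, p_est = 2.0, 0.0
N = 0.0                      # the scalar Normal Equations sum  M^T·M = sum of x^2
adjustments = []

for step in range(2000):
    x = random.uniform(0.5, 1.5)             # new constraint row m
    y = p_true * x + random.gauss(0.0, 0.1)  # noisy observed output
    N += x * x                               # grows without bound
    dp = (1.0 / N) * x * (y - x * p_est)     # incremental parameter update
    p_est += dp
    adjustments.append(abs(dp))

print(adjustments[0], adjustments[-1])
```

The first adjustment is on the order of the parameter itself; after a couple of thousand constraints, the adjustments have shrunk by several orders of magnitude, so a genuine change in the system would go almost unnoticed.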

## Fading memory in the RLS method

For the case of the sequential Least Squares updates, we found that when scaling the previous Normal Equations matrix by a number ` w == (K-1)/K ` prior to each update, the result after a very large number of update steps was a matrix exhibiting the equivalent of `K` units of influence. The influence of each past update decays away exponentially over time, allowing new data to have relatively higher influence, and allowing parameter adjustments to continue.
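As a quick sanity check on that claim, here is a sketch (plain Python; `K` is the preserved-contribution count from the Notations list below) showing that repeated decay-and-add accumulation converges to the equivalent of `K` units of influence rather than growing without bound:

```python
# With forgetting factor w = (K-1)/K, scaling the accumulated history by w
# before adding each new unit of information converges to a fixed total of
# K equivalent units, instead of the unbounded growth of the naive method.
K = 50
w = (K - 1) / K

total = 0.0
for _ in range(2000):
    total = w * total + 1.0   # decay the past, then add one new unit

print(total)   # converges to K = 50
```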

Justify to yourself the claim that the corresponding adjustment in the RLS method is to scale the inverse Normal Equations matrix by `1/w^2` prior to each update.
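One way to see this (a sketch, under the assumption that the fading-memory weighting down-weights each past constraint row of `M` by the factor `w`, so that the squared form in the Normal Equations picks up a factor of `w^2`): the weighted Normal Equations update is

$N_{n+1} \;=\; w^2 \, N_n \;+\; {m_{n+1}}^T \, m_{n+1}$

so any update of the *inverse* matrix must start from the correspondingly inflated prior inverse,

$\left( w^2 \, N_n \right)^{-1} \;=\; \frac{1}{w^2} \, {N_n}^{-1}$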

Changes will remain slow. About `5·K` updates would be required for persistent changes to become fully influential. But this is still a whole lot better than never.

#### Notations:

• `K` ↔ number of equivalent update contributions to preserve
• `w` ↔ the derived "forgetting factor weight" `(K-1)/K`
• `n` ↔ the number of constraints processed
• `y_n` ↔ the new "dependent variable" (observed output) value introduced at step `n`
• `m_n` ↔ the row vector of new "independent variable" values introduced during update step `n`
• `M_n` ↔ the non-square matrix of constraint equations for the Least Squares solution to attempt to satisfy, with one column for each parameter to be determined and `n` rows, one for each constraint
• `N_n` ↔ the weighted Normal Equations square matrix `M^T·M`, constructed from the Least Squares constraint matrix `M` after inclusion of `n` constraints and applying "forgetting factors" during updates

The following equations summarize the RLS updating scheme after applying the "fading memory" weighting adjustments.

$X_4 \;=\; \frac{1}{w^2} \, {N_n}^{-1}$

$X_1 \;=\; X_4 \cdot {m_{n+1}}^T$

$X_2 \;=\; m_{n+1} \cdot X_4$

$X_3 \;=\; m_{n+1} \cdot X_1$

$\left( N_{n+1} \right)^{-1} \;=\; X_4 \;-\; X_1 \cdot \left( I + X_3 \right)^{-1} \cdot X_2$

$\Delta p \;=\; \left( N_{n+1} \right)^{-1} \cdot {m_{n+1}}^T \cdot \left( y_{n+1} \;-\; m_{n+1} \cdot p_n \right)$

No explicit numerical example of this is provided. Go ahead and try it for yourself if you wish. The point is that you will know what you need to do if you have special requirements for dynamically changing models. Most of the time, you will not have that, and you won't need this.
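If you do want to try it, here is one possible sketch (Python with NumPy; the two-parameter toy model, the noise level, the choice `K = 50`, and the large initial inverse matrix are all invented for illustration, and the `X1`–`X4` names follow the equations above):

```python
import numpy as np

rng = np.random.default_rng(1)

K = 50
w = (K - 1) / K                  # "forgetting factor weight"

p = np.zeros(2)                  # parameter estimates for y = p1*x + p2
Ninv = np.eye(2) * 1e6           # large initial inverse => weak initial prior
p_true = np.array([2.0, -1.0])

for step in range(2000):
    if step == 1000:
        p_true = np.array([3.0, -1.0])      # persistent change in the system

    x = rng.uniform(-1.0, 1.0)
    m = np.array([[x, 1.0]])                # new constraint row (1 x 2)
    y = p_true @ np.array([x, 1.0]) + rng.normal(0.0, 0.01)

    X4 = Ninv / (w * w)                     # inflate the previous inverse
    X1 = X4 @ m.T
    X2 = m @ X4
    X3 = m @ X1
    Ninv = X4 - X1 @ np.linalg.inv(np.eye(1) + X3) @ X2
    p = p + (Ninv @ m.T).ravel() * (y - (m @ p).item())

print(p)   # tracks the changed parameters near [3, -1]
```

Without the `1/w^2` inflation (try setting `w = 1`), the adjustments shrink exactly as described in the earlier section, and the estimate lags far behind the mid-run change.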

Contents of the "Developing Models for Kalman Filters" section of the ridgerat-tech.us website, including this page, are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License unless otherwise specifically noted. For complete information about the terms of this license, see http://creativecommons.org/licenses/by-sa/4.0/. The license allows usage and derivative works for individual or commercial purposes under certain restrictions. For permissions beyond the scope of this license, see the contact information page for this site.