Table of Contents:
Developing Models for Kalman Filters
| Chapter | Title | Description |
|---|---|---|
| 1 | Why Model? | What is a model? What is a linear model? How is this related to Kalman Filters, and for what purpose? |
| 2 | Linearity | How do strict and conventional notions of linearity differ? How is this related to Kalman Filter models? |
| 3 | State Models and Kalman Filters | Why are the linear models discussed so far all unsuitable for the purposes of Kalman Filters — what is missing? |
| 4 | State Transition Dynamic Models | Detailing the special form your model must have to be suited for classic Kalman Filtering — and why. |
| 5 | Feedback in State Models | Some adjustments you will need in your model if you choose to — or must — employ feedback stabilization as your system operates. |
| 6 | Linear Least Squares Review | An essential tool: how to formulate and solve a Linear Least Squares problem to evaluate "best fit" model parameters. |
| 7 | Practical Linear Least Squares Calculations | Introducing the Octave package, and showing how simple it is to perform Linear Least Squares calculations in practice (a minimal sketch appears after this table). |
| 8 | Least Squares Dynamic Fit | A lot of ideas come together in an attempt to build a dynamic model using Least Squares methods. |
| 9 | Adapting ARMA Models | Another approach: fabricate a suitable state transition model from an ARMA model obtained by regression analysis methods. |
| 10 | Verifying the Model | How to numerically simulate and evaluate your proposed model using actual system I/O data. |
| 11 | Updating and Sequential Least Squares | It is not necessary to collect huge data sets and grind them down all at once for Least Squares fitting — you can consolidate as you go. |
| 12 | Scaling, Weighting, and Least Squares | An important feature and a serious hazard: how you present data to a Least Squares problem affects the solution you will get. |
| 13 | Fading Memory in Least Squares Problems | Critical adjustments are required to allow Least Squares updates to systematically prefer new data over old. |
| 14 | Introducing Recursive Least Squares (RLS) | Beginning the search for efficient Least Squares solutions when frequent updates of parameter values are needed. |
| 15 | Efficient RLS Computation | Inverse updates provide the missing piece for the RLS method. |
| 16 | Adaptive "Recursive Least Squares" Applications | Avoiding disaster when using RLS methods for system models that change "adaptively" over time. |
| 17 | Total Least Squares Methods | An alternative to Linear Least Squares when all system inputs and outputs are noisy. |
| 18 | LMS: Let the Model Self-tune | Applying the LMS method to let a state transition model incrementally improve itself, based on test data — and patience. |
| 19 | Restructuring State Models, Part 1 | State transition models are not unique! Introducing transformations that can produce equivalent models. |
| 20 | Restructuring State Models, Part 2 | Introducing Householder transformations, for sparse and efficient state model systems. |
| 21 | Restructuring State Models, Part 3 | Introducing Eigenvector transformations, for producing state model systems with minimal state interaction. |
| 22 | Model Order | Discussing the importance of representing the correct number of internal states in the model. |
| 23 | Randomized Test Signals | How to effectively collect system test data for determining system order. |
| 24 | Autocorrelation in Test Signals | Distinguishing artificial side effects of testing in correlation data. |
| 25 | Measuring System Correlation | How to perform a correlation analysis on system I/O data — it's easy. |
| 26 | Recognizing State Effects on Correlation | Patterns in correlation data that indicate the presence of internal states. |
| 27 | Test Case: Identifying States in Correlation | A numerical example of counting internal states using correlation methods. |
| 28 | Transition Matrix for Response Modes | Exploratory validation of correlation analysis results by constructing a model. |
| 29 | Finishing the Correlation-Based Model | Least Squares methods complete a correlation-based model, to see whether the results are credible. |
| 30 | LMS: Experiments in Tuning New States | LMS methods are used to splice an additional behavior onto an existing model. |
| 31 | Reducing Overspecified Models | Reducing model order: removing redundant and undesirable elements from an existing model. |
| 32 | Compacting the Reduced Model | Numerical cleanup after model order reduction, to obtain a compacted equivalent model. |
| 33 | Adjusting Time Steps for Discrete Models | Transforming a state transition model to operate at a different time interval than the original model. |
| 34 | State Observers: Theory | Introducing the other approach: trying to adjust the model's internal state variables rather than the model itself. |
| 35 | State Observers: Design | Exploring how to set the observer parameters — its gains — to tune observer performance. |
| 36 | State Observers with a Weak Model | Benefits and hazards of observers: making good models work better, hiding the deficiency of a bad model. |
| 37 | Reformulating the State Observer | A mathematical reformulation that combines the functions of state transition prediction and observer. |
| 38 | Minimalist Observer: the Alpha-Beta Filter | Observers with a model so weak that it barely qualifies as a model — yet, sometimes completely sufficient. |
| 39 | Quantifying Variation | How variance is used in Kalman Filters to represent the properties of random noise. |
| 40 | Initial Variance | How an interpretation of variance is employed to represent highly uncertain initial system conditions. |
| 41 | Variance Propagation | How initial uncertainty and new random noise sources interact to affect the progression of state uncertainty over time. |
| 42 | Generating Correlated Random Vectors | How to produce pseudo-random noise with specified correlation properties, so that Kalman Filters can be simulated. |
| 43 | Simulating the Noisy Observer | Experiments testing observer response to correlated noise. |
| 44 | Observer Optimization | How to determine the "Kalman Gains" that achieve Wiener-optimal tracking of the system state by the observer. |
| 45 | The Steady State Kalman Filter | For fixed transition and noise models, how the complicated run-time variance updates can be eliminated. |
| 46 | Consistent Covariances | An accurate noise model might not be possible — but it can at least be consistent with the actual system. |
| 47 | The Dreaded Kalman Divergence | What happens when bad models produce seriously bad results, and why this doesn't need to happen to you. |
| 48 | Considerations for Data Smoothing | How Kalman Filters can be tweaked to produce optimal estimates at past and future times, not just the next step. |
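Chapters 6 and 7 introduce Linear Least Squares fitting using Octave. As a taste of how simple those calculations can be, here is a minimal, hypothetical sketch, not taken from the chapters themselves: the data and the line-fit model are invented for illustration, and the one fact relied upon is that Octave's backslash operator returns the least-squares solution of an overdetermined linear system.

```octave
% Hypothetical illustration: fit y = a*x + b to noisy data by Linear
% Least Squares. For an overdetermined system A*theta = y, the
% backslash operator computes the least-squares estimate of theta.
x = (0:9)';                          % input samples
y = 2.0*x + 1.0 + 0.1*randn(10, 1);  % noisy linear response (true a=2, b=1)
A = [x, ones(10, 1)];                % design matrix: one column per parameter
theta = A \ y;                       % least-squares estimate [a; b]
printf("estimated a = %.3f, b = %.3f\n", theta(1), theta(2));
```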