Rank Considerations for the Observer-Kalman System Identification Procedure

Ralf Gerlich

2019/10/13

Last time, we looked in detail at the derivation of the OKID procedure for finding the impulse response of a system from arbitrary input-output data. However, there are some specifics to consider when collecting the data, and we can derive them by looking at the rank of the matrices involved.

In this article, we’ll reconsider how the quality and uniqueness of the solution of an ordinary least-squares problem are affected by the ranks of the matrices involved. Based on that, we’ll derive a formula for determining the number of measurements we need to find a good estimate of the Markov parameter matrix.

The Result

Let’s jump to the result real quick here and then look at how it is derived. Assume that we want to determine the Markov parameters

\begin{equation} \mathbf{M} = \begin{bmatrix} \mathbf{C} \mathbf{A}^{l-1} \mathbf{B} & \ldots & \mathbf{C} \mathbf{B} & \mathbf{D} \end{bmatrix} \end{equation}

of order [latex]l[/latex] for a system with [latex]m[/latex] inputs and [latex]p[/latex] outputs.

Schematic of a Multiple-Input, Multiple-Output (MIMO) System
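To make this concrete, here is a small numpy sketch that computes [latex]\mathbf{M}[/latex] when the state-space matrices are known – purely for illustration, of course, since in the identification setting we want to find the Markov parameters without knowing [latex]\mathbf{A}[/latex], [latex]\mathbf{B}[/latex], [latex]\mathbf{C}[/latex] and [latex]\mathbf{D}[/latex]. The function name is mine, not from any library:

```python
import numpy as np

def markov_parameters(A, B, C, D, l):
    """Markov parameter matrix M = [C A^(l-1) B, ..., C B, D] of order l."""
    blocks = [C @ np.linalg.matrix_power(A, k) @ B
              for k in range(l - 1, -1, -1)]
    blocks.append(D)
    # For m inputs and p outputs, M has p rows and m*(l+1) columns.
    return np.hstack(blocks)
```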

The number of measurement points required is given by

\begin{equation} \label{eq:samples-oversample} n = om + \left[o\left(m+p\right)+1\right] l \end{equation}

Here, [latex]o[/latex] is the oversampling factor, which gives the number of data points we have per entry in the Markov parameter matrix. We need this oversampling to average out measurement error due to noise, and thus the larger the oversampling factor, the higher the quality of our estimate for the Markov parameter matrix.
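In code, Equation \ref{eq:samples-oversample} is a one-liner. A quick sketch (the function name is my own):

```python
def samples_with_observer(m, p, l, o):
    """Measurements needed for OKID with observer: n = o*m + (o*(m+p) + 1)*l."""
    return o * m + (o * (m + p) + 1) * l

# Example: m=2 inputs, p=3 outputs, order l=10, tenfold oversampling
print(samples_with_observer(m=2, p=3, l=10, o=10))  # 530
```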

On the other hand, a higher oversampling factor also means that we either

- can only determine a Markov parameter matrix of lower order from the same number of measurements, or
- have to take a larger number of measurements for the same order.

A lower order of the Markov parameter matrix may decrease the quality of the estimate we get from the Eigensystem Realisation Algorithm. A higher number of measurements possibly increases the measurement effort and time.

Rank Considerations for OKID

Let’s review the central equation of OKID with an observer:

\begin{equation} \label{eq:okid-base} \underbrace{\begin{bmatrix} \mathbf{y}_{l} & \mathbf{y}_{l+1} & \cdots & \mathbf{y}_{n-1} \end{bmatrix}}_{=:\mathbf{Y}} \approx \mathbf{\tilde{M}}_l \underbrace{\begin{bmatrix} \mathbf{u}_0 & \cdots & \mathbf{u}_{n-l-1} \\ \mathbf{y}_0 & \cdots & \mathbf{y}_{n-l-1} \\ \vdots & \ddots & \vdots \\ \mathbf{u}_{l-1} & \cdots & \mathbf{u}_{n-2} \\ \mathbf{y}_{l-1} & \cdots & \mathbf{y}_{n-2} \\ \mathbf{u}_{l} & \cdots & \mathbf{u}_{n-1} \end{bmatrix}}_{=:\mathbf{\tilde{U}}} \end{equation}
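To make the structure of [latex]\mathbf{Y}[/latex] and [latex]\mathbf{\tilde{U}}[/latex] concrete, here is a minimal numpy sketch that builds both from measurement data and solves for [latex]\mathbf{\tilde{M}}_l[/latex] by ordinary least squares. The helper names are mine, and the data layout – inputs [latex]\mathbf{u}[/latex] as an [latex]m \times n[/latex] array, outputs [latex]\mathbf{y}[/latex] as a [latex]p \times n[/latex] array – is an assumption:

```python
import numpy as np

def okid_regressor(u, y, l):
    """Build Y and U-tilde from inputs u (m x n) and outputs y (p x n)."""
    _, n = u.shape
    # Left-hand side: the outputs y_l ... y_{n-1}.
    Y = y[:, l:n]
    # Stack the shifted input/output pairs for k = 0 ... l-1, followed by
    # the current inputs u_l ... u_{n-1}, as in the equation above.
    rows = []
    for k in range(l):
        rows.append(u[:, k:k + n - l])
        rows.append(y[:, k:k + n - l])
    rows.append(u[:, l:n])
    return Y, np.vstack(rows)  # U-tilde: m + (m+p)*l rows, n - l columns

def okid_markov(u, y, l):
    """Ordinary least-squares estimate of the observer Markov parameters."""
    Y, U_tilde = okid_regressor(u, y, l)
    # Solve Y ~ M U-tilde, i.e. U-tilde^T M^T ~ Y^T, in the least-squares sense.
    M_T, *_ = np.linalg.lstsq(U_tilde.T, Y.T, rcond=None)
    return M_T.T
```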

Now have a look at the dimensions of these matrices:

- [latex]\mathbf{Y}[/latex] has [latex]p[/latex] rows and [latex]n-l[/latex] columns,
- [latex]\mathbf{\tilde{M}}_l[/latex] has [latex]p[/latex] rows and [latex]m+\left(m+p\right)l[/latex] columns, and
- [latex]\mathbf{\tilde{U}}[/latex] has [latex]m+\left(m+p\right)l[/latex] rows and [latex]n-l[/latex] columns.

Thus, we have [latex]p\left[m+\left(m+p\right)l\right][/latex] unknowns and [latex]p\left(n-l\right)[/latex] equations. From basic linear algebra, we know that we need

\begin{equation} \label{eq:samples-min} n-l = m+\left(m+p\right)l \end{equation}

to hold for the solution to be uniquely determined. We might even need more measurements than that, as this assumes that the measurement data is rich enough for [latex]\mathbf{\tilde{U}}[/latex] in Equation \ref{eq:okid-base} not to contain linearly dependent rows. In any case, however, the number of our measurements must be at least as large as given by Equation \ref{eq:samples-min}.

What happens if we have fewer measurements than this? Well, in this case the system of equations is underdetermined, and there are arbitrarily many solutions [latex]\mathbf{\tilde{M}}_l[/latex] to the equation. The ordinary least-squares approach will still deliver a solution, but it is not clear whether that solution accurately describes the system we want to identify – although it will describe a system that provides the same outputs given the same inputs.
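In practice, we can detect such a situation before solving by checking the rank of [latex]\mathbf{\tilde{U}}[/latex] – a small usage sketch, reusing the hypothetical okid_regressor helper from above:

```python
import numpy as np

Y, U_tilde = okid_regressor(u, y, l)  # u, y, l as in the sketch above
if np.linalg.matrix_rank(U_tilde) < U_tilde.shape[0]:
    # Fewer independent equations than unknowns per output row: the
    # least-squares solution is not unique and need not describe our system.
    raise ValueError("regressor is rank-deficient; collect more (or richer) data")
```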

Eliminating Noise by Oversampling

However, if we follow Equation \ref{eq:samples-min} exactly, i.e. if we have exactly as many measurements as specified by this equation, the solution will also be exact. This means that we will incorporate all the measurement noise into our Markov parameters, exactly as it is. Usually, we do not want that. Instead, we want to average out that measurement noise by having multiple samples per parameter.

That means that we will need more than the number of measurements given by Equation \ref{eq:samples-min} – a lot more. To quantify this number, we’ll introduce an oversampling factor [latex]o[/latex], which gives the number of samples we want to have per parameter. For independent measurement noise, it is also roughly the factor by which averaging diminishes the variance of the noise-induced error in our estimate.

Now, if we want to oversample each parameter by a factor of [latex]o[/latex], we need [latex]o[/latex] equations for each parameter. Thus, the following equation must hold:

\begin{equation} n-l = o\left[m+\left(m+p\right)l\right] \end{equation}

Solving that equation for [latex]n[/latex], we get our final result, given in Equation \ref{eq:samples-oversample}.

Impact of Using an Observer

Clearly, this is the result when we use an observer for identifying the Markov parameters. If we have a system that is already sufficiently stable, we do not need the observer approach. In that case, we have a much simpler equation:

\begin{equation} \label{eq:okid-base-simple} \underbrace{\begin{bmatrix} \mathbf{y}_{l} & \mathbf{y}_{l+1} & \cdots & \mathbf{y}_{n-1} \end{bmatrix}}_{=:\mathbf{Y}} \approx \mathbf{M}_l \underbrace{\begin{bmatrix} \mathbf{u}_0 & \cdots & \mathbf{u}_{n-l-1} \\ \vdots & \ddots & \vdots \\ \mathbf{u}_{l} & \cdots & \mathbf{u}_{n-1} \end{bmatrix}}_{=:\mathbf{U}} \end{equation}
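The corresponding regressor only stacks shifted inputs. A sketch analogous to the one above, again with a hypothetical name and the same assumed data layout:

```python
import numpy as np

def okid_regressor_no_observer(u, l):
    """Build U from inputs u (m x n) for the observer-free variant."""
    _, n = u.shape
    # Stack the shifted inputs u_k ... u_{k+n-l-1} for k = 0 ... l;
    # the result has m * (l+1) rows and n - l columns.
    return np.vstack([u[:, k:k + n - l] for k in range(l + 1)])
```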

Here we have [latex]m\left(l+1\right)[/latex] rows in [latex]\mathbf{U}[/latex], so that [latex]n-l = om\left(l+1\right)[/latex] must hold for an oversampling factor [latex]o[/latex]. Solving for [latex]n[/latex] gives:

\begin{equation} n = o m \left(l+1\right) + l \end{equation}
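For comparison, a quick sketch alongside the samples_with_observer helper from above, using the same hypothetical example:

```python
def samples_without_observer(m, l, o):
    """Measurements needed without an observer: n = o*m*(l+1) + l."""
    return o * m * (l + 1) + l

print(samples_without_observer(m=2, l=10, o=10))  # 230, versus 530 with observer
```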

This is smaller than the number required for the observer approach, as we have far fewer entries of the Markov parameter matrix to determine. Thus, we may save some measurement effort if we have a stable system.

Conclusions

It is quite important that we have a sufficient number of measurements when identifying the Markov parameters of a system: with too few, the least-squares solution is not unique, and with just barely enough, we fit the measurement noise exactly instead of averaging it out.

We can derive the number of measurements we need from rank considerations on our basic equations together with a simple oversampling approach. If we have a sufficiently stable system, we can avoid the observer approach and thus reduce the number of measurements required while keeping the order of the Markov parameter set constant.