8.7. Systems of Linear ODEs

Video Resource

In this 19-minute video, MIT professor Gilbert Strang explains how eigenvectors and eigenvalues give us the solution to a system of first order, linear ordinary differential equations.

Differential equations that come up in engineering design and analysis are usually systems of equations rather than a single equation. Fortunately, they are often first order linear equations. As discussed in Symbolic Differential Equations, the Symbolic Math Toolbox can solve many differential equations expressed as a single equation. Higher order and non-polynomial systems of ODEs require numerical methods, such as those discussed in Numerical Differential Equations. However, systems of first order linear ODEs may be solved analytically with eigenvalues and eigenvectors.

We use vectors and matrices to describe systems of ODEs. We have n variables, y_1, y_2, \ldots, y_n, and an equation for the derivative of each as a linear combination of the y variables.

(8.7)\spalignsys{y_1' = a_{11}\,y_1 + a_{12}\,y_2 + \cdots + a_{1n}\,y_n;
            y_2' = a_{21}\,y_1 + a_{22}\,y_2 + \cdots + a_{2n}\,y_n;
                   \.  \vdots;
            y_n' = a_{n1}\,y_1 + a_{n2}\,y_2 + \cdots + a_{nn}\,y_n}

In matrix notation, this is

(8.8)\bm{y'} = \mathbf{A}\,\bm{y}.

We know that ODEs of the form

\frac{dy(t)}{dt} = a\,y(t)

have the solution

y(t) = c\,e^{a\,t}.

Thus it follows that (8.8) has a solution of the form

(8.9)\bm{y} = \bm{c}\,e^{\mathbf{A}\,t}.

This is an unusual equation. What does it mean to have a matrix in the exponent? We show how to put (8.9) into a more manageable form in A Matrix Exponent and Systems of ODEs in the appendix. A power series expansion removes the matrix from the exponent, and the diagonalization factoring turns (8.9) into a matrix equation: the product of two matrices and a vector.
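The matrix exponential can also be evaluated numerically. MATLAB's expm function computes e^{\mathbf{A}t} directly, which gives a quick way to check a solution. The short sketch below uses the same matrix and initial condition as the example in the next subsection; the time point is arbitrary.

% Sketch: evaluate y(t) = e^(A*t)*y(0) directly with expm.
% The matrix and initial condition match the example in the next
% subsection; the time point t is arbitrary.
A = [-2 1; 1 -2];
y0 = [6; 2];
t = 1.5;
y_t = expm(A*t)*y0;   % agrees with the eigenvalue solution at t = 1.5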

The general solution takes the form

\bm{y(t)} = c_1 e^{\lambda_1\,t}\bm{x}_1 +
c_2 e^{\lambda_2\,t}\bm{x}_2 +
\cdots + c_n e^{\lambda_n\,t}\bm{x}_n

The scalar values \{\lambda_1, \lambda_2, \ldots, \lambda_n\} are the eigenvalues of matrix \mathbf{A}. The vectors \{\bm{x}_1, \bm{x}_2, \ldots, \bm{x}_n\} are the corresponding eigenvectors of \mathbf{A}. The coefficients c_1, c_2, \ldots, c_n are found from the initial conditions, \bm{y}(0).

Steady State

The steady state of a system refers to the state of the system when the control variable, which is often t for time, goes to infinity. Because the eigenvalues are in an exponent, they determine the steady state. For the steady state to be stable, meaning that it does not go to infinity and does not oscillate, all eigenvalues must be real and less than or equal to zero. Some ODE systems have complex eigenvalues. When this occurs, the solution will have oscillating terms because of Euler’s complex exponential equation, e^{i\,x} = \cos(x) + i\,\sin(x). See Euler’s Complex Exponential Equation.
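As a quick numerical check of these conditions, the real and imaginary parts of the eigenvalues may be inspected directly. The sketch below uses an illustrative matrix; any square system matrix could be tested the same way.

% Sketch: classify the steady state of y' = A*y from the eigenvalues of A.
% The matrix here is only an illustration.
A = [-2 1; 1 -2];
lambda = eig(A);
stable = all(real(lambda) <= 0);      % no solution term grows without bound
oscillates = any(imag(lambda) ~= 0);  % complex eigenvalues give oscillating terms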

8.7.1. ODE Example

Consider the set of ODEs and initial conditions,

\spalignsys{y_1(t)' = -2\,y_1(t) + y_2(t);
            y_2(t)' = y_1(t) - 2\,y_2(t)}
\mbox{, \hspace{0.5in}} \spalignsys{y_1(0) = 6; y_2(0) = 2}.

In matrix notation,

\bm{y'} = \spalignmat[r]{-2 1; 1 -2}\,\bm{y}
\mbox{, \hspace{0.5in}} \bm{y}(0) = \spalignvector{6 2}.

We first use MATLAB to find the eigenvalues and eigenvectors. MATLAB always returns normalized eigenvectors, which can be multiplied by a constant to get simpler numbers if desired.

>> A = [-2 1; 1 -2];
>> [X, L] = eig(A)
X =
    0.7071    0.7071
   -0.7071    0.7071
L =
    -3     0
     0    -1

% The following step is only for having integer valued eigenvectors.
% Don't take this step in the homework.
>> X = X*2/sqrt(2)   % Just scale the eigenvectors - not needed
X =
    1.0000    1.0000
   -1.0000    1.0000

The columns of the X matrix are the eigenvectors. The eigenvalues are on the diagonal of L (L stands for Lambda, \Lambda). Our solution has the form

\bm{y}(t) = c_1\,e^{-3t}\spalignvector[r]{1 -1} +
c_2\,e^{-t}\spalignvector{1 1}

At the initial condition, t = 0, the exponential terms become 1.

\bm{y}(0) = c_1\,\spalignvector[r]{1 -1} +
c_2\,\spalignvector{1 1}
= \spalignmat[r]{1 1; -1 1}\spalignvector{c_1 c_2}
= \mathbf{X}\,\bm{c}

>> y0 = [6;2];
>> c = X\y0
c =
    2.0000
    4.0000

\spalignsys{y_1(t) = 2\,e^{-3t} + 4\,e^{-t};
y_2(t) = -2\,e^{-3t} + 4\,e^{-t}}
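If the Symbolic Math Toolbox is available, dsolve provides an independent check of this result. This is only a verification sketch; the eigenvalue method above does not require it.

% Sketch: verify the example solution with the Symbolic Math Toolbox.
syms y1(t) y2(t)
eqns = [diff(y1,t) == -2*y1 + y2, diff(y2,t) == y1 - 2*y2];
conds = [y1(0) == 6, y2(0) == 2];
S = dsolve(eqns, conds);
simplify(S.y1)   % expect 2*exp(-3*t) + 4*exp(-t)
simplify(S.y2)   % expect 4*exp(-t) - 2*exp(-3*t)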

8.7.2. Application: Closed Loop Control Systems

Here we consider an important engineering topic. Control systems use linear ODEs as discussed in this section. The coverage in this sub-section is a short and simple introduction to a vast field of study. It is hoped that this introduction will encourage further study. More complete introductions to control systems may be found in [BRUNTON19], [POULARIKAS94], [STRANG07], and [STRANG14]. Detailed coverage of feedback systems can be found in [ASTROM12]. Corke’s text on control of robotic systems [CORKE17] has good coverage of PID controllers.

Automatically controlled systems are part of our everyday experience. Electric thermostats control temperatures in indoor spaces, ovens, and freezers. A cruise control system helps maintain a constant speed when we drive on the highway. Industrial robotic and automation systems control both simple and complex systems in manufacturing plants. The list of control system applications is undoubtedly quite long.

Some systems can use gravity or other natural barriers to maintain stability. We call these systems open-loop because no feedback control system is required. The state-space equation of open-loop controllers takes the form \bm{\dot{x}} = \mathbf{A}\,\bm{x}, which is the same general equation as the ODEs that we previously considered. As before, the stability of the system requires that the real part of the eigenvalues of \mathbf{A} be negative.

Most systems, however, require sensors and a feedback control system to maintain accuracy and stability. We call such systems closed-loop controllers. There is some variability in the configuration of closed-loop controllers, but Fig. 8.6 shows the principal components.

../_images/basic_block_ctrl.png

Fig. 8.6 The input to the controller may be either the estimate of the system’s state, \bm{x}, or the difference, \bm{e}, between the state and a reference set-point, \bm{r}. The system plant includes the models of the system constraints, the physical environment, and the mechanisms whereby the control signal, \bm{u}, can guide the state of the system. The observed output of the system is \bm{y}. The sensors attempt to measure the state of the system.

Closed-loop controllers typically use one of two strategies. Systems with a single control variable, such as thermostats or automobile cruise controls, often use a proportional–integral–derivative (PID) controller. These efficient controllers look at the error between the desired set-point and the measured system output. The control variable, u, is the sum of three values. A term that is proportional to the error adjusts the control variable to minimize the error. An integral term guards against long-term differences between the output and the set-point. The derivative term suppresses rapid fluctuations of the control variable that the proportional control alone might produce.
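A minimal discrete-time sketch of a PID loop is shown below. The gains, set-point, and first-order plant model are hypothetical values chosen only to make the three terms of the controller concrete.

% Sketch of a discrete-time PID loop. The gains, set-point, and plant
% model are hypothetical values used only for illustration.
Kp = 2;  Ki = 0.5;  Kd = 0.1;    % proportional, integral, derivative gains
r = 1;                           % desired set-point
dt = 0.01;  N = 500;
y = 0;  e_prev = 0;  e_int = 0;
for k = 1:N
    e = r - y;                       % error between set-point and output
    e_int = e_int + e*dt;            % integral term tracks long-term error
    e_der = (e - e_prev)/dt;         % derivative term suppresses rapid changes
    u = Kp*e + Ki*e_int + Kd*e_der;  % control variable is the sum of three terms
    y = y + (-y + u)*dt;             % hypothetical first-order plant: y' = -y + u
    e_prev = e;
end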

The state space alternative to the PID controller models changes to the state of the system. The state of the system is stored in a vector, \bm{x}, having terms for any variables that are significant and can be measured. The system and its control mechanisms use differential equations that track changes (derivatives) to the state. For a linear time-invariant (LTI) system, the state space equations are first-order differential equations expressed with vectors and matrices. Fig. 8.7 shows how the state space equations in equation (8.10) fit into the control system.

(8.10)\begin{array}{rl}
        \bm{\dot{x}}(t) &= \mathbf{A}\,\bm{x}(t) + \mathbf{B}\,\bm{u}(t) \\
        \bm{x}(t) &= \mathbf{C}\,\bm{y}(t) \\
        \bm{u}(t) &= -\mathbf{K}\,\bm{x}(t)
    \end{array}

../_images/LTI.png

Fig. 8.7 Closed-loop linear time-invariant control system.

In the system plant, the matrix \mathbf{A} models the physics of the system. \mathbf{B}, which may be a matrix or a column vector, defines the ability of the control variable, \bm{u}, to guide the system towards the desired behavior. Thus \mathbf{A} and \mathbf{B} represent the given model of the system. The observable output of the system may define the state of the system (\bm{x} = \bm{y}). But in some cases, it may be challenging to determine the system’s state from the sensors. Kalman filters are sometimes used when the sensors alone cannot determine the system’s state. The variable \mathbf{K}, which may be a scalar, a row vector, or a matrix, is the tunable variable that we use to ensure a stable steady state condition.

Substituting \bm{u} into the state-space equation of (8.10) gives us a combined equation that we can use to test for stability of the controller.

\bm{\dot{x}} = (\mathbf{A} - \mathbf{B\,K\,C})\bm{x}

Thus, the stability is dependent on the eigenvalues of \mathbf{A} - \mathbf{B\,K\,C}.

The following example models a system that is unstable without some help from the controlling \mathbf{K} matrix. The \mathbf{K} matrix can move the eigenvalues of the combined state-space equation to values less than zero. There are, however, some subtle issues with choosing \mathbf{K} such that the system is stable but not over-controlled. A complete discussion of selecting the control matrix is beyond the scope of this text. The MATLAB Control System Toolbox has a function called place that implements the algorithm described in a seminal paper by Kautsky, Nichols, and Van Dooren [KAUTSKY85] to find a suitable control matrix. However, an acceptable control matrix for learning purposes is not difficult to calculate. Starting with negative eigenvalues and simple eigenvectors, use the diagonalization factoring to calculate a target matrix, as sketched below. Then, \mathbf{K} may be found using a simple linear algebra calculation. Note that in this example the \mathbf{C} matrix is an identity matrix, so it does not appear in the code.
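As a sketch of that calculation, note that the Atarget matrix used in the listing below has eigenvalues -0.5 and -1 with eigenvectors (1, 1) and (2, 3), so it could have been constructed as follows.

% Sketch: build a stable target matrix from chosen eigenvalues and
% eigenvectors, then solve for the control matrix K. These choices
% reproduce the Atarget matrix used in closedLoopCtrl.m below.
X = [1 2; 1 3];          % chosen (simple) eigenvectors as columns
D = diag([-0.5 -1]);     % chosen negative eigenvalues
Atarget = X*D/X;         % diagonalization factoring: X*D*inv(X)
A = [1 1.8; -0.7 2.5];
B = [1 0.5; 0.5 1];
K = B\(A - Atarget);     % then A - B*K equals Atarget
eig(A - B*K)             % verify: eigenvalues are -0.5 and -1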

The example performs a time-domain simulation of a control system and also finds the equation for the state of the system based on the eigenvalues and eigenvectors. The state of the system has two values. Plots of the system response to a starting state of (2, 1) are shown in Fig. 8.8 and Fig. 8.9 for the unstable and stable systems.

To simulate the time-domain system response as a system sampled at discrete times, we recognize that the derivative of the state vector can be approximated as follows, which leads to a simple equation for the next \bm{x}_k value.

\bm{\dot{x}} = \frac{\bm{x}_k - \bm{x}_{k-1}}{(\Delta t)}

(8.11)\bm{x}_k = \bm{x}_{k-1} + \bm{\dot{x}}\,(\Delta t)

(8.11) is sufficient for our application here, but it assumes that \bm{\dot{x}} is constant between \bm{x}_{k-1} and \bm{x}_k. More precise algorithms for simulating the response of ODEs are presented in Numerical Differential Equations.

% File: closedLoopCtrl.m
% Simulation and analytic solution of a stable
% and unstable closed-loop control system.

N = 200;                     % number of time samples
A = [1 1.8; -0.7 2.5];       % system matrix (unstable on its own)
B = [1 0.5; 0.5 1];          % control input matrix
T = 15;                      % duration of the simulation
t = linspace(0, T, N);       % time samples
dt = T/(N-1);                % time step matching the spacing of t
X = zeros(2, N);             % simulated state at each time sample
X(:,1) = [2; 1];             % starting state
K_unstable = eye(2);             % gain that fails to stabilize the system
Atarget = [0.5 -1; 1.5 -2];      % desired stable closed-loop matrix
K_stable = B\(A - Atarget);      % gain such that A - B*K equals Atarget
% K = K_unstable;                % uncomment to reproduce Fig. 8.8
K = K_stable;                    % produces Fig. 8.9
% simulation
for k = 2:N
    x = X(:,k-1);
    u = -K*x;
    x_dot = A*x + B*u;
    X(:,k) = x + x_dot*dt;
end
% analytic solution
x0 = X(:,1);
[ev, L] = eig(A - B*K);
c = ev\x0;
y = real(c(1)*exp(L(1,1)*t).*ev(:,1) + ...
    c(2)*exp(L(2,2)*t).*ev(:,2));
% plot results
hold on
plot(t, X(1,:))
plot(t, X(2,:))
plot(t, y(1,:))
plot(t, y(2,:))
hold off
legend('Simulation y1', 'Simulation y2', ...
    'Analytic y1', 'Analytic y2','Location', 'north')
xlabel('t')
title('Stable Control System')

../_images/unstableCtrl.png

Fig. 8.8 The state of the unstable system quickly goes to infinity.

../_images/stableCtrl.png

Fig. 8.9 The stable system has a steady state of (0, 0).