6. Introduction to Linear Algebra

Linear algebra is a branch of mathematics that is very useful in science and engineering. However, its prominence in undergraduate science and engineering curricula has not always been what it is today. Although linear algebra has a long history, computational requirements limited its adoption until recently. Gottfried Wilhelm Leibniz, one of the inventors of calculus, used linear algebra to solve systems of equations in 1693 [TUCKER93]. As with calculus, linear algebra progressed slowly in the eighteenth century but advanced significantly during the nineteenth century [CHEN12]. Yet even in the middle of the twentieth century, the use of linear algebra by scientists and engineers was limited by the lack of efficient tools. Calculations on vectors and matrices require large numbers of simple multiplications and additions, which are tedious for humans. The arrival of fast computers, more efficient algorithms, and powerful software made linear algebra far more accessible. Now that we are better able to leverage the assistance of computers, linear algebra has become a prerequisite to studying several application areas such as control systems, signal and image processing, data analysis, communication systems, and artificial intelligence.

There are at least five important applications of linear algebra:

  1. Problems related to spatial vectors and geometry,
  2. Solutions to systems of linear equations,
  3. Vector projections with application to least squares regression,
  4. Applications of eigenvalues and eigenvectors to systems of difference and differential equations,
  5. Applications of dimensionality reduction from the singular value decomposition and principal component analysis.

It is highly appropriate to learn linear algebra within the context of computational engineering. We will learn the basics of the mathematics, but we will quickly turn to the computer to do the heavy lifting of computing results.

The first three sections of this chapter provide an introduction to vectors, matrices, and the associated mathematical operations. The remainder of the chapter is devoted to finding solutions to systems of linear equations of the form \mathbf{A}\,\bm{x} = \bm{b}. The matrix \mathbf{A} holds the coefficients of the equations, the vector \bm{x} represents the unknown variables that we wish to find, and the vector \bm{b} holds the known values on the right-hand side of each equation, which is the product of \mathbf{A} and \bm{x}. We will consider three types of linear systems of equations.
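For example, the pair of equations x_1 + 2\,x_2 = 5 and 3\,x_1 + 4\,x_2 = 6 (coefficients chosen purely for illustration) takes the matrix form

    \underbrace{\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}}_{\mathbf{A}}
    \underbrace{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}_{\bm{x}}
    =
    \underbrace{\begin{bmatrix} 5 \\ 6 \end{bmatrix}}_{\bm{b}}.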

  • Square matrix systems, or critically determined systems [1], have an equal number of equations and unknown variables, so matrix \mathbf{A} is square (equal number of rows and columns, m = n). Square matrix systems are what first come to mind as a system having a unique, exact answer. However, some square matrix systems are singular and do not have a unique solution.
  • Under-determined systems have fewer equations than unknown variables. Matrix \mathbf{A} has more columns than rows (m < n). These systems do not have a unique solution, but rather an infinite number of solutions. Since under-determined systems are common in some application domains, we want to go beyond finding the set of solutions and identify which solution is considered best.
  • Over-determined systems have more equations than unknown variables. Matrix \mathbf{A} has more rows than columns (m > n). Over-determined systems may have an exact, unique solution if the equations are all consistent, but the solution is often an approximation. Such is the case for least squares regression problems, where we have a large number of equations (rows) but random uncertainty forces an approximate solution. A short code sketch contrasting the three cases follows this list.
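
As a preview of what is ahead, here is a minimal MATLAB sketch showing one way to solve each of the three system types. The matrices are made up for illustration and are not taken from the chapter's examples.

    % Square (critically determined) system: m = n
    A = [2 1; 1 3];  b = [3; 5];
    x_square = A \ b          % unique, exact solution when A is nonsingular

    % Under-determined system: m < n, infinitely many solutions
    A = [1 2 3];  b = 6;
    x_under = pinv(A) * b     % pinv picks the minimum-norm solution

    % Over-determined system: m > n, usually no exact solution
    A = [1 1; 1 2; 1 3];  b = [1; 2; 2.5];
    x_over = A \ b            % left-divide returns the least squares solution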

After a quick introduction to vectors and matrices, we will review the elimination method for solving square matrix systems of equations. Elimination-based linear algebra notation gives us the tools to write pencil-and-paper solutions to problems involving vectors and matrices. We will see how software such as MATLAB is able to achieve numerically accurate results using elimination on square matrix systems. A review of applications of linear systems of equations will show a few examples of how linear algebra is used in science and engineering. To compute fast and accurate solutions to problems with large, rectangular matrices, orthogonal matrix decompositions (factoring a matrix into a product of sub-matrices) are needed. With the QR factorization and the singular value decomposition (SVD) in hand, we will be ready to properly consider rectangular systems of equations. We conclude the chapter with a closer look at MATLAB’s left-divide, or backslash, operator, which gives us an easy solution to systems of equations. Understanding the strategies, and how to avoid the lurking problems with complex systems, gives us the foundation to apply linear algebra to real science and engineering problems.
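
The two decompositions can be previewed in a few lines of MATLAB. This sketch uses a random test matrix (an assumption for illustration only) and verifies that each factoring reproduces the original matrix to within round-off error.

    A = rand(5, 3);        % random rectangular test matrix
    [Q, R] = qr(A, 0);     % economy-size QR: A = Q*R, Q has orthonormal columns
    [U, S, V] = svd(A);    % SVD: A = U*S*V'
    norm(A - Q*R)          % both residuals should be near machine precision
    norm(A - U*S*V')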

If you’d like more introductory information on vectors, matrices, matrix multiplication, and transforming vectors, the following Khan Academy videos may help.

Contents:

Footnote

[1] The term critically determined is occasionally used in linear algebra literature to describe square matrix systems of equations. The term is used here to make a clear distinction between under-determined and over-determined systems of equations.