6. Introduction to Linear Algebra
Linear algebra is an important branch of mathematics for science and engineering applications. However, its frequency of use and prominence in undergraduate science and engineering curricula have not always been what they are today. Although linear algebra has a long history, its computational requirements limited its adoption until recently. Gottfried Wilhelm Leibniz, one of the inventors of calculus, used linear algebra to solve systems of equations in 1693 [TUCKER93]. As with calculus, linear algebra progressed slowly in the eighteenth century but advanced significantly during the nineteenth century [CHEN12]. Yet even in the middle of the twentieth century, the lack of efficient tools limited the use of linear algebra by scientists and engineers. Calculations on vectors and matrices require many simple multiplications and additions, which are tedious for humans. The arrival of fast computers, more efficient algorithms, and powerful software made linear algebra more accessible. Now that we can better leverage computers’ assistance, linear algebra has become a prerequisite to studying several application areas such as control systems, signal and image processing, data analysis, communication systems, and artificial intelligence.
There are at least five important applications of linear algebra:
1. Problems related to spatial vectors and geometry (Working with Vectors to Geometric Transforms, and Over-determined Systems and Vector Projections),
2. Solutions to systems of linear equations (Systems of Linear Equations to the end of this chapter),
3. Least squares regression (Least Squares Regression),
4. Applications of eigenvalues and eigenvectors to systems of difference and differential equations (Application of Eigenvalues and Eigenvectors),
5. Applications of dimensionality reduction from the singular value decomposition and principal component analysis (Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)).
Learning linear algebra within the context of computational engineering is highly appropriate. We will learn the basic mathematics, but we will quickly turn to the computer to do the heavy lifting of computing results.
The first three sections of this chapter provide an introduction to vectors, matrices, and associated mathematical operations. Then the remainder of the chapter is devoted to finding solutions to systems of linear equations of the form \(\mathbf{A}\,\bm{x} = \bm{b}\). The matrix \(\mathbf{A}\) holds the coefficients of the equations. The vector \(\bm{x}\) represents the unknown variables we wish to find. The vector \(\bm{b}\) completes the system of equations with the known values given by the product of \(\mathbf{A}\) and \(\bm{x}\). We will consider three types of linear algebra equations.
Square matrix systems, or critically determined systems [1], have an equal number of equations and unknown variables, so matrix \(\mathbf{A}\) is square (equal number of rows and columns, \(m = n\)). Square matrix systems first come to mind as systems with a unique, exact answer, although some square matrix systems are singular and do not have a solution.
Over-determined systems have more equations than unknown variables, so matrix \(\mathbf{A}\) has more rows than columns (\(m > n\)). Over-determined systems may have an exact, unique solution if the equations are consistent, but the solution is often an approximation. Such is the case for problems requiring least squares regression, where we have many more equations (rows) than unknowns.
Under-determined systems have fewer equations than unknown variables, so matrix \(\mathbf{A}\) has more columns than rows (\(m < n\)). These systems do not have a unique solution, but rather an infinite number of solutions. Since under-determined systems are common in some application domains, we want to go beyond finding the set of solutions and identify which of those solutions is best.
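As a quick preview of how these three cases play out in practice, here is a minimal MATLAB sketch. The matrices and right-hand sides are made-up examples for illustration only, not problems drawn from the chapter.

```matlab
% Square (critically determined) system: m = n
A = [2 1; 1 3];
b = [3; 5];
x = A\b;            % unique, exact solution when A is nonsingular

% Over-determined system: m > n
A = [1 1; 1 2; 1 3];
b = [1; 2; 2];
x = A\b;            % least squares solution, minimizes norm(A*x - b)

% Under-determined system: m < n
A = [1 2 3];
b = 6;
x = pinv(A)*b;      % minimum-norm solution from the infinite solution set
```

Note that for the under-determined case the backslash operator returns a basic solution with at most \(m\) nonzero entries, while pinv returns the minimum-norm solution; the chapter's later sections examine this distinction.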
After a quick introduction to vectors and matrices, we will review the elimination method for solving square matrix systems of equations. Elimination-based linear algebra notation allows us to write pencil-and-paper solutions to problems involving vectors and matrices. We will see how software such as MATLAB can achieve numerically accurate results using elimination on square matrix systems. A review of applications of linear systems of equations will show a few examples of how linear algebra is used in science and engineering. To compute fast and accurate solutions to problems with large, rectangular matrices, we need orthogonal matrix decompositions (factorings of a matrix into a product of sub-matrices). With QR factorization and the singular value decomposition (SVD) in hand, we will be ready to consider rectangular systems of equations properly. We conclude the chapter with a closer look at MATLAB’s left-divide, or backslash, operator, which gives us an easy solution to systems of equations.
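The factorizations mentioned above are all available as built-in MATLAB functions. As a small illustration with an arbitrary example matrix (not one used in the text), the calls look like this:

```matlab
A = [4 2 1; 2 5 3; 1 3 6];   % an arbitrary example matrix
[L, U, P]  = lu(A);          % elimination as a factoring: P*A = L*U
[Q, R]     = qr(A);          % orthogonal factoring: A = Q*R
[U2, S, V] = svd(A);         % singular value decomposition: A = U2*S*V'
x = A \ [1; 2; 3];           % left-divide chooses a suitable algorithm itself
```

Each of these factorizations, and how the left-divide operator selects among them, is the subject of a later section of this chapter.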
Contents:
- 6.1. Working with Vectors
- 6.2. Working with Matrices
- 6.3. Geometric Transforms
- 6.4. Systems of Linear Equations
- 6.5. Elimination
- 6.6. LU Decomposition
- 6.7. Cholesky Decomposition for Positive Definite Systems
- 6.8. Linear System Applications
- 6.9. Numerical Stability
- 6.10. Orthogonal Matrix Decompositions
- 6.11. Over-determined Systems and Vector Projections
- 6.12. Under-determined Systems
- 6.13. Left-Divide Operator
Footnote