(Recall from Primer 2 that a matrix times its inverse, e.g., A A⁻¹, equals the identity matrix I, and that multiplying a matrix by I has no effect, so that M A A⁻¹ = M.)

Equation (9.9) gives us a way of writing the matrix M in terms of its right eigenvectors (the columns of A), its eigenvalues (the entries of D), and the inverse matrix A⁻¹. Interestingly, the rows of the matrix A⁻¹ are nothing other than left eigenvectors of the matrix M. To see this, first recall equation (P2.12), which defines the left eigenvectors: vᵢᵀ M = λᵢ vᵢᵀ. Now, if we start by defining A⁻¹ as a d × d matrix whose rows are the left eigenvectors of M, then all d of these equations can be written in matrix notation as

A⁻¹ M = D A⁻¹. (9.10)

The long-term dynamics of a linear model with multiple variables is dominated by its leading eigenvalue.
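As a quick numerical illustration of this claim (a sketch using NumPy; the matrix M below is an arbitrary example chosen here, not one from the text), iterating n(t+1) = M n(t) shows the per-step growth factor settling on the leading eigenvalue:

```python
import numpy as np

# Hypothetical 2x2 example matrix (chosen for illustration, not from the text).
M = np.array([[0.5, 2.0],
              [0.3, 0.4]])

n = np.array([1.0, 1.0])           # arbitrary starting vector n(0)
for _ in range(50):                # iterate n(t+1) = M n(t)
    n_next = M @ n
    growth = n_next.sum() / n.sum()  # per-step growth factor
    n = n_next

leading = max(np.linalg.eigvals(M).real)  # leading eigenvalue of M
print(growth, leading)             # the two values agree after many steps
```

Because the other eigenvalue is smaller in magnitude, its contribution shrinks geometrically, leaving the leading eigenvalue to dictate the long-term growth rate.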

If we multiply both sides of equation (9.10) on the left by A, we once again get equation (9.9). Thus, we arrive at the same result if and only if the columns of A contain the right eigenvectors and the rows of A⁻¹ contain the left eigenvectors of the original matrix M. (Although the lengths of eigenvectors are typically arbitrary, setting the left eigenvectors to the rows of A⁻¹ constrains their lengths in a manner that depends on the lengths chosen for the right eigenvectors in the columns of A.)
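This relationship is easy to confirm numerically (a sketch using NumPy; the matrix M is a hypothetical example, not from the text): the columns of A returned by an eigendecomposition are right eigenvectors, and the rows of A⁻¹ turn out to be left eigenvectors.

```python
import numpy as np

# Hypothetical example matrix M (chosen for illustration, not from the text).
M = np.array([[0.5, 2.0],
              [0.3, 0.4]])

eigvals, A = np.linalg.eig(M)   # columns of A are right eigenvectors
D = np.diag(eigvals)            # diagonal matrix of eigenvalues
A_inv = np.linalg.inv(A)

# Equation (9.9): M = A D A^-1
assert np.allclose(M, A @ D @ A_inv)

# Each row of A^-1 is a left eigenvector: v_i^T M = lambda_i v_i^T
for i, lam in enumerate(eigvals):
    v = A_inv[i]                # i-th row of A^-1
    assert np.allclose(v @ M, lam * v)
```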

By rewriting the matrix M in terms of its eigenvalues and their right and left eigenvectors, we can greatly simplify the general solution (9.5). Substituting (9.9) into (9.5), we obtain

n(t) = Mᵗ n(0) = (A D A⁻¹)ᵗ n(0). (9.11)

Expanding (A D A⁻¹)ᵗ is straightforward. Multiplying A D A⁻¹ by itself once, we get A D A⁻¹ A D A⁻¹ = A D I D A⁻¹ = A D² A⁻¹. We can repeat this operation any number of times, and each time the exponent of D just increases by one. Thus, we can rewrite (9.11) as

n(t) = A Dᵗ A⁻¹ n(0). (9.12)

The tremendous advantage of (9.12) is that, while Mᵗ is difficult to compute, Dᵗ is easy to compute; it is just the diagonal matrix with each eigenvalue raised to the t-th power:

Dᵗ = diag(λ₁ᵗ, λ₂ᵗ, …, λ_dᵗ).
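The computational shortcut can be sketched as follows (using NumPy, with a hypothetical example matrix not from the text): raising the diagonal matrix D to the power t only requires raising each eigenvalue to the power t, and the result matches repeated matrix multiplication.

```python
import numpy as np

# Hypothetical example matrix M (chosen for illustration, not from the text).
M = np.array([[0.5, 2.0],
              [0.3, 0.4]])
t = 10

eigvals, A = np.linalg.eig(M)
Dt = np.diag(eigvals ** t)          # D^t: each eigenvalue raised to the t-th power
Mt_fast = A @ Dt @ np.linalg.inv(A)  # equation (9.12) applied to M^t

# Same answer as multiplying M by itself t times, but far cheaper for large t.
assert np.allclose(Mt_fast, np.linalg.matrix_power(M, t))
```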
