Section 3.6 Changing Coordinates
Subsection 3.6.1 Linear Maps
A linear map or linear transformation on \(\mathbb R^2\) is a function \(T: \mathbb R^2 \rightarrow \mathbb R^2\) that is defined by a matrix. That is,
\begin{equation*}
T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.
\end{equation*}
Theorem 3.6.1.
A linear map \(T\) is invertible if and only if \(\det T \neq 0\text{.}\)
Proof.
If \(\det T = 0\text{,}\) then there are infinitely many nonzero vectors \({\mathbf x}\) such that \(T {\mathbf x} = {\mathbf 0}\text{.}\) Suppose that \(T^{-1}\) exists and that \({\mathbf x} \neq {\mathbf 0}\) is a vector such that \(T {\mathbf x} = {\mathbf 0}\text{.}\) Then
\begin{equation*}
{\mathbf x} = T^{-1} T {\mathbf x} = T^{-1} {\mathbf 0} = {\mathbf 0},
\end{equation*}
which is a contradiction. On the other hand, we can certainly compute \(T^{-1}\text{,}\) at least in the \(2 \times 2\) case, if the determinant is nonzero.
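For reference, the inverse of a \(2 \times 2\) matrix with nonzero determinant is given by the standard formula
\begin{equation*}
T = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad T^{-1} = \frac{1}{\det T} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix},
\end{equation*}
which can be verified directly by checking that \(T T^{-1} = I\text{.}\)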
Subsection 3.6.2 Changing Coordinates
Suppose that we consider a linear system \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\text{,}\) where \(T\) is an invertible matrix. If \({\mathbf y}(t)\) is a solution and we let \({\mathbf x}(t) = T {\mathbf y}(t)\text{,}\) then
\begin{equation*}
{\mathbf x}' = T {\mathbf y}' = T (T^{-1} A T) {\mathbf y} = A (T {\mathbf y}) = A {\mathbf x},
\end{equation*}
so \({\mathbf x}(t)\) is a solution of \({\mathbf x}' = A {\mathbf x}\text{.}\) To summarize:
- A linear map \(T\) converts solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) to solutions of \({\mathbf x}' = A {\mathbf x}\text{.}\)
- The inverse of a linear map \(T\) takes solutions of \({\mathbf x}' = A {\mathbf x}\) to solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\text{;}\) a quick numerical check of both facts follows this list.
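Since the solution of \({\mathbf x}' = A {\mathbf x}\) with \({\mathbf x}(0) = {\mathbf x}_0\) is \({\mathbf x}(t) = e^{At} {\mathbf x}_0\text{,}\) the two statements above amount to the matrix identity \(T e^{(T^{-1}AT)t} = e^{At} T\text{.}\) The short Python sketch below checks this identity numerically for an arbitrarily chosen \(A\) and invertible \(T\text{;}\) the particular matrices have no special significance.

import numpy as np
from scipy.linalg import expm

# Arbitrary (illustrative) coefficient matrix and invertible change of coordinates.
A = np.array([[2.0, 1.0],
              [0.0, -1.0]])
T = np.array([[1.0, 2.0],
              [0.0, 1.0]])

B = np.linalg.inv(T) @ A @ T        # the transformed matrix T^{-1} A T
t = 0.7                             # any fixed time

lhs = T @ expm(B * t)               # push a solution of y' = B y forward with T
rhs = expm(A * t) @ T               # solve x' = A x starting from T y(0)
print(np.allclose(lhs, rhs))        # True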
Subsection 3.6.3 Distinct Real Eigenvalues
Consider the system \({\mathbf x}' = A {\mathbf x}\text{,}\) where \(A\) has two real, distinct eigenvalues \(\lambda_1\) and \(\lambda_2\) with eigenvectors \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) respectively. Let \(T\) be the matrix with columns \({\mathbf v}_1\) and \({\mathbf v}_2\text{.}\) If \({\mathbf e}_1 = (1, 0)\) and \({\mathbf e}_2 = (0, 1)\text{,}\) then \(T {\mathbf e}_i = {\mathbf v}_i\) for \(i = 1, 2\text{.}\) Consequently, \(T^{-1} {\mathbf v}_i = {\mathbf e}_i\) for \(i = 1, 2\text{.}\) Thus, we have
\begin{equation*}
(T^{-1} A T) {\mathbf e}_i = T^{-1} A {\mathbf v}_i = T^{-1} (\lambda_i {\mathbf v}_i) = \lambda_i {\mathbf e}_i
\end{equation*}
for \(i = 1, 2\text{,}\) so that
\begin{equation*}
T^{-1} A T = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.
\end{equation*}
Example 3.6.2.
Suppose \(d{\mathbf x}/dt = A {\mathbf x}\text{,}\) where
\begin{equation*}
A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}.
\end{equation*}
The eigenvalues of \(A\) are \(\lambda_1 = 5\) and \(\lambda_2 = -1\text{,}\) and the associated eigenvectors are \((1, 2)\) and \((1, -1)\text{,}\) respectively. In this case, our matrix \(T\) is
\begin{equation*}
T = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}.
\end{equation*}
If \({\mathbf e}_1 = (1, 0)\) and \({\mathbf e}_2 = (0, 1)\text{,}\) then \(T {\mathbf e}_i = {\mathbf v}_i\) for \(i = 1, 2\text{.}\) Consequently, \(T^{-1} {\mathbf v}_i = {\mathbf e}_i\) for \(i = 1, 2\text{,}\) where
\begin{equation*}
T^{-1} = \begin{pmatrix} 1/3 & 1/3 \\ 2/3 & -1/3 \end{pmatrix}.
\end{equation*}
Thus,
\begin{equation*}
T^{-1} A T = \begin{pmatrix} 1/3 & 1/3 \\ 2/3 & -1/3 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & -1 \end{pmatrix}.
\end{equation*}
The eigenvalues of the matrix
\begin{equation*}
\begin{pmatrix} 5 & 0 \\ 0 & -1 \end{pmatrix}
\end{equation*}
are \(\lambda_1 = 5\) and \(\lambda_2 = -1\) with eigenvectors \((1, 0)\) and \((0, 1)\text{,}\) respectively. Thus, the general solution of
\begin{equation*}
{\mathbf y}' = \begin{pmatrix} 5 & 0 \\ 0 & -1 \end{pmatrix} {\mathbf y}
\end{equation*}
is
\begin{equation*}
{\mathbf y}(t) = c_1 e^{5t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{-t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
\end{equation*}
Hence, the general solution of
\begin{equation*}
{\mathbf x}' = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} {\mathbf x}
\end{equation*}
is
\begin{equation*}
{\mathbf x}(t) = T {\mathbf y}(t) = c_1 e^{5t} \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
\end{equation*}
The linear map \(T\) converts the phase portrait of the system \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) (Figure 3.6.3) to the phase portrait of the system \({\mathbf x}' = A {\mathbf x}\) (Figure 3.6.4).


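The diagonalization in this example is easy to check by machine. The following sketch verifies the eigenvalues and the matrix \(T^{-1} A T\) computed above, with the matrices entered exactly as written in the example.

import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
T = np.array([[1.0, 1.0],        # columns are the eigenvectors (1, 2) and (1, -1)
              [2.0, -1.0]])

print(np.linalg.eigvals(A))                    # eigenvalues 5 and -1 (in some order)
print(np.round(np.linalg.inv(T) @ A @ T, 10))  # [[ 5.  0.] [ 0. -1.]]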
Subsection 3.6.4 Complex Eigenvalues
Suppose now that the matrix \(A\) has complex eigenvalues \(\lambda = \alpha \pm i \beta\) with \(\beta \neq 0\text{.}\)
Proposition 3.6.5.
If \(\lambda = \alpha + i \beta\) is an eigenvalue of a real matrix \(A\) with \(\beta \neq 0\) and eigenvector of the form
\begin{equation*}
{\mathbf v} = {\mathbf v}_1 + i {\mathbf v}_2,
\end{equation*}
where \({\mathbf v}_1\) and \({\mathbf v}_2\) are real vectors, then the vectors \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.
Proof.
If \({\mathbf v}_1\) and \({\mathbf v}_2\) are not linearly independent, then \({\mathbf v}_1 = c {\mathbf v}_2\) for some \(c \in \mathbb R\text{.}\) On one hand, we have
\begin{equation*}
A ({\mathbf v}_1 + i {\mathbf v}_2) = A (c {\mathbf v}_2 + i {\mathbf v}_2) = (c + i) A {\mathbf v}_2.
\end{equation*}
However,
\begin{equation*}
A ({\mathbf v}_1 + i {\mathbf v}_2) = (\alpha + i \beta) ({\mathbf v}_1 + i {\mathbf v}_2) = (\alpha + i \beta)(c + i) {\mathbf v}_2.
\end{equation*}
In other words, \(A {\mathbf v}_2 = (\alpha + i \beta) {\mathbf v}_2\text{.}\) However, this is a contradiction, since the left-hand side of the equation is a real vector while the right-hand side is complex (recall that \(\beta \neq 0\)). Thus, \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.
Proposition 3.6.6.
Let \(A\) be a real matrix with eigenvalue \(\lambda = \alpha + i \beta\text{,}\) where \(\beta \neq 0\text{.}\) If
\begin{equation*}
{\mathbf v} = {\mathbf v}_1 + i {\mathbf v}_2
\end{equation*}
is an eigenvector for \(\lambda\text{,}\) then there exists a matrix \(T\) such that
\begin{equation*}
T^{-1} A T = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}.
\end{equation*}
Proof.
Since \({\mathbf v}_1 + i {\mathbf v}_2\) is an eigenvector associated to the eigenvalue \(\alpha + i \beta\text{,}\) we have
\begin{equation*}
A ({\mathbf v}_1 + i {\mathbf v}_2) = (\alpha + i \beta) ({\mathbf v}_1 + i {\mathbf v}_2) = (\alpha {\mathbf v}_1 - \beta {\mathbf v}_2) + i (\beta {\mathbf v}_1 + \alpha {\mathbf v}_2).
\end{equation*}
Equating the real and imaginary parts, we find that
\begin{align*}
A {\mathbf v}_1 & = \alpha {\mathbf v}_1 - \beta {\mathbf v}_2\\
A {\mathbf v}_2 & = \beta {\mathbf v}_1 + \alpha {\mathbf v}_2.
\end{align*}
If \(T\) is the matrix with columns \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) then
\begin{equation*}
T {\mathbf e}_1 = {\mathbf v}_1 \quad \text{and} \quad T {\mathbf e}_2 = {\mathbf v}_2.
\end{equation*}
Thus, we have
\begin{equation*}
(T^{-1} A T) {\mathbf e}_1 = T^{-1} A {\mathbf v}_1 = T^{-1} (\alpha {\mathbf v}_1 - \beta {\mathbf v}_2) = \alpha {\mathbf e}_1 - \beta {\mathbf e}_2.
\end{equation*}
Similarly,
\begin{equation*}
(T^{-1} A T) {\mathbf e}_2 = T^{-1} A {\mathbf v}_2 = T^{-1} (\beta {\mathbf v}_1 + \alpha {\mathbf v}_2) = \beta {\mathbf e}_1 + \alpha {\mathbf e}_2.
\end{equation*}
Therefore, we can write the matrix \(T^{-1}A T\) as
\begin{equation*}
T^{-1} A T = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}.
\end{equation*}
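The construction in this proof is straightforward to carry out numerically: take any complex eigenvector, split it into its real and imaginary parts, and use those parts as the columns of \(T\text{.}\) The sketch below does this for an arbitrarily chosen matrix with complex eigenvalues; the particular matrix is only an illustration.

import numpy as np

# Illustrative matrix with complex eigenvalues 1 +/- i.
A = np.array([[0.0, 1.0],
              [-2.0, 2.0]])

evals, evecs = np.linalg.eig(A)
k = np.argmax(evals.imag)                # choose the eigenvalue alpha + i*beta with beta > 0
v = evecs[:, k]

T = np.column_stack((v.real, v.imag))    # columns v_1 and v_2
print(np.round(np.linalg.inv(T) @ A @ T, 10))
# expected: [[alpha, beta], [-beta, alpha]] = [[1, 1], [-1, 1]]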
Example 3.6.7.
Suppose that we wish to find the solutions of the second-order equation
\begin{equation*}
x'' + x' + \frac{1}{2}\, x = 0.
\end{equation*}
This particular equation might model a damped harmonic oscillator. If we rewrite this second-order equation as a first-order system, we have
\begin{align*}
x' & = y\\
y' & = -\frac{1}{2}\, x - y,
\end{align*}
or equivalently \({\mathbf x}' = A {\mathbf x}\text{,}\) where
\begin{equation*}
A = \begin{pmatrix} 0 & 1 \\ -1/2 & -1 \end{pmatrix}.
\end{equation*}
The eigenvalues of \(A\) are
\begin{equation*}
\lambda = \frac{-1 \pm i}{2}.
\end{equation*}
The eigenvalue \(\lambda = (-1 + i)/2\) has an eigenvector
\begin{equation*}
{\mathbf v} = \begin{pmatrix} 2 \\ -1 + i \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix} + i \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\end{equation*}
with real and imaginary parts \({\mathbf v}_1 = (2, -1)\) and \({\mathbf v}_2 = (0, 1)\text{,}\)
respectively. Therefore, we can take \(T\) to be the matrix
\begin{equation*}
T = \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix}.
\end{equation*}
Consequently,
\begin{equation*}
T^{-1} A T = \begin{pmatrix} 1/2 & 0 \\ 1/2 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1/2 & -1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} -1/2 & 1/2 \\ -1/2 & -1/2 \end{pmatrix},
\end{equation*}
which is in the canonical form
\begin{equation*}
\begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}
\end{equation*}
with \(\alpha = -1/2\) and \(\beta = 1/2\text{.}\)
The general solution to \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) is
\begin{equation*}
{\mathbf y}(t) = c_1 e^{-t/2} \begin{pmatrix} \cos(t/2) \\ -\sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} \sin(t/2) \\ \cos(t/2) \end{pmatrix}.
\end{equation*}
The phase portrait of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) is given in Figure 3.6.8.

The general solution of \({\mathbf x}' = A {\mathbf x}\) is
\begin{equation*}
{\mathbf x}(t) = T {\mathbf y}(t) = c_1 e^{-t/2} \begin{pmatrix} 2 \cos(t/2) \\ -\cos(t/2) - \sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} 2 \sin(t/2) \\ \cos(t/2) - \sin(t/2) \end{pmatrix}.
\end{equation*}
The phase portrait for this system is given in Figure 3.6.9.

Remark 3.6.10.
Of course, we have a much more efficient way of solving the system \({\mathbf x}' = A {\mathbf x}\text{,}\) where
\begin{equation*}
A = \begin{pmatrix} 0 & 1 \\ -1/2 & -1 \end{pmatrix}.
\end{equation*}
Since \(A\) has eigenvalue \(\lambda = (-1 + i)/2\) with an eigenvector \({\mathbf v} = (2, -1 + i)\text{,}\) we can apply Euler's formula and write the solution as
\begin{equation*}
e^{(-1 + i)t/2} \begin{pmatrix} 2 \\ -1 + i \end{pmatrix} = e^{-t/2} \bigl( \cos(t/2) + i \sin(t/2) \bigr) \begin{pmatrix} 2 \\ -1 + i \end{pmatrix}.
\end{equation*}
Taking the real and the imaginary parts of the last expression, the general solution of \({\mathbf x}' = A {\mathbf x}\) is
\begin{equation*}
{\mathbf x}(t) = c_1 e^{-t/2} \begin{pmatrix} 2 \cos(t/2) \\ -\cos(t/2) - \sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} 2 \sin(t/2) \\ \cos(t/2) - \sin(t/2) \end{pmatrix},
\end{equation*}
which agrees with the solution that we found by transforming coordinates.
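Since the matrices in this example were worked out by hand, it is worth double-checking the reduction symbolically; the sketch below (using SymPy, with the matrices exactly as written above) reproduces the eigenvalues and the canonical form.

from sympy import Matrix, Rational

A = Matrix([[0, 1],
            [-Rational(1, 2), -1]])
T = Matrix([[2, 0],       # columns are the real and imaginary parts of (2, -1 + i)
            [-1, 1]])

print(A.eigenvals())      # eigenvalues (-1 +/- i)/2, each with multiplicity 1
print(T.inv() * A * T)    # Matrix([[-1/2, 1/2], [-1/2, -1/2]])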
Subsection 3.6.5 Repeated eigenvalues
Now suppose that \(A\) has a single real eigenvalue \(\lambda\text{.}\) The characteristic polynomial of \(A\) is \(p(\lambda) = \lambda^2 - (a + d)\lambda + (ad - bc)\text{,}\) and since this polynomial has a repeated root, \(A\) has the single eigenvalue \(\lambda = (a + d)/2\text{.}\)
Proposition 3.6.11.
If \(A\) has a single eigenvalue \(\lambda\) and a pair of linearly independent eigenvectors, then \(A\) must be of the form
\begin{equation*}
A = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}.
\end{equation*}
Proof.
Suppose that \({\mathbf u}\) and \({\mathbf v}\) are linearly independent eigenvectors for \(A\text{,}\) and let \(T\) be the matrix whose first column is \({\mathbf u}\) and second column is \({\mathbf v}\text{.}\) That is, \(T {\mathbf e}_1 = {\mathbf u}\) and \(T{\mathbf e}_2 = {\mathbf v}\text{.}\) Since \({\mathbf u}\) and \({\mathbf v}\) are linearly independent, \(\det(T) \neq 0\) and \(T\) is invertible. So, it must be the case that
\begin{equation*}
T^{-1} A T = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix},
\end{equation*}
or
\begin{equation*}
A = T \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} T^{-1} = \lambda T T^{-1} = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}.
\end{equation*}
Proposition 3.6.12.
Suppose that \(A\) has a single eigenvalue \(\lambda\text{.}\) If \({\mathbf v}\) is an eigenvector for \(\lambda\) and any other eigenvector for \(\lambda\) is a multiple of \({\mathbf v}\text{,}\) then there exists a matrix \(T\) such that
\begin{equation*}
T^{-1} A T = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.
\end{equation*}
Proof.
If \({\mathbf w}\) is another vector in \({\mathbb R}^2\) such that \({\mathbf v}\) and \({\mathbf w}\) are linearly independent, then \(A {\mathbf w}\) can be written as a linear combination of \({\mathbf v}\) and \({\mathbf w}\text{,}\)
\begin{equation*}
A {\mathbf w} = \alpha {\mathbf v} + \beta {\mathbf w}.
\end{equation*}
We can assume that \(\alpha \neq 0\text{;}\) otherwise, we would have a second linearly independent eigenvector. We claim that \(\beta = \lambda\text{.}\) If this were not the case, then
\begin{equation*}
A \left( {\mathbf w} + \frac{\alpha}{\beta - \lambda} {\mathbf v} \right) = \beta \left( {\mathbf w} + \frac{\alpha}{\beta - \lambda} {\mathbf v} \right)
\end{equation*}
and \(\beta\) would be an eigenvalue distinct from \(\lambda\text{.}\) Thus, \(A {\mathbf w} = \alpha {\mathbf v} + \lambda {\mathbf w}\text{.}\) If we let \({\mathbf u} = (1/ \alpha) {\mathbf w}\text{,}\) then
\begin{equation*}
A {\mathbf u} = \frac{1}{\alpha} A {\mathbf w} = {\mathbf v} + \frac{\lambda}{\alpha} {\mathbf w} = {\mathbf v} + \lambda {\mathbf u}.
\end{equation*}
We now define \(T {\mathbf e}_1 = {\mathbf v}\) and \(T{\mathbf e}_2 = {\mathbf u}\text{.}\) Since
\begin{align*}
A T {\mathbf e}_1 & = A {\mathbf v} = \lambda {\mathbf v}\\
A T {\mathbf e}_2 & = A {\mathbf u} = {\mathbf v} + \lambda {\mathbf u},
\end{align*}
we have
\begin{align*}
T^{-1} A T {\mathbf e}_1 & = \lambda {\mathbf e}_1\\
T^{-1} A T {\mathbf e}_2 & = {\mathbf e}_1 + \lambda {\mathbf e}_2.
\end{align*}
Therefore, \({\mathbf x}' = A {\mathbf x}\) is in canonical form after a change of coordinates.
Example 3.6.13.
Consider the system \({\mathbf x}' = A {\mathbf x}\text{,}\) where
\begin{equation*}
A = \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix}.
\end{equation*}
The characteristic polynomial of \(A\) is \(\lambda^2 - 6 \lambda + 9 = (\lambda - 3)^2\text{,}\) so we have only a single eigenvalue \(\lambda = 3\) with eigenvector \({\mathbf v} = (1, -2)\text{.}\) Any other eigenvector for \(\lambda\) is a multiple of \({\mathbf v}\text{.}\) If we choose \({\mathbf w} = (1, 0)\text{,}\) then \({\mathbf v}\) and \({\mathbf w}\) are linearly independent. Furthermore,
\begin{equation*}
A {\mathbf w} = \begin{pmatrix} 5 \\ -4 \end{pmatrix} = 2 {\mathbf v} + 3 {\mathbf w}.
\end{equation*}
So we can let \({\mathbf u} = (1/2) {\mathbf w} = (1/2, 0)\text{.}\) Therefore, the matrix that we seek is
\begin{equation*}
T = \begin{pmatrix} 1 & 1/2 \\ -2 & 0 \end{pmatrix},
\end{equation*}
and
\begin{equation*}
T^{-1} A T = \begin{pmatrix} 0 & -1/2 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1/2 \\ -2 & 0 \end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}.
\end{equation*}
From Section 3.3, we know that the general solution to the system
\begin{equation*}
{\mathbf y}' = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix} {\mathbf y}
\end{equation*}
is
\begin{equation*}
{\mathbf y}(t) = c_1 e^{3t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{3t} \begin{pmatrix} t \\ 1 \end{pmatrix}.
\end{equation*}
Therefore, the general solution to
\begin{equation*}
{\mathbf x}' = \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix} {\mathbf x}
\end{equation*}
is
\begin{equation*}
{\mathbf x}(t) = T {\mathbf y}(t) = c_1 e^{3t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + c_2 e^{3t} \begin{pmatrix} t + 1/2 \\ -2t \end{pmatrix}.
\end{equation*}
This solution agrees with the solution that we found in Example 3.5.5.
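As with the earlier examples, the matrix computation here is easy to check by machine. The following sketch (again using SymPy, with the matrices exactly as written above) confirms the repeated eigenvalue and the canonical form of \(T^{-1} A T\text{.}\)

from sympy import Matrix, Rational

A = Matrix([[5, 1],
            [-4, 1]])
T = Matrix([[1, Rational(1, 2)],   # columns are v = (1, -2) and u = (1/2, 0)
            [-2, 0]])

print(A.eigenvals())      # {3: 2}, a single eigenvalue of multiplicity two
print(T.inv() * A * T)    # Matrix([[3, 1], [0, 3]])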
Subsection 3.6.6 Important Lessons
- A linear map \(T\) is invertible if and only if \(\det T \neq 0\text{.}\)
- A linear map \(T\) converts solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) to solutions of \({\mathbf x}' = A {\mathbf x}\text{.}\)
- The inverse of a linear map \(T\) takes solutions of \({\mathbf x}' = A {\mathbf x}\) to solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\text{.}\)
- A change of coordinates converts the system \({\mathbf x}' = A {\mathbf x}\) to one of the following special cases (a sketch of how to decide which case applies follows this list),
\begin{equation*}
\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \quad
\begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}, \quad
\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}, \quad
\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.
\end{equation*}
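The case analysis in this list can be automated: the eigenvalue structure of a \(2 \times 2\) matrix is determined by the discriminant of its characteristic polynomial, together with whether a repeated eigenvalue has one or two independent eigenvectors. The following sketch is one possible way to classify a given matrix; the function name and the sample matrix are hypothetical choices made only for illustration.

import numpy as np

def canonical_case(A, tol=1e-10):
    """Report which canonical form T^{-1} A T takes for a real 2x2 matrix A."""
    tr = A[0, 0] + A[1, 1]
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    disc = tr**2 - 4 * det                 # discriminant of the characteristic polynomial
    if disc > tol:
        return "distinct real eigenvalues: diag(lambda, mu)"
    if disc < -tol:
        return "complex eigenvalues: [[alpha, beta], [-beta, alpha]]"
    lam = tr / 2
    # A repeated eigenvalue admits two independent eigenvectors only if A = lambda * I.
    if np.allclose(A, lam * np.eye(2), atol=tol):
        return "repeated eigenvalue, diagonalizable: lambda * I"
    return "repeated eigenvalue, not diagonalizable: [[lambda, 1], [0, lambda]]"

print(canonical_case(np.array([[1.0, 2.0], [4.0, 3.0]])))   # distinct real eigenvalues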
Reading Questions 3.6.7 Reading Questions
1.
Explain what is meant by a change of coordinates.
2.
Given a \(2 \times 2\) linear system, what are the possible types of solutions?
Exercises 3.6.8 Exercises
1.
Consider the one-parameter family of linear systems given by
- Sketch the path traced out by this family of linear systems in the trace-determinant plane as \(a\) varies.
- Discuss any bifurcations that occur along this path and compute the corresponding values of \(a\text{.}\)
2.
Consider the two-parameter family of linear systems
Identify all of the regions in the \(ab\)-plane where this system possesses a saddle, a sink, a spiral sink, and so on.