
Section 3.6 Changing Coordinates

In the beginning sections of this chapter, we outlined procedures for solving systems of linear differential equations of the form

\begin{equation*} \begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix} \end{equation*}

by determining the eigenvalues of \(A\text{,}\) but we have only justified the case where \(A\) has distinct real eigenvalues. We have, however, considered the following special cases for \(A\text{:}\)

\begin{equation*} \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \quad \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}, \quad \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}, \quad \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}. \end{equation*}

Although it may seem that we have limited ourselves by attacking only a very small part of the problem of finding solutions for \({\mathbf x}' = A {\mathbf x}\text{,}\) we are actually very close to providing a complete classification of all solutions. We will now show that we can transform any \(2 \times 2\) system of first-order linear differential equations with constant coefficients into one of these special systems by using a change of coordinates.

Subsection 3.6.1 Linear Maps

A linear map or linear transformation on \({\mathbb R}^2\) is a function \(T: {\mathbb R}^2 \to {\mathbb R}^2\) that is defined by a matrix. That is,

\begin{equation*} T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \end{equation*}

When there is no confusion, we will think of the linear map \(T: {\mathbb R}^2 \to {\mathbb R}^2\) and the matrix

\begin{equation*} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation*}

as interchangeable.

We will say that \(T: {\mathbb R}^2 \to {\mathbb R}^2\) is an invertible linear map if we can find a second linear map \(S\) such that \(T \circ S = S \circ T = I\text{,}\) where \(I\) is the identity transformation. In terms of matrices, this means that we can find a matrix \(S\) such that

\begin{equation*} TS = ST = I, \end{equation*}

where

\begin{equation*} I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{equation*}

is the \(2 \times 2\) identity matrix. We write \(T^{-1}\) for the inverse matrix of \(T\text{.}\) It is easy to check that the inverse of

\begin{equation*} T = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation*}

is

\begin{equation*} T^{-1} = \frac{1}{\det T} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \end{equation*}

If \(\det T = 0\text{,}\) then there are infinitely many nonzero vectors \({\mathbf x}\) such that \(T {\mathbf x} = {\mathbf 0}\text{,}\) and so \(T\) cannot be invertible. Indeed, suppose that \(T^{-1}\) exists and that \({\mathbf x} \neq {\mathbf 0}\) satisfies \(T {\mathbf x} = {\mathbf 0}\text{.}\) Then

\begin{equation*} {\mathbf x} = T^{-1} T {\mathbf x} = T^{-1} {\mathbf 0} = {\mathbf 0}, \end{equation*}

which is a contradiction. On the other hand, we can certainly compute \(T^{-1}\text{,}\) at least in the \(2 \times 2\) case, if the determinant is nonzero.

Subsection 3.6.2 Changing Coordinates

Suppose that we consider a linear system

\begin{equation} {\mathbf y}' = (T^{-1} A T) {\mathbf y},\tag{3.6.1} \end{equation}

where \(T\) is an invertible matrix. If \({\mathbf y}(t)\) is a solution of (3.6.1), we claim that \({\mathbf x}(t) = T {\mathbf y}(t)\) solves the equation \({\mathbf x}' = A {\mathbf x}\text{.}\) Indeed,

\begin{equation*} {\mathbf x}'(t) = (T {\mathbf y})'(t) = T {\mathbf y}'(t) = T ((T^{-1} A T) {\mathbf y}(t)) = A (T {\mathbf y}(t)) = A {\mathbf x}(t). \end{equation*}

We can think of this in two ways.

  1. A linear map \(T\) converts solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) to solutions of \({\mathbf x}' = A {\mathbf x}\text{.}\)
  2. The inverse of a linear map \(T\) takes solutions of \({\mathbf x}' = A {\mathbf x}\) to solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\text{.}\)
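
Both statements can be checked by direct computation. The following sympy sketch (ours; the matrices \(A\) and \(T\) are arbitrary choices for illustration) solves the transformed system with the matrix exponential and verifies that \(T {\mathbf y}(t)\) satisfies \({\mathbf x}' = A {\mathbf x}\text{:}\)

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [4, 3]])         # an arbitrary coefficient matrix
T = sp.Matrix([[1, 1], [2, -1]])        # an arbitrary invertible matrix
B = T.inv() * A * T                     # coefficient matrix of the transformed system
y = (t * B).exp() * sp.Matrix([1, 1])   # y(t) solves y' = B y with y(0) = (1, 1)
x = T * y
print(sp.simplify(x.diff(t) - A * x))   # zero vector, so x = T y solves x' = A x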

We are now in a position to solve our problem of finding solutions of an arbitrary linear system

\begin{equation*} \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \end{equation*}

Subsection 3.6.3 Distinct Real Eigenvalues

Consider the system \({\mathbf x}' = A {\mathbf x}\text{,}\) where \(A\) has two real, distinct eigenvalues \(\lambda_1\) and \(\lambda_2\) with eigenvectors \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) respectively. Let \(T\) be the matrix with columns \({\mathbf v}_1\) and \({\mathbf v}_2\text{.}\) If \({\mathbf e}_1 = (1, 0)\) and \({\mathbf e}_2 = (0, 1)\text{,}\) then \(T {\mathbf e}_i = {\mathbf v}_i\) for \(i = 1, 2\text{.}\) Consequently, \(T^{-1} {\mathbf v}_i = {\mathbf e}_i\) for \(i = 1, 2\text{.}\) Thus, we have

\begin{equation*} (T^{-1} A T) {\mathbf e}_i = T^{-1} A {\mathbf v}_i = T^{-1} (\lambda_i {\mathbf v}_i) = \lambda_i T^{-1} {\mathbf v}_i = \lambda_i {\mathbf e}_i \end{equation*}

for \(i = 1, 2\text{.}\) Therefore, the matrix \(T^{-1} A T\) is in canonical form,

\begin{equation*} T^{-1} A T = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}. \end{equation*}

The eigenvalues of the matrix \(T^{-1} A T\) are \(\lambda_1\) and \(\lambda_2\) with eigenvectors \((1, 0)\) and \((0, 1)\text{,}\) respectively. Thus, the general solution of

\begin{equation*} {\mathbf y}' = (T^{-1} A T) {\mathbf y} \end{equation*}

is

\begin{equation*} {\mathbf y}(t) = \alpha e^{\lambda_1 t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\lambda_2 t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \end{equation*}

Hence, the general solution of

\begin{equation*} {\mathbf x}' = A {\mathbf x} \end{equation*}

is

\begin{align*} T {\mathbf y}(t) & = T\left( \alpha e^{\lambda_1 t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\lambda_2 t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right)\\ & = \alpha e^{\lambda_1 t} T \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\lambda_2 t} T \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ & = \alpha e^{\lambda_1 t} {\mathbf v}_1 + \beta e^{\lambda_2 t} {\mathbf v}_2. \end{align*}
Example 3.6.2.

Suppose \(d{\mathbf x}/dt = A {\mathbf x}\text{,}\) where

\begin{equation*} A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}. \end{equation*}

The eigenvalues of \(A\) are \(\lambda_1 = 5\) and \(\lambda_2 = -1\text{,}\) and the associated eigenvectors are \((1, 2)\) and \((1, -1)\text{,}\) respectively. In this case, our matrix \(T\) is

\begin{equation*} T = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}. \end{equation*}

If \({\mathbf e}_1 = (1, 0)\) and \({\mathbf e}_2 = (0, 1)\text{,}\) then \(T {\mathbf e}_i = {\mathbf v}_i\) for \(i = 1, 2\text{.}\) Consequently, \(T^{-1} {\mathbf v}_i = {\mathbf e}_i\) for \(i = 1, 2\text{,}\) where

\begin{equation*} T^{-1} = \begin{pmatrix} 1/3 & 1/3 \\ 2/3 & -1/3 \end{pmatrix}. \end{equation*}

Thus,

\begin{equation*} T^{-1} A T = \begin{pmatrix} 1/3 & 1/3 \\ 2/3 & -1/3 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & -1 \end{pmatrix}. \end{equation*}

The eigenvalues of the matrix

\begin{equation*} \begin{pmatrix} 5 & 0 \\ 0 & -1 \end{pmatrix} \end{equation*}

are \(\lambda_1 = 5\) and \(\lambda_2 = -1\) with eigenvectors \((1, 0)\) and \((0, 1)\text{,}\) respectively. Thus, the general solution of

\begin{equation*} {\mathbf y}' = (T^{-1} A T) {\mathbf y} \end{equation*}

is

\begin{equation*} {\mathbf y}(t) = \alpha e^{5t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{-t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \end{equation*}

Hence, the general solution of

\begin{equation*} {\mathbf x}' = A {\mathbf x} \end{equation*}

is

\begin{equation*} T {\mathbf y}(t) = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} \left( \alpha e^{5t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{-t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) = \alpha e^{5t} \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \beta e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \end{equation*}

The linear map \(T\) converts the phase portrait of the system \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) (Figure 3.6.3) to the phase portrait of the system \({\mathbf x}' = A {\mathbf x}\) (Figure 3.6.4).

Figure 3.6.3. Phase portrait for \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) (solution curves approach the coordinate axes for large values)
Figure 3.6.4. Phase portrait for \({\mathbf x}' = A {\mathbf x}\) (solution curves approach the straight-line solutions for large values)
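
As a quick check of this example (our addition, not part of the text), sympy reproduces the eigenvalues, the inverse of \(T\text{,}\) and the diagonalization:

import sympy as sp

A = sp.Matrix([[1, 2], [4, 3]])
T = sp.Matrix([[1, 1], [2, -1]])   # columns are the eigenvectors (1, 2) and (1, -1)
print(A.eigenvals())               # {5: 1, -1: 1}
print(T.inv())                     # Matrix([[1/3, 1/3], [2/3, -1/3]])
print(T.inv() * A * T)             # Matrix([[5, 0], [0, -1]])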

Subsection 3.6.4 Complex Eigenvalues

Suppose the matrix

\begin{equation*} A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation*}

in the system \({\mathbf x}' = A {\mathbf x}\) has complex eigenvalues. In this case, the characteristic polynomial \(p(\lambda) = \lambda^2 - (a + d)\lambda + (ad - bc)\) will have roots \(\lambda = \alpha + i \beta\) and \(\overline{\lambda} = \alpha - i \beta\text{,}\) where

\begin{equation*} \alpha = \frac{a + d}{2} \quad \text{and} \quad \beta = \frac{\sqrt{-(a - d)^2 - 4bc}}{2}. \end{equation*}

The eigenvalues \(\lambda\) and \(\overline{\lambda}\) are complex conjugates. Now, suppose that the eigenvalue \(\lambda = \alpha + i \beta\) has an eigenvector of the form

\begin{equation*} {\mathbf v} = {\mathbf v}_1 + i {\mathbf v}_2, \end{equation*}

where \({\mathbf v}_1\) and \({\mathbf v}_2\) are real vectors. Then \(\overline{\mathbf v} = {\mathbf v}_1 - i {\mathbf v}_2\) is an eigenvector for \(\overline{\lambda}\text{,}\) since

\begin{equation*} A \overline{\mathbf v} = \overline{A {\mathbf v}} = \overline{\lambda {\mathbf v}} = \overline{\lambda} \, \overline{\mathbf v}. \end{equation*}

Consequently, if A is a real matrix with complex eigenvalues, one of the eigenvalues determines the other.
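
The formulas for \(\alpha\) and \(\beta\) are also easy to test numerically. In the sketch below (ours; the sample entries are arbitrary, chosen so that \((a - d)^2 + 4bc < 0\)), numpy returns the conjugate pair \(\alpha \pm i \beta\text{:}\)

import numpy as np

a, b, c, d = 1.0, 2.0, -3.0, 2.0   # sample entries giving complex eigenvalues
A = np.array([[a, b], [c, d]])
alpha = (a + d) / 2
beta = np.sqrt(-(a - d)**2 - 4*b*c) / 2
print(np.linalg.eigvals(A))        # [1.5+2.39791576j 1.5-2.39791576j], a conjugate pair
print(alpha, beta)                 # 1.5 2.3979157616563596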

If \({\mathbf v}_1\) and \({\mathbf v}_2\) are not linearly independent, then \({\mathbf v}_1 = c {\mathbf v}_2\) for some \(c \in \mathbb R\text{.}\) On the one hand, we have

\begin{equation*} A ({\mathbf v}_1 + i {\mathbf v}_2) = A (c {\mathbf v}_2 + i {\mathbf v}_2) = (c + i) A {\mathbf v}_2. \end{equation*}

However,

\begin{align*} A ({\mathbf v}_1 + i {\mathbf v}_2) & = (\alpha + i \beta) ( {\mathbf v}_1 + i {\mathbf v}_2)\\ & = (\alpha + i \beta) ( c + i) {\mathbf v}_2\\ & = ( c + i) (\alpha + i \beta) {\mathbf v}_2. \end{align*}

In other words, \(A {\mathbf v}_2 = (\alpha + i \beta) {\mathbf v}_2\text{.}\) However, this is a contradiction: the left-hand side of the equation is a real vector, while the right-hand side is complex, since \(\beta \neq 0\text{.}\) Thus, \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.

Since \({\mathbf v}_1 + i {\mathbf v}_2\) is an eigenvector associated to the eigenvalue \(\alpha + i \beta\text{,}\) we have

\begin{equation*} A ( {\mathbf v}_1 + i {\mathbf v}_2) = (\alpha + i \beta) ({\mathbf v}_1 + i {\mathbf v}_2). \end{equation*}

Equating the real and imaginary parts, we find that

\begin{align*} A {\mathbf v}_1 & = \alpha {\mathbf v}_1 - \beta {\mathbf v}_2\\ A {\mathbf v}_2 & = \beta {\mathbf v}_1 + \alpha {\mathbf v}_2. \end{align*}

If \(T\) is the matrix with columns \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) then

\begin{align*} T {\mathbf e}_1 & = {\mathbf v}_1\\ T {\mathbf e}_2 & = {\mathbf v}_2. \end{align*}

Thus, we have

\begin{equation*} (T^{-1} A T) {\mathbf e}_1 = T^{-1} (\alpha {\mathbf v}_1 - \beta {\mathbf v}_2) = \alpha {\mathbf e}_1 - \beta {\mathbf e}_2. \end{equation*}

Similarly,

\begin{equation*} (T^{-1} A T) {\mathbf e}_2 = \beta {\mathbf e}_1 + \alpha {\mathbf e}_2. \end{equation*}

Therefore, we can write the matrix \(T^{-1}A T\) as

\begin{equation*} T^{-1} AT = \begin{pmatrix} \alpha & \beta \\ - \beta & \alpha \end{pmatrix}. \end{equation*}

The system \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) is in one of the canonical forms and has a phase portrait that is a spiral sink (\(\alpha \lt 0\)), a center (\(\alpha = 0\)), or a spiral source (\(\alpha \gt 0\)). After a change of coordinates, the phase portrait of \({\mathbf x}' = A {\mathbf x}\) is equivalent to a spiral sink, a center, or a spiral source.

Example 3.6.7.

Suppose that we wish to find the solutions of the second-order equation

\begin{equation*} 2x'' + 2x' + x = 0. \end{equation*}

This particular equation might model a damped harmonic oscillator. If we rewrite this second-order equation as a first-order system, we have

\begin{align*} x' & = y\\ y' & = - \frac{1}{2} x - y, \end{align*}

or equivalently \({\mathbf x}' = A {\mathbf x}\text{,}\) where

\begin{equation*} A = \begin{pmatrix} 0 & 1 \\ - 1/2 & - 1 \end{pmatrix}. \end{equation*}

The eigenvalues of \(A\) are

\begin{equation*} - \frac{1}{2} \pm i \frac{1}{2}. \end{equation*}

The eigenvalue \(\lambda = (-1 + i)/2\) has an eigenvector

\begin{equation*} \mathbf v = \begin{pmatrix} 2 \\ -1 + i \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix} + i \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \end{equation*}

Therefore, we can take \(T\) to be the matrix

\begin{equation*} T = \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix}. \end{equation*}

Consequently,

\begin{equation*} T^{-1} A T = \begin{pmatrix} 1/2 & 0 \\ 1/2 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1/2 & -1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} -1/2 & 1/2 \\ -1/2 & -1/2 \end{pmatrix}, \end{equation*}

which is in the canonical form

\begin{equation*} \begin{pmatrix} \alpha & \beta \\ - \beta & \alpha \end{pmatrix}. \end{equation*}

The general solution to \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) is

\begin{equation*} {\mathbf y}(t) = c_1 e^{-t/2} \begin{pmatrix} \cos(t/2) \\ -\sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} \sin(t/2) \\ \cos(t/2) \end{pmatrix}. \end{equation*}

The phase portrait of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) is given in Figure 3.6.8.

Figure 3.6.8. Phase portrait for \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) (solution curves spiral toward the origin)

The general solution of \({\mathbf x}' = A {\mathbf x}\) is

\begin{align*} T {\mathbf y}(t) & = \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} \left[ c_1 e^{-t/2} \begin{pmatrix} \cos(t/2) \\ -\sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} \sin(t/2) \\ \cos(t/2) \end{pmatrix} \right]\\ & = c_1 e^{-t/2} \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} \cos(t/2) \\ -\sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} \sin(t/2) \\ \cos(t/2) \end{pmatrix}\\ & = c_1 e^{-t/2} \begin{pmatrix} 2 \cos(t/2) \\ - \cos(t/2) - \sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} 2 \sin(t/2) \\ - \sin(t/2) + \cos(t/2) \end{pmatrix}. \end{align*}

The phase portrait for this system is given in Figure 3.6.9.

Figure 3.6.9. Phase portrait of \({\mathbf x}' = A {\mathbf x}\) (solution curves spiral toward the origin)
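
As an independent check of this example (ours, not part of the text), sympy confirms that \(\mathbf v\) is an eigenvector and that the change of coordinates produces the canonical form with \(\alpha = -1/2\) and \(\beta = 1/2\text{:}\)

import sympy as sp

A = sp.Matrix([[0, 1], [sp.Rational(-1, 2), -1]])
lam = sp.Rational(-1, 2) + sp.I / 2
v = sp.Matrix([2, -1 + sp.I])
print(sp.simplify(A * v - lam * v))   # zero vector, so v is an eigenvector
T = sp.Matrix([[2, 0], [-1, 1]])      # columns: real and imaginary parts of v
print(T.inv() * A * T)                # Matrix([[-1/2, 1/2], [-1/2, -1/2]])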
Remark 3.6.10.

Of course, we have a much more efficient way of solving the system \({\mathbf x}' = A {\mathbf x}\text{,}\) where

\begin{equation*} A = \begin{pmatrix} 0 & 1 \\ - 1/2 & - 1 \end{pmatrix}. \end{equation*}

Since \(A\) has eigenvalue \(\lambda = (-1 + i)/2\) with an eigenvector \(\mathbf v = (2, -1 + i)\text{,}\) we can apply Euler's formula and write a complex solution as

\begin{align*} \mathbf x(t) & = e^{(-1 + i)t/2} \mathbf v\\ & = e^{-t/2} e^{it/2} \begin{pmatrix} 2 \\ -1 + i \end{pmatrix}\\ & = e^{-t/2} (\cos(t/2) + i \sin(t/2)) \begin{pmatrix} 2 \\ -1 + i \end{pmatrix}\\ & = e^{-t/2} \begin{pmatrix} 2 \cos(t/2) \\ - \cos(t/2) - \sin(t/2) \end{pmatrix} + i e^{-t/2} \begin{pmatrix} 2 \sin(t/2) \\ -\sin(t/2) + \cos(t/2) \end{pmatrix}. \end{align*}

Taking the real and the imaginary parts of the last expression, the general solution of \({\mathbf x}' = A {\mathbf x}\) is

\begin{equation*} \mathbf x(t) = c_1 e^{-t/2} \begin{pmatrix} 2 \cos(t/2) \\ - \cos(t/2) - \sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} 2 \sin(t/2) \\ - \sin(t/2) + \cos(t/2) \end{pmatrix}, \end{equation*}

which agrees with the solution that we found by transforming coordinates.
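
If desired, one can also let sympy verify that the two real vector functions above satisfy the system; a short sketch of ours:

import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[0, 1], [sp.Rational(-1, 2), -1]])
x1 = sp.exp(-t/2) * sp.Matrix([2*sp.cos(t/2), -sp.cos(t/2) - sp.sin(t/2)])
x2 = sp.exp(-t/2) * sp.Matrix([2*sp.sin(t/2), -sp.sin(t/2) + sp.cos(t/2)])
for x in (x1, x2):
    print(sp.simplify(x.diff(t) - A * x))   # zero vector each time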

Subsection 3.6.5 Repeated Eigenvalues

Now suppose that \(A\) has a single real eigenvalue \(\lambda\text{.}\) Then the characteristic polynomial of \(A\text{,}\) \(p(\lambda) = \lambda^2 - (a + d)\lambda + (ad - bc)\text{,}\) has a repeated root, and \(\lambda = (a + d)/2\text{.}\)

Suppose that \({\mathbf u}\) and \({\mathbf v}\) are linearly independent eigenvectors for \(A\text{,}\) and let \(T\) be the matrix whose first column is \({\mathbf u}\) and second column is \({\mathbf v}\text{.}\) That is, \(T {\mathbf e}_1 = {\mathbf u}\) and \(T{\mathbf e}_2 = {\mathbf v}\text{.}\) Since \({\mathbf u}\) and \({\mathbf v}\) are linearly independent, \(\det(T) \neq 0\) and \(T\) is invertible. So, it must be the case that

\begin{equation*} AT = (A {\mathbf u}, A {\mathbf v}) = (\lambda {\mathbf u}, \lambda {\mathbf v}) = \lambda ({\mathbf u}, {\mathbf v}) = \lambda IT, \end{equation*}

or

\begin{equation*} A = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}. \end{equation*}

In this case, the system is uncoupled and is easily solved. That is, we can solve each equation in the system

\begin{align*} x' & = \lambda x\\ y' & = \lambda y \end{align*}

separately to obtain the general solution

\begin{align*} x & = c_1 e^{\lambda t}\\ y & = c_2 e^{\lambda t}. \end{align*}

Suppose now that \(A\) has only one linearly independent eigenvector \({\mathbf v}\text{.}\) If \({\mathbf w}\) is another vector in \({\mathbb R}^2\) such that \({\mathbf v}\) and \({\mathbf w}\) are linearly independent, then \(A \mathbf w\) can be written as a linear combination of \(\mathbf v\) and \(\mathbf w\text{,}\)

\begin{equation*} A {\mathbf w} = \alpha {\mathbf v} + \beta {\mathbf w}. \end{equation*}

We can assume that \(\alpha \neq 0\text{;}\) otherwise, we would have a second linearly independent eigenvector. We claim that \(\beta = \lambda\text{.}\) If this were not the case, then

\begin{align*} A \left( {\mathbf w} + \left( \frac{\alpha}{\beta - \lambda} \right) {\mathbf v} \right) & = A {\mathbf w} + \left( \frac{\alpha}{\beta - \lambda} \right) A {\mathbf v}\\ & = \alpha {\mathbf v} + \beta {\mathbf w} + \lambda \left( \frac{\alpha}{\beta - \lambda} \right) {\mathbf v}\\ & = \beta {\mathbf w} + \alpha \left(1 + \frac{\lambda}{\beta - \lambda} \right) {\mathbf v}\\ & = \beta {\mathbf w} + \alpha \left( \frac{\beta - \lambda + \lambda}{\beta - \lambda} \right) {\mathbf v}\\ & = \beta \left( {\mathbf w} + \left( \frac{\alpha}{\beta - \lambda} \right) {\mathbf v} \right) \end{align*}

and \(\beta\) would be an eigenvalue distinct from \(\lambda\text{.}\) Thus, \(A {\mathbf w} = \alpha {\mathbf v} + \lambda {\mathbf w}\text{.}\) If we let \({\mathbf u} = (1/ \alpha) {\mathbf w}\text{,}\) then

\begin{equation*} A {\mathbf u} = {\mathbf v} + \frac{\lambda}{\alpha} {\mathbf w} = {\mathbf v} + \lambda {\mathbf u}. \end{equation*}

We now define \(T {\mathbf e}_1 = {\mathbf v}\) and \(T{\mathbf e}_2 = {\mathbf u}\text{.}\) Since

\begin{align*} AT {\mathbf e}_1 & = A {\mathbf v} = \lambda {\mathbf v} = T(\lambda {\mathbf e}_1)\\ AT {\mathbf e}_2 & = A {\mathbf u} = {\mathbf v} + \lambda {\mathbf u} = T({\mathbf e}_1 + \lambda {\mathbf e}_2), \end{align*}

we have

\begin{equation*} T^{-1} A T = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}. \end{equation*}

Therefore, \({\mathbf x}' = A {\mathbf x}\) is in canonical form after a change of coordinates.
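
The construction above can be carried out mechanically. Here is a sympy sketch (ours; the matrix is an arbitrary example with the single eigenvalue \(\lambda = 2\)) that builds \({\mathbf u}\text{,}\) assembles \(T\text{,}\) and recovers the canonical form:

import sympy as sp

A = sp.Matrix([[1, 1], [-1, 3]])   # sample matrix with single eigenvalue 2
lam = 2
v = sp.Matrix([1, 1])              # the one independent eigenvector: (A - 2I)v = 0
w = sp.Matrix([1, 0])              # any vector independent of v
r = A * w - lam * w                # by the argument above, r = alpha * v
alpha = r[0] / v[0]
u = w / alpha
T = sp.Matrix.hstack(v, u)         # T e1 = v, T e2 = u
print(T.inv() * A * T)             # Matrix([[2, 1], [0, 2]])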

Example 3.6.13.

Consider the system \(\mathbf x' = A \mathbf x\text{,}\) where

\begin{equation*} A = \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix}. \end{equation*}

The characteristic polynomial of \(A\) is \(\lambda^2 - 6 \lambda + 9 = (\lambda - 3)^2\text{,}\) so we have only a single eigenvalue \(\lambda = 3\) with eigenvector \(\mathbf v = (1, -2)\text{.}\) Any other eigenvector for \(\lambda\) is a multiple of \(\mathbf v\text{.}\) If we choose \(\mathbf w = (1, 0)\text{,}\) then \(\mathbf v\) and \(\mathbf w\) are linearly independent. Furthermore,

\begin{equation*} A \mathbf w = \begin{pmatrix} 5 \\ - 4 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ -2 \end{pmatrix} + \lambda \begin{pmatrix} 1 \\ 0 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ -2 \end{pmatrix} + 3 \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \end{equation*}

So we can let \(\mathbf u = (1/2) \mathbf w = (1/2, 0)\text{.}\) Therefore, the matrix that we seek is

\begin{equation*} T = \begin{pmatrix} 1 & 1/2 \\ -2 & 0 \end{pmatrix}, \end{equation*}

and

\begin{equation*} T^{-1} A T = \begin{pmatrix} 0 & -1/2 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1/2 \\ -2 & 0\end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}. \end{equation*}

From Section 3.3, we know that the general solution to the system

\begin{equation*} \begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \end{equation*}

is

\begin{equation*} \mathbf y(t) = c_1 e^{3t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{3t} \begin{pmatrix} t \\ 1 \end{pmatrix}. \end{equation*}

Therefore, the general solution to

\begin{equation*} \begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \end{equation*}

is

\begin{align*} \mathbf x(t) & = T \mathbf y(t)\\ & = c_1 e^{3t} T \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{3t} T \begin{pmatrix} t \\ 1 \end{pmatrix}\\ & = c_1 e^{3t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + c_2 e^{3t} \begin{pmatrix} 1/2 + t \\ -2t \end{pmatrix}. \end{align*}

This solution agrees with the solution that we found in Example 3.5.5.
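
Again we can confirm the computation with a few lines of sympy (our check, not part of the text):

import sympy as sp

A = sp.Matrix([[5, 1], [-4, 1]])
T = sp.Matrix([[1, sp.Rational(1, 2)], [-2, 0]])
print(A.eigenvals())      # {3: 2}, a repeated eigenvalue
print(T.inv())            # Matrix([[0, -1/2], [2, 1]])
print(T.inv() * A * T)    # Matrix([[3, 1], [0, 3]])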

In practice, we find solutions to linear systems using the methods that we outlined in Sections 3.2–3.4. What we have demonstrated in this section is that those solutions are exactly the ones that we want.

Subsection 3.6.6 Important Lessons

  • A linear map \(T\) is invertible if and only if \(\det T \neq 0\text{.}\)
  • A linear map \(T\) converts solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) to solutions of \({\mathbf x}' = A {\mathbf x}\text{.}\)
  • The inverse of a linear map \(T\) takes solutions of \({\mathbf x}' = A {\mathbf x}\) to solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\text{.}\)
  • A change of coordinates converts the system \({\mathbf x}' = A {\mathbf x}\) to one of the following special cases,
    \begin{equation*} \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}, \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}, \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}. \end{equation*}

Reading Questions 3.6.7

1.

Explain what a change of coordinates is.

2.

Given a \(2 \times 2\) linear system, what are the possible types of solutions?

Exercises 3.6.8

1.

Consider the one-parameter family of linear systems given by

\begin{equation*} \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & \sqrt{2} + a/2 \\ \sqrt{2} - a/2 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \end{equation*}
  1. Sketch the path traced out by this family of linear systems in the trace-determinant plane as \(a\) varies.
  2. Discuss any bifurcations that occur along this path and compute the corresponding values of \(a\text{.}\)
2.

Consider the two-parameter family of linear systems

\begin{equation*} \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & b \\ b & a \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \end{equation*}

Identify all of the regions in the \(ab\)-plane where this system possesses a saddle, a sink, a spiral sink, and so on.

Subsection 3.6.9 Project