Section 2.3 Numerical Techniques for Systems

If we are unable to find an analytic solution to a first-order differential equation \(y' = f(t, y)\text{,}\) there is no reason to expect that it will be any easier to solve a system of equations. However, many of the numerical techniques that are used to solve a first-order equation can be extended to solve a system.

Large mobile cranes can reach up to 700 feet (Figure 2.18). This is about the same height as a 50-story building. At such heights, cranes are susceptible to winds, and the end of the boom can move back and forth several feet even in moderate winds. We can model the motion as a harmonic oscillator if the side-to-side motion is not too great. However, if the wind becomes too strong and the crane moves too much, gravity will become a factor and the crane may topple due to its weight.

Before we can construct a model that might best describe the swaying of a large crane, we should remind ourselves of how harmonic oscillators work (see Section 1.1). A simple mass-spring system can be modeled by the equation
\begin{equation*}
m \frac{d^2 x}{dt^2} = - kx,
\end{equation*}
where \(x = x(t)\) is the displacement of the mass at time \(t\text{.}\) Given such a system, the mass will oscillate forever with constant amplitude. If we want to be a bit more realistic, we can introduce a damping force,
\begin{equation*}
-b \frac{dx}{dt},
\end{equation*}
where \(b \gt 0\text{.}\) Our equation now becomes
\begin{equation*}
m \frac{d^2 x}{dt^2} = - kx - b \frac{dx}{dt}.
\end{equation*}
If we let \(p = b/m\) and \(q = k/m\text{,}\) we can rewrite this last equation as
\begin{equation*}
\frac{d^2 x}{dt^2} + p \frac{dx}{dt} + q x = 0.
\end{equation*}
If we set \(v = x'\text{,}\) we can rewrite this second-order equation as a system of first-order equations,
\begin{align*}
\frac{dx}{dt} & = v,\\
\frac{dv}{dt} & = - qx - pv.
\end{align*}

In the case of our swaying crane, we will let \(x = x(t)\) be the displacement of the end of the boom from the perfect vertical position. When \(x(t) \neq 0\text{,}\) the boom is bent, and the structure of the boom supplies a strong restorative force to bring everything back to a true vertical position. Thus, the swaying of our crane's boom might be described by an equation such as
\begin{equation*}
\frac{d^2 x}{dt^2} + 0.1 \frac{dx}{dt} + 0.3 x = 0
\end{equation*}
or the equivalent system,
\begin{align*}
\frac{dx}{dt} & = v,\\
\frac{dv}{dt} & = - 0.3 x - 0.1 v.
\end{align*}
We will learn how to solve such systems later, but it is easy to check that
\begin{equation*}
x(t) = c_1 e^{-t/20} \cos \left(\frac{\sqrt{119}}{20} t\right) + c_2 e^{-t/20} \sin \left(\frac{\sqrt{119}}{20} t\right)
\end{equation*}
is a solution to our equation, as can be verified by direct substitution or with a computer algebra system such as Sage.
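We can also sanity-check the claimed solution numerically. The following Python sketch (our own illustration, not the text's Sage code) takes \(c_1 = 1\) and \(c_2 = 0\) and verifies by central differences that \(x(t)\) satisfies \(x'' + 0.1\, x' + 0.3\, x = 0\text{.}\)

```python
import math

def x(t):
    # One solution from the claimed family: c1 = 1, c2 = 0.
    return math.exp(-t / 20) * math.cos(math.sqrt(119) / 20 * t)

def residual(t, h=1e-4):
    # Central-difference estimates of x'(t) and x''(t),
    # plugged into the left-hand side of the equation.
    xp = (x(t + h) - x(t - h)) / (2 * h)
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    return xpp + 0.1 * xp + 0.3 * x(t)

# The residual should vanish (up to discretization error) at every t.
max_residual = max(abs(residual(t)) for t in (0.5, 1.0, 2.0, 5.0))
```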

The constants \(c_1\) and \(c_2\) can be determined if we know the initial position and initial velocity of the end of the crane's boom. We show several solutions to our equation in Figure 2.19.

Modeling the swaying motion of a crane with a harmonic oscillator might work only if the side-to-side motion is small. If the motion is larger, we must account for the effect of gravity in our model. When \(x(t)\) is large, part of the crane will not be above any other part of the crane. Thus, gravity will pull downward on that part of the crane, causing the crane to bend even further. We can model this effect by adding a forcing term proportional to \(x^3\) to our harmonic oscillator equation,
\begin{equation}
\frac{d^2 x}{dt^2} + 0.1 \frac{dx}{dt} + 0.3 x = 0.02 x^3.\label{equation-systems03-duffing}\tag{2.13}
\end{equation}
When \(x\) is small, this forcing term will not contribute much to the motion of the crane. However, when \(x\) is large, the term will contribute a great deal. The equation \(x'' + 0.1 x' + 0.3 x = 0.02 x^3\) is an example of Duffing's equation.

We can rewrite equation (2.13) as a system of first-order differential equations by letting \(dx/dt = v\text{,}\)
\begin{align}
\frac{dx}{dt} \amp = v\label{equation-systems03-duffing-system-1}\tag{2.14}\\
\frac{dv}{dt} \amp = - 0.1 v - 0.3 x + 0.02 x^3.\label{equation-systems03-duffing-system-2}\tag{2.15}
\end{align}
Thus, one of our main objectives should be finding and analyzing solutions to such a system.

Subsection 2.3.2 Euler's Method for Systems

The system of equations (2.14)–(2.15) is nonlinear due to the \(x^3\) term. There is little hope of finding an analytic solution to such a system. We can use software to plot the phase plane in order to learn something about the solutions (Figure 2.20). However, we still need to generate solutions numerically in order to plot the phase plane.

Let us see how we might find a numerical solution for a system. Consider the system
\begin{align*}
\frac{dx}{dt} & = f(t, x, y)\\
\frac{dy}{dt} & = g(t, x, y),
\end{align*}
with initial conditions \(x(t_0) = x_0\) and \(y(t_0) = y_0\text{.}\) We can rewrite our system in vector form
\begin{align*}
\frac{d {\mathbf x}}{dt} & = {\mathbf f}(t, {\mathbf x})\\
{\mathbf x}(t_0) & = {\mathbf x}_0
\end{align*}
where \({\mathbf x} = (x, y)\text{,}\) \(d{\mathbf x}/dt = (dx/dt, dy/dt)\text{,}\) \({\mathbf f} = (f, g)\text{,}\) and \({\mathbf x}_0 = (x_0, y_0)\text{.}\) We wish to find approximate values \(x_1, x_2, \ldots, x_n\) and \(y_1, y_2, \ldots, y_n\) for the solution \(x(t)\) and \(y(t)\) at the points
\begin{equation*}
t_k = t_0 + kh, \qquad k = 1, 2, \ldots, n,
\end{equation*}
where \(h\) is the step size. If we let \({\mathbf x}_k = (x_k, y_k)\) and \({\mathbf f}_k = (f(t_k, x_k, y_k), g(t_k, x_k, y_k))\text{,}\) then Euler's method becomes
\begin{equation*}
{\mathbf x}_{k + 1} = {\mathbf x}_k + h {\mathbf f}_k
\end{equation*}
or
\begin{align*}
x_{k + 1} & = x_k + h f(t_k, x_k, y_k)\\
y_{k + 1} & = y_k + h g(t_k, x_k, y_k).
\end{align*}
The initial conditions are used to determine \({\mathbf f}_0\text{,}\) which is the tangent vector to the graph of the solution \(\mathbf x(t)\) in the \(xy\)-plane (Figure 2.20). We can move in the direction of this tangent vector for time \(h\) in order to find the next point \({\mathbf x}_1\text{.}\) We then calculate a new tangent vector \({\mathbf f}_1\) and then move along this new vector for a time step \(h\) to find \({\mathbf x}_2\text{.}\) We can repeat this technique to generate an approximate solution curve in the phase plane.
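The update rule above translates directly into code. The following Python sketch (the function name is our own) implements Euler's method for a planar system and checks it on the undamped oscillator \(x' = v\text{,}\) \(v' = -x\text{,}\) whose exact solution with \(x(0) = 0\text{,}\) \(v(0) = 1\) is \(x(t) = \sin t\text{.}\)

```python
import math

def euler_system(f, g, t0, x0, y0, h, n):
    """Euler's method for x' = f(t, x, y), y' = g(t, x, y).

    Returns the list of approximate points (t_k, x_k, y_k)."""
    t, x, y = t0, x0, y0
    points = [(t, x, y)]
    for k in range(n):
        # Simultaneous update: both slopes use the current point.
        x, y = x + h * f(t, x, y), y + h * g(t, x, y)
        t = t0 + (k + 1) * h
        points.append((t, x, y))
    return points

# Check against x'' + x = 0 with x(0) = 0, x'(0) = 1.
pts = euler_system(lambda t, x, v: v, lambda t, x, v: -x,
                   0.0, 0.0, 1.0, h=0.001, n=1000)
t_end, x_end, v_end = pts[-1]
```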

Example2.22Numerical Solutions for Duffing's Equation

We are now in a position to calculate some solutions to Duffing's equation. Suppose that
\begin{align*}
\frac{dx}{dt} & = v,\\
\frac{dv}{dt} & = - 0.3 x + 0.02x^3 - 0.1 v,
\end{align*}
with initial conditions \((t_0, x_0, v_0) = (0, 0, 1.7652)\text{.}\) Using a numerical algorithm, we can generate enough points to graph the solution (Figure 2.23).

The surprising thing about Duffing's equation is that it is so sensitive to initial conditions. A slight change in the initial conditions can yield dramatically different solutions. If we change the initial velocity to \(v(0) = 1.7653\text{,}\) we obtain a very different graph of the solution (Figure 2.23). For small initial velocities, solutions spiral towards the origin. However, a larger initial velocity will send the solution in the phase plane away from the origin. If the crane sways too violently, we will have a disaster.
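We can see this behavior in code. The Python sketch below (our own illustration; the step size and initial velocities are chosen for the demonstration, not taken from the text) integrates the Duffing system with Euler's method. A modest initial velocity decays toward the origin, while a sufficiently large one carries the solution past the balance point of the cubic term at \(x = \sqrt{0.3/0.02} \approx 3.87\) and escapes.

```python
def duffing_escapes(v0, h=0.001, t_max=100.0):
    """Euler's method for x' = v, v' = -0.3 x + 0.02 x^3 - 0.1 v,
    starting from x(0) = 0, v(0) = v0.

    Returns (escaped, x), where escaped means |x| exceeded 10."""
    x, v = 0.0, v0
    for _ in range(int(t_max / h)):
        x, v = x + h * v, v + h * (-0.3 * x + 0.02 * x**3 - 0.1 * v)
        if abs(x) > 10.0:
            return True, x
    return False, x

small = duffing_escapes(0.5)   # spirals toward the origin
large = duffing_escapes(3.0)   # enough energy to escape
```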

Example 2.25

Let us consider the system
\begin{align*}
x' & = x - y + t\\
y' & = -x + y^2 - t^2\\
x(0) & = 1\\
y(0) & = 0.
\end{align*}
Then \((x_0, y_0) = (1, 0)\text{.}\) Letting the step size be \(h = 0.1\text{,}\) we obtain
\begin{align*}
\begin{pmatrix}
x_1 \\ y_1
\end{pmatrix}
& =
\begin{pmatrix}
x_0 \\ y_0
\end{pmatrix}
+ h
\begin{pmatrix}
f(t_0, x_0, y_0) \\ g(t_0, x_0, y_0)
\end{pmatrix}\\
& =
\begin{pmatrix}
1 \\ 0
\end{pmatrix}
+
(0.1)
\begin{pmatrix}
1 \\ -1
\end{pmatrix}\\
& =
\begin{pmatrix}
1.1 \\ -0.1
\end{pmatrix}.
\end{align*}
Similarly,
\begin{align*}
\begin{pmatrix}
x_2 \\ y_2
\end{pmatrix}
& =
\begin{pmatrix}
x_1 \\ y_1
\end{pmatrix}
+ h
\begin{pmatrix}
f(t_1, x_1, y_1) \\ g(t_1, x_1, y_1)
\end{pmatrix}\\
& =
\begin{pmatrix}
1.1 \\ -0.1
\end{pmatrix}
+
(0.1)
\begin{pmatrix}
1.3 \\ -1.1
\end{pmatrix}\\
& =
\begin{pmatrix}
1.23 \\ -0.21
\end{pmatrix}.
\end{align*}
Using this procedure, we can generate a list of values that will approximate the solution to our system (Table 2.26).

\begin{equation*}
\begin{array}{cccccc}
k & t_k & x_k & hf(t_k, x_k, y_k) & y_k & hg(t_k, x_k, y_k) \\
\hline
0 & 0.0 & 1.0000 & 0.1000 & 0.0000 & -0.1000 \\
1 & 0.1 & 1.1000 & 0.1300 & -0.1000 & -0.1100 \\
2 & 0.2 & 1.2300 & 0.1640 & -0.2100 & -0.1226 \\
3 & 0.3 & 1.3940 & 0.2027 & -0.3326 & -0.1373 \\
4 & 0.4 & 1.5967 & 0.2467 & -0.4699 & -0.1536 \\
5 & 0.5 & 1.8433 & 0.2967 & -0.6235 & -0.1705 \\
6 & 0.6 & 2.1400 & 0.3534 & -0.7940 & -0.1870 \\
7 & 0.7 & 2.4934 & 0.4174 & -0.9809 & -0.2021 \\
8 & 0.8 & 2.9108 & 0.4894 & -1.1830 & -0.2151 \\
9 & 0.9 & 3.4002 & 0.5698 & -1.3982 & -0.2255 \\
10 & 1.0 & 3.9701 & 0.6594 & -1.6237 & -0.2334
\end{array}
\end{equation*}
Table 2.26 Euler's approximation for a system of equations
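A table like this can be generated with a few lines of code. The following Python sketch (our own, not from the text) runs Euler's method on the system above with \(h = 0.1\text{.}\)

```python
def f(t, x, y):
    return x - y + t

def g(t, x, y):
    return -x + y**2 - t**2

h, t, x, y = 0.1, 0.0, 1.0, 0.0
rows = [(0, t, x, y)]          # each row holds (k, t_k, x_k, y_k)
for k in range(1, 11):
    # Simultaneous Euler update using the current point.
    x, y = x + h * f(t, x, y), y + h * g(t, x, y)
    t = k * h
    rows.append((k, t, x, y))
```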

Notice that our system is not autonomous, since \(f\) and \(g\) depend explicitly on \(t\text{.}\) Therefore, we cannot graph the phase plane of this system; however, we can graph a solution curve (Figure 2.21).

Just as in the case of a single first-order differential equation, we can think of Euler's approximation as the first two terms of the Taylor series expansion,
\begin{align*}
\begin{pmatrix}
x_{k + 1} \\ y_{k + 1}
\end{pmatrix}
& =
\begin{pmatrix}
x_{k} \\ y_{k}
\end{pmatrix}
+ h
\begin{pmatrix}
x'_{k} \\ y'_{k}
\end{pmatrix}
+
\frac{h^2}{2}
\begin{pmatrix}
x''_{k} \\ y''_{k}
\end{pmatrix}
+ \cdots\\
& =
\begin{pmatrix}
x_{k} \\ y_{k}
\end{pmatrix}
+ h
\begin{pmatrix}
f(t_k, x_k, y_k) \\ g(t_k, x_k, y_k)
\end{pmatrix}
+
\frac{h^2}{2}
\begin{pmatrix}
x''_{k} \\ y''_{k}
\end{pmatrix}
+ \cdots.
\end{align*}
To get a more accurate approximation, we can take the first three terms of the Taylor series. In this case, we must compute \({\mathbf x}''\text{.}\) If we return to our example,
\begin{align*}
x' & = x - y + t\\
y' & = -x + y^2 - t^2\\
x(0) & = 1\\
y(0) & = 0.
\end{align*}
we can see how this is done. First, note that
\begin{align*}
{\mathbf x}''
& =
\begin{pmatrix}
x'' \\ y''
\end{pmatrix}\\
& =
\begin{pmatrix}
x' - y' + 1 \\ -x' + 2yy' - 2t
\end{pmatrix}\\
& =
\begin{pmatrix}
(x - y + t) - (-x + y^2 - t^2) + 1 \\
-(x - y + t) + 2y(-x +y^2 - t^2) - 2t
\end{pmatrix}\\
& =
\begin{pmatrix}
2x - y - y^2 + t + t^2 + 1 \\
-x + y - 2xy + 2y^3 - 2t^2y - 3t
\end{pmatrix}.
\end{align*}
Our algorithm now becomes clear,
\begin{align*}
\begin{pmatrix}
x_{k + 1} \\ y_{k + 1}
\end{pmatrix}
& =
\begin{pmatrix}
x_{k} \\ y_{k}
\end{pmatrix}
+ h
\begin{pmatrix}
f(t_k, x_k, y_k) \\ g(t_k, x_k, y_k)
\end{pmatrix}
+
\frac{h^2}{2}
\begin{pmatrix}
x''_{k} \\ y''_{k}
\end{pmatrix}\\
& =
\begin{pmatrix}
x_k + h(x_k - y_k + t_k) + h^2(2x_k - y_k - y_k^2 + t_k + t_k^2 + 1)/2 \\
y_k + h(-x_k + y_k^2 - t_k^2) + h^2(-x_k + y_k - 2x_ky_k + 2y_k^3 - 2t_k^2y_k - 3t_k)/2
\end{pmatrix}.
\end{align*}
Of course, this algorithm requires us to compute some derivatives.
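Assuming the second derivatives computed above, the second-order Taylor update can be coded directly. The Python sketch below (the function name is our own) performs one step of the method for this example.

```python
def taylor2_step(t, x, y, h):
    """One second-order Taylor step for the system
    x' = x - y + t,  y' = -x + y^2 - t^2."""
    fx = x - y + t
    gy = -x + y**2 - t**2
    # Second derivatives, obtained by differentiating the system.
    xpp = 2*x - y - y**2 + t + t**2 + 1
    ypp = -x + y - 2*x*y + 2*y**3 - 2*t**2*y - 3*t
    return (x + h * fx + h**2 / 2 * xpp,
            y + h * gy + h**2 / 2 * ypp)

# One step from (x_0, y_0) = (1, 0) with h = 0.1.
x1, y1 = taylor2_step(0.0, 1.0, 0.0, 0.1)
```

Here \(x_1 = 1 + 0.1(1) + 0.005(3) = 1.115\) and \(y_1 = 0 + 0.1(-1) + 0.005(-1) = -0.105\text{,}\) slightly different from the plain Euler step \((1.1, -0.1)\text{.}\)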

Subsection 2.3.4 A Word about Existence and Uniqueness

Theorem 2.27

Let
\begin{equation*}
\frac{d {\mathbf x}}{dt} = {\mathbf f}(t, {\mathbf x})
\end{equation*}
be a system of differential equations such that \({\mathbf f}\) is continuously differentiable. If \({\mathbf x}_0\) is the initial value at time \(t_0\text{,}\) there exists a unique solution for the initial value problem on some open interval \(t_0 - \epsilon \lt t \lt t_0 + \epsilon\) for some \(\epsilon > 0\text{.}\)

The consequences of the Existence and Uniqueness Theorem are much the same as they were for first-order differential equations.

If we are interested in a certain system of differential equations, it is very nice to know that a unique solution exists.

Uniqueness tells us that two distinct solutions cannot start at the same place. Geometrically, this implies that solution curves cannot cross.

If we have an autonomous system \({\mathbf x}' = {\mathbf f}({\mathbf x})\text{,}\) our solution does not depend on time. Thus, we obtain the same solution curve if we start at the same point even though we might start at different times.
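We can illustrate the last point numerically. In the Python sketch below (our own illustration), Euler's method is applied to the autonomous system \(x' = -y\text{,}\) \(y' = x\) starting at the same point but at two different initial times; because \(t\) never enters the update, the two solution curves in the phase plane agree point for point.

```python
def euler_autonomous(t0, x0, y0, h, n):
    """Euler's method for the autonomous system x' = -y, y' = x.

    Returns the list of points (x_k, y_k); note that t0 plays
    no role in the update, only in labeling the time axis."""
    x, y = x0, y0
    curve = [(x, y)]
    for _ in range(n):
        x, y = x - h * y, y + h * x
        curve.append((x, y))
    return curve

curve_a = euler_autonomous(0.0, 1.0, 0.0, 0.01, 500)   # start at t = 0
curve_b = euler_autonomous(7.0, 1.0, 0.0, 0.01, 500)   # start at t = 7
```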

For a proof of existence and uniqueness for systems of differential equations, see [12].

A damped harmonic oscillator can be described by the second-order equation
\begin{equation*}
\frac{d^2 x}{dt^2} + p \frac{dx}{dt} + q x = 0.
\end{equation*}
We can rewrite this equation as a first-order system,
\begin{align*}
\frac{dx}{dt} & = v,\\
\frac{dv}{dt} & = - qx - pv.
\end{align*}

Duffing's equation,
\begin{equation*}
\frac{d^2 x}{dt^2} + p \frac{dx}{dt} + q x = x^3,
\end{equation*}
is an example of a differential equation that is very sensitive to initial conditions.

Many systems of equations cannot be solved analytically. However, we can use numerical techniques such as Euler's Method to find approximate solutions for the system. Given the system
\begin{align*}
\frac{dx}{dt} & = f(x, y)\\
\frac{dy}{dt} & = g(x, y),
\end{align*}
with initial condition \((x_0, y_0)\) and step size \(h\text{,}\) we can approximate a solution to the system by
\begin{align*}
x_{k+1} & = x_k + f(x_k, y_k) h\\
y_{k+1} & = y_k + g(x_k, y_k) h.
\end{align*}
We can extend this method to the improved Euler's method, Taylor series methods, or Runge-Kutta methods.
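As an example of one such extension, here is a sketch (our own) of the improved Euler method for a single equation, compared with Euler's method on \(x' = 2x\text{,}\) \(x(0) = 1\text{,}\) whose exact solution is \(e^{2t}\text{.}\)

```python
import math

def euler(fx, x0, h, n):
    x = x0
    for _ in range(n):
        x += h * fx(x)
    return x

def improved_euler(fx, x0, h, n):
    # Average the slope at the current point with the slope
    # at the Euler predictor.
    x = x0
    for _ in range(n):
        k1 = fx(x)
        k2 = fx(x + h * k1)
        x += h / 2 * (k1 + k2)
    return x

fx = lambda x: 2 * x
exact = math.exp(2.0)                      # x(1) for x' = 2x, x(0) = 1
e1 = euler(fx, 1.0, 0.1, 10)               # first-order accurate
e2 = improved_euler(fx, 1.0, 0.1, 10)      # second-order accurate
```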

Provided certain conditions are met, we are guaranteed unique local solutions to any system of first-order differential equations. Some of the following are consequences of existence and uniqueness.

Uniqueness tells us that two distinct solutions cannot start at the same place. Geometrically, this implies that solution curves cannot cross.

If we have an autonomous system \({\mathbf x}' = {\mathbf f}({\mathbf x})\text{,}\) our solution does not depend on time. Thus, we obtain the same solution curve if we start at the same point even though we might start at different times.

Consider the system
\begin{align*}
x' & = 2x\\
y' & = y
\end{align*}
with initial conditions \(x(0) = 1\) and \(y(0) = 3\text{.}\) (You will find Sage very useful for this exercise.)

Show that \({\mathbf x}(t) = (e^{2t}, 3e^t)\) satisfies the initial value problem.

Use Euler's method with step size \(\Delta t = 0.5\) to approximate this solution, and check how close the approximation is to the real solution when \(t = 2\text{,}\) \(t = 4\text{,}\) and \(t = 6\text{.}\)

Use Euler's method with step size \(\Delta t = 0.1\) to approximate this solution, and check how close the approximation is to the real solution when \(t = 2\text{,}\) \(t = 4\text{,}\) and \(t = 6\text{.}\)

Discuss how and why the Euler approximations differ from the real solution.