Section 1.4 Analyzing Equations Numerically
Subsection 1.4.1 Euler's Method
Suppose that we wish to solve the initial value problem
\begin{align*} y' & = f(t, y) = t + y,\\ y(0) & = 1, \end{align*}
whose exact solution is y(t) = 2e^t - t - 1\text{.} Applying Euler's method with step size h = 0.1 on [0, 1] gives the approximations Y_k below, which we can compare with the exact values y_k = y(t_k)\text{.}

k | t_k | Y_k | y_k = y(t_k) | |y_k - Y_k| | Percent Error |
0 | 0.0 | 1.0000 | 1.0000 | 0.0000 | 0.00% |
1 | 0.1 | 1.1000 | 1.1103 | 0.0103 | 0.93% |
2 | 0.2 | 1.2200 | 1.2428 | 0.0228 | 1.84% |
3 | 0.3 | 1.3620 | 1.3997 | 0.0377 | 2.69% |
4 | 0.4 | 1.5282 | 1.5836 | 0.0554 | 3.50% |
5 | 0.5 | 1.7210 | 1.7974 | 0.0764 | 4.25% |
6 | 0.6 | 1.9431 | 2.0442 | 0.1011 | 4.95% |
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
10 | 1.0 | 3.1875 | 3.4366 | 0.2491 | 7.25% |
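The Y_k column above can be reproduced with a short hand-rolled loop. The following sketch is plain Python (it also runs in Sage); the helper name euler_table and the printed format are ours, and it assumes the problem y' = t + y, y(0) = 1 with exact solution y(t) = 2e^t - t - 1 from the table above.

from math import exp

def euler_table(f, exact, t0, y0, h, n):
    """Print k, t_k, the Euler approximation Y_k, the exact value y(t_k), and the percent error."""
    t, Y = t0, y0
    for k in range(n + 1):
        err = abs(exact(t) - Y)
        print(k, round(t, 1), round(Y, 4), round(exact(t), 4),
              round(err, 4), str(round(100*err/exact(t), 2)) + "%")
        Y = Y + h*f(t, Y)        # Euler step: Y_{k+1} = Y_k + h f(t_k, Y_k)
        t = t + h

euler_table(lambda t, y: t + y,          # f(t, y) = t + y
            lambda t: 2*exp(t) - t - 1,  # exact solution y(t) = 2e^t - t - 1
            0.0, 1.0, 0.1, 10)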

The accuracy of Euler's method improves as the step size h decreases. The following table compares the approximations at several values of t_k for step sizes h = 0.1\text{,} h = 0.02\text{,} and h = 0.001 with the exact solution.

t_k | h=0.1 | h=0.02 | h=0.001 | Exact Solution |
0.1 | 1.1000 | 1.1082 | 1.1102 | 1.1103 |
0.2 | 1.2200 | 1.2380 | 1.2426 | 1.2428 |
0.3 | 1.3620 | 1.3917 | 1.3993 | 1.3997 |
0.4 | 1.5282 | 1.5719 | 1.5831 | 1.5836 |
0.5 | 1.7210 | 1.7812 | 1.7966 | 1.7974 |
0.6 | 1.9431 | 2.0227 | 2.0431 | 2.0442 |
0.7 | 2.1974 | 2.2998 | 2.3261 | 2.3275 |
0.8 | 2.4872 | 2.6161 | 2.6493 | 2.6511 |
0.9 | 2.8159 | 2.9757 | 3.0170 | 3.0192 |
1.0 | 3.1875 | 3.3832 | 3.4338 | 3.4366 |
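The effect of shrinking h can be checked directly. Below is a minimal Python sketch (our own illustration, assuming the same problem y' = t + y, y(0) = 1) that runs Euler's method out to t = 1 for each step size in the table and prints the approximation of y(1) together with its error.

from math import exp

def euler(f, t0, y0, h, n):
    """Take n Euler steps of size h from (t0, y0) and return the final approximation."""
    t, Y = t0, y0
    for _ in range(n):
        Y = Y + h*f(t, Y)
        t = t + h
    return Y

exact = 2*exp(1) - 2    # exact value of y(1) for y' = t + y, y(0) = 1
for h, n in [(0.1, 10), (0.02, 50), (0.001, 1000)]:
    Y = euler(lambda t, y: t + y, 0.0, 1.0, h, n)
    print(h, round(Y, 4), round(exact - Y, 4))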
Activity 1.4.1. Euler's Method and Error.
Consider the initial value problem
(a)
Use separation of variables to solve the initial value problem.
(b)
Compute y(x) for x = 0, 0.2, 0.4, \ldots, 1.
(c)
Use Euler's method to approximate solutions to the initial value problem for x = 0, 0.2, 0.4, \ldots, 1.
(d)
Compare the exact values of the solution (Task 1.4.1.b) to the approximate values of the solution (Task 1.4.1.c) and comment on what happens as x varies from 0 to 1.
Subsection 1.4.2 Finding an Error Bound
To fully understand Euler's method, we will need to recall Taylor's theorem from calculus.
Theorem 1.4.4.
If x \gt x_0\text{,} then
\begin{equation*} f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!} (x - x_0)^2 + \cdots + \frac{f^{(n)}(\xi)}{n!}(x - x_0)^n, \end{equation*}
where \xi \in (x_0, x)\text{.}
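For the error analysis below, it is worth recording how this theorem is applied to the solution y over a single step of length h (a restatement of ours; it matches the expansion used in the proof outlined in Exercise 8 below):
\begin{equation*} y(t_{k+1}) = y(t_k + h) = y(t_k) + h y'(t_k) + \frac{h^2}{2} y''(\xi_k), \qquad \xi_k \in (t_k, t_{k+1}). \end{equation*}
Since y'(t_k) = f(t_k, y(t_k))\text{,} Euler's method keeps the first two terms and discards the remainder, so the error committed in a single step is the term \frac{h^2}{2} y''(\xi_k)\text{.}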
Theorem 1.4.5.
Let y be the unique solution to the initial value problem
\begin{align*} y' & = f(t, y),\\ y(a) & = \alpha, \end{align*}
where t \in [a, b]\text{.} Suppose that f is continuous and there exists a constant L \gt 0 such that
\begin{equation*} |f(t, y_1) - f(t, y_2)| \leq L |y_1 - y_2|, \end{equation*}
whenever (t, y_1) and (t, y_2) are in D = [a, b] \times {\mathbb R}\text{.} Also assume that there exists an M such that
\begin{equation*} |y''(t)| \leq M \end{equation*}
for all t \in [a, b]\text{.} If Y_0, \ldots, Y_N are the approximations generated by Euler's method for some positive integer N\text{,} then
\begin{equation*} |y(t_i) - Y_i| \leq \frac{hM}{2L} \left[ e^{L(t_i - a)} - 1 \right] \end{equation*}
for each i = 0, 1, \ldots, N\text{.}
For the example y' = t + y\text{,} y(0) = 1 above, we may take L = 1\text{,} since |f(t, y_1) - f(t, y_2)| = |y_1 - y_2|\text{,} and M = 2e\text{,} since y'' = 2e^t on the exact solution and t \in [0, 1]\text{.} With h = 0.1\text{,} Theorem 1.4.5 guarantees that the error at t_k is no more than (hM/2L)[e^{t_k} - 1]\text{.} The table below compares this estimate with the actual error.

k | t_k | Y_k | y_k = y(t_k) | |y_k - Y_k| | Estimated Error |
0 | 0.0 | 1.0000 | 1.0000 | 0.0000 | 0.0000 |
1 | 0.1 | 1.1000 | 1.1103 | 0.0103 | 0.0286 |
2 | 0.2 | 1.2200 | 1.2428 | 0.0228 | 0.0602 |
3 | 0.3 | 1.3620 | 1.3997 | 0.0377 | 0.0951 |
4 | 0.4 | 1.5282 | 1.5836 | 0.0554 | 0.1337 |
5 | 0.5 | 1.7210 | 1.7974 | 0.0764 | 0.1763 |
6 | 0.6 | 1.9431 | 2.0442 | 0.1011 | 0.2235 |
7 | 0.7 | 2.1974 | 2.3275 | 0.1301 | 0.2756 |
8 | 0.8 | 2.4872 | 2.6511 | 0.1639 | 0.3331 |
9 | 0.9 | 2.8159 | 3.0192 | 0.2033 | 0.3968 |
10 | 1.0 | 3.1875 | 3.4366 | 0.2491 | 0.4671 |
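The Estimated Error column can be reproduced directly from the bound in Theorem 1.4.5. The sketch below is our own plain Python illustration (it also runs in Sage); it uses the values L = 1, M = 2e, and h = 0.1 discussed above and the same hand-rolled Euler loop as before.

from math import exp

f = lambda t, y: t + y
exact = lambda t: 2*exp(t) - t - 1     # exact solution of y' = t + y, y(0) = 1
L, M, h = 1.0, 2*exp(1), 0.1           # Lipschitz constant, bound on |y''|, step size

t, Y = 0.0, 1.0
for k in range(11):
    actual = abs(exact(t) - Y)
    bound = h*M/(2*L)*(exp(L*t) - 1)   # error bound from Theorem 1.4.5 with a = 0
    print(k, round(t, 1), round(Y, 4), round(exact(t), 4), round(actual, 4), round(bound, 4))
    Y = Y + h*f(t, Y)                  # Euler step
    t = t + h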
Subsection 1.4.3 Improving Euler's Method
If we wish to improve upon Euler's method, we could add more terms of the Taylor series. For example, we can obtain a more accurate approximation by using a quadratic Taylor polynomial,
\begin{equation*} y(t + h) \approx y(t) + h y'(t) + \frac{h^2}{2} y''(t). \end{equation*}
Subsection 1.4.4 Important Lessons
- We can use Euler's method to find an approximate solution to the initial value problem\begin{align*} y' & = f(t, y),\\ y(a) & = \alpha \end{align*}on an interval [a, b]\text{.} If we wish to find approximations at N equally spaced points t_1, \ldots, t_N\text{,} where h = (b-a)/N and t_i = a + i h\text{,} our approximations should be\begin{align*} Y_0 & = \alpha,\\ Y_1 & = Y_0 + h f(t_0, Y_0),\\ Y_2 & = Y_1 + h f(t_1, Y_1),\\ & \vdots\\ Y_{k+1} & = Y_k + h f(t_k, Y_k),\\ Y_N & = Y_{N-1} + h f(t_{N-1}, Y_{N-1}), \end{align*}where t_0 = a\text{.} In practice, no one uses Euler's method; the Runge-Kutta methods are better algorithms.
- Taylor's Theorem is a very useful tool for studying differential equations. If x \gt x_0\text{,} then\begin{equation*} f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!} (x - x_0)^2 + \cdots + \frac{f^{(n)}(\xi)}{n!}(x - x_0)^n, \end{equation*}where \xi \in (x_0, x)\text{.}
- Error analysis and the rate of convergence are very important for any numerical algorithm. Our approximation is more accurate for smaller values of h\text{.} Under reasonable conditions we can also bound the error by\begin{equation*} |y(t_i) - Y_i| \leq \frac{hM}{2L} [ e^{L(t_i - a)} - 1], \end{equation*}where y is the unique solution to the initial value problem\begin{align*} y' & = f(t, y),\\ y(a) & = \alpha. \end{align*}
- The condition that there exists a constant L \gt 0 such that\begin{equation*} |f(t, y_1) - f(t, y_2)| \leq L |y_1 - y_2|, \end{equation*}whenever (t, y_1) and (t, y_2) are in D = [a, b] \times {\mathbb R} is called a Lipschitz condition.
- Using Taylor series, we can develop better numerical algorithms to compute solutions of differential equations. The Runge-Kutta methods are an important class of these algorithms.
- The improved Euler's method is given by\begin{equation*} y(t + h) = y(t) + \frac{h}{2} f(t, y) + \frac{h}{2} f(t + h, y + h f(t, y)) \end{equation*}with the error bound depending on h^2\text{.} (A code sketch of this method and the Runge-Kutta method below appears after this list.)
- The Runge-Kutta method of order 4 is given by\begin{equation*} y(t + h) = y(t) + \frac{1}{6} (F_1 + 2 F_2 + 2 F_3 + F_4), \end{equation*}where\begin{align*} F_1 & = h f(t, y)\\ F_2 & = hf\left( t + \frac{1}{2} h, y + \frac{1}{2} F_1 \right)\\ F_3 & = hf\left( t + \frac{1}{2} h, y + \frac{1}{2} F_2 \right)\\ F_4 & = hf(t + h, y + F_3) \end{align*}with the error bound depending on h^4\text{.}
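The three update formulas summarized above can be compared on the example y' = t + y, y(0) = 1. The sketch below is our own plain Python illustration (the helper names euler_step, improved_euler_step, rk4_step, and solve are not from the text); it prints each method's approximation of y(1) with h = 0.1 together with the corresponding error.

from math import exp

f = lambda t, y: t + y                  # the running example y' = t + y

def euler_step(f, t, y, h):
    return y + h*f(t, y)

def improved_euler_step(f, t, y, h):
    # y(t + h) ≈ y(t) + (h/2) f(t, y) + (h/2) f(t + h, y + h f(t, y))
    return y + (h/2)*(f(t, y) + f(t + h, y + h*f(t, y)))

def rk4_step(f, t, y, h):
    F1 = h*f(t, y)
    F2 = h*f(t + h/2, y + F1/2)
    F3 = h*f(t + h/2, y + F2/2)
    F4 = h*f(t + h, y + F3)
    return y + (F1 + 2*F2 + 2*F3 + F4)/6

def solve(step, f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t = t + h
    return y

exact = 2*exp(1) - 2                    # exact value of y(1) when y(0) = 1
for step in (euler_step, improved_euler_step, rk4_step):
    approx = solve(step, f, 0.0, 1.0, 0.1, 10)
    print(step.__name__, round(approx, 6), round(abs(exact - approx), 6))

With h = 0.1 the Euler error at t = 1 is about 0.25, while the fourth-order Runge-Kutta error is smaller by several orders of magnitude, which illustrates the remark above that the Runge-Kutta methods are better algorithms in practice.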
Exercises 1.4.5 Exercises
Finding Solutions
For each of the initial value problems in Exercise Group 1.4.5.1–6, complete the following tasks.
Write the Euler's method iteration Y_{k+1} = Y_k + h f(t_k, Y_k) for the given problem, identifying the values t_0 and y_0\text{.}
Using a step size of h = 0.1\text{,} compute the approximations for Y_1\text{,} Y_2\text{,} and Y_3\text{.}
Solve the problem analytically if possible. If it is not possible for you to find the analytic solution, use Sage.
Use the results of (b) and (c) to construct a table of errors Y_i - y_i for i = 1, 2, 3\text{.}
1.
y' = -2y\text{,} y(0) = 0
2.
y' = ty\text{,} y(0) = 1
3.
y' = y^3\text{,} y(0) = 1
4.
y' = y\text{,} y(0) = 1
5.
y' = y + t\text{,} y(0) = 2
This equation is a first-order linear equation (Section 1.5), but it is possible to find the analytic solution using Sage (Subsection 1.2.9).
6.
y' = 1/y\text{,} y(0) = 2
7.
Consider the differential equation
with initial value y(0) = 2\text{.}
- Find the exact solution of the initial value problem.
- Use Euler's method with step size h = 0.5 to approximate the solution to the initial value problem on the interval [0,2]\text{.} Your solution should include a table of approximate values of the dependent variable as well as the exact values of the dependent variable. Make sure that your approximations are accurate to four decimal places.
- Sketch the graph of the approximate and exact solutions.
- Use the error bound theorem (Theorem 1.4.5) to estimate the error at each approximation. Your solution should include a table of the approximate values of the dependent variable, the exact values of the dependent variable, the error estimates, and the actual error. Make sure that your approximations are accurate to four decimal places.
8.
In this series of exercises, we will prove the error bound theorem for Euler's method (Theorem 1.4.5).
- Use Taylor's Theorem to show that for all x \geq -1 and any positive m\text{,}\begin{equation*} 0 \leq (1 + x)^m \leq e^{mx}. \end{equation*}
- Use part (1) and geometric series to prove the following statement: If s and t are positive real numbers, and \{ a_i \}_{i = 0}^k is a sequence satisfying\begin{align*} a_0 & \geq -\frac{t}{s}\\ a_{i+1} & \leq (1 + s) a_i + t, \text{ for }i = 0, 1, 2, \ldots, k, \end{align*}then\begin{equation*} a_{i + 1} \leq e^{(i + 1)s} \left( \frac{t}{s} + a_0 \right) - \frac{t}{s}. \end{equation*}
- When i = 0\text{,} y(t_0) = Y_0 = \alpha and the theorem is true. Use Euler's method and Taylor's Theorem to show that\begin{equation*} |y_{i + 1} - Y_{i + 1} | \leq |y_i - Y_i| + h |f(t_i, y_i) - f(t_i, Y_i)| + \frac{h^2}{2} |y''(\xi_i)| \end{equation*}where \xi_i \in (t_i, t_{i+1}) and y_k = y(t_k)\text{.}
- Show that\begin{equation*} |y_{i + 1} - Y_{i + 1} | \leq |y_i - Y_i| (1 + hL) + \frac{Mh^2}{2}. \end{equation*}
- Let a_j = |y_j - Y_j| for j = 0, 1, \ldots, N\text{,} s = hL\text{,} and t = M h^2/2 and apply part (2) to show\begin{equation*} |y_{i + 1} - Y_{i + 1} | \leq e^{(1 + i)hL} \left( |y_0 - Y_0| + \frac{M h^2 }{2hL} \right) - \frac{M h^2}{2hL}. \end{equation*}and derive that\begin{equation*} |y_{i + 1} - Y_{i + 1} | \leq \frac{M h }{2L} \left( e^{(t_{i+ 1} - a)L} - 1\right) \end{equation*}for each i = 0, 1, \ldots, N-1\text{.}
Hints for part (2):
- For fixed \(i\) show that\begin{align*} a_{i + 1} & \leq (1 + s)a_i + t\\ & \leq (1 + s)[(1 + s)a_{i - 1}+ t] + t\\ & \leq (1 + s)\{ (1 + s)[(1 + s) a_{i - 2} + t]+ t\} + t\\ & \vdots\\ & \leq (1 + s)^{i+1}a_0 + [1 + (1 + s) + (1+s)^2 + \cdots + (1 + s)^i]t. \end{align*}
- Now use a geometric sum to show that\begin{equation*} a_{i + 1} \leq (1 + s)^{i + 1} a_0 + \frac{t}{s}[(1 + s)^{i + 1} - 1] = (1 + s)^{i + 1} \left( \frac{t}{s} + a_0 \right) - \frac{t}{s}. \end{equation*}
- Apply part (1) to derive\begin{equation*} a_{i + 1} \leq e^{(i + 1)s} \left( \frac{t}{s} + a_0 \right) - \frac{t}{s}. \end{equation*}
Subsection 1.4.6 Sage: Numerical Routines for solving ODEs
Not all differential equations can be solved using algebra and calculus, even if we are very clever. If we encounter an equation that we cannot solve by hand and that Sage cannot solve symbolically, we must resort to numerical algorithms like Euler's method or one of the Runge-Kutta methods, which are best implemented using a computer. Fortunately, Sage has some very good numerical solvers. Sage will need to know the following to solve a differential equation:
- An abstract function
- A differential equation, including an initial condition
- A Sage command to solve the equation.
For example, suppose that we wish to solve the initial value problem y' = x + y with y(0) = 1.
y = function('y')(x)                    # declare y as an abstract function of x
de = diff(y, x) == x + y                # the differential equation y' = x + y
solution = desolve(de, y, ics=[0, 1])   # solve symbolically with y(0) = 1
solution.show()
plot(solution, x, 0, 5)
Sage also provides eulers_method, a simple implementation of Euler's method.
x, y = PolynomialRing(RR, 2, "xy").gens()   # declare x, y as generators of a polynomial ring
eulers_method(x + y, 0, 1, 0.5, 5)          # Euler's method for y' = x + y, y(0) = 1, h = 0.5, up to x = 5
The syntax of eulers_method for the initial value problem y' = f(x, y)\text{,} y(x_0) = y_0 with step size h on the interval [x_0, x_1] is eulers_method(f, x0, y0, h, x1).
Notice that we obtained a table of values. However, we can use the line command from Sage to plot the values (x, y)\text{.}
x, y = PolynomialRing(RR, 2, "xy").gens()
pts = eulers_method(x + y, 0, 1, 1/2, 4.5, algorithm="none")   # return the points rather than a table
p = list_plot(pts, color="red")    # plot the approximate points
p += line(pts)                     # connect the points with line segments
p
The command eulers_method is not very sophisticated. We have to use a very small step size to get good accuracy, and the method can generate errors if we are not careful. Fortunately, Sage has much better algorithms for solving initial value problems. One such algorithm is desolve_rk4, which implements the fourth-order Runge-Kutta method.
x, y = var('x y')
desolve_rk4(x + y, y, ics=[0, 1], end_points=5, step=0.5)   # fourth-order Runge-Kutta for y' = x + y, y(0) = 1
The command desolve_rk4 has some nice built-in graphing utilities.
x, y = var('x y')
# plot the Runge-Kutta approximation on top of a slope field
p = desolve_rk4(x + y, y, ics=[0, 1], ivar=x, output='slope_field', end_points=[0, 5], thickness=2, color='red')
p
Finally, we can compare the results of eulers_method and desolve_rk4 with the exact solution that we found earlier with desolve.
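One possible way to set up this comparison is sketched below. The code is our own illustration rather than part of the text; the step size h = 0.5 and the plotting choices are arbitrary, and the closed form 2e^x - x - 1 is the exact solution that desolve returns for this problem.

x, y = var('x y')
h = 0.5
# Euler approximations as a list of points
xp, yp = PolynomialRing(RR, 2, "xy").gens()
euler_pts = eulers_method(xp + yp, 0, 1, h, 5, algorithm="none")
# fourth-order Runge-Kutta approximations as a list of points
rk4_pts = desolve_rk4(x + y, y, ics=[0, 1], ivar=x, end_points=5, step=h)
# exact solution y = 2*e^x - x - 1, as returned by desolve above
exact = 2*exp(x) - x - 1
p = list_plot(euler_pts, color="red", legend_label="eulers_method")
p += list_plot(rk4_pts, color="blue", legend_label="desolve_rk4")
p += plot(exact, x, 0, 5, color="green", legend_label="exact")
p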
Not only is desolve_rk4 more accurate, it is much easier to use. For more information, see http://www.sagemath.org/doc/reference/calculus/sage/calculus/desolvers.html.