This makes sense, since the pendulum should not move if the bob is initially hanging straight down (θ = 2nπ) or is balanced straight up at the very top of its arc (θ = (2n+1)π). Since our first goal is to determine the nature of each equilibrium solution, we will compute the Jacobian of the system (5.3.1)–(5.3.2). This is just
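\[
J(\theta, v) =
\begin{pmatrix}
0 & 1 \\
-\dfrac{g}{L}\cos\theta & -\dfrac{b}{m}
\end{pmatrix},
\]
obtained by differentiating the right-hand sides of (5.3.1)–(5.3.2), namely dθ/dt = v and dv/dt = −(b/m)v − (g/L) sin θ, with respect to θ and v. At the hanging equilibria θ = 2nπ we have cos θ = 1, while at the upright equilibria θ = (2n+1)π we have cos θ = −1.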
Now let us consider the type of equilibrium solutions that we will obtain when the pendulum is standing upright. These solutions will occur at (θ, v) = (±π, 0), (±3π, 0), (±5π, 0), …. The characteristic polynomial of the Jacobian matrix J_2 at these points is
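\[
\det(J_2 - \lambda I) = \lambda^2 + \frac{b}{m}\lambda - \frac{g}{L},
\]
since cos θ = −1 at these points, so the lower-left entry of the Jacobian is g/L. Because g/L > 0, the roots
\[
\lambda = \frac{-\dfrac{b}{m} \pm \sqrt{\left(\dfrac{b}{m}\right)^{2} + \dfrac{4g}{L}}}{2}
\]
are real and of opposite sign, so each upright equilibrium solution is a saddle.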
We can now devise a strategy for sketching the phase plane of the damped pendulum. If b/m and v are both small, the value of H decreases slowly along the solutions (Figure 5.3.1).
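To see why, note that if we take the energy function of the ideal pendulum to be H(θ, v) = v²/2 + (g/L)(1 − cos θ) (any of the standard normalizations leads to the same conclusion), then along solutions of the damped system
\[
\frac{dH}{dt} = v\,\frac{dv}{dt} + \frac{g}{L}\sin\theta \,\frac{d\theta}{dt} = v\left(-\frac{b}{m}v - \frac{g}{L}\sin\theta\right) + \frac{g}{L}v\sin\theta = -\frac{b}{m}v^{2} \leq 0,
\]
which is indeed small in magnitude when b/m and v are small.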
The function H in the case of the damped pendulum is an example of a Lyapunov function. Specifically, a function L(x,y) is called a Lyapunov function for the system
is a Hamiltonian function for our system. Recall that we also call H the energy function of the system. However, if p>0 and (y(t),v(t)) is a solution for our system, we have
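\[
\frac{dH}{dt} = q y\,\frac{dy}{dt} + v\,\frac{dv}{dt} = qyv + v(-qy - pv) = -pv^{2} \leq 0
\]
(writing the damped oscillator, presumably, as dy/dt = v, dv/dt = −qy − pv with q > 0, and taking H(y, v) = v²/2 + qy²/2 for the energy of the corresponding undamped system).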
Consequently, H(y(t), v(t)) decreases at a nonzero rate (except when v = 0), and H is a Lyapunov function. The level sets of H are ellipses in the yv-plane. As H decreases along a solution, the energy dissipates, and the solution crosses these ellipses moving inward, spiraling toward the origin; the origin is therefore a spiral sink.
are ±i, the linearization has a center at the origin. The phase portrait of the linearized system consists of circles about the origin (Figure 5.3.2). Notice that the linearization does not depend on α.
Now let us consider what happens to system (5.3.5)–(5.3.6) for different values of α. If α = 5, the situation is quite different from that of the linearization of our system. A solution curve spirals out from the origin as t→∞ (Figure 5.3.3). As t→−∞, the solution curve spirals back in toward the origin, but it seems to stop before actually reaching it. If α = −5, on the other hand, we seem to have the opposite behavior, with the solution curves spiraling into the origin as t→∞. As before, the solutions do not seem to reach the origin (Figure 5.3.4).
is the distance of a point on the solution curve to the origin in the xy-plane. To see how r changes as t→±∞, we can compute the derivative of r. Actually, it is easier to work with the equation r(t)^2 = x(t)^2 + y(t)^2. Thus,
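\[
2 r \frac{dr}{dt} = 2x\frac{dx}{dt} + 2y\frac{dy}{dt}.
\]
If, as the behavior described above suggests, the system (5.3.5)–(5.3.6) has the form dx/dt = −y + αx(x² + y²), dy/dt = x + αy(x² + y²) (so that its linearization at the origin is dx/dt = −y, dy/dt = x, with eigenvalues ±i and no dependence on α), then
\[
r\frac{dr}{dt} = x\bigl(-y + \alpha x r^{2}\bigr) + y\bigl(x + \alpha y r^{2}\bigr) = \alpha r^{4}, \qquad\text{so}\qquad \frac{dr}{dt} = \alpha r^{3},
\]
which is presumably the relation referred to as equation (5.3.7) below.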
However, we do not need to solve this equation explicitly to determine the nature of the equilibrium solution at the origin. If α > 0, equation (5.3.7) tells us that r(t) → 0 as t→−∞ and that r(t) grows as t increases. Thus, any nonzero solution to the system (5.3.5)–(5.3.6) spirals away from the origin, and we have a spiral source at the origin if α = 5; by the same reasoning, we have a spiral sink at the origin if α = −5. Even though linearization fails to tell us the nature of the equilibrium solution at the origin, we were able to determine it with further analysis.
We will now try to exploit what we have learned from our last example and from Hamiltonian systems to see if it is possible to analyze more general systems. If we consider solutions, (x(t),y(t)), of the system
we might ask how a function V(x,y) varies along the solution curve. We already have an answer if our system is Hamiltonian, and V is the corresponding Hamiltonian function. In this case dV/dt=0. In general, we know that
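\[
\frac{dV}{dt}\bigl(x(t), y(t)\bigr) = \frac{\partial V}{\partial x}\frac{dx}{dt} + \frac{\partial V}{\partial y}\frac{dy}{dt} = \frac{\partial V}{\partial x} f(x, y) + \frac{\partial V}{\partial y} g(x, y)
\]
by the chain rule. We denote this quantity by \dot{V}(x, y); it is the dot product of the gradient of V with the vector field F = (f, g) of the system.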
Thus, V is increasing along a solution curve if \dot{V}(x, y) > 0 and decreasing along a solution curve if \dot{V}(x, y) < 0. Our example suggests that we can determine this information without finding the solution.
Let us use this new information about V to obtain information about equilibrium solutions of our system. We do know that V(x, y) graphs as a surface in \mathbb{R}^3 and
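\[
V(x, y) = c, \qquad \text{for constant values of } c,
\]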
gives the contour lines or level curves of the surface in the xy-plane.[31] We also know that the gradient of V points in the direction in which V is increasing the fastest and that the gradient is orthogonal to the level curves of V. Thus, if \dot{V}(x, y) > 0, we know that V is increasing in the direction of the vector field F and the elevation of the solution curve through (x, y) in \mathbb{R}^3 is increasing. That is, the solution curve is traveling uphill. Similarly, if \dot{V}(x, y) < 0, we know that the solution curve at (x, y) is going downhill.[32]
[31] See Figures 1 and 2 in John Polking, Albert Boggess, and David Arnold, Differential Equations, Prentice Hall, Upper Saddle River, NJ, 2001, p. 611.
[32] The argument that we have made here also works in higher dimensions.
Now suppose that V is a real-valued function defined on a set S in the xy-plane, where the point \mathbf{x}_0 = (x_0, y_0) is in S and V(\mathbf{x}_0) = 0. We say that V is positive definite if V(\mathbf{x}) > 0 for all \mathbf{x} in S with \mathbf{x} \neq \mathbf{x}_0, and positive semidefinite if V(\mathbf{x}) \geq 0 for all \mathbf{x} in S. Similarly, we say that V is negative definite if V(\mathbf{x}) < 0 for all \mathbf{x} \neq \mathbf{x}_0, and negative semidefinite if V(\mathbf{x}) \leq 0 for all \mathbf{x} in S.
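For example (to illustrate the definitions), V(x, y) = x² + y² is positive definite relative to \mathbf{x}_0 = (0, 0), since it vanishes only at the origin and is positive everywhere else, while V(x, y) = x² is only positive semidefinite, since it also vanishes along the entire y-axis.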
has an equilibrium solution at (x_0, y_0). Let V be a continuously differentiable function defined on a neighborhood U of (x_0, y_0) that is positive definite with minimum at (x_0, y_0).
If \dot{V} is negative semidefinite on U, then (x_0, y_0) is a stable equilibrium solution. That is, any solution that starts near the equilibrium solution will stay near the equilibrium solution.
If \dot{V} is negative definite on U, then (x_0, y_0) is an asymptotically stable equilibrium solution or a sink.
The function V in Theorem 5.3.5 is called a Lyapunov function. If we compare this theorem to using linearization to determine the stability of an equilibrium solution, we find that it can be applied in cases where linearization fails. Also, a Lyapunov function is defined on an entire domain U, whereas linearization only tells us what happens in a small neighborhood of the equilibrium solution. Unfortunately, there are no general methods for finding Lyapunov functions.
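To see how the theorem can succeed where linearization fails, here is a small illustration (an example supplied for concreteness, not one of the numbered examples of this section). Consider the system
\[
\frac{dx}{dt} = -y - x^{3}, \qquad \frac{dy}{dt} = x - y^{3},
\]
whose linearization at the origin has eigenvalues ±i and is therefore inconclusive. Taking V(x, y) = x² + y², which is positive definite with minimum at the origin, we compute
\[
\dot{V}(x, y) = 2x(-y - x^{3}) + 2y(x - y^{3}) = -2x^{4} - 2y^{4},
\]
which is negative definite on any neighborhood of the origin. By Theorem 5.3.5, the origin is an asymptotically stable equilibrium solution.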
Thus, S increases at any point on a solution curve where the gradient of S is nonzero. That is, S increases at every point along a solution curve except at the equilibrium points of the system, which are the critical points of S.
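As a quick illustration (again an example supplied here, using the gradient-system setup f = ∂S/∂x, g = ∂S/∂y summarized below), let S(x, y) = x² − y². The corresponding gradient system is
\[
\frac{dx}{dt} = \frac{\partial S}{\partial x} = 2x, \qquad \frac{dy}{dt} = \frac{\partial S}{\partial y} = -2y,
\]
and along any solution dS/dt = (2x)² + (−2y)² = 4x² + 4y² ≥ 0, with equality only at the origin, the lone critical point of S. The Jacobian there has the real eigenvalues 2 and −2, so the equilibrium is a saddle, with no spiraling.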
The equations for the nonlinear pendulum with damping,
\begin{align*}
\frac{d\theta}{dt} & = v,\\
\frac{dv}{dt} & = -\frac{b}{m} v - \frac{g}{L}\sin\theta,
\end{align*}
can be analyzed by examining dH/dt, where H is the Hamiltonian function for the ideal pendulum. The function H for this system is an example of a Lyapunov function.
Let V be a real-valued function defined on a set S in the xy-plane such that the point \mathbf{x}_0 = (x_0, y_0) is in S and V(\mathbf{x}_0) = 0.
We say that V is positive definite if V(\mathbf{x}) > 0 for all \mathbf{x} in S, where \mathbf{x} \neq \mathbf{x}_0.
We say that V is positive semidefinite if V(\mathbf{x}) \geq 0 for all \mathbf{x} in S.
We say that V is negative definite if V(\mathbf{x}) < 0 for all \mathbf{x} in S, where \mathbf{x} \neq \mathbf{x}_0.
We say that V is negative semidefinite if V(\mathbf{x}) \leq 0 for all \mathbf{x} in S.
Suppose that the system
\begin{align*}
\frac{dx}{dt} & = f(x, y)\\
\frac{dy}{dt} & = g(x, y)
\end{align*}
has an equilibrium solution at (x_0, y_0). Let V be a continuously differentiable function defined on a neighborhood U of (x_0, y_0) that is positive definite with minimum at (x_0, y_0).
If \dot{V} is negative semidefinite on U, then (x_0, y_0) is a stable equilibrium solution. That is, any solution that starts near the equilibrium solution will stay near the equilibrium solution.
If \dot{V} is negative definite on U, then (x_0, y_0) is an asymptotically stable equilibrium solution or a sink.
We can use these results to analyze the behavior of equilibrium solutions where linearization fails. The function V is called a Lyapunov function. We have no general methods for finding Lyapunov functions.
The system
\begin{align*}
\frac{dx}{dt} & = f(x, y)\\
\frac{dy}{dt} & = g(x, y)
\end{align*}
is a gradient system if
\begin{align*}
f(x, y) & = \frac{\partial S}{\partial x}\\
g(x, y) & = \frac{\partial S}{\partial y},
\end{align*}
where S is a real-valued function on the xy-plane. Since
\[
\frac{dS}{dt}\bigl(x(t), y(t)\bigr) = \left(\frac{\partial S}{\partial x}\right)^{2} + \left(\frac{\partial S}{\partial y}\right)^{2} \geq 0,
\]
S increases along every solution of the system except at the critical points of S. Since the Jacobian of a gradient system is the Hessian matrix of S, which is symmetric and therefore has real eigenvalues, a gradient system has no spiral sources, spiral sinks, or centers.