Numerical methods for ordinary differential equations
The three main methods for analyzing round-off accumulation are the analytical method, the probabilistic method and the interval arithmetic method, each of which has its advantages and disadvantages. Our aim is to define further one-step methods having similarly good properties.
Instead of T_{1,u}(t), let us define some other first-order polynomial P_1(t) for which the estimate 2. The polynomial T_{1,u}(t) is the tangent line to the exact solution at the point (t_i, u(t_i)). Let us apply 2. Corollary 2. The numerical method defined by 2. Remark 2. Then the formulas 2. Definition 2. The one-step method 2. Therefore the explicit Euler method is the same as the first-order local Taylor method defined in 2.
The method 2. We can characterize the explicit Euler method 2. The method is named after Leonhard Euler, who made important discoveries in fields as diverse as infinitesimal calculus and graph theory. He introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also renowned for his work in mechanics, fluid dynamics, optics, and astronomy. Euler spent most of his adult life in St. Petersburg, Russia, and in Berlin, Prussia.
He is considered to be the preeminent mathematician of the 18th century, and one of the greatest of all time. He is also one of the most prolific mathematicians ever; his collected works fill between 60 and 80 quarto volumes.
As the time interval of compounding, h, gets smaller and smaller, the amount in the savings account approaches an exponential. In Remark 2. More precisely, we ask whether, as the step-size of the mesh tends to zero, the difference between the numerical solution and the exact solution also tends to zero. In the sequel, we consider this question for the explicit Euler method.
As before, we assume that the function f satisfies a Lipschitz condition in its second variable, and the solution is sufficiently smooth. First we analyze the question on a sequence of refined uniform meshes.
In the sequel we analyze e_n as h decreases, i.e., as h tends to zero. Substituting this expression into the formula of the explicit Euler method of the form 2. Let us briefly analyze the two expressions in the notations 2. The expression g_i shows how accurately the solution of the differential equation satisfies the formula of the explicit Euler method 2. Hence, based on 2. Using the estimates 2. The convergence of the explicit Euler method on some suitably chosen non-uniform mesh can be shown, too.
We show this below. In the sequel we assume that with an increasing number of mesh-points the grid becomes finer everywhere, i.e. Then the estimate 2. This proves the following statement. The explicit Euler method is convergent, and the rate of convergence is one.
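To make the statement concrete, the following is a minimal Python sketch of the explicit Euler method; the test problem u'(t) = -u(t), u(0) = 1 and all names are our own illustrative choices, not taken from the text.

```python
import math

def explicit_euler(f, t0, y0, h, n):
    """Explicit Euler method: y_{i+1} = y_i + h * f(t_i, y_i)."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Test problem u'(t) = -u(t), u(0) = 1, exact solution u(t) = exp(-t).
f = lambda t, y: -y
for n in (10, 20, 40, 80):
    h = 1.0 / n
    ts, ys = explicit_euler(f, 0.0, 1.0, h, n)
    err = max(abs(y - math.exp(-t)) for t, y in zip(ts, ys))
    print(f"h = {h:.4f}, max error = {err:.3e}")  # errors roughly halve as h halves
```

On this problem the maximum error is roughly halved each time h is halved, in agreement with the first-order convergence stated above.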
For this case the formulas 2. The one-step numerical method defined under 2. The Euler method of the form 2. Hence for g_i, defined in 2. In the following we give an elementary proof on a uniform mesh, i.e., with constant step size h.
Using the Lipschitz property of the function f , from the error equation 2. Then 2. Lemma 2. Hence, we have Corollary 2. Using the expression for g_max in 2. Let us apply the estimate 2. Theorem 2. The implicit Euler method is convergent, and the rate of convergence is one. Finally, we make two comments. We remark that the trapezoidal method is a combination of the explicit Euler method and the implicit Euler method. Clearly, this method is implicit, like the implicit Euler method.
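A sketch of the implicit Euler method is given below. Since the new value appears on both sides of the formula, each step solves a scalar equation, here by a simple fixed-point iteration (a Newton iteration could equally be used); the iteration limits and the test problem are illustrative assumptions.

```python
def implicit_euler(f, t0, y0, h, n, fp_iters=50, tol=1e-12):
    """Implicit Euler: y_{i+1} = y_i + h * f(t_{i+1}, y_{i+1});
    the nonlinear equation is solved by fixed-point iteration."""
    t, y = t0, y0
    ys = [y]
    for _ in range(n):
        t_next = t + h
        z = y                        # initial guess: previous value
        for _ in range(fp_iters):
            z_new = y + h * f(t_next, z)
            if abs(z_new - z) < tol:
                break
            z = z_new
        y, t = z_new, t_next
        ys.append(y)
    return ys

# Example: u' = -u, u(0) = 1; first-order convergence, as for explicit Euler.
print(implicit_euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10)[-1])
```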
We also note that this method is sometimes also called the Crank-Nicolson method, due to its application to the numerical solution of the semi-discretized heat conduction equation. By combining the error equations for the explicit Euler method and the implicit Euler method, we can easily obtain the error equation for the trapezoidal method in the form 2.
Hence, using the relations 2. These results show that on some fixed mesh the explicit Euler method and the implicit Euler method give approximately the same accuracy, while the trapezoidal method is more accurate. On the sequence of refined meshes we can observe that the error function of the trapezoidal method is of order O(h^2), while for the explicit Euler method and the implicit Euler method the error function is of order O(h).
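The following sketch implements the trapezoidal (Crank-Nicolson) method in the same spirit; the fixed-point solver and the test problem are assumptions made for illustration. On this problem one can observe the O(h^2) behaviour mentioned above: halving h reduces the error by roughly a factor of four.

```python
import math

def trapezoidal(f, t0, y0, h, n, fp_iters=50, tol=1e-12):
    """Trapezoidal method: y_{i+1} = y_i + (h/2)*(f(t_i, y_i) + f(t_{i+1}, y_{i+1}))."""
    t, y = t0, y0
    for _ in range(n):
        z = y
        for _ in range(fp_iters):
            z_new = y + 0.5 * h * (f(t, y) + f(t + h, z))
            if abs(z_new - z) < tol:
                break
            z = z_new
        y, t = z_new, t + h
    return y

# u' = -u, u(0) = 1: the error at t = 1 drops roughly by 4 when h is halved.
for n in (10, 20, 40):
    h = 1.0 / n
    err = abs(trapezoidal(lambda t, y: -y, 0.0, 1.0, h, n) - math.exp(-1.0))
    print(f"h = {h:.4f}, error at t = 1: {err:.3e}")
```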
These results completely correspond to the theory. One possibility is the following. Let u(t) be the solution of the Cauchy problem 1. The simplest integration formulas use only the end-points of the interval. The different integration formulas result in the above methods.
In order to re-obtain the numerical formulas, another approach is the following. We consider the identity 2. These two formulas together generate the explicit Euler method. In the sequel, the numerical method defined by 2. As before, u(t) denotes the exact solution of the Cauchy problem 2. The order of the convergence to zero in 2. Based on the estimates 2.
According to Remark 2. We assume that it satisfies the Lipschitz condition with respect to its second variable. Hence, using the condition 2. The inequality 2. Let us estimate the first term on the right side of 2. Using the relations 2. With the notations 2. Using the notation 2. Using Remark 2. In this part the convergence was proved from the relation 2.
This property shows that, in passing from some time level to the next one, the accumulated errors remain under control as the number of steps increases. This property is called the stability of the method. Hence, Theorem 2. The stability can be guaranteed by the Lipschitz condition for the function f on the right side of the differential equation in the Cauchy problem 2.
However, from a practical point of view, this accuracy is not enough: typically we require the construction of numerical methods with a higher order of accuracy. The accuracy of the Taylor method is higher, but in this case the realization of the method requires rather complicated preliminary analysis: the determination of all partial derivatives and their evaluation at the given points.
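To illustrate why the Taylor method requires the partial derivatives of f, here is a sketch (our own, not from the text) of the second-order Taylor method, which uses the sympy library to form u'' = f_t + f * f_y symbolically; the right-hand side is an illustrative assumption.

```python
import sympy as sp

t, y = sp.symbols("t y")
f_expr = t - y          # illustrative right-hand side: u'(t) = t - u(t)

# Total derivative of f along the solution: u'' = f_t + f * f_y.
df_expr = sp.diff(f_expr, t) + f_expr * sp.diff(f_expr, y)

f = sp.lambdify((t, y), f_expr)
df = sp.lambdify((t, y), df_expr)

def taylor2(t0, y0, h, n):
    """Second-order Taylor method: y_{i+1} = y_i + h*f + (h^2/2)*(f_t + f*f_y)."""
    ti, yi = t0, y0
    for _ in range(n):
        yi = yi + h * f(ti, yi) + 0.5 * h**2 * df(ti, yi)
        ti += h
    return yi

print(taylor2(0.0, 1.0, 0.1, 10))   # approximation of u(1) for u' = t - u, u(0) = 1
```

For more complicated right-hand sides this symbolic preprocessing quickly becomes cumbersome, which is exactly the motivation for the Runge-Kutta idea developed next.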
In this section we show that, by using a simple idea, the task of determining the partial derivatives can be omitted. In order to introduce the Runge-Kutta methods, we first define a one-step method of second order accuracy which is different from the trapezoidal method.
Let us define the first terms of the Taylor series of the function u(t) in the form 2. Using the derivatives 2. The one-step, explicit numerical method 2.
Based on 2. The method is named after Karl Heun, a German mathematician. Obviously, the parameterized form of the Heun method 2. Let us write the equation 2. It is convenient to write the parameters of the method 2.
By expanding the right side of 2. Let us use Remark 2. Then we conclude that the numerical method defined by 2. Rewriting the formulas 2. The numerical method 2. Let us examine the system of algebraic equations 2. For the four unknowns we have only three equations, therefore the solution is not unique. The second order numerical method of the form 2. The values of the parameters of the above second order methods are given in the form of table 2.
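A sketch of the two-stage second-order family discussed above is given below; the parameter names (sigma2, a2) and the test problem are our own assumptions, chosen to mirror the spirit of the text rather than its exact symbols. The family satisfies sigma2 * a2 = 1/2; choosing sigma2 = 1/2 gives the Heun method and sigma2 = 1 the modified Euler (midpoint) method.

```python
def rk2_family(f, t0, y0, h, n, sigma2=0.5):
    """Two-stage second-order explicit Runge-Kutta family:
    y_{i+1} = y_i + h*((1 - sigma2)*k1 + sigma2*k2),
    with k1 = f(t_i, y_i), k2 = f(t_i + a2*h, y_i + a2*h*k1) and sigma2*a2 = 1/2."""
    a2 = 0.5 / sigma2
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + a2 * h, y + a2 * h * k1)
        y = y + h * ((1.0 - sigma2) * k1 + sigma2 * k2)
        t += h
    return y

f = lambda t, y: -y
print(rk2_family(f, 0.0, 1.0, 0.1, 10, sigma2=0.5))  # Heun method
print(rk2_family(f, 0.0, 1.0, 0.1, 10, sigma2=1.0))  # modified Euler method
```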
For the Heun method it has the form: nodes 0 and 1, coefficient b_21 = 1, and weights 1/2, 1/2; see 2. The answer to this question (whether such two-stage methods can have an order higher than two) is negative, which can be seen from the following example. In the Cauchy problem 2. Therefore, the method 2. We substitute the exact solution u(t) into the formula 2. Our aim is to construct higher order accurate methods on the basis of formula 2. We re-consider the numerical method written in the form 2. As we have shown, this method is exact at most in second order.
The method has second order accuracy when the condition 2. Therefore, to get higher order accurate numerical methods, we have to introduce new parameters and give some conditions for their choice. To get this condition, we have to define again the local approximation error, as was done before. After some lengthy but not difficult calculation we obtain the following result.
From the possible solutions we give two cases. This can be done in the following form, which is the natural generalization of the methods 2. As before, we can write the parameters in a table whose first column contains the nodes 0, a_2, a_3 and whose lower triangle contains the coefficients b_21, b_31, b_32; see 2.
In the sequel, when we specify an explicit Runge-Kutta method, we list only the lower triangular part of the matrix B. For a fixed explicit Runge-Kutta method the order of consistency can be determined by a simple, though usually cumbersome, computation.
This can be done in the following steps. Executing these steps for an explicit Runge-Kutta method, we get the condition under which the given explicit Runge-Kutta method is of p-th order. Before giving the condition of consistency for a general explicit Runge-Kutta method, we note the following. In the condition of the second order 2. Let us write the condition of consistency of explicit Runge-Kutta methods in a compact way.
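The sketch below shows how an explicit Runge-Kutta step can be driven directly by its Butcher tableau. The nodes are stored in a, the strictly lower triangular matrix in B and the weights in w; the weight symbol, the test problem and the use of the classical four-stage fourth-order tableau as an example are our own assumptions.

```python
def explicit_rk_step(f, t, y, h, a, B, w):
    """One step of an explicit Runge-Kutta method given by its Butcher tableau:
    nodes a, strictly lower triangular matrix B, weights w."""
    m = len(w)
    k = [0.0] * m
    for i in range(m):
        yi = y + h * sum(B[i][j] * k[j] for j in range(i))
        k[i] = f(t + a[i] * h, yi)
    return y + h * sum(w[i] * k[i] for i in range(m))

# Classical fourth-order Runge-Kutta tableau.
a = [0.0, 0.5, 0.5, 1.0]
B = [[0.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.0, 0.0],
     [0.0, 0.5, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
w = [1/6, 1/3, 1/3, 1/6]

f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = explicit_rk_step(f, t, y, h, a, B, w)
    t += h
print(y)   # should be close to exp(-1), about 0.3679
```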
The explicit Runge-Kutta method defined by the Butcher tableau 2. The algorithm of this method i. What is the connection between the number of stages m and the order of consistency p for an explicit Runge-Kutta method? As we have seen, the one-stage explicit Euler method is of first order, the two-stage methods (Heun method, modified Euler method) are of second order, and for the three-stage method 2. The image of this program on the screen can be seen in Figure 2.
Alternatively, the program offers three initial-value problems as test problems. We can select the discretization step size by giving the parameter n, the number of partitions of the interval. The result is given graphically, and the error in the maximum norm is also indicated.
By increasing n the order of the convergence is also shown. A further and quite natural generalization is obtained by dropping the requirement that the matrix B be strictly lower triangular. Namely, we introduce the following notion. The numerical method defined by these parameters, i. When B is not strictly lower triangular, the method is called an implicit Runge-Kutta method. When B is not a strictly lower triangular matrix, but it is lower triangular, the method is called a diagonally implicit Runge-Kutta method.
The main difference between them is as follows. For the explicit Runge-Kutta method we can compute the intermediate values k_i directly, by substituting the previously defined values k_1, k_2, .... For the diagonally implicit Runge-Kutta method we cannot compute k_i directly, because, according to 2. Therefore, for its computation we have to solve an algebraic equation. For the implicit Runge-Kutta method the situation is similar; however, the computation of k_i becomes more difficult, because it is contained not only in the formula for the definition of k_i but also in all equations of 2.
This means that for the definition of the values k_i we have to solve a system of nonlinear algebraic equations for m unknown values. However, these methods have good stability properties, which make them very beneficial for solving some special problems, like stiff ones. The other benefit of the implicit Runge-Kutta methods is that usually they are more accurate than the explicit ones. We will see that to write these methods in the usual form of one-step methods i.
The realization of this method means the following. In the first step we solve the equation for the unknown k_1, then we substitute this value into the second formula.
Hence, the method with 2. As we already know, this is a first order method. Let us define the order in h of g_i. This means that the implicit midpoint rule is a second order method. From the third expression we get 0. Hence, the numerical method with the Butcher tableau 2.
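A sketch of the implicit midpoint rule is given below: the stage value k_1 is defined implicitly and is computed here by a fixed-point iteration; the iteration limits and the test problem are our own illustrative assumptions, not those of the text.

```python
import math

def implicit_midpoint(f, t0, y0, h, n, fp_iters=50, tol=1e-12):
    """Implicit midpoint rule: k1 = f(t + h/2, y + (h/2)*k1), y_{i+1} = y_i + h*k1."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)                      # initial guess for the stage value
        for _ in range(fp_iters):
            k1_new = f(t + 0.5 * h, y + 0.5 * h * k1)
            if abs(k1_new - k1) < tol:
                break
            k1 = k1_new
        y = y + h * k1_new
        t += h
    return y

# u' = -u, u(0) = 1: second-order behaviour, errors drop by about 4 when h is halved.
for n in (10, 20, 40):
    err = abs(implicit_midpoint(lambda t, y: -y, 0.0, 1.0, 1.0 / n, n) - math.exp(-1.0))
    print(f"n = {n}, error = {err:.3e}")
```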
For the following two-stage methods we give only their Butcher tableau. We can show that the order of accuracy of this implicit Runge-Kutta method is 3. During the consideration of the accuracy of the above implicit Runge-Kutta methods we have seen that the order of accuracy p of a method can be greater than the number of stages m.
For instance, the one-stage trapezoidal method has second order accuracy, the two-stage 2. Hence, the implicit Euler method, the implicit midpoint rule, and the method 2. We enclose an interactive animation, called onestep imp. Figure 2. The chosen numerical methods can be the theta-method, where the value of theta can be chosen, the 3rd order Runge-Kutta method and the 4th order Runge-Kutta method.
It appears that we should not keep the step size h in the Runge—Kutta methods as a constant. Rather, we should only use a small h when the solution rapidly changes with time. A numerical method that automatically selects the step size in each step is an adaptive method. A class of adaptive methods for solving differential equations is the so-called embedded Runge—Kutta methods. An embedded Runge—Kutta method uses two ordinary Runge—Kutta methods for comparing the numerical solutions and selecting the step size.
Therefore, the required computational effort is minimized. This requires three evaluations of the function f. See formula 2. If their difference is too large, we reject the solution and use a smaller step size h to repeat the calculation. But we also use this information to suggest a step size for the next step. In these formulas the constants C_1 and C_2 are unknown, as we have already seen before.
Comparing 2. The above embedded Runge—Kutta method can be summarized as follows. Algorithm 2. That is, whether the calculation is accepted or not, h will always be changed. We also note that not every Runge—Kutta method has its embedded pair.
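The step-size control described above can be sketched as follows, using an explicit Euler / Heun pair as a simple embedded pair; the pair, the tolerance and the safety factor are our own illustrative choices, not the particular embedded method of the text. The difference between the lower- and higher-order results estimates the local error and drives the choice of the next step size, and (as noted above) h is updated whether or not the step is accepted.

```python
def adaptive_euler_heun(f, t0, y0, t_end, h0=0.1, tol=1e-5, safety=0.9):
    """Adaptive integration with an Euler/Heun embedded pair.
    The difference of the two results estimates the local error."""
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                   # first-order (Euler) result
        y_high = y + 0.5 * h * (k1 + k2)     # second-order (Heun) result
        err = abs(y_high - y_low)
        if err <= tol:
            t, y = t + h, y_high             # accept the step
        # whether accepted or rejected, propose a new step size;
        # the exponent 1/2 corresponds to the lower order of the pair
        h = safety * h * (tol / max(err, 1e-14)) ** 0.5
    return t, y

print(adaptive_euler_heun(lambda t, y: -y, 0.0, 1.0, 1.0))
```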
In this section we classify them from another point of view, giving a new characterization of the methods. We solve this problem with the explicit and implicit Euler methods.
The errors are listed in Table 2. This means that by decreasing h the error is of first order. With a further decrease of h, both methods approximate the exact solution well, and the order also corresponds to the theory (first order).
Therefore, in some cases, the required computational time grows dramatically. In several situations the error even tends to infinity. The reason for this phenomenon is as follows. Let us examine the numerical solution at some fixed time level. For the numerical solution of the test problem 2. On the other hand, for the exact solution of the test problem 2. Hence, only a numerical method that also possesses this property can approximate the solution adequately.
This means the following requirement for the method 2. Hence, in case of convergence we only know that for sufficiently small values of h the numerical solution is close to the exact solution. However, this does not say anything about the solution on some fixed mesh. Therefore, numerical methods whose stability function R(z) is such that the numerical solution approximates the exact solution well are of special interest.
This is the motivation of the following notion. One can easily show that the implicit Euler method is absolutely stable. However, this property is not true for the explicit Euler method, since the condition 2. The stability domains of the explicit Euler method, the implicit Euler method, and the trapezoidal method are shown in Figures 2. For the explicit Runge-Kutta methods the stability domains are illustrated in Figures 2. Figure 2. Hence, it is absolutely stable, too.
This yields that on a mesh with such a mesh-size the sign of the numerical solution changes at each step. Therefore the values of the numerical solution y_i are decreasing in absolute value, but they are oscillating, too. This property contradicts the fact that the exact solution is strictly monotonically decreasing.
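The oscillation phenomenon described above can be reproduced with a short experiment; the concrete choice of lambda and h below is our own assumption. For u' = lambda*u with lambda = -10 and h = 0.19, the explicit Euler amplification factor is 1 + h*lambda = -0.9, so the iterates decay in absolute value but alternate in sign, while implicit Euler decays monotonically for any h > 0.

```python
lam = -10.0     # test equation u' = lam * u, u(0) = 1
h = 0.19        # explicit Euler amplification factor: 1 + h*lam = -0.9

y_exp, y_imp = 1.0, 1.0
for i in range(8):
    y_exp = (1.0 + h * lam) * y_exp      # explicit Euler: y_{i+1} = (1 + h*lam) * y_i
    y_imp = y_imp / (1.0 - h * lam)      # implicit Euler: y_{i+1} = y_i / (1 - h*lam)
    print(f"step {i+1}: explicit = {y_exp:+.4f}, implicit = {y_imp:+.4f}")
# The explicit values oscillate in sign while decaying; the implicit values
# decrease monotonically, in agreement with the exact solution exp(lam*t).
```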
In the sequel we generalize this approach in such a way that the new value of the approximation is defined not by only one but by several previous approximations.
Such methods are called multistep methods. Such a multistep method is called an m-step method. Next we show, using two simple examples, how these methods can be derived by expanding the solution of the Cauchy problem 2.
Example 3. Definition 3. For instance, the numerical method 3. These methods will be considered as the same method. Substituting these expressions into 3. Using 3. Theorem 3.
The linear multistep method of the form 3. Corollary 3. Remark 3. Obviously, the condition of consistency 3. In particular, this means that an m-step linear multistep method is consistent in second order when, besides the condition of consistency 3.
One can easily verify that for the numerical method 3. What is the maximum order of consistency of a linear multistep method given in the form 3. When the method is explicit, i. Summarizing, we have the following statement. Instead of the condition 3.
From 3. For the unknown parameters a_1, a_2, .... Then, from the conditions 3. From the condition 3. The other starting values y_1, ..., required by the linear multistep method, are computed by some one-step method. This means that the accuracy of the one-step method coincides with the accuracy of the applied linear multistep method.
Typically, this one-step method is some Runge—Kutta method with sufficiently high accuracy. We emphasize that the orders of the linear multistep method and the applied one-step method should be equal, because if the one-step method has lower order, we lose the order of accuracy of the linear multistep method. As we have already seen in the case of one-step methods, consistency in itself is not enough for the convergence. In the sequel without proof we give the conditions under which the convergence holds.
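As an illustration of the remark on starting values, here is a sketch of a two-step Adams-Bashforth method in which the missing starting value y_1 is produced by a one-step method of the same (second) order, the Heun method; the problem and all names are illustrative assumptions.

```python
import math

def heun_step(f, t, y, h):
    """One step of the second-order Heun method, used only to produce y_1."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def adams_bashforth2(f, t0, y0, h, n):
    """Two-step Adams-Bashforth: y_{i+1} = y_i + (h/2)*(3*f_i - f_{i-1}).
    The starting value y_1 comes from a one-step method of matching order."""
    ys = [y0, heun_step(f, t0, y0, h)]
    for i in range(1, n):
        t_i, t_im1 = t0 + i * h, t0 + (i - 1) * h
        ys.append(ys[i] + 0.5 * h * (3 * f(t_i, ys[i]) - f(t_im1, ys[i - 1])))
    return ys

ys = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.05, 20)
print(ys[-1], math.exp(-1.0))   # second-order accurate approximation of u(1)
```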
See condition 3. The following definition tells us what other roots are allowed. The next theorem shows that the root criterion for a linear multistep method implies its stability.
Assume that a linear multistep method is consistent, and the root criterion is valid. Then the method is convergent, i. The following example illustrates the role of the root criterion: when it is violated, the method is not stable and hence not convergent. We can easily check that the method has maximum accuracy, i. We solve this problem numerically using the above method.
When we compute the first approximation y_1 by some one-step method, the result will be a good approximation to the exact solution, which means that it will be close to zero. We can observe, however, that the numerical results are increasing, and the values of the approximations are not bounded.
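The blow-up can be reproduced with the classical example of a maximal-order (third-order) explicit two-step method that violates the root criterion; we do not know whether this is exactly the method used in the text, so the coefficients below, the test problem and the starting value are our own assumptions for illustration.

```python
import math

def unstable_two_step(f, t0, y0, h, n):
    """A third-order explicit two-step method violating the root criterion:
        y_{i+2} = -4*y_{i+1} + 5*y_i + h*(4*f_{i+1} + 2*f_i).
    Its first characteristic polynomial is z^2 + 4z - 5 = (z - 1)(z + 5);
    the root -5 causes the instability."""
    ys = [y0, math.exp(-(t0 + h))]   # exact value of u(t_1) used as starting value
    for i in range(n - 1):
        t_i, t_ip1 = t0 + i * h, t0 + (i + 1) * h
        ys.append(-4 * ys[i + 1] + 5 * ys[i]
                  + h * (4 * f(t_ip1, ys[i + 1]) + 2 * f(t_i, ys[i])))
    return ys

ys = unstable_two_step(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(ys[::20])   # the values soon oscillate with rapidly growing magnitude
```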
Hence, the considered numerical method is not convergent. However, the question of its sufficiency is yet to be answered. The answer to this question is negative, i. This means that even under the root criterion some numerical methods result in bad approximations due to the errors arising in the computations. In these methods the problem is that their characteristic equation has more than one distinct root with absolute value equal to one.
To this aim, we introduce the following notion. Hence, the root criterion is true, but the method is not strongly stable. Therefore, the usage of this method is not recommended. For strongly stable linear multistep methods we recall the famous result of G. Dahlquist, which shows that the maximum order of such methods is rather restricted.
For one-step methods we have already shown that convergence does not give any information about adequate behavior on some fixed mesh. The convergence ensures that for sufficiently small mesh-size the numerical result is close to the exact solution. However, as we have seen, this small step size can be unrealistic. To avoid this situation, we have defined the absolutely stable methods. See Definition 2. For linear multistep methods the following question is quite natural: which linear multistep methods are absolutely stable?
The answer to this question shows that for these methods it is difficult to guarantee absolute stability. These barriers yield a serious problem in the application of linear multistep methods. For instance, the method given in 3. In the Adams methods the parameters b_0, b_1, .... The condition of consistency and the order of consistency of an Adams method can be defined directly from the conditions 3. The Adams method is convergent with order equal to the order of consistency.
In Table 3. For easier interpretation, in the table we do not write the coefficients b_k in fraction form. The Adams-Bashforth methods are not A-stable, since they are explicit (see the first Dahlquist barrier).
Moreover, their stability domain is relatively small. With an increase of m this domain gets narrower. Hence, these methods are not suitable for solving problems where absolute stability is required. This motivates the usage of the Adams-Moulton methods. For the Adams-Bashforth method we enclose an interactive animation, called multistep exp. The image of this program on the screen can be seen in Figure 3. Figure 3. The chosen numerical methods can be the explicit Adams-Bashforth method of first, second, third and fourth order.
The basic idea is to take a crude approximation to a solution and feed it into the equation to be solved, obtaining a better approximation to the exact solution.
This procedure is done repeatedly, as shown in [RCF02]. The proof of this theorem is broken into five steps. From equation 2. To obtain equation 2. Hence, by the above, we have proved that equation 2. Also, since f is continuous in the closed and bounded region R, it is also uniformly continuous in the same region. By equation 2. Assuming that equation 2.
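The successive-approximation (Picard) idea described above can be sketched symbolically; the example problem y' = y, y(0) = 1 and the use of the sympy library are our own illustrative choices, not taken from [RCF02].

```python
import sympy as sp

t, s = sp.symbols("t s")

def picard(f, y0, iterations):
    """Picard iteration: y_{k+1}(t) = y0 + integral from 0 to t of f(s, y_k(s)) ds."""
    y = sp.Integer(y0)
    for _ in range(iterations):
        y = y0 + sp.integrate(f(s, y.subs(t, s)), (s, 0, t))
    return sp.expand(y)

# y' = y, y(0) = 1: the iterates are the partial sums of the series for exp(t).
print(picard(lambda s, y: y, 1, 4))   # 1 + t + t**2/2 + t**3/6 + t**4/24
```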
Hence this establishes equation 2. Therefore we have shown that equation 2. Thus, the question of the initial value problem 2. In the above equation 2. Thus our main aim is to show that the unique solution of the initial value problem depends continuously on the initial value y_0 [HS99].
So, in this section, we will discuss the continuous dependence of the solution on the data, as presented by [HS99]. So, the question of the initial value problem 2. Maximum Interval of Existence of Solutions. Having specified, through the choice of the real number h, the interval on which a unique solution exists, and noting that the true interval of existence of the unique solution is usually larger, we are motivated by the question: what is the maximal interval on which the solution can be extended [HS99]?
Therefore, we have clearly shown that the initial value problem of the first-order differential equation has a unique solution which depends continuously on the initial conditions in a given maximum interval in which the solution can be extended.
Numerical Methods for Solving Ordinary Differential Equations. Numerical methods have their roots in the Taylor series.
If the differential equation is presented to a computer using a given procedure and form, the computer program will perform the evaluation for the given values. With the availability of such facilities, we can come up with many methods which have their roots in the Taylor series [Kre10]. Thus, by successive differentiation, we obtain functions such as f_1(x, y), f_2(x, y), .... Suppose the step size h has a uniform value for all n; then f(x_n, y_n) is denoted by f_n, simplifying the equation 3.
In each step we obtain a result and use this result to execute the next step. By doing this, we get a sequence of values y_0, y_1, y_2, ....
Step 1: define f(x, y). Step 2: input the initial values x_0 and y_0. Step 3: input h, the step size, and n, the number of steps. On the other hand, the improved Euler method uses the trapezoidal rule for its approximate values. Considering equation 3. Since the Euler method does not give very accurate approximate values, we introduce a better method, the improved Euler method.
This is obtained by replacing the integrand of equation 3. Computer algorithm for the improved Euler method. The following is the computer algorithm for the improved Euler method, as shown in [BD12]. Step 1: define f(x, y). Step 2: input the initial values x_0 and y_0. Step 3: input h, the step size, and n, the number of steps. The Runge-Kutta method originates from the Euler and improved Euler methods and is a member of the family of Runge-Kutta methods.
This method is preferred because of its relative simplicity, sufficiency, accuracy and efficiency when handling a given number of problems. In this discussion, we assume that the step size h is constant [BD12].
Description of the Adams-Bashforth Method. More accurate formulas than the one-step formulas can be obtained by using a polynomial of a higher degree with correspondingly more data points.
Suppose we choose a polynomial P_3(x) of degree three. Thus, the fourth order formula has a local truncation error proportional to h^5 [BD12]. Adams-Moulton Method. By using an approximating polynomial of a higher degree and correspondingly more data points, we can obtain more accurate higher order formulas. This combination is done so as to obtain accurate and simple results, since the pairs are of the same order.
Computer algorithm for the predictor-corrector method. The following is the computer algorithm for the predictor-corrector method, as shown in [BD12]. Step 1: define f(x, y). Step 2: input the initial values x_0 and y_0. Step 3: input h, the step size, and n, the number of steps.
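A sketch of a fourth-order Adams-Bashforth predictor with an Adams-Moulton corrector (PECE form) is given below; [BD12] gives the general algorithm, while the concrete code, the RK4 starter and the test problem are our own assumptions.

```python
import math

def rk4_step(f, t, y, h):
    """Classical RK4 step, used only to generate the starting values y_1, y_2, y_3."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def predictor_corrector(f, t0, y0, h, n):
    """Fourth-order Adams-Bashforth predictor + Adams-Moulton corrector (PECE)."""
    ts = [t0 + i * h for i in range(n + 1)]
    ys = [y0]
    for i in range(3):                       # starting values by RK4
        ys.append(rk4_step(f, ts[i], ys[i], h))
    fs = [f(ts[i], ys[i]) for i in range(4)]
    for i in range(3, n):
        # predictor: 4-step Adams-Bashforth
        yp = ys[i] + h / 24 * (55 * fs[i] - 59 * fs[i-1] + 37 * fs[i-2] - 9 * fs[i-3])
        # corrector: 3-step Adams-Moulton, evaluated with the predicted value
        fp = f(ts[i+1], yp)
        yc = ys[i] + h / 24 * (9 * fp + 19 * fs[i] - 5 * fs[i-1] + fs[i-2])
        ys.append(yc)
        fs.append(f(ts[i+1], yc))
    return ts, ys

ts, ys = predictor_corrector(lambda t, y: -y, 0.0, 1.0, 0.05, 20)
print(ys[-1], math.exp(-1.0))   # fourth-order accurate approximation of u(1)
```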
Applications of the Numerical Methods on an Illustrative Example. In this chapter, we perform numerical experiments using the different numerical methods on an illustrative example of an initial value problem for a first order ordinary differential equation.
Let us consider the following differential equation, as shown in [BD12]. The availability of the exact solution helps us easily determine the accuracy, efficiency and convergence of any numerical method used on this problem. Therefore, this problem will be used throughout the chapter to illustrate and compare the approximate solutions of the different numerical methods with the exact solution. Note that the figures used in the tables below were not rounded off during the analysis.
They are written exactly as they were given as output by the computer program. Table 4. Table 4. The approximate solution and the exact solution were combined to generate results that enable us to compare the accuracy and sufficiency of the two using the Euler method.
The results obtained from this numerical experiment are illustrated in graph 4. Also, the accuracy of the method is not impressive. Hence the Euler method is less accurate, since the approximate solution and the exact solution are not the same for the given initial value problem.
Variation of y with time. This shows that, when compared to the Euler method, the improved Euler method is more accurate and efficient, since it yields better and more substantial results while requiring relatively little computational effort.
It is also clear that the graphs for the different step sizes vary in terms of the convergence of the approximate solution to the exact solution. The graph 4. Also, the accuracy of the method was somewhat more impressive. Hence the improved Euler method is more efficient than the Euler method.