GitHub Repository: afeiguin/comp-phys
Path: blob/master/01_02_higher_order_methods.ipynb
Kernel: Python 3

Higher order: Taylor’s series

We can go a step beyond Euler’s method by keeping terms up to second order in the expansion around $x_0$. Doing so we obtain
$$y(x+\Delta x)=y(x)+y'(x)\,\Delta x+\frac{1}{2}y''(x)(\Delta x)^2+O(\Delta x^3).$$
From the ODE we get
$$\begin{aligned} y'(x)&=f(x,y), \\ y''(x)&=\frac{df}{dx}=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}\frac{dy}{dx}=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}\,f. \end{aligned}$$

Substituting in the Taylor expansion we obtain

$$y_{n+1}=y_n+f\,\Delta x+\frac{1}{2}(\Delta x)^2\left[\frac{\partial f}{\partial x}+f\frac{\partial f}{\partial y}\right]+O(\Delta x^3),$$

where all the functions and derivatives are evaluated at $(x_n,y_n)$.
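
As a concrete illustration, here is a minimal sketch of this second-order Taylor step in Python. The test problem $y'=-xy$ (with exact solution $y=e^{-x^2/2}$ for $y(0)=1$), the step size, and the names `taylor2`, `f_x`, `f_y` are choices made for this example only; the method needs the partial derivatives of $f$ supplied analytically.

```python
import numpy as np

# Illustrative right-hand side y' = f(x, y) = -x*y, for which the
# partial derivatives are known analytically.
def f(x, y):
    return -x * y

def f_x(x, y):   # partial derivative of f with respect to x
    return -y

def f_y(x, y):   # partial derivative of f with respect to y
    return -x

def taylor2(x0, y0, dx, nsteps):
    """Advance y' = f(x, y) with the second-order Taylor formula."""
    x, y = x0, y0
    for _ in range(nsteps):
        y = y + f(x, y) * dx + 0.5 * dx**2 * (f_x(x, y) + f(x, y) * f_y(x, y))
        x = x + dx
    return x, y

# Integrate from x = 0 to x = 2 and compare with the exact solution.
x_end, y_end = taylor2(0.0, 1.0, 0.01, 200)
print(y_end, np.exp(-x_end**2 / 2))
```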

Multistep or Predictor-Corrector methods

We can achieve higher accuracy by relating $y_{n+1}$ not only to $y_n$, but also to points further in the past, $y_{n-1}, y_{n-2}, \ldots$ To derive such formulas we can formally integrate the equation of motion exactly to obtain:
$$y_{n+1}=y_n+\int_{x_n}^{x_{n+1}}f(x,y)\,dx.$$

The problem is that we don’t know $f(x,y)$ over the interval $(x_n,x_{n+1})$. However, we can use the values of $y$ at $x_n$ and $x_{n-1}$ to provide a linear extrapolation:
$$f=\frac{(x-x_{n-1})}{\Delta x}f_n-\frac{(x-x_n)}{\Delta x}f_{n-1}+O(\Delta x^2),$$
with $f_n=f(x_n,y_n)$. Inserting into the integral we obtain
$$y_{n+1}=y_n+\Delta x\left(\frac{3}{2}f_n-\frac{1}{2}f_{n-1}\right)+O(\Delta x^3).$$
Note that the value of $y_0$ is not sufficient information to get this algorithm started. The value of $y_1$ has to be obtained first by some other procedure, like the ones described previously. This means that the method is not "self-starting".
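
Here is a minimal sketch of this two-step formula (often known as the two-step Adams–Bashforth method), again for the illustrative problem $y'=-xy$. A single Euler step supplies $y_1$, since the method is not self-starting; the function name `adams_bashforth2` and all parameters are assumptions made for this example.

```python
import numpy as np

def f(x, y):
    # illustrative right-hand side: y' = -x*y
    return -x * y

def adams_bashforth2(x0, y0, dx, nsteps):
    """Two-step predictor, bootstrapped with a single Euler step."""
    x_prev, y_prev = x0, y0                 # (x_0, y_0)
    x, y = x0 + dx, y0 + dx * f(x0, y0)     # Euler step provides (x_1, y_1)
    for _ in range(nsteps - 1):
        y_new = y + dx * (1.5 * f(x, y) - 0.5 * f(x_prev, y_prev))
        x_prev, y_prev = x, y
        x, y = x + dx, y_new
    return x, y

x_end, y_end = adams_bashforth2(0.0, 1.0, 0.01, 200)
print(y_end, np.exp(-x_end**2 / 2))   # compare with the exact solution
```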

Runge-Kutta methods

2nd order Runge-Kutta

Euler’s method rests on the idea that the slope at one point can be used to extrapolate to the next. A plausible idea to make a better estimate of the slope is to extrapolate to a point halfway across the interval, and then to use the derivative at this point to extrapolate across the whole interval. Thus,

$$\begin{aligned} k&=\Delta x\, f(x_n,y_n), \\ y_{n+1}&=y_n+\Delta x\, f(x_n+\Delta x/2,\, y_n+k/2)+O(\Delta x^3). \end{aligned}$$

It has the same accuracy as the second-order Taylor method, but requires evaluating $f$ twice for each step instead of computing partial derivatives of $f$.
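
A sketch of the corresponding midpoint step for the same illustrative problem $y'=-xy$; the name `rk2_step` and the parameters below are choices made for this example.

```python
import numpy as np

def f(x, y):
    # illustrative right-hand side: y' = -x*y
    return -x * y

def rk2_step(x, y, dx):
    k = dx * f(x, y)                          # slope at the start of the interval
    return y + dx * f(x + dx / 2, y + k / 2)  # midpoint slope used for the full step

x, y, dx = 0.0, 1.0, 0.01
for _ in range(200):
    y = rk2_step(x, y, dx)
    x += dx
print(y, np.exp(-x**2 / 2))   # compare with the exact solution
```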

4th order Runge-Kutta

Similar ideas can be used to derive a 3rd- or 4th-order Runge-Kutta method. It has been found by experience that the best balance between accuracy and computational effort is given by a fourth-order algorithm. Such a method would require evaluating $f$ four times at each step, with a local accuracy of $O(\Delta x^5)$. It can be written as follows:
$$\begin{aligned} k_1&=\Delta x\, f(x_n,y_n), \\ k_2&=\Delta x\, f(x_n+\Delta x/2,\, y_n+k_1/2), \\ k_3&=\Delta x\, f(x_n+\Delta x/2,\, y_n+k_2/2), \\ k_4&=\Delta x\, f(x_n+\Delta x,\, y_n+k_3), \\ y_{n+1}&=y_n+\frac{1}{6}(k_1+2k_2+2k_3+k_4)+O(\Delta x^5). \end{aligned}$$

Runge-Kutta methods are self-starting, meaning that they can be used to obtain the first few iterations for a non-self-starting algorithm.
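
Below is a minimal sketch of a single 4th-order Runge-Kutta step, again using the illustrative problem $y'=-xy$; the helper name `rk4_step` and the step size are assumptions for this example only. A routine like this can also supply the starting value $y_1$ needed by the predictor-corrector scheme above.

```python
import numpy as np

def f(x, y):
    # illustrative right-hand side: y' = -x*y
    return -x * y

def rk4_step(x, y, dx):
    """One 4th-order Runge-Kutta step from x to x + dx."""
    k1 = dx * f(x, y)
    k2 = dx * f(x + dx / 2, y + k1 / 2)
    k3 = dx * f(x + dx / 2, y + k2 / 2)
    k4 = dx * f(x + dx, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

x, y, dx = 0.0, 1.0, 0.1          # a larger step than before still gives good accuracy
for _ in range(20):
    y = rk4_step(x, y, dx)
    x += dx
print(y, np.exp(-x**2 / 2))       # compare with the exact solution
```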

Challenge 1.2

Repeat the calculation in Challenge 1.1 using the 4th-order Runge-Kutta method.