Norwegian University of Science and Technology
Department of Structural Engineering





Contact person: Leif Rune Hellevik
Tel.: 73594535 / 98283895

Exam in the course TKT4140 Numerical Methods

Date: May 23, 2017

Problem 1

In this problem we will look at the following ordinary differential equation (ODE): $$ \begin{align} y^{\prime\prime}(x) = 2 y(x)\, \left(1 - \frac{3}{2}y(x)\right), \label{eq:ODE} \end{align} $$

with \( x \in [x_{min},x_{max}] \) and the following boundary conditions: $$ \begin{align} y\left(x_{\min}\right) = y\left(0\right) = 1 \qquad \text{and} \qquad y\left(x_{\max}\right) = y\left(1\right) = \beta = \frac{1}{\cosh^2\left(\frac{1}{\sqrt{2}}\right)} \approx 0.63. \label{eq:BCs} \end{align} $$

a) With respect to problem \eqref{eq:ODE}-\eqref{eq:BCs}, state the order of the ODE, whether it is linear or nonlinear, and whether it is an initial or a boundary value problem. Justify your statements.

b) Choose a second order method for solving \eqref{eq:ODE} with boundary conditions \eqref{eq:BCs}. Provide a complete description of the method, including the general algorithm, discretized equations, special treatment of boundary conditions, etc., as appropriate for the method.

c) Show that the method is second order accurate by computing the local truncation error of the discretized differential equation/equations. Moreover, show whether the method is consistent or not.

Hint.

If a shooting method was chosen in b), the task should be performed for each of the first order ODEs that follow from transforming eq. \eqref{eq:ODE} into a system of first order ODEs. However, in this problem it is sufficient to show that the method is second order when applied to the first order ODE for \( y \).

d) Write briefly how you could solve the problem using a different method. If a shooting method was chosen in b, describe a method that directly discretizes \eqref{eq:ODE} and vice-versa (maximum 5 short statements).

Solution.

a)
\eqref{eq:ODE} is a second order ODE since the highest derivative is of that order. Moreover, the ODE is nonlinear because of the term \( 3y^2(x) \). Equation \eqref{eq:ODE} with boundary conditions \eqref{eq:BCs} constitutes a boundary value problem, since the solution is prescribed at both ends of the domain.

b)
Shooting method: We need to transform eqs. \eqref{eq:ODE} and \eqref{eq:BCs} into a system of first order ODEs: $$ \begin{align} & y^{\prime}(x)=g(x), \label{eq:ODESYS1}\\ & g^{\prime}(x)=2 y(x)\, \left(1 - \frac{3}{2}y(x)\right) = f(x,y), \label{eq:ODESYS2} \end{align} $$

and solve an initial value problem by guessing \( y^{\prime} \left( 0 \right) = g \left( 0 \right) =s \): $$ \begin{align} y\left(0\right) = 1 \qquad \text{and} \qquad g \left( 0 \right)=s. \label{eq:InitialValues} \end{align} $$

We want to find \( s \) such that our solution of the initial value problem is in agreement with the original boundary values, and do so by introducing a boundary value error function \( \phi \): $$ \begin{align} \phi \left(s\right) = y\left(1;s \right) - \beta, \label{eq:beta} \end{align} $$

with the property that \( \phi=0 \) for the correct initial guess \( s^* \). In order to find the correct value of \( s \), we first guess two values, \( s_1 \) and \( s_2 \), solve the two initial value problems and evaluate the corresponding boundary error functions \( \phi_1 \) and \( \phi_2 \). Next we use the secant method iteratively to find better guesses of \( s \): $$ \begin{equation} s^{m+1}=s^m+\Delta s \label{eq:Snew} \end{equation} $$

where $$ \begin{equation*} \Delta s=s^{m+1}-s^m=-\phi (s^m)\cdot \left[ \frac{s^m- s^{m-1}}{\phi (s^m)-\phi (s^{m-1})} \right],\ m=1,2,\dots \end{equation*} $$

until our \( \phi \) function is sufficiently close to zero. In this case, where we are using a second order method, a suitable stopping criterion could be \( |\phi| \leq h^3 \), where \( h \) is the step size. Heun's method is a second order explicit ODE initial value solver and could be a suitable scheme: $$ \begin{align} & \mathbf{y}^p_{i+1}=\mathbf{y}_i+h\cdot \mathbf{f}(x_i,\mathbf{y}_i) \label{eq:1404a}\\ & \mathbf{y}_{i+1}=\mathbf{y}_i +\frac{h}{2}\cdot [\mathbf{f}(x_i,\mathbf{y}_i)+\mathbf{f}(x_{i+1},\mathbf{y}^p_{i+1})]. \label{eq:Heun} \end{align} $$

where \( \mathbf{y}=[y,g]^T \) and \( \mathbf{f}(x,\mathbf{y})=[g,f]^T \).
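The whole shooting procedure (Heun integration of the system plus secant iteration on \( s \)) can be sketched in a few lines of Python. This is not part of the original solution; the grid size \( N = 100 \) and the starting guesses \( s_1 = -1 \), \( s_2 = -0.5 \) are arbitrary choices:

```python
import numpy as np

def f(x, Y):
    # right-hand side of the first order system: Y = [y, g]
    return np.array([Y[1], 2.0*Y[0]*(1.0 - 1.5*Y[0])])

def shoot(s, N=100):
    # integrate the IVP y(0) = 1, g(0) = s over [0, 1] with Heun's method
    h = 1.0/N
    Y = np.array([1.0, s])
    for i in range(N):
        x = i*h
        Yp = Y + h*f(x, Y)                      # predictor
        Y = Y + 0.5*h*(f(x, Y) + f(x + h, Yp))  # corrector
    return Y[0]                                 # y(1; s)

beta = 1.0/np.cosh(1.0/np.sqrt(2.0))**2

def phi(s):
    # boundary value error function
    return shoot(s) - beta

s_old, s_new = -1.0, -0.5   # the two initial guesses s_1 and s_2
for _ in range(50):         # secant iteration for the shooting parameter
    ds = -phi(s_new)*(s_new - s_old)/(phi(s_new) - phi(s_old))
    s_old, s_new = s_new, s_new + ds
    if abs(phi(s_new)) < 1e-10:
        break
```

After convergence, `s_new` approximates the exact initial slope \( y'(0) \) to within the discretization error of Heun's method.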

Direct method: eqs. \eqref{eq:ODE} and \eqref{eq:BCs} can be solved directly by employing e.g. central differences, linearizing and solving the corresponding algebraic system iteratively. Discretization and rearrangement yields: $$ \begin{align} y_{i+1} - 2y_i + y_{i-1} - 2h^2 \,y_i + 3h^2 \, y_i^2 = 0. \label{eq:discretizedODE} \end{align} $$ Introducing the iteration index \( m \) and gathering terms we get: $$ \begin{align} y_{i-1}^{m+1} + \left( 3h^2 \, y_i^{m+1} -2h^2 -2 \right)y_i^{m+1} + y_{i+1}^{m+1} = 0, \label{_auto1} \end{align} $$

which may be linearized by approximating \( 3h^2 \, y_i^{m+1} \) with its value from the previous iteration, \( 3h^2 \, y_i^{m} \). This leads to the following system of linear equations for \( y^{m+1}_i,\,i=1, \ldots,N-1 \):


Figure 1: Discretization of domain

$$ \begin{align} \left( 3h^2 \, y_1^{m} -2h^2 -2 \right)y_1^{m+1} + y_{2}^{m+1} = - y_{0}, \label{_auto2}\\ y_{i-1}^{m+1} + \left( 3h^2 \, y_i^{m} -2h^2 -2 \right)y_i^{m+1} + y_{i+1}^{m+1} = 0, \quad i=2, \ldots,N-2 \label{_auto3}\\ y_{N-2}^{m+1} + \left( 3h^2 \, y_{N-1}^{m} -2h^2 -2 \right)y_{N-1}^{m+1} = -y_{N}, \label{_auto4} \end{align} $$ which, for every iteration \( m \), may be solved by a direct or iterative solver depending on the size of the problem. Once again, to preserve the order of the method, the iteration process should continue until the change between one iteration and the next is below e.g. \( h^3 \). NB! Slow iterative convergence could be an issue.
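A minimal Python sketch of this iteration follows. It is not part of the original solution; it uses a dense solve via numpy for brevity (although the system is tridiagonal) and compares against the exact solution \( y = 1/\cosh^2(x/\sqrt{2}) \) implied by the given \( \beta \):

```python
import numpy as np

N = 50
h = 1.0/N
x = np.linspace(0.0, 1.0, N + 1)
beta = 1.0/np.cosh(1.0/np.sqrt(2.0))**2
y = np.linspace(1.0, beta, N + 1)   # initial guess: straight line between the BCs

for m in range(100):
    # assemble the linearized tridiagonal system for the interior unknowns
    main = 3.0*h**2*y[1:-1] - 2.0*h**2 - 2.0
    A = np.diag(main) + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1)
    b = np.zeros(N - 1)
    b[0] = -y[0]      # Dirichlet value y(0) = 1
    b[-1] = -y[-1]    # Dirichlet value y(1) = beta
    y_new = np.linalg.solve(A, b)
    converged = np.max(np.abs(y_new - y[1:-1])) < h**3
    y[1:-1] = y_new
    if converged:
        break

# compare with the exact solution y = 1/cosh^2(x/sqrt(2))
err = np.max(np.abs(y - 1.0/np.cosh(x/np.sqrt(2.0))**2))
```

The nonlinear correction \( 3h^2\,y_i^m \) is \( O(h^2) \), so the fixed-point iteration contracts quickly here; the stopping tolerance \( h^3 \) preserves the order of the method.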

c)
Shooting method:

The task can be performed for each first order ODE separately. For this exam, we are only asked to compute the LTE for \( y \). First we define our numerical operator for \( y \) as $$ \begin{align} \label{eq:ShootingOperator} L_y(y_{i}) = y_{i+1} - y_i - \frac{h}{2} [g(x_i) + g(x_{i+1})] \end{align} $$ and compute the local truncation error as $$ \begin{align} \tau_y = \frac{1}{h} L_y(y(x_i)) \label{eq:ShootingLTE}, \end{align} $$ where \( L_y(y(x_i)) \) stands for the evaluation of the numerical operator at the exact solution of the problem. Note that an equivalent formulation of the local truncation error is derived by defining the numerical operator as the discretization of the differential equation $$ \begin{align} \label{eq:ShootingOperator2} L^*_y(y_{i}) = \frac{y_{i+1} - y_i}{h} - \frac{1}{2} [g(x_i) + g(x_{i+1})] \end{align} $$ and then the local truncation error is $$ \begin{align} \tau_y = L^*_y(y(x_i)) \label{eq:ShootingLTE2}. \end{align} $$

Here we will work with \eqref{eq:ShootingOperator} and \eqref{eq:ShootingLTE}. Evaluating \eqref{eq:ShootingLTE} we obtain $$ \begin{align} \begin{split} \tau_y = & \frac{1}{h} \left[ y(x_i) + h y'(x_i) + \frac{h^2}{2}y''(x_i) + \frac{h^3}{6}y'''(x_i) + O(h^4) - \frac{h}{2} \{g(x_i) + g(x_i) + h f(x_i,y_i)\}\right], \label{_auto5}\\ = & \frac{1}{h} \left[ h \{y'(x_i) - g(x_i)\} + \frac{h^2}{2} \{y''(x_i) - f(x_i,y_i)\}+ \frac{h^3}{6}y'''(x_i) + O(h^4) \right], \label{_auto6}\\ = & \frac{h^2}{6}y'''(x_i) + O(h^3), \end{split} \label{_auto7} \end{align} $$ which shows that the scheme is second order accurate for \( y \). The procedure for \( g \) is similar.
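The second order accuracy can also be observed numerically. A small check (not part of the exam solution) applies Heun's method to the test problem \( y' = -y \), \( y(0) = 1 \), and estimates the observed order from successive halvings of the step size:

```python
import numpy as np

def heun_error(h):
    # integrate y' = -y, y(0) = 1 up to x = 1 and return the error at x = 1
    n = int(round(1.0/h))
    f = lambda y: -y
    y = 1.0
    for _ in range(n):
        yp = y + h*f(y)               # predictor
        y = y + 0.5*h*(f(y) + f(yp))  # corrector
    return abs(y - np.exp(-1.0))

errors = [heun_error(h) for h in (0.1, 0.05, 0.025)]
# observed convergence orders; they should be close to 2
orders = [np.log2(e0/e1) for e0, e1 in zip(errors, errors[1:])]
```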

The numerical scheme is consistent because, under regularity assumptions, the local truncation error \( \tau_y \) vanishes as \( h \to 0 \).

Direct method:

In a similar fashion to the above, we define the discretization operator $$ \begin{align} \label{eq:DirectOperator} L_y(y_{i}) = \frac{y_{i+1} - 2 y_i + y_{i-1}}{h^2} - f(x_i,y_i), \end{align} $$ and compute the local truncation error as $$ \begin{align} \tau_y = L_y(y(x_i)) \label{eq:DirectLTE}, \end{align} $$ which, using Taylor series expansions, yields $$ \begin{align} \begin{split} \tau_y = & \frac{1}{h^2} \left[ h^2 y''(x_i) + \frac{h^4}{12}y''''(x_i) +O(h^6) \right] - f(x_i,y_i) , \label{_auto8}\\ = & y''(x_i) - f(x_i,y_i) + \frac{h^2}{12}y''''(x_i) + O(h^4), \label{_auto9}\\ = & \frac{h^2}{12}y''''(x_i) + O(h^4), \end{split} \label{_auto10} \end{align} $$ showing that the scheme is second order accurate. Moreover, the numerical scheme is consistent because, under regularity assumptions, the local truncation error \( \tau_y \) vanishes for \( h \to 0 \).
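The same \( O(h^2) \) behaviour of the central second difference can be verified numerically (not part of the exam solution); with \( y = \sin(x) \), halving \( h \) should reduce the error in \( y'' \) by roughly a factor of four:

```python
import numpy as np

def second_diff_error(h, x=1.0):
    # central second difference of sin at x, compared with the exact y'' = -sin(x)
    approx = (np.sin(x + h) - 2.0*np.sin(x) + np.sin(x - h))/h**2
    return abs(approx + np.sin(x))

# error ratio under halving of h; close to 4 for a second order formula
ratio = second_diff_error(0.1)/second_diff_error(0.05)
```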

d)
Shooting method:

  • Transform problem into system of first order ODEs and an initial value problem by guessing \( y'(0) = s \).
  • Guess two values of s, solve with explicit ODE scheme (e.g. Heun's method) and compare numerical solution at boundary with the actual value; \( \phi \left(s\right) = y\left(1;s \right) - \beta \).
  • Use secant method iteratively to get better guesses of s. Stop when \( \phi \) is sufficiently close to zero.
Direct method:
  • Discretize eq. \eqref{eq:ODE} using central differences.
  • Linearize nonlinear term.
  • Guess initial solution, then solve linearized algebraic system iteratively until convergence.

Problem 2

Here we are going to find the stationary solution of the temperature field in a square beam cross-section. On the top and left sides of the cross-section the temperature is known (Dirichlet boundary condition), and on the right and bottom sides the derivative is known (Neumann boundary condition), see Figure 2.


Figure 2: Cross-section with boundary conditions

If we assume isotropic heat conduction we can write the stationary heat conduction equation as: $$ \begin{align} \frac{\partial^2T}{\partial x^2} + \frac{\partial^2T}{\partial y^2} = 0. \label{eq:heat} \end{align} $$

a) Write the general discretized version of \eqref{eq:heat}, using centered finite differences for a grid node that is not influenced by boundary conditions. Use equal spacing in x and y directions (\( \Delta x = \Delta y \)).

b) Using \( \Delta x = \Delta y = \frac{1}{3} \) and ghost nodes for the Neumann boundary conditions, write out the resulting discretized equations for all mesh nodes where the temperature is unknown. Draw a sketch of the corresponding grid (also including treatment of boundary conditions). Furthermore, write out the matrix \( \boldsymbol{A} \) and right hand side \( \boldsymbol{b} \) of the corresponding system of linear algebraic equations, \( \boldsymbol{A}\cdot \boldsymbol{T}=\boldsymbol{b} \), where \( \boldsymbol{T} \) is a vector holding the unknown temperatures. Number the components of \( \boldsymbol{T} \) by starting in the lower left part of the grid and increasing the index in the \( x \)-direction first.

c) Now choose \( \Delta x = \Delta y = \frac{1}{N} \), where \( N \) may be chosen arbitrarily, and briefly show/explain the pattern of the corresponding \( \boldsymbol{A} \) matrix; which diagonals are non-zero, and what pattern do they follow?

d) Use the code skeleton provided as attachment for this problem to make a python function that takes N as input and returns the \( \boldsymbol{A} \) matrix.
NB! Fill in where indicated in the skeleton and hand in the entire page as part of your exam.

Solution.

a) $$ \begin{align} T_{i-1,j} + T_{i,j-1} - 4\,T_{i,j} + T_{i+1,j} + T_{i,j+1} = 0 \label{_auto11} \end{align} $$

b)


Figure 3: Sketch of discretization

$$ \begin{equation} \boldsymbol{A} = \left[\begin{matrix}-4 & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 0\\1 & -4 & 1 & 0 & 2 & 0 & 0 & 0 & 0\\0 & 2 & -4 & 0 & 0 & 2 & 0 & 0 & 0\\1 & 0 & 0 & -4 & 1 & 0 & 1 & 0 & 0\\0 & 1 & 0 & 1 & -4 & 1 & 0 & 1 & 0\\0 & 0 & 1 & 0 & 2 & -4 & 0 & 0 & 1\\0 & 0 & 0 & 1 & 0 & 0 & -4 & 1 & 0\\0 & 0 & 0 & 0 & 1 & 0 & 1 & -4 & 1\\0 & 0 & 0 & 0 & 0 & 1 & 0 & 2 & -4\end{matrix}\right], \qquad \boldsymbol{b} = \left[\begin{matrix}-50\\0\\0\\-50\\0\\0\\-150\\-100\\-100\end{matrix}\right] \label{_auto12} \end{equation} $$

c)

  • Main diagonal: all \( N \times N \) elements are \( -4 \)
  • Super diagonal 1: all \( N \times N - 1 \) elements are \( 1 \). Exceptions:
    • Starting from Python index \( N-1 \), every \( N \)-th element is \( 0 \)
  • Sub diagonal 1: all \( N \times N - 1 \) elements are \( 1 \). Exceptions:
    • Starting from Python index \( N-2 \), every \( N \)-th element is \( 2 \)
    • Starting from Python index \( N-1 \), every \( N \)-th element is \( 0 \)
  • Super diagonal N: all \( N \times N - N \) elements are \( 1 \). Exceptions:
    • The first \( N \) elements are \( 2 \)
  • Sub diagonal N: all \( N \times N - N \) elements are \( 1 \).
d)

from scipy.sparse import diags
import numpy as np

def createAmatrix(N):
    
    mainDiag = -4.*np.ones(N*N)
    
    superDiag1 = np.ones(N*N - 1)
    superDiag1[N - 1::N] = 0          # no coupling across row breaks
    
    subDiag1 = np.ones(N*N - 1)
    subDiag1[N - 2::N] = 2            # ghost nodes at the right (Neumann) boundary
    subDiag1[N - 1::N] = 0            # no coupling across row breaks
    
    superDiagN = np.ones(N*N - N)
    superDiagN[0:N] = 2               # ghost nodes at the bottom (Neumann) boundary
    
    subDiagN = np.ones(N*N - N)
    
    A = diags([mainDiag, superDiag1, subDiag1, superDiagN, subDiagN], [0, 1, -1, N, -N])
    
    return A
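As a sanity check (not part of the exam), the function can be spot-checked against the \( N = 3 \) matrix written out in b); the function is repeated here so the check runs standalone:

```python
from scipy.sparse import diags
import numpy as np

def createAmatrix(N):
    # identical to the solution above
    mainDiag = -4.*np.ones(N*N)
    superDiag1 = np.ones(N*N - 1)
    superDiag1[N - 1::N] = 0
    subDiag1 = np.ones(N*N - 1)
    subDiag1[N - 2::N] = 2
    subDiag1[N - 1::N] = 0
    superDiagN = np.ones(N*N - N)
    superDiagN[0:N] = 2
    subDiagN = np.ones(N*N - N)
    return diags([mainDiag, superDiag1, subDiag1, superDiagN, subDiagN],
                 [0, 1, -1, N, -N])

A = createAmatrix(3).toarray()
```

For \( N = 3 \) the ghost-node entries land exactly where the matrix in b) has them: the factor 2 on super diagonal \( N \) in the first \( N \) rows, the factor 2 on sub diagonal 1 at the right Neumann boundary, and zeros on the first sub/super diagonals at row breaks.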

Problem 3

The linear advection equation takes the form: $$ \begin{align} \frac{\partial u}{\partial t} + a \, \frac{\partial u}{\partial x} = 0, \label{eq:advection} \end{align} $$

where \( a \) is the wave speed.

a) Discretize eq. \eqref{eq:advection} using central differences for both the temporal and spatial derivative, and show that the corresponding scheme (called the Leapfrog scheme) may be written: $$ \begin{align} u_j^{n+1} = u_j^{n-1} - c \left(u_{j+1}^n - u_{j-1}^n\right), \label{eq:leapfrog} \end{align} $$

where \( c=a\frac{\Delta t}{\Delta x} \) is the Courant number.

b) Write out the first four terms of the Taylor expansion of \( u_j^{n \pm 1} \), and \( u_{j \pm 1}^n \).

c) Find an expression for the local truncation error of the scheme in eq. \eqref{eq:leapfrog}, and show that the scheme is second order in time and space.

d) Find an expression for the stability limit of the scheme in eq. \eqref{eq:leapfrog} using von Neumann method.

e) The python skeleton given as attachment for this problem uses the leapfrog scheme to solve the linear advection equation \eqref{eq:advection} with a \( \sin^2 \) curve as initial condition. The script also uses the forward in time, backward in space (ftbs) scheme: $$ \begin{align} u_j^{n+1} = u_j^{n} - c \left(u_{j}^n - u_{j-1}^n\right), \label{eq:ftbs} \end{align} $$

Fill in to complete the functions leapfrog and ftbs, so they return the solution Unew for all interior nodes.
NB! Fill in where indicated in the skeleton and hand in the entire page as part of your exam.

f) Why is the call to 'ftbs' in line 54 necessary for the Leapfrog scheme?

Attachment for problem 2

from scipy.sparse import diags
import numpy as np

def createAmatrix(N):
    # Fill in to compute non-zero diagonals
    mainDiag =
    
    
    
    
    superDiag1 = 
    
    
    
    
    subDiag1 =
    
    
    
    
    superDiagN = 
    
    
    
    
    subDiagN =
    
    
    
    
    # Fill in between the commas below
    A = diags([          ,         ,         ,         ,         ], [    ,   ,   ,   ,   ])
    
    return A

Attachment for problem 3

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
from scipy import interpolate

def f(x):
    """A smooth sine^2 function between x_left and x_right"""
    f = np.zeros_like(x)
    x_left = 0.1
    x_right = 0.2
    xm = (x_right-x_left)/2.0
    f = np.where((x>x_left) & (x<x_right), np.sin(np.pi*(x-x_left)/(x_right-x_left))**2.0,f) 
    return f

def ftbs(Uold, c):
    """ Forward in time backward in space scheme"""
    # Fill in below
    
    Unew = 
    
    
    return Unew

def leapfrog(Uold, Uolder, c):
    """ Leapfrog scheme"""
    # Fill in below
    
    Unew = 
    
    
    return Unew


a = 1.0 # linear wave speed
tmin, tmax = 0.0, 1.2 # start and stop time of simulation
xmin, xmax = 0.0, 1.0 # start and end of spatial domain
Nx = 400 # number of spatial points
c = 0.9 # Courant number

# discretize
x = np.linspace(xmin, xmax, Nx+1) # discretization of space
dx = float((xmax-xmin)/Nx) # spatial step size
dt = c/a*dx # stable time step calculated from stability requirement
Nt = int((tmax-tmin)/dt) # number of time steps
time = np.linspace(tmin, tmax, Nt) # discretization of time

# initialize variables

u_initial = f(x)
u = np.zeros_like(x)
un = np.zeros((len(time), len(x))) # holds the numerical solution
u1 = np.zeros_like(u_initial)

u1[1:-1] = ftbs(u_initial, c)
un[0, :] = u_initial
un[1, :] = u1

for n in range(1, Nt - 2):
        
    u_bc = interpolate.interp1d(x[-2:], u[-2:]) # interpolate at right boundary
    u[1:-1] = leapfrog(un[n, :], un[n - 1, :], c) # calculate numerical solution of interior
    u[-1] = u_bc(x[-1] - a*dt) # interpolate along a characteristic to find the boundary value
    
    un[n + 1,:] = u # storing the solution for plotting

Solution.

a)

Using centered finite differences for time and space derivatives in \eqref{eq:advection} yields $$ \begin{align} \frac{u_j^{n+1} - u_j^{n-1}}{2 \Delta t} = - a \frac{u_{j+1}^n - u_{j-1}^n}{2 \Delta x}, \label{eq:leapfrog2} \end{align} $$ that can be rearranged to $$ \begin{align} u_j^{n+1} = u_j^{n-1} - a\frac{\Delta t}{\Delta x} \left(u_{j+1}^n - u_{j-1}^n\right), \label{eq:leapfrog3} \end{align} $$ and finally yield $$ \begin{align} u_j^{n+1} = u_j^{n-1} - c \left(u_{j+1}^n - u_{j-1}^n\right). \label{eq:leapfrog4} \end{align} $$

b)
Taylor series expansions in time read $$ \begin{align} u_j^{n \pm 1} = u_j^{n} \pm \Delta t \left. \frac{\partial u}{\partial t}\right|_j^n + \frac{{\Delta t}^2}{2} \left. \frac{\partial^2 u}{\partial t^2}\right|_j^n \pm \frac{{\Delta t}^3}{6} \left. \frac{\partial^3 u}{\partial t^3}\right|_j^n + O({\Delta t}^4), \label{eq:taylor3b1} \end{align} $$ while expansions in space are given by $$ \begin{align} u_{j \pm 1}^{n} = u_j^{n} \pm \Delta x \left. \frac{\partial u}{\partial x}\right|_j^n + \frac{{\Delta x}^2}{2} \left. \frac{\partial^2 u}{\partial x^2}\right|_j^n \pm \frac{{\Delta x}^3}{6} \left. \frac{\partial^3 u}{\partial x^3}\right|_j^n + O({\Delta x}^4). \label{eq:taylor3b2} \end{align} $$

c)

The numerical operator for scheme \eqref{eq:leapfrog} is $$ \begin{align} \label{eq:LeapOperator} L_u(u_{j}^{n}) = \frac{u^{n+1}_{j} -u^{n-1}_{j}}{\Delta t} + a \frac{u^{n}_{j+1} -u^{n}_{j-1}}{ \Delta x}, \end{align} $$ and its local truncation error can then be defined as $$ \begin{align} \tau_u = L_u(u(x_j,t^n)) \label{eq:LeapLTE}, \end{align} $$ which, using Taylor series expansions, yields $$ \begin{align} \begin{split} \tau_u = & \frac{\left( u_j^{n} + \Delta t \left. \frac{\partial u}{\partial t}\right|_j^n + \frac{{\Delta t}^2}{2} \left. \frac{\partial^2 u}{\partial t^2}\right|_j^n + \frac{{\Delta t}^3}{6} \left. \frac{\partial^3 u}{\partial t^3}\right|_j^n + O({\Delta t}^4)\right) - \left( u_j^{n} - \Delta t \left. \frac{\partial u}{\partial t}\right|_j^n + \frac{{\Delta t}^2}{2} \left. \frac{\partial^2 u}{\partial t^2}\right|_j^n - \frac{{\Delta t}^3}{6} \left. \frac{\partial^3 u}{\partial t^3}\right|_j^n + O({\Delta t}^4)\right)}{\Delta t} + \label{_auto13}\\ & + a \frac{\left( u_j^{n} + \Delta x \left. \frac{\partial u}{\partial x}\right|_j^n + \frac{{\Delta x}^2}{2} \left. \frac{\partial^2 u}{\partial x^2}\right|_j^n + \frac{{\Delta x}^3}{6} \left. \frac{\partial^3 u}{\partial x^3}\right|_j^n + O({\Delta x}^4)\right) - \left( u_j^{n} - \Delta x \left. \frac{\partial u}{\partial x}\right|_j^n + \frac{{\Delta x}^2}{2} \left. \frac{\partial^2 u}{\partial x^2}\right|_j^n - \frac{{\Delta x}^3}{6} \left. \frac{\partial^3 u}{\partial x^3}\right|_j^n + O({\Delta x}^4)\right)}{\Delta x}, \label{_auto14}\\ \end{split} \label{_auto15} \end{align} $$

in which the terms with \( u_j^n \), \( \frac{\partial^2 u}{\partial t^2} \) and \( \frac{\partial^2 u}{\partial x^2} \) cancel. Continuing without including higher order terms yields: $$ \begin{align} \begin{split} \tau_u & = \frac{\left(2 \Delta t \left. \frac{\partial u}{\partial t}\right|_j^n + \frac{{\Delta t}^3}{3} \left. \frac{\partial^3 u}{\partial t^3}\right|_j^n \right) }{\Delta t} + a \frac{\left( 2 \Delta x \left. \frac{\partial u}{\partial x}\right|_j^n + \frac{{\Delta x}^3}{3} \left. \frac{\partial^3 u}{\partial x^3}\right|_j^n \right) }{\Delta x}, \label{_auto16}\\ & = 2 \left( \left.\frac{\partial u}{\partial t}\right|_j^n + a \left.\frac{\partial u}{\partial x}\right|_j^n \right) + \frac{{\Delta t}^2}{3} \left. \frac{\partial^3 u}{\partial t^3}\right|_j^n + a \frac{{\Delta x}^2}{3} \left. \frac{\partial^3 u}{\partial x^3}\right|_j^n, \label{_auto17}\\ & = \frac{{\Delta t}^2}{3} \left. \frac{\partial^3 u}{\partial t^3}\right|_j^n + a \frac{{\Delta x}^2}{3} \left. \frac{\partial^3 u}{\partial x^3}\right|_j^n, \label{eq:LTE3c2} \end{split} \end{align} $$ where the term in parentheses vanishes since the exact solution \( u \) satisfies \eqref{eq:advection}. This shows that the local truncation error is of order \( O({\Delta t}^2, {\Delta x}^2) \).

d)
Substituting \( u_j^n \rightarrow E_j^n = G^n e^{i\beta x_j} \) in eq. \eqref{eq:leapfrog} $$ \begin{align} G^{n+1} e^{i\beta x_j} = G^{n-1} e^{i\beta x_j} - c \left(G^n e^{i\beta x_{j+1}} - G^n e^{i\beta x_{j-1}}\right), \label{eq:vonNeumann1} \end{align} $$ then dividing by \( G^{n-1} e^{i\beta x_j} \) and introducing \( \delta = \beta (x_j-x_{j-1}) = \beta \Delta x \) gives $$ \begin{align} G^{2} = 1 - c \left(G e^{i \delta} - G e^{-i \delta}\right). \label{eq:vonNeumann2} \end{align} $$

Noting that the following trigonometric relation holds: $$ \begin{align} e^{ix} - e^{-ix} = 2 i \sin(x), \label{eq:trigonometric} \end{align} $$ we obtain $$ \begin{align} G^{2} + 2 i \,G \, c \,\sin(\delta) - 1 = 0. \label{eq:vonNeumann3} \end{align} $$

Further, by employing the quadratic formula we get the following roots for \( G \) $$ \begin{align} G_\pm = -i \, c \,\sin(\delta) \pm \sqrt{1 - c^2 \, \sin^2(\delta)}. \label{eq:vonNeumann4} \end{align} $$ We have two cases. For \( |c|\leq 1 \), one can see that $$ \begin{align} |G|^2 = c^2\,\sin^2(\delta) + 1 - c^2\,\sin^2(\delta) = 1, \label{eq:vonNeumann5} \end{align} $$ so that the scheme is conditionally stable for \( |c|\leq 1 \). On the other hand, if \( |c|>1 \), there will be some wavelengths for which \( |c \,\sin(\delta)|>1 \), in which case the two roots are purely imaginary. Consider the case \( c>1 \) and \( \sin(\delta)=1 \); then $$ \begin{align} G_\pm = -i c \pm i \sqrt{c^2-1} = -i \left( c \mp \sqrt{c^2-1} \right) , \label{eq:vonNeumann6} \end{align} $$ for which \( |G_-|>1 \), resulting in an unstable scheme.
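The two cases can be illustrated numerically (not part of the exam solution); the sketch below evaluates the roots \( G_\pm \) for the worst case \( \sin(\delta) = 1 \):

```python
import numpy as np

def growth_factors(c, delta):
    # roots G± of G^2 + 2i G c sin(delta) - 1 = 0
    s = c*np.sin(delta)
    root = np.sqrt(complex(1.0 - s**2))
    return -1j*s + root, -1j*s - root

# |c| <= 1: both roots lie on the unit circle (neutral stability)
Gp, Gm = growth_factors(0.9, np.pi/2)
# |c| > 1: one purely imaginary root has modulus larger than one (unstable)
Gp_u, Gm_u = growth_factors(1.5, np.pi/2)
```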

e)

def leapfrog(Uold, Uolder, c):
    """ Leapfrog scheme"""
    Unew = Uolder[1:-1] - c*(Uold[2:] - Uold[:-2])
    
    return Unew
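The ftbs function requested in e) is not written out above; a sketch consistent with the slicing conventions of the skeleton (the returned array covers the interior nodes only):

```python
import numpy as np

def ftbs(Uold, c):
    """Forward in time, backward in space scheme for the interior nodes."""
    return Uold[1:-1] - c*(Uold[1:-1] - Uold[:-2])
```

Regarding f): leapfrog is a two-level scheme that needs both \( u^n \) and \( u^{n-1} \), but at the first time step only the initial condition \( u^0 \) is available. The single call to ftbs, a one-step scheme, generates \( u^1 \) so that the leapfrog iteration can start.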