In previous examples we tested our implementations of ODE schemes by comparing the numerical solutions with an analytical/exact solution. This approach may be called the "Method of Exact Solutions" (MES). The drawback of this method is that it requires an analytical solution. In the case of ODEs this does not necessarily constitute a problem. However, keeping in mind that numerical methods are generally used in cases where analytical solutions are hard or impossible to find, we need a more general method for testing our schemes. This is provided by the "Method of Manufactured Solutions" (MMS). Suppose we are solving the general differential equation $$ \begin{align} \boldsymbol{D}\boldsymbol{y}(x) = \boldsymbol{f}(x,\boldsymbol{y}) \label{_auto1} \end{align} $$ where \( \boldsymbol{D} \) is the differential operator, \( \boldsymbol{y}(x) \) is the solution and \( \boldsymbol{f}(x,\boldsymbol{y}) \) is a source term. The MMS approach is then to choose/manufacture a solution \( \boldsymbol{y}_m \), and compute an additional source term \( \boldsymbol{f}_m \) so that the differential equation associated with \( \boldsymbol{y}_m \) is $$ \begin{align} \boldsymbol{D}\boldsymbol{y} = \boldsymbol{f} + \boldsymbol{f}_m, \label{_auto2} \end{align} $$
where \( \boldsymbol{f}_m \) is obtained from evaluating $$ \begin{align} \label{eq:gen_ode} \boldsymbol{f}_m = \boldsymbol{D}\boldsymbol{y}_m - \boldsymbol{f}(x,\boldsymbol{y}_m). \end{align} $$
Now let's look at an example with a simple linear ODE: $$ \begin{align} \label{eq:simple_linear_ode_1} \frac{dy}{dx} = \lambda y \end{align} $$
in which we choose \( y_m = \sin(x) \), where the subscript \( m \) denotes "manufactured". Then we can compute \( f_m \) as $$ \begin{align} \label{eq:simple_linear_ode_2} f_m = \frac{dy_m}{dx} - \lambda y_m = \cos(x) - \lambda \sin(x). \end{align} $$
The initial and/or boundary conditions are also easy to apply, since they can be calculated directly from the manufactured solution. Thus the ODE to be solved in this case is: $$ \begin{align} \label{eq:simple_linear_ode_with_initial} \frac{dy}{dx} = \lambda y + \cos(x) - \lambda \sin(x) \\ y(x_0) = \sin(x_0) \label{_auto3} \end{align} $$
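Before we automate this in a full convergence test, a couple of sympy lines can confirm that the manufactured solution really satisfies the modified ODE. A minimal sketch (the symbol name lam stands in for \( \lambda \), which is a reserved word in Python):
from sympy import symbols, sin, diff, simplify
x, lam = symbols('x lambda')
y_m = sin(x)                               # the manufactured solution
f_m = diff(y_m, x) - lam*y_m               # f_m = dy_m/dx - lambda*y_m
residual = diff(y_m, x) - (lam*y_m + f_m)  # LHS minus RHS of the modified ODE
print(simplify(residual))                  # prints 0: y_m solves dy/dx = lambda*y + f_m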
In this case the differentiations and computations needed to calculate \( f_m \) are straightforward and could easily have been done manually (hard-coding the result). However, as we will see later, these calculations may become quite complicated. We therefore use the symbolic Python module sympy in this simple example too, as it will prove very handy once the differential equations and expressions become more complicated. In this test the acceptance criterion is that the observed order of accuracy is close to the theoretical order \( p \). For consistent methods the discretization error \( \epsilon \) is related to the theoretical order by: $$ \begin{align} \label{eq:discretization__error_gen} \epsilon = Ch^p + H.O.T. \approx Ch^p \end{align} $$
where \( C \) is a constant, \( h \) is a generic step length (e.g. \( dt \) or \( dx \)), \( p \) is the theoretical order, and \( H.O.T. \) denotes the higher order terms (\( \mathcal{O} \left(C_1 h^{p + 1}\right) \)). One way to determine the observed order of accuracy is to calculate the discretization error (e.g. some norm of the difference between the numerical and the manufactured solution) on two or more grids: $$ \begin{align} \label{eq:discretization__error} {\epsilon}_1 = Ch^p \\ {\epsilon}_2 = C\left(\frac{h}{r}\right)^p \label{_auto4} \end{align} $$
where \( r \) is the refinement ratio (e.g. \( r = 2 \) leads to \( h_2 = 0.5 h_1 \)). Dividing \( {\epsilon}_1 \) by \( {\epsilon}_2 \), taking the logarithm on both sides, and rearranging leads to the following expression for the observed order of accuracy: $$ \begin{align} \label{eq:discretization__error_order} p = \frac{\log\left(\frac{{\epsilon}_1}{{\epsilon}_2} \right)}{\log\left(r \right)} \end{align} $$
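In code this boils down to a one-liner; a minimal sketch (the function name observed_order is our own):
import numpy as np

def observed_order(eps_coarse, eps_fine, r=2.0):
    """Observed order of accuracy from the errors on two grids with refinement ratio r."""
    return np.log(eps_coarse/eps_fine)/np.log(r)

print(observed_order(1.0e-2, 2.5e-3))  # -> 2.0, as expected for a second order scheme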
Using MMS together with an acceptance criterion based on the observed order of accuracy is a very good way to verify numerical schemes/solvers/codes. The example above is implemented below, together with the result of the convergence (observed order of accuracy) test:
# src-ch1/MMS_example1.py; ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch1/ODEschemes.py;
import numpy as np
import matplotlib.pyplot as plt
from ODEschemes import euler, heun, rk4
from math import sqrt, pi
from sympy import symbols, diff, lambdify, sin
def func(y, x):
    """ Function that returns the RHS f(y, x) of the differential equation
            dy/dx = f(y, x)
        Args:
            y(float): solution y(x)
            x(float): current x value
        Returns:
            out(float): f(y, x)
    """
    return ffunc(y, x)
#### Use sympy to calculate the source term f_m from the operator d/dx and y_m ####
x = symbols('x')
yt = symbols('yt')
y_m = sin(x) # the manufactured solution
beta = -1. # beta plays the role of lambda ('lambda' is a reserved word in Python)
f = beta*yt
f_m = diff(y_m, x) - beta*y_m
yfunc = lambdify(x, y_m, np) # create a python function of the manufactured solution
ffunc = lambdify((yt, x), f + f_m, np) # create a python function of the modified RHS f + f_m
x0, xend = 0, 2*pi # domain
y0 = yfunc(x0) # initial value
schemes = [euler, heun, rk4] # list of solvers imported from ODEschemes.py.
schemes_order = {} # dict to be filled in with order approximations for all schemes
Ndts = 5 # Number of times to refine timestep in convergence test
for scheme in schemes: # iterate through all schemes
    N = 10 # initial number of time steps
    order_approx = [] # start off with an empty list of orders for this scheme
    for i in range(Ndts+1):
        time = np.linspace(x0, xend, N+1)
        y = scheme(func, y0, time) # e.g. euler(func, y0, time)
        max_error = max(abs(y[:,0] - yfunc(time))) # infinity norm of the error
        if i > 0: # compute the observed order of accuracy from two consecutive error norms
            order = np.log(previous_max_error/max_error)/np.log(2)
            order_approx.append(order)
        previous_max_error = max_error
        N *= 2 # double the number of timesteps (halve dt)
    schemes_order[scheme.__name__] = order_approx # list of orders assigned to scheme
#### Plotting ####
N = N//2**Ndts
N_list = [N*2**i for i in range(1, Ndts+1)]
N_list = np.asarray(N_list)
plt.figure()
for key in schemes_order: # iterate all keys in the dictionary schemes_order
    plt.plot(N_list, np.asarray(schemes_order[key]))
# plot the theoretical order for 1st, 2nd and 4th order schemes
plt.axhline(1.0, linestyle=':', color='k')
plt.axhline(2.0, linestyle=':', color='k')
plt.axhline(4.0, linestyle=':', color='k')
plt.xticks(N_list, rotation=-70)
legends = list(schemes_order.keys())
legends.append('theoretical')
plt.legend(legends, loc='best', frameon=False)
plt.title('Observed order of accuracy and MMS')
plt.xlabel('Number of timesteps')
plt.ylabel('Scheme order approximation')
plt.axis([0, max(N_list), 0, 5])
#plt.savefig('../figs/MMSExample1.png') # transparent=True
plt.show()
Figure 1: The convergence rates for the various ODE solvers as a function of the number of timesteps.

Note that \( y \) and all its derivatives up to order \( n \), where \( n \) is the order of the ODE, should be continuous (in this case up to \( y''' \)). To test every aspect of the code the manufactured solution should also match the general set of equations solved by the code. In this case it would not make sense to choose \( y_m = y_m(x) = x \), since this would lead to \( y''_m = y'''_m = 0 \). For the sake of generality let's choose \( y_m \) to be the normal (or Gaussian) distribution \( y_m(x) = \frac{1}{\sigma \sqrt{2 \pi}}e^{-\frac{(x - \mu)^2}{2 {\sigma}^2}} \). The differential operator \( \boldsymbol{D} \) will now be \( \frac{d^3}{dx^3} + \frac{d^2}{dx^2} \cdot y + \frac{d}{dx} \), so that \( f_m = y_m''' + y_m\, y_m'' + y_m' \). This leads to the rather complicated expression for the source term $$ \begin{align} \label{eq:source__term} f_m = - \frac{\sqrt{2}}{4 \sqrt{\pi} \sigma^{3}} \left(- 2 \mu + 2 x\right) e^{- \frac{\left(- \mu + x\right)^{2}}{2 \sigma^{2}}} + \frac{\sqrt{2}}{2 \sqrt{\pi} \sigma} \left(- \frac{\sqrt{2}}{2 \sqrt{\pi} \sigma^{3}} e^{- \frac{\left(- \mu + x\right)^{2}}{2 \sigma^{2}}} + \frac{\sqrt{2}}{8 \sqrt{\pi} \sigma^{5}} \left(- 2 \mu + 2 x\right)^{2} e^{- \frac{\left(- \mu + x\right)^{2}}{2 \sigma^{2}}}\right) e^{- \frac{\left(- \mu + x\right)^{2}}{2 \sigma^{2}}} + \frac{\sqrt{2}}{8 \sqrt{\pi} \sigma^{5}} \left(- 8 \mu + 8 x\right) e^{- \frac{\left(- \mu + x\right)^{2}}{2 \sigma^{2}}} \\ + \frac{\sqrt{2}}{4 \sqrt{\pi} \sigma^{5}} \left(- 2 \mu + 2 x\right) e^{- \frac{\left(- \mu + x\right)^{2}}{2 \sigma^{2}}} - \frac{\sqrt{2}}{16 \sqrt{\pi} \sigma^{7}} \left(- 2 \mu + 2 x\right)^{3} e^{- \frac{\left(- \mu + x\right)^{2}}{2 \sigma^{2}}} \label{_auto5} \end{align} $$
In MMS the source term is usually complicated, and to avoid the hassle of performing the differentiations and implementing such awkward RHS/source terms manually, the sympy module comes in handy. Renaming so that \( y = y_0 \), the set of 1st order ODEs will be: $$ \begin{align} \label{eq:difficult_linear_ode_with_initial} y_0 ' = y_1 \\ y_1 ' = y_2 \label{_auto6}\\ y_2 ' = f_m - y_2 \cdot y_0 - y_1 \label{_auto7}\\ y_0\left(x_0 \right) = y_m\left(x_0 \right), y_1\left(x_0 \right) = y_m'\left(x_0 \right), y_2\left(x_0 \right) = y_m ''\left(x_0 \right) \label{_auto8} \end{align} $$
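The symbolic differentiations needed for \( f_m \) take only a few lines; below is a condensed sketch of the symbolic part of the full program further down (here with symbolic \( \sigma \) and \( \mu \), whereas the program uses numeric values):
from sympy import symbols, exp, sqrt, pi, diff, latex
x, sigma, mu = symbols('x sigma mu')
y_m = 1/(sigma*sqrt(2*pi))*exp(-(x - mu)**2/(2*sigma**2))   # the manufactured (Gaussian) solution
f_m = diff(y_m, x, 3) + y_m*diff(y_m, x, 2) + diff(y_m, x)  # f_m = y_m''' + y_m*y_m'' + y_m'
print(latex(f_m))                                           # LaTeX source of the expression above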
Now the procedure to calculate the observed order of accuracy could be carried out in the same manner as in the previous example. However, to get a better understanding of the discretization error, and to show more aspects of the sympy module, we take a different approach in this example. As described earlier we start by assuming that our discretization error is of the form \( \epsilon = Ch^p \), where C and p may be taken as constants when the discretization error is in its asymptotic range. Thus, if we calculate the discretization error for two different step lengths \( h \), we can solve for the two unknowns C and p: $$ \begin{align} \label{eq:discretization__error_nonlinear} f_1 = Ch_1^p - {\epsilon}_1 = 0\\ f_2 = Ch_2^p - {\epsilon}_2 = 0 \label{_auto9} \end{align} $$
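In passing we note that for exactly two grids this particular system can also be solved by hand: dividing the first equation by the second and rearranging recovers the expression for \( p \) derived earlier (with \( r = h_1/h_2 \)), and \( C \) then follows directly. This provides a convenient check on the iterative solution below: $$ \begin{equation*} p = \frac{\log\left({\epsilon}_1/{\epsilon}_2\right)}{\log\left(h_1/h_2\right)}, \qquad C = \frac{{\epsilon}_1}{h_1^p} \end{equation*} $$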
This is a set of nonlinear algebraic equations, and to solve for the unknowns we choose the iterative Newton-Raphson method in two dimensions $$ \begin{equation*} \begin{aligned} \boldsymbol{x_{i+1}}= \boldsymbol{x_i}-(\boldsymbol{J_i})^{-1} \boldsymbol{f}_i \end{aligned} \end{equation*} $$
where \( \boldsymbol{f} = [f_1, f_2]^T \), \( \boldsymbol{x} = [C, p]^T \) and \( \boldsymbol{J} \) is the Jacobian matrix: $$ \begin{equation*} \begin{aligned} J_{jk}=\frac{{\partial}f_j}{{\partial} x_k} = \begin{bmatrix} \frac{\partial f_1}{\partial C} & \frac{\partial f_1}{\partial p}\\ \frac{\partial f_2}{\partial C} & \frac{\partial f_2}{\partial p} \\ \end{bmatrix} \end{aligned} \end{equation*} $$
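For these particular residuals the partial derivatives are available in closed form; this is exactly what the sympy diff calls compute for us in the implementation below: $$ \begin{equation*} \boldsymbol{J} = \begin{bmatrix} h_1^p & C h_1^p \ln{h_1} \\ h_2^p & C h_2^p \ln{h_2} \end{bmatrix} \end{equation*} $$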
To start the iteration, an initial guess for \( \boldsymbol{x} = [C, p]^T \) needs to be made. A proposed implementation of this test is shown below. The test yields the following expressions for the discretization error of the different schemes: $$ \begin{align} \text{euler} \rightarrow \epsilon = 2.19 h^{1.00} \nonumber \\ \text{heun} \rightarrow \epsilon = 4.5 h^{2.00} \label{eq:discretization__error_funcs} \\ \text{rk4} \rightarrow \epsilon = 0.19 h^{3.908} \nonumber \end{align} $$
The observed orders of 1.00, 2.00 and 3.908 are very close to the theoretical values of 1, 2 and 4, respectively.
# src-ch1/MMS_example2.py; ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch1/ODEschemes.py;
import numpy as np
import matplotlib.pyplot as plt
from ODEschemes import euler, heun, rk4
from sympy import exp, symbols, diff, lambdify
from math import sqrt, pi
# Newton solver
def Newton_solver_sympy(error, h, x0):
    """ Function that solves the nonlinear set of equations
            error1 = C*h1^p --> f1 = C*h1^p - error1 = 0
            error2 = C*h2^p --> f2 = C*h2^p - error2 = 0
        where C is a constant, h is the step length and p is the order,
        with use of a Newton-Raphson solver. In this case C and p are
        the unknowns, whereas h and error are known. The Newton-Raphson
        method is an iterative solver of the form:
            xnew = xold - (J^-1)*F, where J is the Jacobian matrix and F is the
            residual function.
                x = [C, p]^T
                J = [[df1/dC df1/dp],
                     [df2/dC df2/dp]]
                F = [f1, f2]
        This is very neatly done with use of the sympy module.
        Args:
            error(list): list of calculated errors [error(h1), error(h2)]
            h(list): list of steplengths corresponding to the list of errors
            x0(list): list of starting (guessed) values for x
        Returns:
            x(array): iterated solution of x = [C, p]
    """
    from sympy import Matrix
    #### Symbolic computations: ####
    C, p = symbols('C p')
    f1 = C*h[-2]**p - error[-2]
    f2 = C*h[-1]**p - error[-1]
    F = [f1, f2]
    x = [C, p]

    def jacobiElement(i, j):
        return diff(F[i], x[j])

    Jacobi = Matrix(2, 2, jacobiElement) # neat way of computing the Jacobian matrix
    JacobiInv = Jacobi.inv()
    #### Numerical computations: ####
    JacobiInvfunc = lambdify([x], JacobiInv)
    Ffunc = lambdify([x], F)
    x = x0
    for n in range(8): # perform 8 Newton iterations
        F = np.asarray(Ffunc(x))
        Jinv = np.asarray(JacobiInvfunc(x))
        x = x - np.dot(Jinv, F)
        # print(n, x, F)
    x[0] = round(x[0], 2)
    x[1] = round(x[1], 3)
    return x
# Differential function
def func(y, t):
    """ Function that returns dfn/dt of the differential equation f''' + f''*f + f' = RHS
        written as a system of 1st order equations; f = f0
            f0' = f1
            f1' = f2
            f2' = RHS - f2*f0 - f1
        Args:
            y(array): solution array [f0, f1, f2] at time t
            t(float): current time
        Returns:
            yout(array): differentiated array [f0', f1', f2'] at time t
    """
    yout = np.zeros_like(y)
    yout[:] = [y[1], y[2], RHS(t) - y[1] - y[0]*y[2]]
    return yout
#### Main Program starts here ####
t = symbols('t')
sigma = 0.5 # standard deviation
mu = 0.5 # mean value
#### Perform needed differentiations based on the differential equation ####
f = (1/(sigma*sqrt(2*pi)))*exp(-((t-mu)**2)/(2*sigma**2))
dfdt = diff(f, t)
d2fdt = diff(dfdt, t)
d3fdt = diff(d2fdt, t)
RHS = d3fdt + f*d2fdt + dfdt
#### Create Python functions of f, RHS and needed differentiations of f ####
f = lambdify([t], f, np)
dfdt = lambdify([t], dfdt, np)
d2fdt = lambdify([t], d2fdt)
RHS = lambdify([t], RHS)
t0, tend = -1.5, 2.5
z0 = np.array([f(t0), dfdt(t0), d2fdt(t0)]) # initial values
schemes = [euler, heun, rk4] # list of schemes; each of which is a function
schemes_error = {} # empty dictionary.
h = [] # empty list of time step
Ntds = 8 # number of times to refine dt
for k, scheme in enumerate(schemes):
    N = 20 # initial number of time steps
    error = [] # start off with an empty list of errors for this scheme
    for i in range(Ntds + 1):
        time = np.linspace(t0, tend, N+1)
        if k == 0:
            h.append(time[1] - time[0]) # add this iteration's dt to the list h
        z = scheme(func, z0, time) # e.g. euler(func, z0, time)
        fanalytic = f(time) # evaluate the manufactured solution
        abs_error = np.abs(z[:,0] - fanalytic) # pointwise error
        error.append(max(abs_error)) # infinity norm of the error
        N *= 2 # refine dt
    schemes_error[scheme.__name__] = error # add a key:value pair to the dictionary
ht = np.asarray(h)
eulerError = np.asarray(schemes_error["euler"])
heunError = np.asarray(schemes_error["heun"])
rk4Error = np.asarray(schemes_error["rk4"])
[C_euler, p_euler] = Newton_solver_sympy(eulerError, ht, [1, 1])
[C_heun, p_heun] = Newton_solver_sympy(heunError, ht, [1, 2])
[C_rk4, p_rk4] = Newton_solver_sympy(rk4Error, ht, [1, 4])
print(C_euler, p_euler)
print(C_heun, p_heun)
print(C_rk4, p_rk4)
from sympy import latex
h = symbols('h')
epsilon_euler = C_euler*h**p_euler
epsilon_euler_latex = '$' + latex(epsilon_euler) + '$'
epsilon_heun = C_heun*h**p_heun
epsilon_heun_latex = '$' + latex(epsilon_heun) + '$'
epsilon_rk4 = C_rk4*h**p_rk4
epsilon_rk4_latex = '$' + latex(epsilon_rk4) + '$'
epsilon_euler = lambdify(h, epsilon_euler, np)
epsilon_heun = lambdify(h, epsilon_heun, np)
epsilon_rk4 = lambdify(h, epsilon_rk4, np)
N = N//2**(Ntds + 2)
N_list = [N*2**i for i in range(1, Ntds + 2)]
N_list = np.asarray(N_list)
plt.figure()
plt.plot(np.log2(N_list), np.log2(eulerError), 'b')
plt.plot(np.log2(N_list), np.log2(epsilon_euler(ht)), 'b--')
plt.plot(np.log2(N_list), np.log2(heunError), 'g')
plt.plot(np.log2(N_list), np.log2(epsilon_heun(ht)), 'g--')
plt.plot(np.log2(N_list), np.log2(rk4Error), 'r')
plt.plot(np.log2(N_list), np.log2(epsilon_rk4(ht)), 'r--')
LegendList = [r'${\epsilon}_{Euler}$', epsilon_euler_latex, r'${\epsilon}_{Heun}$', epsilon_heun_latex, r'${\epsilon}_{RK4}$', epsilon_rk4_latex]
plt.legend(LegendList, loc='best', frameon=False, fontsize=20)
plt.xlabel('log2(N)')
plt.ylabel(r'log2($\epsilon$)')
#plt.savefig('../fig-ch1/MMS_example2.png')
plt.show()
Figure 2: Comparison of solutions at different timesteps for the different schemes

Figure 3: Error as a function of mesh points for different schemes
