Method of Manufactured Solutions

Fredrik Eikeland Fossan

 

 

Feb 12, 2016

Let us consider a general differential equation:

 
$$ \begin{align} \tag{1} \boldsymbol{D}\boldsymbol{u} = 0 \end{align} $$

 

Method of Exact Solutions

  • Solve equation (1) analytically using traditional methods
  • Solve equation (1) numerically and compare with the analytical solution

Method of Manufactured Solutions

  • Start by choosing a solution \( \boldsymbol{u_m} \) (the manufactured solution)
  • Modify equation (1) by inserting \( \boldsymbol{u_m} \). The right-hand side is now nonzero:

 
$$ \begin{align} \tag{2} \boldsymbol{D}\boldsymbol{u} = \boldsymbol{g} \end{align} $$

 

  • Solve equation (2) numerically and compare with \( \boldsymbol{u_m} \)
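The two MMS steps above can be sketched directly with SymPy; here the operator \( \boldsymbol{D} \) is taken to be \( d/dt \) purely for illustration:

```python
from sympy import symbols, sin, diff

t = symbols('t')
u_m = sin(t)      # 1) choose a manufactured solution u_m
g = diff(u_m, t)  # 2) insert u_m into Du to obtain the source term g
print(g)          # cos(t)
```

The manufactured problem \( \boldsymbol{D}\boldsymbol{u} = \boldsymbol{g} \) then has \( \boldsymbol{u_m} \) as its exact solution by construction, which is what makes the comparison in the last step meaningful.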

Example with a simple ODE

 
$$ \begin{align} \tag{3} \frac{du}{dt} = g \nonumber \end{align} $$

 

where the operator \( \boldsymbol{D} \) in this case is \( \frac{d \cdot}{dt} \)

  • Choose \( u_m = \sin(t) \)
  • Compute \( g = \frac{d(\sin(t))}{dt} = \cos(t) \)
  • Compute the initial values from \( u_m \), and solve the following problem:

 
$$ \begin{align} \tag{4} \frac{du}{dt} = \cos(t) \\ u(t_0) = \sin(t_0) \end{align} $$

 

Coded example

  • Simple, but uses SymPy

# src-ch1/MMS_example1.py; ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch1/ODEschemes.py;

import numpy as np
import matplotlib.pyplot as plt
from ODEschemes import euler, heun, rk4
from math import pi
from sympy import symbols, diff, lambdify, sin

def func(y,t):
    """ Function that returns the du/dt of the differential equation
            du/dt = g
            
        Args:
            y(float): solution array u(t)
            t(float): current time

        Returns:
            yout(float): g(t)
    """
    return gfunc(t)

#### Main program starts here
t = symbols('t')
u = sin(t)
g = diff(u, t)

ufunc = lambdify(t, u, np) # create python function of the manufactured solution
gfunc = lambdify(t, g, np) # create python function of the source term g

N = 100
t0, tend = 0, 2*pi # domain
time = np.linspace(t0, tend, N)

u_0 = ufunc(t0) # initial value

uNum = euler(func, u_0, time)
uM = ufunc(time)

#plotting:
plt.figure()
plt.plot(time, uM, 'k')
plt.plot(time, uNum, 'r--')
plt.legend(['Manufactured', 'Numerical'], frameon=False)
plt.xlabel('t')
plt.ylabel('u')
plt.savefig('../fig-ch1/MMSExample0.png', transparent=True) # transparent=True
plt.show()

Determining the observed order

For consistent numerical methods, the discretization error \( \epsilon \) and the order \( p \) of the method are related as follows:

 
$$ \begin{align} \tag{5} \epsilon = Ch^p + \mathrm{H.O.T.} \approx Ch^p \end{align} $$

 

To determine the observed order, two or more solutions computed with different step lengths are needed:

 
$$ \begin{align} {\epsilon}_1 = Ch^p , \quad {\epsilon}_2 = C\left(\frac{h}{r}\right)^p, \quad p = \frac{\log\left(\frac{{\epsilon}_1}{{\epsilon}_2} \right)}{\log\left(r \right)} \tag{6} \end{align} $$

 

  • Solve the problem using step length \( h_1 \)
  • Compute the discretization error \( \epsilon_1 \), using a norm (RMS, infinity norm, or similar)
  • Reduce the step length by a factor \( r \), to \( h_2 \)
  • Solve the problem using step length \( h_2 \)
  • Compute the discretization error \( \epsilon_2 \)
  • Use equation (6) to compute the observed order
  • Repeat the steps above until convergence
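The steps above can be sketched for the explicit Euler scheme applied to \( du/dt = \cos(t) \), \( u(0) = 0 \) (a minimal, self-contained version that does not rely on ODEschemes.py):

```python
import numpy as np

def euler_solve(N):
    """Solve du/dt = cos(t), u(0) = 0, on [0, 2*pi] with N explicit Euler steps."""
    t = np.linspace(0, 2*np.pi, N + 1)
    h = t[1] - t[0]
    u = np.zeros(N + 1)
    for n in range(N):
        u[n + 1] = u[n] + h*np.cos(t[n])
    return t, u

errors = []
for N in (50, 100):                               # step length halved, so r = 2
    t, u = euler_solve(N)
    errors.append(np.max(np.abs(u - np.sin(t))))  # infinity norm of the error

p = np.log(errors[0]/errors[1])/np.log(2)         # observed order, equation (6)
```

For Euler's method the observed order should approach the theoretical value \( p = 1 \) as the step length is refined.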

RMS and infinity error norms:

 
$$ \begin{align} \tag{7} E_{RMS} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( Ua_i- Un_i\right)^2} , \quad E_{\infty} = \max_i \left| Ua_i- Un_i\right| \end{align} $$

 
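Both norms can be computed in a line each with NumPy; `Un` below is a synthetic "numerical" solution, used only to make the sketch runnable:

```python
import numpy as np

time = np.linspace(0, 2*np.pi, 101)
Ua = np.sin(time)              # manufactured solution sampled on the grid
Un = Ua + 1e-3*np.cos(time)    # synthetic numerical solution (illustration only)

E_rms = np.sqrt(np.mean((Ua - Un)**2))  # RMS error norm
E_inf = np.max(np.abs(Ua - Un))         # infinity error norm
```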

Coded example

  • Using MMS together with a comparison of observed and theoretical order gives a very general and robust test

# src-ch1/MMS_example1.py; ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch1/ODEschemes.py;

import numpy as np
import matplotlib.pyplot as plt
from ODEschemes import euler, heun, rk4
from math import sqrt, pi
from sympy import symbols, diff, lambdify, sin

def func(y,t):
    """ Function that returns the du/dt of the differential equation
            du/dt = g
            
        Args:
            y(float): solution array u(t)
            t(float): current time

        Returns:
            yout(float): g(t)
    """
    return gfunc(t)

#### Use sympy to calculate the source term g based on the operator d/dt, and um ####
t = symbols('t')
u = sin(t)
g = diff(u, t)
ufunc = lambdify(t, u, np) # create python function of the manufactured solution.
gfunc = lambdify(t, g, np) # create python function of the source term g


t0, tend = 0, 2*pi # domain
z0 = ufunc(t0) # initial value

schemes = [euler, heun, rk4] # list of solvers imported from ODEschemes.py. 
schemes_order = {} # dict to be filled in with order approximations for all schemes

Ndts = 5 # Number of times to refine timestep in convergence test

for scheme in schemes: # iterate through all schemes
    N = 10    # initial number of time steps
    order_approx = [] # start off with an empty list of orders for each scheme
    
    for i in range(Ndts+1):
        time = np.linspace(t0, tend, N+1)
        z = scheme(func, z0, time) # e.g. euler(func, z0, time)
        max_error = max(abs(z[:,0] - ufunc(time))) # calculate infinity norm 
        
        if i > 0: # Compute the observed order of accuracy based on two error norms
            order = np.log(previous_max_error/max_error)/np.log(2)
            order_approx.append(order)
            
        previous_max_error = max_error
        N *=2 # double number of timestep (halve dt)
    
    schemes_order[scheme.__name__] = order_approx # list of orders assigned to scheme

#### Plotting ####
N = N//2**Ndts
N_list = [N*2**i for i in range(1, Ndts+1)]
N_list = np.asarray(N_list)

plt.figure()
for key in schemes_order: # Iterate all keys in dictionary schemes_order 
    plt.plot(N_list, (np.asarray(schemes_order[key])))

# Plot theoretical n for 1st, 2nd and 4th order schemes
plt.axhline(1.0, linestyle=':', color='k')
plt.axhline(2.0, linestyle=':', color='k')
plt.axhline(4.0, linestyle=':', color='k')
plt.xticks(N_list, rotation=-70)
legends = list(schemes_order.keys())
legends.append('theoretical')
plt.legend(legends, loc='best', frameon=False)
plt.title('Observed order of accuracy and MMS')
plt.xlabel('Number of time steps')
plt.ylabel('Scheme order approximation')
plt.axis([0, max(N_list), 0, 5])
#plt.savefig('../figs/MMSExample1.png') # transparent=True
plt.show()

An example with a slightly harder ODE

 
$$ \begin{align} \tag{8} u''' + u'' \cdot u + u' = g \nonumber \end{align} $$

 

  • Restrictions on the choice of \( u_m \)

\( u_m = \frac{1}{\sigma \sqrt{2 \pi}}e^{-\frac{(t - \mu)^2}{2 {\sigma}^2}} \) gives:

 
$$ \begin{align} \tag{9} g = \frac{\sqrt{2}}{2 \sqrt{\pi} \sigma} e^{- \frac{\left(- \mu + t\right)^{2}}{2 \sigma^{2}}} - \frac{\sqrt{2}}{4 \sqrt{\pi} \sigma^{3}} \left(- 2 \mu + 2 t\right) \left(- \frac{\sqrt{2}}{2 \sqrt{\pi} \sigma^{3}} e^{- \frac{\left(- \mu + t\right)^{2}}{2 \sigma^{2}}} + \\ \frac{\sqrt{2}}{8 \sqrt{\pi} \sigma^{5}} \left(- 2 \mu + 2 t\right)^{2} e^{- \frac{\left(- \mu + t\right)^{2}}{2 \sigma^{2}}}\right) e^{- \frac{\left(- \mu + t\right)^{2}}{2 \sigma^{2}}} + \frac{\sqrt{2}}{8 \sqrt{\pi} \sigma^{5}} \left(- 8 \mu + 8 t\right) e^{- \frac{\left(- \mu + t\right)^{2}}{2 \sigma^{2}}} \\ + \frac{\sqrt{2}}{4 \sqrt{\pi} \sigma^{5}} \left(- 2 \mu + 2 t\right) e^{- \frac{\left(- \mu + t\right)^{2}}{2 \sigma^{2}}} - \frac{\sqrt{2}}{16 \sqrt{\pi} \sigma^{7}} \left(- 2 \mu + 2 t\right)^{3} e^{- \frac{\left(- \mu + t\right)^{2}}{2 \sigma^{2}}} \nonumber \end{align} $$

 

This is where SymPy comes into its own. We get the following system to solve:

 
$$ \begin{align} \tag{10} u_0 ' = u_1 \\ u_1 ' = u_2 \\ u_2 ' = g - u_2 \cdot u_0 - u_1\\ u_0\left(t_0 \right) = u_m\left(t_0 \right), u_1\left(t_0 \right) = u_m'\left(t_0 \right), u_2\left(t_0 \right) = u_m ''\left(t_0 \right) \nonumber \end{align} $$

 

An alternative way to compute the observed order

If the discretization error has been computed for two different step lengths, we get two equations:

 
$$ \begin{align} \tag{11} f_1 = Ch_1^p - {\epsilon}_1 = 0\\ f_2 = Ch_2^p - {\epsilon}_2 = 0 \nonumber \end{align} $$

 

These are two nonlinear equations, which can be solved with the Newton-Raphson method in two dimensions:

 
$$ \begin{equation*} \begin{aligned} \boldsymbol{x_{i+1}}= \boldsymbol{x_i}-(\boldsymbol{J_i})^{-1}\boldsymbol{f}(\boldsymbol{x_i}) \end{aligned} \end{equation*} $$

 
where \( \boldsymbol{f} = [f_1, f_2]^T \), \( \boldsymbol{x} = [C, p]^T \) and \( \boldsymbol{J} \) is the Jacobian matrix:

 
$$ \begin{equation*} \begin{aligned} J_{jk}=\frac{{\partial}f_j}{{\partial} x_k} = \begin{bmatrix} \frac{\partial f_1}{\partial C} & \frac{\partial f_1}{\partial p}\\ \frac{\partial f_2}{\partial C} & \frac{\partial f_2}{\partial p} \\ \end{bmatrix} \end{aligned} \end{equation*} $$

 

Newton-Raphson solver implemented with SymPy:

def Newton_solver_sympy(error, h, x0):
    """ Function that solves for the nonlinear set of equations
        error1 = C*h1^p --> f1 = C*h1^p - error1 = 0
        error2 = C*h2^p --> f2 = C h2^p - error 2 = 0
        where C is a constant, h is the step length and p is the order,
        using a Newton-Raphson solver. In this case C and p are
        the unknowns, whereas h and error are known. The Newton-Raphson
        method is an iterative solver of the form:
        xnew = xold - (J^-1)*F, where J is the Jacobian matrix and F is the
        residual function.
            x = [C, p]^T
            J = [[df1/dC  df1/dp],
                 [df2/dC  df2/dp]]
            F = [f1, f2]
        This is very neatly done with the sympy module
        
        Args:
            error(list): list of calculated errors [error(h1), error(h2)]
            h(list): list of steplengths corresponding to the list of errors
            x0(list): list of starting (guessed) values for x
        
        Returns:
            x(array): iterated solution of x = [C, p]

    """
    from sympy import Matrix
    #### Symbolic computations: ####
    C, p = symbols('C p')
    f1 = C*h[-2]**p - error[-2]
    f2 = C*h[-1]**p - error[-1]
    F = [f1, f2]
    x = [C, p]
    
    def jacobiElement(i,j):
        return diff(F[i], x[j])
        
    Jacobi = Matrix(2, 2, jacobiElement) # neat way of computing the Jacobian matrix
    JacobiInv = Jacobi.inv()
    #### Numerical computations: ####
    JacobiInvfunc = lambdify([x], JacobiInv)
    Ffunc = lambdify([x], F)
    x = x0
    
    for n in range(8): #perform 8 iterations
        F = np.asarray(Ffunc(x))
        Jinv = np.asarray(JacobiInvfunc(x))
        xnew = x - np.dot(Jinv, F)
        x = xnew
        #print("n, x: ", n, x)
    x[0] = round(x[0], 2)
    x[1] = round(x[1], 3)
 
    return x
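A quick sanity check of the idea: generate errors from known values C = 0.5 and p = 2, and verify that the Newton-Raphson iteration recovers them (a compact standalone version of the function above):

```python
import numpy as np
from sympy import symbols, diff, lambdify, Matrix

h = [0.1, 0.05]
error = [0.5*hi**2 for hi in h]  # synthetic errors generated with C = 0.5, p = 2

C, p = symbols('C p')
F = [C*h[0]**p - error[0], C*h[1]**p - error[1]]
x = [C, p]
Jinv = Matrix(2, 2, lambda i, j: diff(F[i], x[j])).inv()  # symbolic inverse Jacobian
Ffunc = lambdify([x], F)
Jinvfunc = lambdify([x], Jinv)

x = np.array([1.0, 1.0])  # initial guess for [C, p]
for n in range(8):
    x = x - np.dot(np.asarray(Jinvfunc(x), dtype=float),
                   np.asarray(Ffunc(x), dtype=float))
```

After the iterations, x should be close to [0.5, 2.0].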

Coded example

# src-ch1/MMS_example2.py; ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch1/ODEschemes.py;
 
import numpy as np
import matplotlib.pylab as plt
from ODEschemes import euler, heun, rk4
from sympy import exp, symbols, diff, lambdify
from math import sqrt, pi

# Newton solver
def Newton_solver_sympy(error, h, x0):
    """ Function that solves for the nonlinear set of equations
        error1 = C*h1^p --> f1 = C*h1^p - error1 = 0
        error2 = C*h2^p --> f2 = C h2^p - error 2 = 0
        where C is a constant, h is the step length and p is the order,
        using a Newton-Raphson solver. In this case C and p are
        the unknowns, whereas h and error are known. The Newton-Raphson
        method is an iterative solver of the form:
        xnew = xold - (J^-1)*F, where J is the Jacobian matrix and F is the
        residual function.
            x = [C, p]^T
            J = [[df1/dC  df1/dp],
                 [df2/dC  df2/dp]]
            F = [f1, f2]
        This is very neatly done with the sympy module
        
        Args:
            error(list): list of calculated errors [error(h1), error(h2)]
            h(list): list of steplengths corresponding to the list of errors
            x0(list): list of starting (guessed) values for x
        
        Returns:
            x(array): iterated solution of x = [C, p]

    """
    from sympy import Matrix
    #### Symbolic computations: ####
    C, p = symbols('C p')
    f1 = C*h[-2]**p - error[-2]
    f2 = C*h[-1]**p - error[-1]
    F = [f1, f2]
    x = [C, p]
    
    def jacobiElement(i,j):
        return diff(F[i], x[j])
        
    Jacobi = Matrix(2, 2, jacobiElement) # neat way of computing the Jacobian matrix
    JacobiInv = Jacobi.inv()
    #### Numerical computations: ####
    JacobiInvfunc = lambdify([x], JacobiInv)
    Ffunc = lambdify([x], F)
    x = x0
    
    for n in range(8): #perform 8 iterations
        F = np.asarray(Ffunc(x))
        Jinv = np.asarray(JacobiInvfunc(x))
        xnew = x - np.dot(Jinv, F)
        x = xnew
        #print("n, x: ", n, x)
    x[0] = round(x[0], 2)
    x[1] = round(x[1], 3)
 
    return x

# Differential function
def func(y,t):
    """ Function that returns the dfn/dt of the differential equation f''' + f''*f + f' = RHS
        as a system of 1st order equations; f = f0 
                f0' = f1 
                f1' = f2
                f2' = RHS - f2*f0 - f1 
    
        Args:
            y(array): solution array [f0, f1, f2] at time t
            t(float): current time

        Returns:
            yout(array): differentiation array [f0', f1', f2'] at time t
    """
    yout = np.zeros_like(y)
    yout[:] = [y[1], y[2], RHS(t) - y[1] - y[0]*y[2]]
    
    return yout

#### Main Program starts here ####

t = symbols('t')
sigma = 0.5 # standard deviation
mu = 0.5 # mean value

#### Perform needed differentiations based on the differential equation ####
f = (1/(sigma*sqrt(2*pi)))*exp(-((t-mu)**2)/(2*sigma**2))
dfdt = diff(f, t)
d2fdt = diff(dfdt, t)
d3fdt = diff(d2fdt, t)
RHS = d3fdt + f*d2fdt + dfdt

#### Create Python functions of f, RHS and needed differentiations of f ####
f = lambdify([t], f, np)
dfdt = lambdify([t], dfdt, np)
d2fdt = lambdify([t], d2fdt)
RHS = lambdify([t], RHS)

t0, tend = -1.5, 2.5
z0 = np.array([f(t0), dfdt(t0), d2fdt(t0)]) # initial values

schemes = [euler, heun, rk4] # list of schemes; each of which is a function
schemes_error = {} # empty dictionary. 
h = [] # empty list of time step

Ntds = 8 # number of times to refine dt
 
for k, scheme in enumerate(schemes):
    N = 20    # initial number of time steps
    error = [] # start off with an empty list of errors for each scheme
    legendList = [] 
    
    for i in range(Ntds + 1):
        time = np.linspace(t0, tend, N+1)
        
        if k==0:
            h.append(time[1] - time[0]) # add this iteration's dt to list h
        z = scheme(func, z0, time) # e.g: euler(func, z0, time) 
        fanalytic = f(time) # call analytic function f 
        
        abs_error = np.abs(z[:,0]- fanalytic) # calculate infinity norm
        error.append(max(abs_error))
        
        N *=2 # refine dt
    
    schemes_error[scheme.__name__] = error # Add a key:value pair to the dictionary


ht = np.asarray(h) 
eulerError = np.asarray(schemes_error["euler"])
heunError = np.asarray(schemes_error["heun"])
rk4Error = np.asarray(schemes_error["rk4"])

[C_euler, p_euler] = Newton_solver_sympy(eulerError, ht, [1,1])
[C_heun, p_heun] = Newton_solver_sympy(heunError, ht, [1,2])
[C_rk4, p_rk4] = Newton_solver_sympy(rk4Error, ht, [1,4])

from sympy import latex
h = symbols('h')
epsilon_euler = C_euler*h**p_euler
epsilon_euler_latex = '$' + latex(epsilon_euler) + '$'
epsilon_heun = C_heun*h**p_heun
epsilon_heun_latex = '$' + latex(epsilon_heun) + '$'
epsilon_rk4 = C_rk4*h**p_rk4
epsilon_rk4_latex = '$' + latex(epsilon_rk4) + '$'

epsilon_euler = lambdify(h, epsilon_euler, np)
epsilon_heun = lambdify(h, epsilon_heun, np)
epsilon_rk4 = lambdify(h, epsilon_rk4, np)

N = N//2**(Ntds + 2)
N_list = [N*2**i for i in range(1, Ntds + 2)]
N_list = np.asarray(N_list)

plt.figure()
plt.plot(N_list, np.log2(eulerError), 'b')
plt.plot(N_list, np.log2(epsilon_euler(ht)), 'b--')
plt.plot(N_list, np.log2(heunError), 'g')
plt.plot(N_list, np.log2(epsilon_heun(ht)), 'g--')
plt.plot(N_list, np.log2(rk4Error), 'r')
plt.plot(N_list, np.log2(epsilon_rk4(ht)), 'r--')
LegendList = [r'${\epsilon}_{euler}$', epsilon_euler_latex, r'${\epsilon}_{heun}$', epsilon_heun_latex, r'${\epsilon}_{rk4}$', epsilon_rk4_latex]
plt.legend(LegendList, loc='best', frameon=False)
plt.xlabel('N')
plt.ylabel(r'log2($\epsilon$)')
plt.savefig('../fig-ch1/MMS_example2.png')
plt.show()