
Solving a linear system

The .cfg configuration files allow a wide range of options for solving linear or non-linear systems.

We now consider the following example.

The example code is included from codes/mylaplacian.cpp (tag marker1).
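Since the included listing is not reproduced here, the following is a minimal sketch of what such a Feel++ Laplacian program typically looks like; the mesh, space and boundary condition choices are assumptions, not the actual contents of mylaplacian.cpp.

#include <feel/feel.hpp>

int main( int argc, char** argv )
{
    using namespace Feel;
    Environment env( _argc=argc, _argv=argv );

    // load a 2D simplex mesh and define a P1 Lagrange space
    auto mesh = loadMesh( _mesh=new Mesh<Simplex<2>> );
    auto Vh = Pch<1>( mesh );
    auto u = Vh->element();
    auto v = Vh->element();

    // right-hand side: int_Omega f v, with f = 1
    auto l = form1( _test=Vh );
    l = integrate( _range=elements(mesh), _expr=id(v) );

    // bilinear form: int_Omega grad(u).grad(v)
    auto a = form2( _trial=Vh, _test=Vh );
    a = integrate( _range=elements(mesh), _expr=gradt(u)*trans(grad(v)) );
    // homogeneous Dirichlet condition on the boundary
    a += on( _range=boundaryfaces(mesh), _rhs=l, _element=u, _expr=cst(0.) );

    // solve a(u,v) = l(v); the options discussed below control this step
    a.solve( _rhs=l, _solution=u );
}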

To execute this example:

# sequential
./feelpp_tut_laplacian
# parallel on 4 cores
mpirun -np 4 ./feelpp_tut_laplacian
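
The solver options discussed below are typically gathered in a .cfg file passed at run time; assuming a file named mylaplacian.cfg (a hypothetical name), it would be loaded with:

./feelpp_tut_laplacian --config-file=mylaplacian.cfg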


Direct solver

Cholesky and LU factorisations are available. However, the parallel implementation depends on the availability of MUMPS. The configuration is very simple:

# no iterative solver
ksp-type=preonly
# direct factorisation
pc-type=cholesky
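
Each option in the .cfg file can also be passed directly on the command line, prefixed with --; for instance, the direct solve above could equivalently be requested with:

./feelpp_tut_laplacian --ksp-type=preonly --pc-type=cholesky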

Using the PETSc backend allows choosing among several packages to compute the factorization.

Table 1. Factorization packages

Package   Description                                             Parallel
petsc     PETSc own implementation                                yes
mumps     MUltifrontal Massively Parallel sparse direct Solver    yes
umfpack   Unsymmetric MultiFrontal package                        no
pastix    Parallel Sparse matriX package                          yes

To choose between these factorization packages:

# choose mumps
pc-factor-mat-solver-package=mumps
# choose umfpack (sequential)
pc-factor-mat-solver-package=umfpack
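
Putting the pieces together, a parallel direct solve with MUMPS reads:

# direct solve with a Cholesky factorisation computed by MUMPS
ksp-type=preonly
pc-type=cholesky
pc-factor-mat-solver-package=mumps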

In order to perform a Cholesky-type factorisation, the underlying matrix must be marked SPD (symmetric positive definite).

// matrix
auto A = backend->newMatrix(_test=...,_trial=...,_properties=SPD);
// bilinear form
auto a = form2( _test=..., _trial=..., _properties=SPD );
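
As a sketch, assuming a function space Vh and a linear form l as in the example above (names are illustrative), the SPD property fits in as follows:

// declare the bilinear form as symmetric positive definite
auto a = form2( _test=Vh, _trial=Vh, _properties=SPD );
a = integrate( _range=elements(mesh), _expr=gradt(u)*trans(grad(v)) );
// with pc-type=cholesky, the factorisation is now admissible
a.solve( _rhs=l, _solution=u );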

Using iterative solvers

Using CG

To use CG preconditioned with ICC(3) and a relative tolerance of 1e-12:

ksp-rtol=1.e-12
ksp-type=cg
pc-type=icc
pc-factor-levels=3

Using GMRES

To use GMRES preconditioned with ILU(3), with a relative tolerance of 1e-12 and a restart of 300:

ksp-rtol=1.e-12
ksp-type=gmres
ksp-gmres-restart=300
pc-type=ilu
pc-factor-levels=3

Jacobi

To use GMRES with a Jacobi preconditioner, a relative tolerance of 1e-12 and a restart of 100:

ksp-rtol=1.e-12
ksp-type=gmres
ksp-gmres-restart=100
pc-type=jacobi

Monitoring linear, non-linear and eigenvalue problem solver residuals

# linear
ksp-monitor=1
# non-linear
snes-monitor=1
# eigenvalue problem
eps-monitor=1
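
As with the other options, the monitors can be enabled from the command line; for instance, to watch the linear solver residuals of the example above (a sketch, assuming default options otherwise):

mpirun -np 4 ./feelpp_tut_laplacian --ksp-monitor=1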