Quick Reference

 In this chapter, we develop a quick reference for the various stages of a simulation using Feel++.

CMake and Feel++ applications

Feel++ offers a development environment for solving partial differential equations. It relies on many tools to cover the different steps (pre-processing, processing and post-processing) and a large range of numerical methods and needs. To this end it is crucial to have a powerful build environment. Feel++ uses CMake from Kitware and provides various macros to help set up your own application or research project.

CMake macros

Setting up Feel++ environment

See section Using Feel++.

For a given application, or multiple applications, you may define testcases. Testcases are directories containing a set of files that may include geometry, mesh, cfg or json files.

To define a new testcase, create a sub-directory where your application, say `myapp` as in the previous section, stands and copy the required files there.

``````cd <source directory of my application>
mkdir mytestcase
# copy files (.geo, .msh, .cfg...) to mytestcase
...``````

then edit the `CMakeLists.txt` in your application directory and add the following line:

``feelpp_add_testcase(mytestcase)``

Then type `make feelpp_add_testcase_mytestcase` in the build directory of your application `myapp`. It will copy the directory `mytestcase` into the build directory of your application.

INFO: if you updated the testcase data files, executing `make feelpp_add_testcase_mytestcase` again will use `rsync` to update only the files that changed in the source.

The macro `feelpp_add_testcase` supports options:

`PREFIX`

(default is `feelpp`) sets the prefix of the target, e.g. to avoid name clashes

``feelpp_add_testcase(mytestcase PREFIX foo)``

then the target is `foo_add_testcase_mytestcase`.

`DEPS`

sets the dependencies of the testcase

``feelpp_add_testcase(mytestcase DEPS myothertestcase)``

it allows a testcase to be updated when another one changes.
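Putting these macros together, a minimal `CMakeLists.txt` for an application and its testcases might look as follows. This is a sketch: `feelpp_add_application` and its `SRCS` argument are shown as commonly used, but check the macro signatures shipped with your Feel++ version.

```cmake
# sketch of an application CMakeLists.txt (adapt names and paths to your project)
feelpp_add_application( myapp SRCS myapp.cpp )

# copy the testcase data (geo/msh/cfg/json files) into the build directory
feelpp_add_testcase( mytestcase )

# custom target prefix and a dependency on another testcase
feelpp_add_testcase( mytestcase2 PREFIX foo DEPS mytestcase )
```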

Setting runtime environment

In this section, we present some tools to initialize and manipulate the Feel++ environment.

Initialize Feel++

The `Environment` class is necessary to initialize your application, as seen in FirstApp. The interface is as follows:

``Environment env( _argc, _argv, _desc, _about );``

None of these parameters is required, but the following minimal declaration is highly recommended:

``````Environment env( _argc=argc, _argv=argv,
_desc=feel_options(),
_author="your_name",
_email="your@email.com" );``````
• `_argc` and `_argv` are the arguments of your main function.

• `_desc` is a description of your options.

• `_about` is a brief description of your application.

Options Description

`feel_options()` returns a list of default options used in Feel++.

You can create your own list of options as follows:

``````using namespace Feel;
inline
po::options_description
makeOptions()
{
    po::options_description myappOptions( "My app options" );
    myappOptions.add_options()
        ( "option1", po::value<type1>()->default_value( value1 ), "description1" )
        ( "option2", po::value<type2>()->default_value( value2 ), "description2" )
        ( "option3", po::value<type3>()->default_value( value3 ), "description3" )
        ;
    return myappOptions;
}``````

`makeOptions` is the usual name of this routine, but you can change it; `myappOptions` is the name of your options list.

 Parameter      Description
 `option`       the name of the parameter
 `type`         the type of the parameter
 `value`        the default value of the parameter
 `description`  the description of the parameter

You can then use `makeOptions()` to initialize the Feel++ Environment as follows

``````Environment env( _argc=argc, _argv=argv,
_desc=makeOptions(),
_author="myname",
_email="my@email.com" );``````

Then, at runtime, you can change a parameter on the command line, e.g. `./myapp --option1=somevalue`, or in a `.cfg` configuration file.

• look into `systemGeoRepository()` which is usually `$FEELPP_DIR/share/feel/geo`

If `filename` is not found, then the empty string is returned.

Utility functions

Communications

Most data structures in Feel++ are parallel and are associated with a `WorldComm` data structure which allows us to access and manipulate the MPI communicators. We provide some utility free functions that give transparent access to the `WorldComm` data structure. We denote by `c` a Feel++ data structure associated with a `WorldComm`.

 Feel++ Keyword   Description
 `rank(c)`        returns the local MPI rank of the data structure `c`
 `globalRank(c)`  returns the global MPI rank of the data structure `c`

For example, to print the rank of a mesh data structure:

``````// initialise environment...
auto mesh = makeMesh<Simplex<2,1>>();
std::cout << "local rank : " << rank(mesh) << "\n";``````

Using computational meshes

Introduction

Feel++ provides some tools to manipulate meshes. Here is a basic example that shows how to generate a mesh for a square geometry.

Excerpt from `codes/mymesh.cpp`
``Unresolved directive in Mesh/README.adoc - include::../../../../codes/03-mymesh.cpp[tag=mesh]``

As always, we initialize the Feel++ environment (see section FirstApp). The `unitSquare()` function will generate a mesh for a square geometry. Feel++ provides several functions to automate the Gmsh mesh generation for different topologies. These functions will create a geometry file `.geo` and a mesh file `.msh`. We can visualize them in Gmsh:

``$ gmsh <entity_name>.msh``

Finally we use the `exporter()` function (see Exporter) to export the mesh for post-processing. By default it creates a Paraview format file `.sos` and an Ensight format file `.case`.

Examples

Load a mesh data structure from the file `$HOME/feel/mymesh.msh`.

``````auto mesh = loadMesh(_mesh=new mesh_type,
_filename="mymesh.msh");``````

Load a geometric structure from the file `./mygeo.geo` and automatically create a mesh data structure.

``````auto mesh = loadMesh(_mesh=new mesh_type,
_filename="mygeo.geo");``````

Create a mesh data structure from the file `./feel.geo`.

``auto mesh = loadMesh( _mesh=new Mesh<Simplex<2>> );``

To load only a `.msh` file, you can also use `loadGMSHMesh`.

Interface:

``mesh_ptrtype loadGMSHMesh(_mesh, _filename, _refine, _update, _physical_are_elementary_regions);``

Required Parameters:

• `_mesh` a mesh data structure.

• `_filename` filename with extension.

Optional Parameters:

• `_refine` optionally refine the mesh, with `refine` the number of refinement levels.

• Default = `0`

• `_update` update the mesh data structure (build internal faces and edges).

• Default =`true`

• `_physical_are_elementary_regions` to load specific mesh formats.

• Default = `false`

The file you want to load has to be in an appropriate repository. See LoadMesh.

Examples

From `doc/manual/heatns.cpp`

`````` mesh_ptrtype mesh = loadGMSHMesh( _mesh=new mesh_type,
_filename="piece.msh",
_update=MESH_CHECK|MESH_UPDATE_FACES|MESH_UPDATE_EDGES|MESH_RENUMBER );``````

From `applications/check/check.cpp`

``````mesh = loadGMSHMesh( _mesh=new mesh_type,
_filename=soption("filename"),
_rebuild_partitions=(Environment::worldComm().size() > 1),
_update=MESH_RENUMBER|MESH_UPDATE_EDGES|MESH_UPDATE_FACES|MESH_CHECK );``````

Creating Meshes

createGMSHMesh

Interface
``mesh_ptrtype createGMSHMesh(_mesh, _desc, _h, _order, _parametricnodes, _refine, _update, _force_rebuild, _physical_are_elementary_regions);``

Required Parameters:

• `_mesh` mesh data structure.

• `_desc` description. See below.

Optional Parameters:

• `_h` characteristic size.

• Default = `0.1`

• `_order` order.

• Default = `1`

• `_parametricnodes`

• Default = `0`

• `_refine` optionally refine the mesh, with `refine` the number of refinement levels.

• Default =`0`

• `_update` update the mesh data structure (build internal faces and edges).

• Default =`true`

• `_force_rebuild` rebuild mesh if already exists.

• Default = `false`

• `_physical_are_elementary_regions` to load specific mesh formats.

• Default = `false`

To generate your mesh you need a description parameter. It can be created by one of the two following functions: `geo` or `domain`.

geo

Use this function to create a description from a `.geo` file.

Interface
``gmsh_ptrtype geo(_filename, _h, _dim, _order, _files_path);``

Required Parameters:

• `_filename`: file to load.

Optional Parameters:

• `_h` characteristic size of the mesh.

• Default = `0.1`

• `_dim` dimension.

• Default = `3`

• `_order` order.

• Default = `1`

• `_files_path` path to the file.

• Default = `localGeoRepository()`

The file you want to load has to be in an appropriate repository. See LoadMesh.

Example

From `doc/manual/heat/ground.cpp`

``````mesh = createGMSHMesh( _mesh=new mesh_type,
_desc=geo( _filename="ground.geo",
_dim=2,
_order=1,
_h=meshSize ) );``````

Excerpt from `doc/manual/fd/penalisation.cpp`

``````mesh = createGMSHMesh( _mesh=new mesh_type,
_desc=geo( _filename=File_Mesh,
_dim=Dim,
_h=doption(_name="gmsh.hsize") ),
_update=MESH_CHECK|MESH_UPDATE_FACES|MESH_UPDATE_EDGES|MESH_RENUMBER );``````

domain

Use this function to generate a simple geometrical domain from parameters.

Interface
``````gmsh_ptrtype domain(_name, _shape, _h, _dim, _order, _convex, \
_addmidpoint, _xmin, _xmax, _ymin, _ymax, _zmin, _zmax);``````

Required Parameters:

• `_name` name of the file that will be generated, without extension.

• `_shape` shape of the domain to be generated (simplex or hypercube).

Optional Parameters:

• `_h` characteristic size of the mesh.

• Default = `0.1`

• `_dim` dimension of the domain.

• Default = `2`

• `_order` order of the geometry.

• Default = `1`

• `_convex` type of convex used to mesh the domain.

• Default = `simplex`

• `_addmidpoint` add middle point.

• Default = `true`

• `_xmin` minimum x coordinate.

• Default = `0`

• `_xmax` maximum x coordinate.

• Default = `1`

• `_ymin` minimum y coordinate.

• Default = `0`

• `_ymax` maximum y coordinate.

• Default = `1`

• `_zmin` minimum z coordinate.

• Default = `0`

• `_zmax` maximum z coordinate.

• Default = `1`

Example

From `doc/manual/laplacian/laplacian.cpp`

``````mesh_ptrtype mesh = createGMSHMesh( _mesh=new mesh_type,
_desc=domain( _name=( boost::format( "%1%-%2%" ) % shape % Dim ).str() ,
_usenames=true,
_shape=shape,
_h=meshSize,
_xmin=-1,
_ymin=-1 ) );``````

From `doc/manual/stokes/stokes.cpp`

``````mesh = createGMSHMesh( _mesh=new mesh_type,
_desc=domain( _name=(boost::format("%1%-%2%-%3%")%"hypercube"%convex_type().dimension()%1).str() ,
_shape="hypercube",
_dim=convex_type().dimension(),
_h=meshSize ) );``````

From `doc/manual/solid/beam.cpp`

``````mesh_ptrtype mesh = createGMSHMesh( _mesh=new mesh_type,
_update=MESH_UPDATE_EDGES|MESH_UPDATE_FACES|MESH_CHECK,
_desc=domain( _name=( boost::format( "beam-%1%" ) % nDim ).str(),
_shape="hypercube",
_xmin=0., _xmax=0.351,
_ymin=0., _ymax=0.02,
_zmin=0., _zmax=0.02,
_h=meshSize ) );``````

Mesh iterators

Feel++ mesh data structure allows to iterate over its entities: elements, faces, edges and points.

The following table describes free functions that define the mesh region over which to operate. `MeshType` denotes the type of mesh passed to the free functions in the table.

 MeshType can be a pointer, a shared_pointer or a reference to a mesh type.

For example :

``````auto mesh = loadMesh( _mesh=new Mesh<Simplex<2>> );
auto r1 = elements(mesh); // OK
auto r2 = elements(*mesh); // OK``````
Table 2. Table of mesh iterators
Type Function Description

`elements_t<MeshType>`

`elements(mesh)`

All the elements of a mesh

`markedelements_t<MeshType>`

`markedelements(mesh, id)`

All the elements marked with marker `id`

`boundaryelements_t<MeshType>`

`boundaryelements(mesh)`

All the elements of the mesh which share a face with the boundary of the mesh.

`internalelements_t<MeshType>`

`internalelements(mesh)`

All the elements of the mesh which do not share a face with the boundary of the mesh.

`pid_faces_t<MeshType>`

`faces(mesh)`

All the faces of the mesh.

`markedfaces_t<MeshType>`

`markedfaces(mesh)`

All the faces of the mesh which are marked.

`boundaryfaces_t<MeshType>`

`boundaryfaces(mesh)`

All the faces of the mesh located on its boundary. These entities have topological dimension one below that of the mesh: for a 2D mesh, `boundaryfaces(mesh)` returns lines (dimension 2-1=1).

`internalfaces_t<MeshType>`

`internalfaces(mesh)`

All the faces of the mesh which are strictly within the domain, that is to say they do not lie on the boundary.

`edges_t<MeshType>`

`edges(mesh)`

All the edges of the mesh.

`boundaryedges_t<MeshType>`

`boundaryedges(mesh)`

All boundary edges of the mesh.

`points_t<MeshType>`

`points(mesh)`

All the points of the mesh.

`markedpoints_t<MeshType>`

`markedpoints(mesh,id)`

All the points of the mesh marked with marker `id`.

`boundarypoints_t<MeshType>`

`boundarypoints(mesh)`

All boundary points of the mesh.

`internalpoints_t<MeshType>`

`internalpoints(mesh)`

All internal points of the mesh (not on the boundary).

Here are some examples of how to use these functions:

``````auto mesh = ...;

auto r1 = elements(mesh);
// iterate over the set of elements local to the process (no ghost cells selected, see next section)
for ( auto const&  e : r1 )
{
auto const& elt = unwrap_ref( e );
// work with element elt
...
}

auto r2 = markedelements(mesh,"iron");
// iterate over the set of elements marked iron in the mesh
for ( auto const&  e : r2 )
{
auto const& elt = unwrap_ref( e );
// work with element elt
...
}

auto r3 = boundaryfaces(mesh);
// iterate over the set of faces on the boundary of the mesh
for ( auto const&  e : r3 )
{
auto const& elt = unwrap_ref( e );
// work with element elt
...
}

auto r4 = markededges(mesh,"line");
// iterate over the set of edges marked "line" in the mesh
for ( auto const&  e : r4 )
{
auto const& elt = unwrap_ref( e );
// work with element elt
...
}``````

Extended set of entities

Feel++ also allows selecting extended sets of entities from the mesh: you can extract entities which belong to the local process, but also ghost entities which satisfy the same property as the local ones.

You can select both, or only one of them, thanks to the enum `entity_process_t`, which provides the following options:

 entity_process_t  Description
 `LOCAL_ONLY`      only local entities
 `GHOST_ONLY`      only ghost entities
 `ALL`             both local and ghost entities

 Type              Function                                       Description
 `ext_elements_t`  `elements(mesh,entity_process_t)`              all the elements of mesh associated to entity_process_t.
 `ext_elements_t`  `markedelements(mesh, id, entity_process_t)`   all the elements marked id of mesh associated to entity_process_t.
 `ext_faces_t`     `faces(mesh,entity_process_t)`                 all the faces of mesh associated to entity_process_t.
 `ext_faces_t`     `markedfaces(mesh, id, entity_process_t)`      all the faces marked id of mesh associated to entity_process_t.
 `ext_edges_t`     `edges(mesh,entity_process_t)`                 all the edges of mesh associated to entity_process_t.
 `ext_edges_t`     `markededges(mesh, id, entity_process_t)`      all the edges marked id of mesh associated to entity_process_t.
 The type of the object returned for an entity is always the same: for elements it is `ext_elements_t`, whether the elements are marked or not. The reason is that we have to create a temporary data structure, embedded in the range object, that stores references to the selected elements.

Here is how to select both local and ghost elements from a Mesh

``````auto mesh =...;
auto r = elements(mesh,entity_process_t::ALL);
for (auto const& e : r )
{
// do something on the local and ghost element
...
// do something special on ghost cells
if ( unwrap_ref(e).isGhostCell() )
{...}
}``````

Concatenate sets of entities

Denote by \mathcal{E}_{1}, \ldots ,\mathcal{E}_{n}, n disjoint sets of the same type of entities (e.g. elements, faces, edges or points), i.e. \mathcal{E}_i \cap \mathcal{E}_j = \emptyset for i \neq j.

We wish to build their union \cup_{i=1}^{n} \mathcal{E}_i. To this end, we use `concatenate`, which takes an arbitrary number of disjoint sets.

``````#include <feel/feelmesh/concatenate.hpp>
...
auto E_1 = internalfaces(mesh);
auto E_2 = markedfaces(mesh,"Gamma_1");
auto E_3 = markedfaces(mesh,"Gamma_2");
auto newset = concatenate( E_1, E_2, E_3 );
cout << "measure of newset = " << integrate(_range=newset, _expr=cst(1.)).evaluate() << std::endl;``````

Compute the complement of a set of entities

Denote \mathcal{E} a set of entities, e.g. the set of all faces (both internal and boundary faces), and denote \mathcal{E}_\Gamma a set of entities marked by \Gamma. We wish to build {\Gamma}^c=\mathcal{E}\backslash\Gamma. To compute the complement, Feel++ provides a `complement` template function that requires \mathcal{E} and a predicate that returns `true` if an entity of \mathcal{E} belongs to \Gamma^c, `false` otherwise. The function returns mesh iterators over \Gamma^c.

``````#include <feel/feelmesh/complement.hpp>
...
auto E = faces(mesh);
// build set of boundary faces, equivalent to boundaryfaces(mesh)
auto bdyfaces = complement(E,[](auto const& e){ return e.isOnBoundary(); });
cout << "measure of bdyfaces = " << integrate(_range=bdyfaces, _expr=cst(1.)).evaluate() << std::endl;
// should be the same as above
cout << "measure of boundaryfaces = " << integrate(_range=boundaryfaces(mesh), _expr=cst(1.)).evaluate() << std::endl;``````

Helper function on entities set

Feel++ provides some helper functions to apply on set of entities. We denote by range_t the type of the entities set.

 Type         Function                   Description
 `size_type`  `nelements(range_t,bool)`  returns the number of elements in the entities set range_t: the local number if the bool is false, otherwise the global number, which requires communication (default: global number)
 `WorldComm`  `worldComm(range_t)`       returns the WorldComm associated to the entities set

Create a new range

A range can also be built directly by the user. Such a customized range is stored in a std container holding C++ references to the entity objects. We use `boost::reference_wrapper` to take C++ references and avoid copying mesh data. All entities enumerated in the range must have the same type (elements, faces, edges or points). Below is an example which selects all active elements of the mesh for the current partition (i.e. identical to `elements(mesh)`).

``````auto mesh = ...;
// define reference entity type
typedef boost::reference_wrapper<typename mesh_type::element_type const> element_ref_type;
// store entities in a vector
typedef std::vector<element_ref_type> cont_range_type;
boost::shared_ptr<cont_range_type> myelts( new cont_range_type );
for (auto const& elt : elements(mesh) )
{
myelts->push_back(boost::cref(elt));
}
// generate a range object usable in feel++
auto myrange = boost::make_tuple( mpl::size_t<MESH_ELEMENTS>(),
myelts->begin(),myelts->end(),myelts );``````

Next, this range can be used in the Feel++ language.

``double eval = integrate(_range=myrange,_expr=cst(1.)).evaluate()(0,0);``

Mesh Markers

Elements and their associated sub-entities can be marked.

A marker is an integer specifying for example a material id, a boundary condition id or some other property associated with the entity.

A dictionary can map string to marker ids.

The dictionary is stored in the Mesh data structure and provides the set of correspondences between strings and ids.

To access a marker, it is necessary to verify that it exists as follows

``````for( auto const& ewrap : elements(mesh))
{
auto const& e = unwrap_ref( ewrap );
if ( e.hasMarker() ) (1)
{
std::cout << "Element " << e.id() << " has marker " << e.marker() << std::endl;
}
if ( e.hasMarker(5) ) (2)
{
std::cout << "Element " << e.id() << " has marker 5 " << e.marker(5) << std::endl;
}
}``````
 (1) check if marker 1 (the default marker) exists; if yes, print it
 (2) check if marker 5 exists; if yes, print it

Mesh Operations

`straightenMesh`

One of the optimisations that yields a huge gain in computational effort is to straighten all the high order elements except the boundary faces of the computational mesh. This is achieved by moving all the nodes associated with the high order transformation to the positions these nodes would have if a first order geometrical transformation were applied. This procedure can be formalized by the following operator:

\mathbf{\eta}^{\mathrm{straightening}}_K(\mathbf{\varphi}^N_{K}(\mathbf{x}^*)) = \left(\mathbf{\varphi}^1_{K}(\mathbf{x}^*)-\mathbf{\varphi}^N_{K}(\mathbf{x}^*)\right) - {\left( \mathbf{\varphi}^1_{K \cap \Gamma}(\mathbf{x}^*)-\mathbf{\varphi}^N_{K \cap \Gamma}(\mathbf{x}^*)\right)}

where \mathbf{x}^* is any point in K^* and \mathbf{\varphi}^1_{K}(\mathbf{x}^*) and \mathbf{\varphi}^N_{K}(\mathbf{x}^*) its images by the geometrical transformation of order one and order N, respectively. On one hand, the first two terms ensure that for all K not intersecting \Gamma, the order one and order N transformations produce the same image. On the other hand, the last two terms are 0 unless the image of \mathbf{x}^* is on \Gamma, in which case we don't move the high order image of \mathbf{x}^*. Internal elements are thus straightened while elements touching the boundary remain high order. When applying numerical integration, specific quadratures are used for internal elements and for elements sharing a face with the boundary. Thanks to this transformation, performance is similar to that obtained with first order meshes. However, it needs to be used with care as it can generate folded meshes.

`createSubmesh`

In multiphysics applications, or when using advanced numerical methods, e.g. involving Lagrange multipliers, it is often required to define function spaces on different meshes. These meshes are often related. Consider for example a heat transfer problem on a domain \Omega coupled with a fluid flow problem on a domain \Omega_f \subset \Omega as in the {uri-benchmarks}heat[Heat Transfer benchmarks]. `createSubmesh` allows extracting \Omega_f out of \Omega while keeping information on the relation between the two meshes, so that data can be transferred between them very efficiently.

``````auto mesh=loadMesh(_mesh=new Mesh<Simplex<d>>); (1)
auto fluid_mesh = createSubmesh( mesh, markedelements(mesh,"Air") ); (2)
auto face_mesh = createSubmesh( mesh, faces(mesh) ); (3)``````
 (1) create a mesh of simplices of dimension d
 (2) extract the mesh subregion \Omega_f marked Air from the mesh of \Omega
 (3) extract the d-1 dimensional mesh made of all the faces of the d dimensional mesh

Integration

You should be able to create a mesh now. If it is not the case, get back to the section Mesh.

Prerequisites

Integrals

Feel++ provides the integrate() function to define integral expressions, which can be used to compute integrals and to define linear and bilinear forms.

Interface

``  integrate( _range, _expr, _quad, _geomap );``

Note that the order of the parameters is not important: these are `boost` parameters, so you can pass them in any order. There are two required parameters and two optional ones; they can be given in any order provided you use the parameter names. If you omit the parameter names (`_range=` and the others), they must be passed in the order described below.

Required parameters:

• `_range` = domain of integration

• `_expr` = integrand expression

Optional parameters:

• `_quad` = quadrature to use instead of the default one, i.e. `_Q<integer>()` where the integer is the polynomial order to integrate exactly

• `_geomap` = type of geometric mapping to use, that is to say:

 Feel Parameter  Description
 `GEOMAP_HO`     high order approximation (same order as the mesh)
 `GEOMAP_OPT`    optimal approximation: high order on boundary elements, order 1 in the interior
 `GEOMAP_O1`     order 1 approximation

Example

From `doc/manual/tutorial/dar.cpp`

``````form1( ... ) = integrate( _range = elements( mesh ),
_expr = f*id( v ) );``````

From `doc/manual/tutorial/myintegrals.cpp`

``````  // compute integral f on boundary
double intf_3 = integrate( _range = boundaryfaces( mesh ),
_expr = f );``````

From `doc/manual/advection/advection.cpp`

``````  form2( _test = Xh, _trial = Xh, _matrix = D ) +=
integrate( _range = internalfaces( mesh ),
_expr = ( averaget( trans( beta )*idt( u ) ) * jump( id( v ) ) )
+ penalisation*beta_abs*( trans( jumpt( trans( idt( u ) )) )
*jump( trans( id( v ) ) ) ),
_geomap = geomap );``````

From `doc/manual/laplacian/laplacian.cpp`

`````` auto l = form1( _test=Xh, _vector=F );
l = integrate( _range = elements( mesh ),
_expr=f*id( v ) ) +
integrate( _range = markedfaces( mesh, "Neumann" ),
_expr = nu*gradg*vf::N()*id( v ) );``````

Computing my first Integrals

This part explains how to integrate on a mesh with Feel++ (source `doc/manual/tutorial/myintegrals.cpp` ).

Let’s consider the domain \Omega=[0,1]^d and associated meshes. Here, we want to integrate the following function

\begin{aligned} f(x,y,z) = x^2 + y^2 + z^2 \end{aligned}

on the whole domain \Omega and on part of its boundary \partial\Omega.

Here is the appropriate code:

``````int
main( int argc, char** argv )
{
// Initialize Feel++ Environment
Environment env( _argc=argc, _argv=argv,
_desc=feel_options(),
_author="Feel++ Consortium",
_email="feelpp-devel@feelpp.org" );

// create the mesh (specify the dimension of geometric entity)
auto mesh = unitHypercube<3>();

// our function to integrate
auto f = Px()*Px() + Py()*Py() + Pz()*Pz();

// compute integral of f (global contribution)
double intf_1 = integrate( _range = elements( mesh ),
_expr = f ).evaluate()( 0,0 );

// compute integral of f (local contribution)
double intf_2 = integrate( _range = elements( mesh ),
_expr = f ).evaluate(false)( 0,0 );

// compute integral f on boundary
double intf_3 = integrate( _range = boundaryfaces( mesh ),
_expr = f ).evaluate()( 0,0 );

std::cout << "int global ; local ; boundary" << std::endl
<< intf_1 << ";" << intf_2 << ";" << intf_3 << std::endl;
}``````

Mean value of a function

Let f be a bounded function on the domain \Omega. You can evaluate its mean value thanks to the `mean()` function:

\bar{f}=\frac{1}{|\Omega|}\int_\Omega f=\frac{1}{\int_\Omega 1}\int_\Omega f

Interface

``  mean( _range, _expr, _quad, _geomap );``

Required parameters:

• `_range` = domain of integration

• `_expr` = measurable function

Optional parameters:

• `_quad` = quadrature to use.

• Default = `_Q<integer>()`

• `_geomap` = type of geometric mapping.

• Default = `GEOMAP_OPT`

Example

Stokes example using `mean`
``Unresolved directive in Integrals/mean.adoc - include::../../../../codes/mystokes.cpp[tag=main]``

Norms

Let f be a bounded function on the domain \Omega.

L2 norms

Let f \in L^2(\Omega); you can evaluate the L^2 norm using the normL2() function:

\parallel f\parallel_{L^2(\Omega)}=\sqrt{\int_\Omega |f|^2}

Interface
``  normL2( _range, _expr, _quad, _geomap );``

or squared norm:

``  normL2Squared( _range, _expr, _quad, _geomap );``

Required parameters:

• `_range` = domain of integration

• `_expr` = measurable function

Optional parameters:

• `_quad` = quadrature to use.

• Default = `_Q<integer>()`

• `_geomap` = type of geometric mapping.

• Default = `GEOMAP_OPT`

Example

From `doc/manual/laplacian/laplacian.cpp`

``````  double L2error = normL2( _range=elements( mesh ),
_expr=( idv( u )-g ) );``````

From `doc/manual/stokes/stokes.cpp`

Stokes example using `normL2`
``Unresolved directive in Integrals/norms.adoc - include::../../../../codes/mystokes.cpp[tag=main]``

H1 norm

Similarly, you can evaluate the H^1 norm or semi-norm of any function f \in H^1(\Omega):

\begin{aligned} \parallel f \parallel_{H^1(\Omega)}&=\sqrt{\int_\Omega |f|^2+|\nabla f|^2}\\ &=\sqrt{\int_\Omega |f|^2+\nabla f * \nabla f^T}\\ |f|_{H^1(\Omega)}&=\sqrt{\int_\Omega |\nabla f|^2} \end{aligned}

where * is the scalar product \cdot when f is a scalar field and the Frobenius scalar product : when f is a vector field.

Interface
``  normH1( _range, _expr, _grad_expr, _quad, _geomap );``

or semi norm:

``  normSemiH1( _range, _grad_expr, _quad, _geomap );``

Required parameters:

• `_range` = domain of integration

• `_expr` = measurable function

• `_grad_expr` = gradient of function (Row vector!)

Optional parameters:

• `_quad` = quadrature to use.

• Default = `_Q<integer>()`

• `_geomap` = type of geometric mapping.

• Default = `GEOMAP_OPT`

normH1() returns the H^1 norm as a floating point value.

Example

With expression:

``````  auto g = sin(2*pi*Px())*cos(2*pi*Py());
  auto gradg = 2*pi*cos(2*pi*Px())*cos(2*pi*Py())*oneX()
               -2*pi*sin(2*pi*Px())*sin(2*pi*Py())*oneY();
  // Here gradg is a column vector!
  // Use trans() to get a row vector
  double normH1_g = normH1( _range=elements(mesh),
                            _expr=g,
                            _grad_expr=trans(gradg) );``````

With test or trial function `u`

``````  double errorH1 = normH1( _range=elements(mesh),
                           _expr=idv(u)-g,
                           _grad_expr=gradv(u)-trans(gradg) );``````

L^\infty norm

You can evaluate the infinity norm using the normLinf() function:

\parallel f \parallel_\infty=\sup_\Omega(|f|)
Interface
``  normLinf( _range, _expr, _pset, _geomap );``

Required parameters:

• `_range` = domain of integration

• `_expr` = mesurable function

• `_pset` = set of points (e.g. quadrature points)

Optional parameters:

• `_geomap` = type of geometric mapping.

• Default = `GEOMAP_OPT`

The `normLinf()` function returns not only the maximum of the function over a sampling of each element thanks to the `_pset` argument but also the coordinates of the point where the function is maximum. The returned data structure provides the following interface

• `value()`: return the maximum value

• `operator()()`: synonym to `value()`

• `arg()`: coordinates of the point where the function is maximum

Example
``````  auto uMax = normLinf( _range=elements(mesh),
_expr=idv(u),
_pset=_Q<5>() );
std::cout << "maximum value : " << uMax.value() << std::endl
<<  "         arg : " << uMax.arg() << std::endl;``````

Function Spaces

Prerequisites

The prerequisites are

Notations

We now turn to the next crucial mathematical ingredient: the function space, whose definition depends on \Omega_h - or more precisely its partitioning \mathcal{T}_h - and the choice of basis function. Function spaces in Feel++ follow the same definition and Feel++ provides support for continuous and discontinuous Galerkin methods and in particular approximations in L^2, H^1-conforming and H^1-nonconforming, H^2, H(\mathrm{div}) and H(\mathrm{curl})[^1].

We introduce the following spaces

\begin{aligned} \mathbb{W}_h &= \{v_h \in L^2(\Omega_h): \ \forall K \in \mathcal{T}_h, v_h|_K \in \mathbb{P}_K\},\\ \mathbb{V}_h &= \mathbb{W}_h \cap C^0(\Omega_h)= \{ v_h \in \mathbb{W}_h: \ \forall F \in \mathcal{F}^i_h\ [ v_h ]_F = 0\}\\ \mathbb{H}_h &= \mathbb{W}_h \cap C^1(\Omega_h)= \{ v_h \in \mathbb{W}_h: \ \forall F \in \mathcal{F}^i_h\ [ v_h ]_F = [ \nabla v_h ]_F = 0\}\\ \mathbb{C}\mathbb{R}_h &= \{ v_h \in L^2(\Omega_h):\ \forall K \in \mathcal{T}_h, v_h|_K \in \mathbb{P}_1; \forall F \in \mathcal{F}^i_h\ \int_F [ v_h ] = 0 \}\\ \mathbb{R}{a}\mathbb{T}{u}_h &= \{ v_h \in L^2(\Omega_h):\ \forall K \in \mathcal{T}_h, v_h|_K \in \mathrm{Span}\{1,x,y,x^2-y^2\}; \forall F \in \mathcal{F}^i_h\ \int_F [ v_h ] = 0 \}\\ \mathbb{R}\mathbb{T}_h &= \{\mathbf{v}_h \in [L^2(\Omega_h)]^d:\ \forall K \in \mathcal{T}_h, v_h|_K \in \mathbb{R}\mathbb{T}_k; \forall F \in \mathcal{F}^i_h\ [{\mathbf{v}_h \cdot \mathrm{n}}]_F = 0 \}\\ \mathbb{N}_h &= \{\mathbf{v}_h \in [L^2(\Omega_h)]^d:\ \forall K \in \mathcal{T}_h, v_h|_K \in \mathbb{N}_k; \forall F \in \mathcal{F}^i_h\ [{\mathbf{v}_h \times \mathrm{n}}]_F = 0 \} \end{aligned}

where \mathbb{R}\mathbb{T}_k and \mathbb{N}_k are respectively the Raviart-Thomas and Nédélec finite elements of degree k.

The Legendre and Dubiner bases yield implicitly discontinuous approximations; the Legendre and Dubiner boundary-adapted bases, see \cite MR1696933, are designed to handle continuous approximations, whereas the Lagrange basis can yield either discontinuous or continuous (default behavior) approximations. \mathbb{R}\mathbb{T}_h and \mathbb{N}_h are implicitly spaces of vectorial functions \mathbf{f} such that \mathbf{f}: \Omega_h \subset \mathbb{R}^d \mapsto \mathbb{R}^d. As for the other basis functions, i.e. Lagrange, Legendre, Dubiner, etc., they are parametrized by their values, namely `Scalar`, `Vectorial` or `Matricial`.

Products of function spaces are supported. This is very powerful to describe complex multiphysics problems when coupled with the operators, functionals and forms described in the next section. Extracting subspaces or component spaces is also part of the interface.

Function Spaces

Support for function spaces is provided by the `FunctionSpace` class.

The `FunctionSpace` class

• constructs the table of degrees of freedom which maps local (elementwise) degrees of freedom to the global ones with respect to the geometrical entities,

• embeds the definition of the elements of the function space allowing for a tight coupling between the elements and their function spaces,

• stores an interpolation data structure (e.g. region tree) for rapid localisation of point sets (determining in which element they reside).

| C++ Function | C++ Type | Function Space [1] |
|---|---|---|
| `Pch(mesh)` | `Pch_type` | P^N_{c,h} |
| `Pchv(mesh)` | `Pchv_type` | [P^N_{c,h}]^d |
| `Pdh(mesh)` | `Pdh_type` | P^N_{d,h} |
| `Pdhv(mesh)` | `Pdhv_type` | [P^N_{d,h}]^d |
| `THch(mesh)` | `THch_type` | [P^{N+1}_{c,h}]^d \times P^N_{c,h} |
| `Dh(mesh)` | `Dh_type` | \mathbb{R}\mathbb{T}_h |
| `Ned1h(mesh)` | `Ned1h_type` | \mathbb{N}_h |

[1]: see Notations for the function spaces definitions.

Here are some examples of how to define function spaces with Lagrange basis functions.

``````#include <feel/feeldiscr/pch.hpp>

// Mesh with triangles
using MeshType = Mesh<Simplex<2>>;
// Space spanned by P3 Lagrange finite element
FunctionSpace<MeshType,bases<Lagrange<3>>> Xh;
// is equivalent to (they are the same type)
Pch_type<MeshType,3> Xh;

// using the auto keyword
MeshType mesh = loadMesh( _mesh=new MeshType );
auto Xh = Pch<3>( mesh );
// is equivalent to
auto Xh = FunctionSpace<MeshType,bases<Lagrange<3>>>::New( mesh );
auto Xh = Pch_type<MeshType,3>::New( mesh );``````

Functions

One important feature of `FunctionSpace` is that it embeds the definition of element, which allows for the strict definition of an `Element` of a `FunctionSpace` and thus ensures the correctness of the code.

An element has its representation as a vector, also in the case of product of multiple spaces.

``````#include <feel/feeldiscr/pch.hpp>

// Mesh with triangles
using MeshType = Mesh<Simplex<2>>;
auto mesh = loadMesh( _mesh=new MeshType );

// define P3 Lagrange finite element space
auto P3ch = Pch<3>(mesh);

// define an element of P3ch, initialized to 0
auto u = P3ch.element();
// define an element of P3ch, initialized to x^2+y^2
auto v = P3ch.element(Px()*Px()+Py()*Py());``````

Components

``````FunctionSpace<Mesh<Simplex<2>>,
              bases<Lagrange<2,Vectorial>,
                    Lagrange<1,Scalar>,
                    Lagrange<1,Scalar>>> P2P1P1;
auto U = P2P1P1.element();
// Views: changing a view changes U and vice versa
// view on element associated to P2
auto u = U.element<0>();
// extract view of first component
auto ux = u.comp(X);
// view on element associated to 1st P1
auto p = U.element<1>();
// view on element associated to 2nd P1
auto q = U.element<2>();``````

Interpolation

Feel++ has a very powerful interpolation framework which allows you to:

• transfer functions from one mesh to another

• transfer functions from one space type to another.

This is done seamlessly in parallel. The framework provides a set of C++ classes and free functions enabling short, concise and expressive handling of interpolation.

Using interpolation operator

Building the interpolation operator I_h : P^1_{c,h} \rightarrow P^0_{d,h}
``````using MeshType = Mesh<Simplex<2>>;
auto mesh = loadMesh( _mesh=new MeshType );
auto P1h = Pch<1>( mesh );
auto P0h = Pdh<0>( mesh );
auto Ih = I( _domain=P1h, _image=P0h );``````

De Rham Diagram

The De Rham diagram reads as follows: the range of each of the operators coincides with the null space of the next operator in the sequence below, and the last map is a surjection.

\begin{array}{ccccccc} H^1(\Omega)& \overset{\nabla}{\longrightarrow}& H^{\mathrm{curl}}(\Omega)& \overset{\nabla \times}{\longrightarrow}& H^{\mathrm{div}}(\Omega)& \overset{\nabla \cdot}{\longrightarrow}& L^2(\Omega) \end{array}

An important result is that the diagram transfers to the discrete level

\begin{array}{ccccccc} H^1(\Omega)& \overset{\nabla}{\longrightarrow}& H^{\mathrm{curl}}(\Omega)& \overset{\nabla \times}{\longrightarrow}& H^{\mathrm{div}}(\Omega)& \overset{\nabla \cdot}{\longrightarrow}& L^2(\Omega) \\ \left\downarrow\right.\pi_{c,h}& ~ & \left\downarrow\right.\pi_{\mathrm{curl},h}& ~ & \left\downarrow\right.\pi_{\mathrm{div},h}& ~ & \left\downarrow\right.\pi_{d,h} \\ U_h& \overset{\nabla}{\longrightarrow}& V_h& \overset{\nabla \times}{\longrightarrow}& W_h& \overset{\nabla \cdot}{\longrightarrow}& Z_h \end{array}

The diagram above is commutative which means that we have the following properties:

\begin{aligned} \nabla(\pi_{c,h} u) &= \pi_{\mathrm{curl},h}( \nabla u ),\\ \nabla\times(\pi_{\mathrm{curl},h} u) &= \pi_{\mathrm{div},h}( \nabla\times u ),\\ \nabla\cdot(\pi_{\mathrm{div},h} u) &= \pi_{d,h}( \nabla\cdot u ) \end{aligned}
 The diagram can be restricted to functions satisfying the homogeneous Dirichlet boundary conditions
\begin{array}{ccccccc} H^1_0(\Omega)& \overset{\nabla}{\longrightarrow}& H_0^{\mathrm{curl}}(\Omega)& \overset{\nabla \times}{\longrightarrow}& H_0^{\mathrm{div}}(\Omega)& \overset{\nabla \cdot}{\longrightarrow}& L^2_0(\Omega) \end{array}

Interpolation operators are provided as is or as shared pointers. The table below presents the alternatives.

| C++ object | C++ Type | C++ shared object | C++ shared Type | Mathematical operator |
|---|---|---|---|---|
| `I(_domain=Xh,_image=Yh)` | `I_t, functionspace_type>` | `IPtr(…​)` | `I_ptr_t, functionspace_type>` | I: X_h \rightarrow Y_h |
| `Grad(_domain=Xh,_image=Wh)` | `Grad_t, functionspace_type>` | `GradPtr(…​)` | `Grad_ptr_t, functionspace_type>` | \nabla: X_h \rightarrow W_h |
| `Curl(_domain=Wh,_image=Vh)` | `Curl_t, functionspace_type>` | `CurlPtr(…​)` | `Curl_ptr_t, functionspace_type>` | \nabla \times : W_h \rightarrow V_h |
| `Div(_domain=Vh,_image=Zh)` | `Div_t, functionspace_type>` | `DivPtr(…​)` | `Div_ptr_t, functionspace_type>` | \nabla \cdot: V_h \rightarrow Z_h |
Building the discrete operators associated to the De Rham diagram in Feel++
``````auto mesh = loadMesh( _mesh=new Mesh<Simplex<Dim>>());
auto Xh = Pch<1>(mesh);
auto Gh = Ned1h<0>(mesh);
auto Ch = Dh<0>(mesh);
auto P0h = Pdh<0>(mesh);
// gradient operator, needed for Igrad below
auto Igrad = Grad( _domainSpace = Xh, _imageSpace=Gh );
auto Icurl = Curl( _domainSpace = Gh, _imageSpace=Ch );
auto Idiv = Div( _domainSpace = Ch, _imageSpace=P0h );

auto u = Xh->element(<expr>);
auto w = Igrad(u); // w in Gh
auto x = Icurl(w); // x in Ch
auto y = Idiv(x); // y in P0h``````

Saving functions on disk

To save a function on disk to use it later, for example in another application, you can use the `save` function.

The saved file will be named after the name registered for the variable in the constructor (default : `u`).

``````auto Vh = Pch<1>( mesh );
auto u = Vh->element("v");
// do something with u
...
// save /path/to/save/v.h5
u.save( _path="/path/to/save", _type="hdf5" );``````

The `path` parameter creates a directory at this path to store all the degrees of freedom of this function.

The `type` parameter can be `binary`, `text` or `hdf5`. The first two will create one file per processor, whereas `hdf5` will create only one file.

To load a function, the mesh needs to be exactly the same as the one used when saving it.

``````auto Vh = Pch<1>( mesh );
auto u = Vh->element("v");
// load /path/to/save/v.h5
u.load( _path="/path/to/save", _type="hdf5" );``````

The `path` and `type` parameters need to be the same as the ones used to save the function.

Extended parallel doftable

In some cases, when working with parallel data, information from the neighboring partitions is needed. To manage this, we can add ghost degrees of freedom on ghost elements at these locations. However, we then have to know whether the data were saved with an extended parallel doftable in order to load and use them.

To overcome this restriction, the two functions `load` and `save` have been updated to use the HDF5 format. With this format, the functions work without any issue whether or not an extended parallel doftable is present. Moreover, we can load elements with an extended parallel doftable and save them back without it, and vice versa. This last feature is not available with formats other than HDF5.

Bilinear and Linear Forms

We consider in this section bilinear and linear forms a: X_h \times X_h \rightarrow \mathbb{R} and \ell: X_h \rightarrow \mathbb{R}.

We suppose in this section that you know how to define your Mesh and your function spaces. You may need integration tools too, see Integrals.

Feel++ provides the tools you need to create linear and bilinear forms in order to solve variational formulations.

From now on, `u` denotes an element of the trial function space (the unknown function) and `v` an element of the test function space.

Building Forms

Using `form1`

To construct a linear form \ell: X_h \rightarrow \mathbb{R}, proceed as follows

``````auto mesh = ...;
// build a P1/Q1 approximation space
auto Xh = Pch<1>( mesh );
auto l = form1(_test=Xh);``````
| Name | Parameter | Description | Status |
|---|---|---|---|
| `test` | function space e.g. `Xh` | define test function space | Required |

Here are some examples taken from the Feel++ tutorial.

``````// right hand side
auto l = form1( _test=Vh );
l = integrate(_range=elements(mesh), _expr=id(v));``````

From `myadvection.cpp`

``````// right hand side
auto l = form1( _test=Xh );
l+= integrate( _range=elements( mesh ), _expr=f*id( v ) );``````
 The operators `+=` and `=` are supported by linear and bilinear forms.
``````auto a1 = form2(_test=Xh,_trial=Xh);
auto a2 = form2(_test=Xh,_trial=Xh);
// operations on a2 ...
// check that they have the same type and
// copy matrix associated to a2 in a1
a1 = a2;``````

Using `form2`

To define a bilinear form a: X_h \times X_h \rightarrow \mathbb{R}, for example a(u,v)=\int_\Omega uv, proceed as follows.

Building `form2`

The free-function `form2` allows you to simply define such a bilinear form using the Feel++ language:

``````// define function space
auto Xh = ...;
// define a : Xh x Xh -> R
auto a = form2(_trial=Xh, _test=Xh );
// a(u,v) = \int_\Omega u v
a = integrate(_range=elements(mesh), _expr=idt(u)*id(v));``````
| Name | Parameter | Description | Status |
|---|---|---|---|
| `test` | function space e.g. `Xh` | define test function space | Required |
| `trial` | function space e.g. `Xh` | define trial function space | Optional |

Here are some examples taken from the Feel++ tutorial

From `mylaplacian.cpp`

``````// left hand side
auto a = form2( _trial=Vh, _test=Vh );
a = integrate(_range=elements(mesh),
              _expr=gradt(u)*trans(grad(v)) );``````

From `mystokes.cpp`:

``````// left hand side
auto a = form2( _trial=Vh, _test=Vh );
a = integrate(_range=elements(mesh),
              _expr=trace(gradt(u)*trans(grad(v))) );
a+= integrate(_range=elements(mesh),
              _expr=-div(u)*idt(p)-divt(u)*id(p));``````
 see note above on operators `+=` and `=`
Solving variational formulations

Once you created your linear and bilinear forms you can use the `solve()` member function of your bilinear form.

The following generic example solves: find u \in X_h \text{ such that } a(u,v)=l(v) \forall v \in X_h

Example
``````auto Xh = ...; // function space
auto u = Xh->element();
auto a = form2(_test=Xh, _trial=Xh);
auto l = form1(_test=Xh);

a.solve(_solution=u, _rhs=l, _rebuild=false, _name="");``````
| Name | Parameter | Description | Status |
|---|---|---|---|
| `_solution` | element of domain function space | the solution | Required |
| `_rhs` | linear form | right hand side | Required |
| `_rebuild` | boolean (default `false`) | rebuild the solver components | Optional |
| `_name` | string (default `""`) | name of the associated Backend | Optional |

Here are some examples from the Feel++ tutorial.

From `laplacian.cpp`
``````// solve the equation  a(u,v) = l(v)
a.solve(_rhs=l,_solution=u);``````
Using `on` for Dirichlet conditions

The function `on()` allows you to add Dirichlet conditions to your bilinear form before using the `solve` function.

The interface is as follows

Interface
``on(_range=..., _rhs=..., _element=..., _expr=...);``

Required Parameters:

• `_range` domain concerned by this condition (see Integrals ).

• `_rhs` right hand side. The linear form.

• `_element` element concerned.

• `_expr` the condition.

This function is used with the `+=` operator.

Here are some examples from the Feel++ tutorial.

From `mylaplacian.cpp`
``````// apply the boundary condition
a+=on(_range=boundaryfaces(mesh),
_rhs=l,
_element=u,
_expr=expr(soption("functions.alpha")) );``````

There we add the condition: u = 0 \text{ on }\;\partial\Omega \;.

From `mystokes.cpp`
``````a+=on(_range=boundaryfaces(mesh), _rhs=l, _element=u,
_expr=expr<2,1,5>(u_exact,syms));``````

You can also apply boundary conditions per component:

Component-wise Dirichlet conditions
``````a+=on(_range=markedfaces(mesh,"top"),
_element=u[Component::Y],
_rhs=l,
_expr=cst(0.))``````

The notation `u[Component::Y]` allows access to the `Y` component of `u`. `Component::X` and `Component::Z` give respectively the `X` and `Z` components.

Algebraic solutions

Definitions

Matrices

Matrix: a matrix is a linear transformation between finite dimensional vector spaces.

Assembling a matrix: assembling a matrix means defining its action as entries stored in a sparse or dense format. For example, in the finite element context, the storage format is sparse to take advantage of the many zero entries.

Symmetric matrix

A = A^T

Positive definite (resp. semi-definite) matrix

All eigenvalues are

1. >0 (resp. \geq 0), or equivalently

2. x^T A x > 0\ \forall\ x \neq 0 (resp. x^T A x \geq 0\ \forall\ x)

Negative definite (resp. semi-definite) matrix

All eigenvalues are

1. <0 (resp. \leq 0), or equivalently

2. x^T A x < 0\ \forall\ x \neq 0 (resp. x^T A x \leq 0\ \forall\ x)

Indefinite matrix

There exist

1. positive and negative eigenvalues (Stokes, Helmholtz), or

2. vectors x,y such that x^TAx > 0 > y^T A y

Preconditioners

Definition

Let A be a \mathbb{R}^{n\times n} matrix, x and b be \mathbb{R}^n vectors, we wish to solve A x = b.

Definition

A preconditioner \mathcal{P} is a method for constructing a matrix (just a linear function, not assembled!) P^{-1} = \mathcal{P}(A,A_p) using a matrix A and extra information A_p, such that the spectrum of P^{-1}A (left preconditioning) or A P^{-1} (right preconditioning) is well-behaved. The action of preconditioning improves the conditioning of the previous linear system.

Left preconditioning: We solve for (P^{-1} A) x = P^{-1} b and we build the Krylov space \{ P^{-1} b, (P^{-1}A) P^{-1} b, (P^{-1}A)^2 P^{-1} b, \dots\}

Right preconditioning: We solve for (A P^{-1}) P x = b and we build the Krylov space \{ b, (A P^{-1}) b, (A P^{-1})^2 b, \dotsc \}

Note that the product P^{-1}A or A P^{-1} is never assembled.

Properties

Let us now describe some properties of preconditioners

• P^{-1} is dense, P is often not available and is not needed

• A is rarely used by \mathcal{P}, but A_p = A is common

• A_p is often a sparse matrix, the preconditioning matrix

Here are some numerical methods to solve the system A x = b

• Matrix-based: Jacobi, Gauss-Seidel, SOR, ILU(k), LU

• Parallel: Block-Jacobi, Schwarz, Multigrid, FETI-DP, BDDC

• Indefinite: Schur-complement, Domain Decomposition, Multigrid

Preconditioner strategies

Relaxation

Split into lower, diagonal, upper parts: A = L + D + U.

Jacobi

Cheapest preconditioner: P^{-1}=D^{-1}.

``````# sequential
pc-type=jacobi
# parallel
pc-type=block_jacobi``````
Successive over-relaxation (SOR)
\left(L + \frac 1 \omega D\right) x_{n+1} = \left[\left(\frac 1\omega-1\right)D - U\right] x_n + \omega b, \qquad P^{-1} = \text{$k$ iterations starting with $x_0=0$}
• Implemented as a sweep.

• \omega = 1 corresponds to Gauss-Seidel.

• Very effective at removing high-frequency components of residual.

``````# sequential
pc-type=sor``````
Factorization

Two phases

• symbolic factorization: find where fill occurs, only uses sparsity pattern.

• numeric factorization: compute factors.

LU decomposition
• Exact factorization, usable as a direct solver or as a preconditioner.

• Expensive, for m\times m sparse matrix with bandwidth b, traditionally requires \mathcal{O}(mb^2) time and \mathcal{O}(mb) space.

• Bandwidth scales as m^{\frac{d-1}{d}} in d-dimensions.

• Optimal in 2D: \mathcal{O}(m \cdot \log m) space, \mathcal{O}(m^{3/2}) time.

• Optimal in 3D: \mathcal{O}(m^{4/3}) space, \mathcal{O}(m^2) time.

• Symbolic factorization is problematic in parallel.

Incomplete LU
• Allow a limited number of levels of fill: ILU(k).

• Only allow fill for entries that exceed threshold: ILUT.

• Usually poor scaling in parallel.

• No guarantees.

1-level Domain decomposition

Domain size L, subdomain size H, element size h

• Overlapping/Schwarz

• Solve Dirichlet problems on overlapping subdomains.

• No overlap: \textit{its} \in \mathcal{O}\big( \frac{L}{\sqrt{Hh}} \big).

• Overlap \delta: \textit{its} \in \mathcal{O}\big( \frac L {\sqrt{H\delta}} \big).

``````pc-type=gasm # has a coarse grid preconditioner
pc-type=asm``````
• Neumann-Neumann

• Solve Neumann problems on non-overlapping subdomains.

• \textit{its} \in \mathcal{O}\big( \frac{L}{H}(1+\log\frac H h) \big).

• Tricky null space issues (floating subdomains).

• Need subdomain matrices, not globally assembled matrix.

Notes: Multilevel variants knock off the leading \frac L H; both overlapping and nonoverlapping methods achieve this bound.

• BDDC and FETI-DP

• Neumann problems on subdomains with coarse grid correction.

• \textit{its} \in \mathcal{O}\big(1 + \log\frac H h \big).

Multigrid

Hierarchy: Interpolation and restriction operators \Pi^\uparrow : X_{\text{coarse}} \to X_{\text{fine}} \qquad \Pi^\downarrow : X_{\text{fine}} \to X_{\text{coarse}}

• Geometric: define problem on multiple levels, use grid to compute hierarchy.

• Algebraic: define problem only on finest level, use matrix structure to build hierarchy.

Galerkin approximation

Assemble this matrix: A_{\text{coarse}} = \Pi^\downarrow A_{\text{fine}} \Pi^\uparrow

Application of multigrid preconditioner ( V -cycle)

• Apply pre-smoother on fine level (any preconditioner).

• Restrict residual to coarse level with \Pi^\downarrow.

• Solve on coarse level A_{\text{coarse}} x = r.

• Interpolate result back to fine level with \Pi^\uparrow.

• Apply post-smoother on fine level (any preconditioner).

Multigrid convergence properties
• Textbook: P^{-1}A is spectrally equivalent to identity

• Constant number of iterations to converge up to discretization error.

• Most theory applies to SPD systems

• variable coefficients (e.g. discontinuous): low energy interpolants.

• mesh- and/or physics-induced anisotropy: semi-coarsening/line smoothers.

• complex geometry: difficult to have meaningful coarse levels.

• Deeper algorithmic difficulties

• nonsymmetric (e.g. advection, shallow water, Euler).

• indefinite (e.g. incompressible flow, Helmholtz).

• Performance considerations

• Aggressive coarsening is critical in parallel.

• Most theory uses SOR smoothers, ILU often more robust.

• Coarsest level usually solved semi-redundantly with direct solver.

• Multilevel Schwarz is essentially the same with different language

• assume strong smoothers, emphasize aggressive coarsening.

List of PETSc Preconditioners

See this PETSc page for a complete list.

| PETSc | Description | Parallel |
|---|---|---|
| none | No preconditioner | yes |
| jacobi | diagonal preconditioner | yes |
| bjacobi | block diagonal preconditioner | yes |
| sor | SOR preconditioner | yes |
| lu | Direct solver as preconditioner | depends on the factorization package (e.g. mumps, pastix: OK) |
| shell | User defined preconditioner | depends on the user preconditioner |
| mg | multigrid preconditioner | yes |
| ilu | incomplete LU | no |
| icc | incomplete Cholesky | no |
| cholesky | Cholesky factorisation | yes |
| asm | Additive Schwarz Method | yes |
| gasm | Scalable Additive Schwarz Method | yes |
| ksp | Krylov subspace preconditioner | yes |
| fieldsplit | block preconditioner framework | yes |
| lsc | Least Square Commutator | yes |
| gamg | Scalable Algebraic Multigrid | yes |
| hypre | Hypre framework (multigrid…​) | yes |
| bddc | balancing domain decomposition by constraints preconditioner | yes |

Principles

Feel++ abstracts the PETSc library and provides a subset (sufficient in most cases) of the PETSc features. It interfaces with the following PETSc components: `Mat`, `Vec`, `KSP`, `PC` and `SNES`.

• `Vec` Vector handling library

• `Mat` Matrix handling library

• `KSP` Krylov SubSpace library implements various iterative solvers

• `PC` Preconditioner library implements various preconditioning strategies

• `SNES` Nonlinear solver library implements various nonlinear solve strategies

All linear algebra is encapsulated within backends, selected using the command line option `--backend=<backend>` or the config file option `backend=<backend>`, which provide an interface to several libraries:

| Library | Format | Backend |
|---|---|---|
| PETSc | sparse | `petsc` |
| Eigen | sparse | `eigen` |
| Eigen | dense | `eigen_dense` |

The default `backend` is `petsc`.

Some generic examples

The configuration files `.cfg` allow for a wide range of options to solve a linear or non-linear system.

We consider now the following example (see `mylaplacian.cpp`).

To execute this example

``````# sequential
./feelpp_tut_laplacian
# parallel on 4 cores
mpirun -np 4 ./feelpp_tut_laplacian``````

As described in section

Direct solver

`cholesky` and `lu` factorisation are available. However the parallel implementation depends on the availability of MUMPS. The configuration is very simple.

``````# no iterative solver
ksp-type=preonly
#
pc-type=cholesky``````

Using the PETSc backend allows to choose different packages to compute the factorization.

 Package Description Parallel `petsc` PETSc own implementation yes `mumps` MUltifrontal Massively Parallel sparse direct Solver yes `umfpack` Unsymmetric MultiFrontal package no `pastix` Parallel Sparse matriX package yes

To choose between these factorization package

``````# choose mumps
pc-factor-mat-solver-package=mumps
# choose umfpack (sequential)
pc-factor-mat-solver-package=umfpack``````

In order to perform a cholesky type of factorisation, it is required to set the underlying matrix to be SPD.

``````// matrix
auto A = backend->newMatrix(_test=...,_trial=...,_properties=SPD);
// bilinear form
auto a = form2( _test=..., _trial=..., _properties=SPD );``````

Using iterative solvers

Using CG and ICC(3)

with a relative tolerance of 1e-12:

``````ksp-rtol=1.e-12
ksp-type=cg
pc-type=icc
pc-factor-levels=3``````
Using GMRES and ILU(3)

with a relative tolerance of 1e-12 and a restart of 300:

``````ksp-rtol=1.e-12
ksp-type=gmres
ksp-gmres-restart=300
pc-type=ilu
pc-factor-levels=3``````
Using GMRES and Jacobi

With a relative tolerance of 1e-12 and a restart of 100:

``````ksp-rtol=1.e-12
ksp-type=gmres
ksp-gmres-restart=100
pc-type=jacobi``````

Monitoring linear, non-linear and eigenvalue problem solver residuals

``````# linear
ksp-monitor=1
# non-linear
snes-monitor=1
# eigen value problem
eps-monitor=1``````

Solving the Laplace problem

We start with the quickstart Laplacian example, recall that we wish to, given a domain \Omega, find u such that

-\nabla \cdot (k \nabla u) = f \mbox{ in } \Omega \subset \mathbb{R}^{2},\\ u = g \mbox{ on } \partial \Omega

Monitoring KSP solvers
``feelpp_qs_laplacian --ksp-monitor=true``
Viewing KSP solvers
``````shell> mpirun -np 2 feelpp_qs_laplacian --ksp-monitor=1  --ksp-view=1
0 KSP Residual norm 8.953261456448e-01
1 KSP Residual norm 7.204431786960e-16
KSP Object: 2 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=1000
tolerances:  relative=1e-13, absolute=1e-50, divergence=100000
left preconditioning
using nonzero initial guess
using PRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
type: shell
Shell:
linear system matrix = precond matrix:
Matrix Object:   2 MPI processes
type: mpiaij
rows=525, cols=525
total: nonzeros=5727, allocated nonzeros=5727
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines``````
Solvers and preconditioners

You can now change the Krylov subspace solver using the `--ksp-type` option and the preconditioner using the `--pc-type` option.

For example,

• to solve using the conjugate gradient solver, `cg`, and the default preconditioner, use the following

`./feelpp_qs_laplacian --ksp-type=cg --ksp-view=1 --ksp-monitor=1`
• to solve using the algebraic multigrid preconditioner, `gamg`, with `cg` as a solver use the following

`./feelpp_qs_laplacian --ksp-type=cg --ksp-view=1 --ksp-monitor=1 --pc-type=gamg`

Block factorisation

Stokes

We now turn to the quickstart Stokes example, recall that we wish to, given a domain \Omega, find (\mathbf{u},p) such that

 -\Delta \mathbf{u} + \nabla p = \mathbf{ f} \mbox{ in } \Omega,\\ \nabla \cdot \mathbf{u} = 0 \mbox{ in } \Omega,\\ \mathbf{u} = \mathbf{g} \mbox{ on } \partial \Omega

This problem is indefinite. Possible solution strategies are

• Uzawa,

• penalty (techniques from optimisation),

• augmented Lagrangian approach (Glowinski, Le Tallec).

Note that the inf-sup condition must be satisfied. In particular for a multigrid strategy, the smoother needs to preserve it.

General approach for saddle point problems

The Krylov subspace solvers for indefinite problems are MINRES and GMRES. As to preconditioning, we first look at the saddle point matrix M and its block factorization M = LDL^T; indeed we have:

M = \begin{pmatrix} A & B \\ B^T & 0 \end{pmatrix} = \begin{pmatrix} I & 0\\ B^T A^{-1} & I \end{pmatrix} \begin{pmatrix} A & 0\\ 0 & - B^T A^{-1} B \end{pmatrix} \begin{pmatrix} I & A^{-1} B\\ 0 & I \end{pmatrix}
• Elman, Silvester and Wathen propose 3 preconditioners:

P_1 = \begin{pmatrix} \tilde{A}^{-1} & B\\ B^T & 0 \end{pmatrix}, \quad P_2 = \begin{pmatrix} \tilde{A}^{-1} & 0\\ 0 & \tilde{S} \end{pmatrix},\quad P_3 = \begin{pmatrix} \tilde{A}^{-1} & B\\ 0 & \tilde{S} \end{pmatrix}

where \tilde{S} \approx S^{-1}, with S = B^T A^{-1} B the Schur complement, and \tilde{A}^{-1} \approx A^{-1}.


Algebra Backends

Linear and non-linear algebra backends are crucial components of Feel++. They provide a uniform interface between Feel++ data structures and the underlying linear algebra libraries used by Feel++.

Libraries

Feel++ interfaces the following libraries:

• PETSc : Portable, Extensible Toolkit for Scientific Computation

• SLEPc : Eigen value solver framework based on PETSc

• Eigen3

Backend

Backend is a template class parametrized by the numerical type providing access to

• vector

• matrix : dense and sparse

• algorithms : solvers, preconditioners, …​

PETSc provides sequential and parallel data structures whereas Eigen3 is only sequential.

To create a Backend, use the free function `backend(…​)` which has the following interface:

``````backend(_name="name_of_backend",
        _rebuild=...,   // true|false
        _kind=...,      // type of backend
        _worldcomm=...  // communicator
        )``````

All these parameters are optional which means that the simplest call reads for example:

``auto b = backend();``

They take default values as described in the following table:

| parameter | type | default value |
|---|---|---|
| `_name` | string | `""` (empty string) |
| `_rebuild` | boolean | `false` |
| `_kind` | string | `"petsc"` |
| `_worldcomm` | WorldComm | `Environment::worldComm()` |
_name

Backends are stored in a Backend factory and handled by a manager that keeps track of allocated backends. They are registered with respect to their name and kind. The default name is an empty string (`""`), which names the default Backend. The `_name` parameter is important because it also provides the name of the command line/config file option section associated with the Backend.

Only the command line/config file options for the default backend are registered. Developers have to register the options for each new Backend they define; otherwise failures at run time are to be expected whenever a Backend command line option is accessed.

Consider that you create a Backend named `projection` in your code like this:

``auto b = backend(_name="projection");``

To register the command line options of this Backend, pass its options to the Environment:

``````Environment env( _argc=argc, _argv=argv,
_desc=backend_options("projection") );``````
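Once registered, the options of the `projection` Backend can be set in their own section of the configuration file, named after the Backend. As an illustrative sketch (the option values shown here are examples, not requirements):

``````[projection]
ksp-type=gmres
pc-type=ilu``````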
_kind

Feel++ supports three kinds of Backends:

• petsc : PETSC Backend

• eigen_dense

• eigen_sparse

 SLEPc uses the PETSc Backend since it is based on PETSc.

The kind of Backend can be changed from the command line or configuration file thanks to the "backend" option.

``````auto b = backend(_name="name",
                 _kind=soption(_prefix="name",_name="backend"));``````

and in the config file

``````[name]
backend=petsc
# or, for example: backend=eigen_sparse``````
_rebuild

If you want to reuse a Backend rather than allocate a new one (and add the corresponding options to the command line/configuration file), you can rebuild it: rebuilding deletes the data structures already associated with the Backend, in particular the preconditioner. This is mandatory when you first solve a linear system of dimensions m\times m and then solve another one of different dimensions n \times n; in that case the Backend will otherwise throw an error saying that the dimensions are incompatible. To avoid that, rebuild the Backend.

``````auto b = backend(_name="mybackend");
// solve A x = f
b->solve(_solution=x,_matrix=A,_rhs=f);
// rebuild: clean up the internal Backend data structure
b = backend(_name="mybackend",_rebuild=true);
// solve A1 x1 = f1
b->solve(_solution=x1,_matrix=A1,_rhs=f1);``````
 Although this feature might appear useful, you have to make sure that the solving strategy applies to all problems because you won’t be able to customize the solver/preconditioner for each problem. If you have different problems to solve and need to have custom solver/preconditioner it would be best to have different Backends.
_worldComm

One of the strengths of Feel++ is the ability to change the communicator, in the case of Feel++ the WorldComm. This allows for example to

• solve sequential problems

• solve a problem on a subset of MPI processes

For example passing a sequential WorldComm to `backend()` would then in the subsequent use of the Backend generate sequential data structures (e.g. IndexSet, Vector and Matrix) and algorithms (e.g. Krylov Solvers).

``````// create a sequential Backend
auto b = backend(_name="seq",
                 _worldComm=Environment::worldCommSeq());
auto A = b->newMatrix(); // sequential Matrix
auto f = b->newVector(); // sequential Vector``````

Info The default WorldComm is provided by `Environment::worldComm()` and it corresponds to the default MPI communicator `MPI_COMM_WORLD`.

Eigen Problem

To solve standard and generalized eigenvalue problems, Feel++ interfaces SLEPc. SLEPc is a library which extends PETSc to provide the functionality necessary for the solution of eigenvalue problems. It comes with many strategies for both standard and generalized problems, Hermitian or not.

We want to find (\lambda_i,x_i) such that Ax = \lambda x. To do that, most eigensolvers project the problem onto a low-dimensional subspace; this is called a Rayleigh-Ritz projection. Let V_j=[v_1,v_2,\ldots,v_j] be an orthogonal basis of this subspace; then the projected problem reads:

Find (\theta_i,s_i) for i=1,\ldots,j such that B_j s_i=\theta_i s_i where B_j=V_j^T A V_j.

Then the approximate eigenpairs (\lambda_i,x_i) of the original problem are obtained as: \lambda_i=\theta_i and x_i=V_j s_i.

The eigensolvers differ from each other in the way the subspace is built.

Code

In Feel++, there are two functions that can be used to solve this type of problem, `eigs` and `veigs`.

Here is an example of how to use `veigs`.

``````auto Vh = Pch<Order>( mesh );
auto a = form2( _test=Vh, _trial=Vh );
// fill a
auto b = form2( _test=Vh, _trial=Vh );
// fill b
auto eigenmodes = veigs( _formA=a, _formB=b );``````

where `eigenmodes` is a ```std::vector<std::pair<value_type, element_type> >``` with `value_type` the type of the eigenvalue, and `element_type` the type of the eigenvector, a function of the space `Vh`.

The `eigs` function does not take the bilinear forms but two matrices. Also, the solver, the type of the problem, the position of the spectrum and the spectral transformation are not read automatically from the options; they must be passed explicitly.

``````auto Vh = Pch<Order>( mesh );
auto a = form2( _test=Vh, _trial=Vh );
// fill a
auto matA = a.matrixPtr();
auto b = form2( _test=Vh, _trial=Vh );
// fill b
auto matB = b.matrixPtr();
auto eigenmodes = eigs( _matrixA=matA,
                        _matrixB=matB,
                        _solver=(EigenSolverType)EigenMap[soption("solvereigen.solver")],
                        _problem=(EigenProblemType)EigenMap[soption("solvereigen.problem")],
                        _transform=(SpectralTransformType)EigenMap[soption("solvereigen.transform")],
                        _spectrum=(PositionOfSpectrum)EigenMap[soption("solvereigen.spectrum")]
                        );
auto femodes = std::vector<decltype(Vh->element())>( eigenmodes.size(), Vh->element() );
int i = 0;
for( auto const& mode : eigenmodes )
    femodes[i++] = *mode.second.get<2>();``````

where `eigenmodes` is a `std::map<real_type, eigenmodes_type>` with `real_type` the magnitude of the eigenvalue. `eigenmodes_type` is a `boost::tuple<real_type, real_type, vector_ptrtype>`, with the first `real_type` representing the real part of the eigenvalue and the second `real_type` the imaginary part; the `vector_ptrtype` is a vector, not an element of a function space.

Both functions take a parameter `_nev` that tells how many eigenpairs to compute. This can be set from the command line option `--solvereigen.nev`. Another important parameter is `_ncv`, which is the size of the subspace, `j` above. This parameter should always be greater than `nev`; SLEPc recommends setting it to ```max(nev+15, 2*nev)```. This can be set from the command line option `--solvereigen.ncv`.
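In a configuration file, these two options live in the `solvereigen` section. For example, for `nev=10` the SLEPc rule above gives `ncv = max(10+15, 2*10) = 25`:

``````[solvereigen]
nev=10
ncv=25``````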

Problem type

Find \lambda\in \mathbb{R} such that Ax = \lambda x

where \lambda is an eigenvalue and x an eigenvector.

But in the case of the finite element method, we will deal with the generalized form:

Find \lambda\in\mathbb{R} such that Ax = \lambda Bx

A standard problem is Hermitian if the matrix A is Hermitian (A=A^*). A generalized problem is Hermitian if the matrices A and B are Hermitian and B is positive definite. If the problem is Hermitian, then the eigenvalues are real. A special case of the generalized problem is when the matrices are not Hermitian but B is positive definite.

The type of the problem can be specified using the `EigenProblemType`, or at run time with the command line option `--solvereigen.problem` and the following values:

Table 11. Table of problem type
Problem type | EigenProblemType | command line key
Standard Hermitian | HEP | "hep"
Standard non-Hermitian | NHEP | "nhep"
Generalized Hermitian | GHEP | "ghep"
Generalized non-Hermitian | GNHEP | "gnhep"
Positive definite Generalized non-Hermitian | PGNHEP | "pgnhep"

Position of spectrum

You can choose which eigenpairs will be computed. The user can set it programmatically with `PositionOfSpectrum` or at run time with the command line option `--solvereigen.spectrum` and the following values:

Table 12. Table of position of spectrum
Position of spectrum | PositionOfSpectrum | command line key
Largest magnitude | LARGEST_MAGNITUDE | "largest_magnitude"
Smallest magnitude | SMALLEST_MAGNITUDE | "smallest_magnitude"
Largest real | LARGEST_REAL | "largest_real"
Smallest real | SMALLEST_REAL | "smallest_real"
Largest imaginary | LARGEST_IMAGINARY | "largest_imaginary"
Smallest imaginary | SMALLEST_IMAGINARY | "smallest_imaginary"

Spectral transformation

The algorithms used to solve eigenvalue problems find solutions at the extremities of the spectrum. To improve convergence, one needs to compute the eigenpairs of a transformed operator. These spectral transformations make it possible to compute solutions that are not on the boundary of the spectrum.

There are 3 types of spectral transformation:

Shift

A-\sigma I or B^{-1}A-\sigma I

Shift and invert

(A-\sigma I)^{-1} or (A-\sigma B)^{-1}B

Cayley

(A-\sigma I)^{-1}(A+\nu I) or (A-\sigma B)^{-1}(A+\nu B)

By default, shift and invert is used. You can change it with `--solvereigen.transform`.

Table 13. Table of spectral transformation
Spectral transformation | SpectralTransformationType | command line key
Shift | SHIFT | shift
Shift and invert | SINVERT | shift_invert
Cayley | CAYLEY | cayley

Eigensolvers

The details of the implementation of the different solvers can be found in the SLEPc Technical Reports.

The default solver is Krylov-Schur, but can be modified using `EigenSolverType` or the option `--solvereigen.solver`.

Table 14. Table of eigensolver
Solver | EigenSolverType | command line key
Power | POWER | power
Lapack | LAPACK | lapack
Subspace | SUBSPACE | subspace
Arnoldi | ARNOLDI | arnoldi
Lanczos | LANCZOS | lanczos
Krylov-Schur | KRYLOVSCHUR | krylovschur
Arpack | ARPACK | arpack

Be careful: not all solvers can compute all problem types and positions of the spectrum. The possibilities are summarized in the following table.

Table 15. Supported problem type for the eigensolvers
Solver | Position of spectrum | Problem type
Power | Largest magnitude | any
Lapack | any | any
Subspace | Largest magnitude | any
Arnoldi | any | any
Lanczos | any | standard and generalized Hermitian
Krylov-Schur | any | any
Arpack | any | any

Special cases of spectrum

Computing a large portion of the spectrum

In the case where you want to compute a large number of eigenpairs, the rule for `ncv` implies that a huge amount of memory would be used. To improve performance, you can set the `mpd` parameter, which limits the dimension of the projected problem.

You can set it via the command line with `--solvereigen.mpd <mpd>`.

Computing all the eigenpairs in an interval

If you want to compute all the eigenpairs in a given interval, you need to use the option `--solvereigen.interval-a` to set the beginning of the interval and `--solvereigen.interval-b` to set the end.

In this case, be aware that the problem needs to be generalized and Hermitian. The solver will be set to Krylov-Schur and the transformation to shift and invert. Besides, you will need a linear solver that can compute the inertia of the matrix; this is set to Cholesky, with MUMPS if available. For now, this method is only implemented in the `eigs` function.
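As a sketch, using the option names above, a configuration file requesting all eigenpairs in the interval [0, 100] would read (the bounds are illustrative):

``````[solvereigen]
interval-a=0
interval-b=100``````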

Appendix A: Feel++ File Formats

Feel++ Formats

For performance reasons and to allow fast checkpoint/restart of simulations, we have developed our own parallel mesh and data file format.

Format | Description | Mode | Type
`json+hdf5` | Feel++ parallel file format | Read/Write | Metadata & Binary

The format is decomposed into two files: (i) a json file (`.json` extension) which contains metadata about the mesh and (ii) an HDF5 file (`.h5` extension) which contains the mesh data structure.

Pre-Processing formats

Feel++ supports various file formats that can be used as input mesh file formats.

Format | Description | Mode | Type
`acumesh` | Acusim (ALTAIR) mesh file format | Read | Ascii
`gmsh` | Gmsh mesh file format | Read/Write | Ascii/Binary
`json+hdf5` | Feel++ parallel file format | Read/Write | Metadata & Binary
`med` | MED (Salome) mesh file format | Read/Write | Ascii/Binary
`mesh` | MEDIT (INRIA) mesh file format | Read/Write | Ascii

Post Processing formats

Feel++ supports various file formats that can be used as output mesh and data file formats for post-processing.

Format | Description | Mode | Type
`gmsh` | Gmsh mesh file format | Read/Write | Ascii/Binary
`ensightgold` | Ensight Gold case format | Write | Binary
`h3d` | H3D file format | Read | Database
`xdmf` | XML/HDF5 file format | Write | VTK
 The H3D file format requires that you have the Altair Hypermesh software installed.