Feel++

Reduced Basis Methods

Model Order Reduction

Definition

Problem statement

Goal: Replicate the input-output behavior of a large-scale system \(\Sigma\) over a certain (restricted) range of

  • forcing inputs and

  • parameter inputs


Problem statement Given a large-scale system \(\Sigma_\mathcal{N}\) of dimension \(\mathcal{N}\), find a reduced order model \(\Sigma_N\) of dimension \(N \ll \mathcal{N}\) such that the approximation error is small, i.e., there exists a global error bound such that

  • \(\|u(\mu) - u_N (\mu)\| \leq \varepsilon_{\mathrm{des}}\), and \(|s(\mu) - s_N (\mu)| \leq \varepsilon^s_{\mathrm{des}} , \forall \mu \in D^{\mu}\).

  • Stability (and passivity) is preserved.

  • The procedure is computationally stable and efficient.

Motivation

Generalized Inverse Problem

  • Given PDE(\(\mu\)) constraints, find value(s) of parameter \(\mu\) which:

    • (OPT) minimizes (or maximizes) some functional;

    • (EST) agrees with measurements;

    • (CON) makes the system behave in a desired manner;

    • or some combination of the above

  • Full solution computationally very expensive due to a repeated evaluation for many different values of \(\mu\)

  • Goal: real-time and/or many-query evaluation of the input-output relationship \(\mu \rightarrow s(\mu)\)

Methodologies

Methodologies

  • Reduced Basis Methods

  • Proper Orthogonal Decomposition

  • Balanced Truncation

  • Krylov Subspace Methods

  • Proper Generalized Decomposition

  • Modal Decomposition

  • Physical model reduction

  • …​

Disclaimer Model Order Reduction Techniques

  • do not replace your favorite discretization scheme (e.g. FE, FV, FD), but instead are built upon and supplement these schemes.

  • are not useful if you are interested in a single high-fidelity solution of your high-dimensional problem, but rather if you are interested in the many-query or real-time context.

Some Examples

Cooling of electronic components

Thermal Testcase Description


Overview

  • Heat-Transfer with conduction and convection possibly coupled with Navier-Stokes

  • Simple but complex enough to contain all difficulties to test the certified reduced basis

    • non symmetric, non compliant

    • steady/unsteady

    • physical and geometrical parameters

    • coupled models

  • Testcase can easily be made more complex

Aerothermal flows

Airbus Use-Case


Some Scientific Issues

  • Turbulence

  • Mixed forced and natural convection

  • Boundary conditions coupled to an ECS (Environment Control System)

  • Error prediction (Reduced Basis)

Modeling of high field magnets


High Field Magnet Modeling

Laboratoire National des Champs Magnétiques Intenses

Large scale user facility in France

  • High magnetic field : from 24 T

  • Grenoble : continuous magnetic field (36 T)

  • Toulouse : pulsed magnetic field (90 T)


Application domains

  • Magnetoscience

  • Solid state physics

  • Chemistry

  • Biochemistry


Magnetic Field

  • Earth : \(5.8 \cdot 10^{-4} T\)

  • Superconductors : \(24 T\)

  • Pulsed field : \(90 T\)

Access

  • Call for Magnet Time : \(2 ~\times\) per year

  • \(\approx ~140\) projects per year


Why use Reduced Basis Methods ?

Challenges

  • Modeling : multi-physics non-linear models, complex geometries, genericity

  • Account for uncertainties : uncertainty quantification, sensitivity analysis

  • Optimization : shape of magnets, robustness of design


Objective 1 : Fast

  • Complex geometries

    • Large number of dofs

  • Uncertainty quantification

    • Large number of runs


Objective 2 : Reliable

  • Field quality

  • Design optimization

    • Certified bounds

    • Reach material limits

Summary

Summary Many problems in computational engineering require many-query or real-time evaluations of PDE(\(\mu\))-induced input-output relationships.

Model order reduction techniques enable certified, real-time calculation of outputs of PDE(\(\mu\)) for parameter estimation, optimization, and control.

Reduced Basis Method

Problem Statement

The Reduced Basis Method

  • Comparison to other model reduction techniques:

    • Parametrized problems(material, constants, geometry,…​)

    • A posteriori error estimation

    • Offline-online decomposition

    • Greedy algorithm (to construct reduced basis space)

  • Motivation:

    • Efficient solution of optimization and optimal control problems governed by parametrized PDEs.

Problem Statement

The Main Idea - Key Observation

General Problem Statement Given a system \(\Sigma_\mathcal{N}\) of large dimension \(\mathcal{N}\), \[M(\mu)\,\dot{u}(\mu,t) + A(\mu)\,u(\mu,t) = B(\mu)\,g(t), \qquad s(\mu,t) = L^T(\mu)\,u(\mu,t),\] where

  • \(u(\mu, t) \in \mathbb{R}^{\mathcal{N}}\), the state

  • \(s(\mu, t)\), the outputs of interest

  • \(g(t)\), the forcing or control inputs

are functions of

  • \(\mu \in D\), the parameter inputs

  • \(t\), time

and the matrices \(M\), \(A\), \(B\), and \(L\) also depend on \(\mu\) \(\ldots\)

General Problem Statement \(\ldots\) construct a reduced order system \(\Sigma_N\) of dimension \(N << \mathcal{N}\),

\[M_N(\mu)\,\dot{u}_N(\mu,t) + A_N(\mu)\,u_N(\mu,t) = B_N(\mu)\,g(t), \qquad s_N(\mu,t) = L_N^T(\mu)\,u_N(\mu,t),\]

where \(u_N(\mu) \in \mathbb{R}^N\) is the reduced state.

Special case We start by considering \(\dot{u} = 0\)

Full Model

\[\begin{aligned} A(\mu)\, u(\mu) &= F(\mu)\\ s(\mu) &= L^T(\mu)\, u(\mu) \end{aligned}\]

Reduced Model

\[\begin{aligned} A_N(\mu)\, u_N(\mu) &= F_N(\mu)\\ s_N(\mu) &= L^T_N(\mu)\, u_N(\mu) \end{aligned}\]

Key Ingredients

Approximation

  • Take ``snapshots'' at different \(\mu\)-values: \(u(\mu_i),\ i = 1 \ldots N\), and let \[Z_N=[\xi_1,\ldots,\xi_N] \in \mathbb{R}^{\mathcal{N}\times N}\] where the basis/test functions \(\xi_i\) ``\(=\)'' \(u(\mu_i)\) are orthonormalized

  • For any new \(\mu\), approximate \(u\) by a linear combination of the \(\xi_i\), \[u(\mu) \approx \sum_{i=1}^N u_{N,i}(\mu) \xi_i = Z_N u_N(\mu),\] determined by Galerkin projection, i.e., \(Z_N^T A(\mu) Z_N\, u_N(\mu) = Z_N^T F(\mu)\)

A posteriori error estimation

  • Assume well-posedness; \(A(\mu)\) pos. def. with minimum eigenvalue \(\alpha_a :=\lambda_1 >0\), where \(A \xi=\lambda X \xi\) and \(X\) corresponds to the \(X\)-inner product, \((v, v)_X = \|v\|_X^2\)

  • Let \(\underbrace{e_N = u - Z_N\ u_N}_{\text{error}}\), and \(\underbrace{r = F - A\ Z_N\ u_N}_{\text{residual}}, \forall \mu \in D\), so that \[A(\mu) e_N (\mu) = r(\mu)\]

  • Then for any \(\mu \in D\), \[\|u(\mu)- Z_N u_N(\mu) \|_X \leq \frac{\|r(\mu)\|_{X'}}{\alpha_{LB}(\mu)} =: \Delta_N(\mu)\] \[|s(\mu)-s_N(\mu)| \leq \|L\|_{X'} \Delta_N(\mu) =: \Delta^s_N(\mu)\] where \(\alpha_{LB}(\mu)\) is a lower bound to \(\alpha_a(\mu)\), and \(\|r\|^2_{X'}=r^T X^{-1} r\).
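The two ingredients above condense into a few lines of dense linear algebra. The following sketch (plain numpy, illustrative names only, not the Feel++ API) assumes the full-order operator \(A\), load \(F\), inner-product matrix \(X\) and an orthonormalized snapshot matrix \(Z_N\) are available as dense arrays, together with a coercivity lower bound \(\alpha_{LB}(\mu)\).

```python
# Minimal sketch (dense toy problem, not Feel++ API): Galerkin projection on a
# snapshot basis and the residual-based error bound of the previous slides.
import numpy as np

def rb_solve_and_bound(A, F, X, Z, alpha_LB):
    """A (NxN), F (N,): full-order system; X: X-inner-product matrix;
    Z: orthonormalized snapshot matrix (N x N_rb); alpha_LB: lower bound of alpha_a(mu)."""
    AN = Z.T @ A @ Z                      # reduced operator  A_N = Z^T A Z
    FN = Z.T @ F                          # reduced load      F_N = Z^T F
    uN = np.linalg.solve(AN, FN)          # reduced coefficients u_N(mu)
    r = F - A @ (Z @ uN)                  # residual r = F - A Z_N u_N
    dual_norm = np.sqrt(r @ np.linalg.solve(X, r))   # ||r||_{X'}^2 = r^T X^{-1} r
    return uN, dual_norm / alpha_LB       # Delta_N(mu) >= ||u - Z_N u_N||_X
```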

Offline-Online decomposition

Greedy Algorithm

Summary

Reduced Basis Opportunities: Computational Opportunities

  • We restrict our attention to the typically smooth and low-dimensional manifold induced by the parametric dependence.
    \(\Rightarrow\) Dimension reduction

  • We accept greatly increased offline cost in exchange for greatly decreased online cost.
    \(\Rightarrow\) Real-time and/or many-query context

Reduced Basis Relevance Real-Time Context (control, \(\ldots\)): \[\mu \rightarrow s_N(\mu),\ \Delta^s_N(\mu): \qquad t_0\ (\text{input}) \ \longrightarrow\ t_0+\delta t_{\mathrm{comp}}\ (\text{response})\] Many-Query Context (design, \(\ldots\)): \[\mu_j \rightarrow s_N(\mu_j),\ \Delta^s_N(\mu_j),\quad j=1\ldots J: \qquad t_0 \ \longrightarrow\ t_0+\delta t_{\mathrm{comp}}\, J\quad (J \rightarrow \infty)\] \(\Rightarrow\) \(\delta t_{\mathrm{comp}}\) must be very small (real-time) and/or the marginal cost per query must be very low (many-query).

Reduced Basis Challenges

  • A Posteriori error estimation

    • Rigorous error bounds for outputs of interest

    • Lower bounds to the stability ``constants''

  • Offline-online computational procedures

    • Full decoupling of finite element and reduced basis spaces

    • A posteriori error estimation

    • Nonaffine and nonlinear problems

  • Effective sampling strategies

    • High parameter dimensions

Reduced Basis Outline

  1. Affine Elliptic Problems

    • (non)symmetric, (non)compliant, (non)coercive

    • (Convection)-diffusion, linear elasticity, Helmholtz

  2. Affine Parabolic Problems

    • (Convection)-diffusion equation

  3. Nonaffine and Nonlinear Problems

    • Nonaffine parameter dependence, nonpolynomial nonlinearities

  4. Reduced Basis (RB) Method for Fluid Flow

    • Saddle-Point Problems (Stokes)

    • Navier-Stokes Equations

  5. Applications

    • Parameter Optimization and Estimation (Inverse Problems)

    • Optimal Control

===

Linear Compliant Elliptic Problems

Notations, Definitions, Problem Statement, Example

Inner Product Spaces

Definitions

A space \(Z\) is a linear or vector space if, for any \(\alpha \in \mathbb{R}\) , \(w,v \in Z\), \(\alpha w+v \in Z\)

Note: \(\mathbb{R}\) denotes the real numbers, and \(\mathbb{N}\) and \(\mathbb{C}\) shall denote the natural and complex numbers, respectively.

An inner product space (or Hilbert space) \(Z\) is a linear space equipped with

  • an inner product \((w,v)_Z, \forall w,v \in Z\),and

  • induced norm \(\|w\|_Z = \sqrt{(w,w)_Z}, \forall w \in Z\).

Inner Product

An inner product \(w,v \in Z \rightarrow (w,v)_Z \in \mathbb{R}\) has to satisfy

  • Bilinearity

    \((\alpha w+v,z)_Z =\alpha(w,z)_Z +(v,z)_Z, \quad \forall \alpha\in \mathbb{R},\ w,v,z\in Z\)

    \((z,\alpha w+v)_Z =\alpha(z,w)_Z +(z,v)_Z, \quad \forall \alpha\in \mathbb{R},\ w,v,z\in Z\)

  • Symmetry

    \((w,v)_Z = (v,w)_Z, \forall w,v \in Z\)

  • Positivity

    \((w,w)_Z >0, \forall w \in Z, w \neq 0\)

    +

    \((w,w)_Z =0, \text{ only if } w=0\)

Cauchy-Schwarz inequality: \(\[(w,v)_Z \leq \|w\|_Z\|v\|_Z,\forall w, v \in Z.]\)

Norm

A norm is a map \(\| \cdot \| : Z \rightarrow \mathbb{R}\) such that

  • \(\|w\|_Z > 0\quad \forall w\in Z,w\neq 0,\)

  • \(\|\alpha w\|_Z = |\alpha |\|w\|_Z\quad \forall \alpha \in \mathbb{R},\ \forall w\in Z, \)

  • \(\|w+v\|_Z \leq \|w\|_Z +\|v\|_Z\quad \forall w\in Z,\ \forall v\in Z.\)

Equivalence of norms \(\| \cdot \|_Z\) and \(\| \cdot \|_Y\) : there exist positive constants \(C_1\), \(C_2\) such that \(\[C_1\|v\|_Z \leq \|v\|_Y \leq C_2\|v\|_Z .]\)

Cartesian Product Space Given two inner product spaces \(Z_1\) and \(Z_2\), we define \[Z = Z_1 \times Z_2 \equiv \{(w_1,w_2)\ | \ w_1 \in Z_1,\ w_2 \in Z_2\}\] and given \(w = (w_1,w_2) \in Z, v = (v_1,v_2) \in Z\), we define \[w + v \equiv (w_1 + v_1, w_2 + v_2).\] We also equip \(Z\) with the inner product \[(w,v)_Z =(w_1,v_1)_{Z_1} +(w_2,v_2)_{Z_2}\] and induced norm \[\|w\|_Z = \sqrt{(w,w)_Z}.\]

Linear and Bilinear Forms

Linear Forms

A functional \(g : Z \rightarrow \mathbb{R}\) is a linear functional if, for any \(\alpha \in \mathbb{R}, w, v \in Z\) \(\[g(\alpha w + v) = \alpha g(w) + g(v)]\)

A linear form is bounded, or continuous, over \(Z\) if \(\[|g(v)| \leq C \|v\|_Z, \forall v \in Z,]\) for some finite real constant \(C\).

Dual Spaces

Given \(Z\), we define the dual space \(Z'\) as the space of all bounded linear functionals over \(Z\). We associate to \(Z'\) the dual norm \(\[\|g\|_{Z'} = \sup_{v \in Z} \frac{g(v)}{\|v\|_Z} , \forall g \in Z'.]\)

For any \(g \in Z'\), there exists a unique \(w_g \in Z\) such that \(\[(w_g, v)_Z =g(v), \forall v \in Z.]\)

It directly follows that \(\[\|g\|_{Z'} = \|w_g\|_Z.]\)

Bilinear Forms

A form \(b:Z_1 \times Z_2 \rightarrow \mathbb{R} \) is bilinear if, for any \(\alpha \in \mathbb{R}\),

  • \(b(\alpha w + v,z) = \alpha b(w,z) + b(v,z), \forall w,v \in Z_1, z \in Z_2 \)

  • \(b(z,\alpha w + v) =\alpha b(z,w) + b(z,v), \forall z \in Z_1, w,v \in Z_2\)

The bilinear form \(b : Z \times Z \rightarrow \mathbb{R}\) is

  • symmetric, if \(\[b(w,v) = b(v,w),]\)

  • skew-symmetric, if \(\[b(w,v) = -b(v,w),]\)

  • positive definite, if \(\[b(v,v) \geq 0\text{ , with equality only for } v = 0.]\)

Bilinear Forms The bilinear form \(b : Z \times Z \rightarrow \mathbb{R}\) is positive semidefinite, if \(\[b(v,v) \geq 0, \forall v \in Z.]\) We also define, for a general bilinear form \(b : Z \times Z \rightarrow \mathbb{R}\), the

  • symmetric part as \(\[b_S(w,v) = 1/2 (b(w,v) + b(v,w)), \forall w,v \in Z;]\)

  • the skew-symmetric part as \(\[b_{SS}(w,v) = 1/2 (b(w,v) - b(v,w)), \forall w,v \in Z.]\)

Bilinear Forms The bilinear form \(b : Z \times Z \rightarrow \mathbb{R}\) is

  • coercive over \(Z\) if \[\alpha \equiv \inf_{w\in Z} \frac{b(w,w)}{\|w\|^2_Z}\] is positive;

  • continuous over \(Z\) if \[\gamma \equiv \sup_{w\in Z} \sup_{v\in Z} \frac{b(w, v)}{\|w\|_Z \|v\|_Z}\] is finite.

Parametric Linear and Bilinear Forms We introduce

  • \(D \subset \mathbb{R}^P\) : closed bounded parameter domain;

  • \(\mu = (\mu_1,\ldots,\mu_P) \in D\) : parameter vector.

We shall say that

  • \(g:Z\times D\rightarrow \mathbb{R}\) is a parametric linear form if, for all \(\mu \in D\), \(g( \cdot ; \mu) : Z \rightarrow \mathbb{R}\) is a linear form;

  • \(b:Z\times Z\times D\rightarrow \mathbb{R}\) is a parametric bilinear form if, for all \(\mu \in D\), \(b( \cdot , \cdot ; \mu) : Z \times Z \rightarrow \mathbb{R}\) is a bilinear form.

Concepts of symmetry,\(\ldots\) directly extend to the parametric case.

Parametric Linear and Bilinear Forms The parametric bilinear form \(b : Z \times Z \times D \rightarrow \mathbb{R}\) is

  • coercive over Z if \(\[\alpha(\mu) \equiv \inf_{w \in Z} \frac{b(w,w;\mu)}{\|w\|^2_Z}]\) is positive for all \(\mu \in D\);

  • continuous over \(Z\) if \(\[\gamma(\mu)\equiv \sup_{w \in Z} \sup_{v \in Z} \frac{b(w, v; \mu)}{\|w\|_Z\|v\|_Z}]\) is finite for all \(\mu \in D.\)

We also define \(\[\begin{align} (0 <) \alpha _0 & \equiv \min_{\mu \in D} \alpha (\mu)\\ \gamma_0 & \equiv \max_{\mu \in D} \gamma (\mu) (< \infty ). \end{align}]\)

Coercivity EigenProblem We have \(\[\alpha (\mu) \equiv \inf_{w \in Z} \frac{b_S(w,w;\mu)}{\|w\|^2_Z}]\)

Associated generalized eigenproblem:

Given \(\mu \in D\), find \((\chi^{co},\nu^{co})_i(\mu) \in Z \times \mathbb{R}, 1 \leq i \leq \dim(Z),\) such that \(\[b_S(\chi_i^{co}(\mu), v; \mu) = \nu_i^{co}(\mu)(\chi_i^{co}(\mu), v)_Z]\) and \(\[\|\chi_i^{co}(\mu)\|_Z=1]\) Let \(\nu_1^{co}(\mu) \leq \nu_2^{co}(\mu) \leq \ldots \leq \nu_{\dim{Z}}^{co} (\mu)\) and b coercive, then \(\[\alpha (\mu) = \nu_1^{co}(\mu) > 0.]\)
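In a discrete setting the eigenproblem above is a small generalized eigenvalue problem. A sketch (assuming dense symmetric matrices \(B_S\) for \(b_S(\cdot,\cdot;\mu)\) and \(X\) for the \(X\)-inner product; names are illustrative):

```python
# Sketch: alpha(mu) as the smallest eigenvalue of  B_S(mu) chi = nu X chi.
import numpy as np
from scipy.linalg import eigh

def coercivity_constant(B_S, X):
    """B_S: matrix of the symmetric part b_S(.,.;mu); X: SPD X-inner-product matrix."""
    nu = eigh(B_S, X, eigvals_only=True)   # generalized eigenvalues, ascending order
    return nu[0]                           # alpha(mu) = nu_1^co(mu)
```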

Parameter affine Dependence We assume \(\[g(v;\mu)= \sum_{q=1}^{Q_g} \theta^q_g(\mu)g^q(v), \forall v \in Z,]\) where, for \(1 \leq q \leq Q_g\) (finite),

  • functions \(\theta^q_g : D \rightarrow \mathbb{R}\),

  • forms \(g^q : Z \rightarrow \mathbb{R};\)

and \(\[b(w,v;\mu)= \sum_{q=1}^{Q_b} \theta^q_b(\mu) b^q(w,v),\quad \forall w,v \in Z,]\) where, for \(1 \leq q \leq Q_b\) (finite),

  • functions \(\theta^q_b : D \rightarrow \mathbb{R}\),

  • forms \(b^q : Z \times Z \rightarrow \mathbb{R}\).

Parametric Coercivity

The coercive bilinear form \(b : Z \times Z \times D \rightarrow \mathbb{R}\) \[b(w,v;\mu)= \sum_{q=1}^{Q_b} \theta^q_b(\mu) b^q(w,v),\quad \forall w,v \in Z,\] is parametrically coercive if \(c\equiv b_S\) admits an affine decomposition \[c(w,v;\mu)= \sum_{q=1}^{Q_c} \theta^q_c(\mu) c^q(w,v),\quad \forall w,v \in Z,\] that satisfies

  • \(\theta^q_c(\mu)>0, \forall \mu \in D, 1\leq q\leq Q_c,\)

  • \(c^q(v,v)\geq 0,\forall v \in Z, 1\leq q\leq Q_c.\)

Classes of Functions

Scalar and Vector Fields We consider (real)

  • scalar-valued field variables (e.g., temperature, pressure) \(w : \Omega \rightarrow \mathbb{R}^{d=1}\)

  • vector-valued field variables (e.g., displacement, velocity) \(\mathbf{w} : \Omega \rightarrow \mathbb{R}^d\) , where \(\mathbf{w}(x) = (w_1(x), \ldots , w_d (x));\)

and

  • \(\Omega \subset \mathbb{R}^d\), \(d=1, 2, \text{ or } 3\), is an open bounded domain

  • \(x = (x_1,...,x_d) \in \Omega \);

  • \(\Omega\) has Lipschitz continuous boundary \(\partial \Omega\) ; and

  • we define the canonical basis vectors as \(e_i, 1 \leq i \leq d.\)

Multi-Index Derivative Given a scalar (or one component of a vector)

  • field \(w : \Omega \rightarrow \mathbb{R}\), spatial derivative \[(D^\sigma w)(x) = \frac{\partial^{|\sigma|} w}{\partial x_1^{\sigma_1} \cdots \partial x_d^{\sigma_d}}\]

  • field \(w : \Omega \times D \rightarrow \mathbb{R}\), sensitivity derivative \[(D_\sigma w)(x;\mu) = \frac{\partial^{|\sigma|} w}{\partial \mu_1^{\sigma_1} \cdots \partial \mu_P^{\sigma_P}}\]

where

  • \(\sigma = (\sigma_1,\ldots,\sigma_d)\), \(\sigma_i, 1 \leq i \leq d\), non-negative integers;

  • \(|\sigma| = \sum_{j=1}^{d} \sigma_j\) is the order of the derivative; and

  • \(I^{d,n}\) is the set of all index vectors \(\sigma \in \mathbb{N}^d_0\) such that \(|\sigma | \leq n.\)

Function Spaces

Let \(m \in \mathbb{N}_0\), the space \(C^m(\Omega )\) is defined as \[C^m(\Omega )\equiv \{w | D^\sigma w \in C^0(\Omega ), \forall \sigma \in I^{d,m}\},\] and \(C^0(\Omega )\) is the space of continuous functions over \(\Omega \subset \mathbb{R}^d\).

We denote by \(C^\infty (\Omega )\) the space of functions \(w\) for which \(D^\sigma w\) exists and is continuous for any order \(|\sigma |.\)

Lebesgue Spaces

We define, for \(1 \leq p < \infty\) , the Lebesgue space \(L^p(\Omega )\) as \(\[L^p(\Omega )\equiv \{ w \text{ measurable } |\quad \|w\|_{L^p(\Omega )} < \infty\}]\) where

  • \(\|w\|_{L^p(\Omega )} \equiv \left( \int_\Omega |w|^pdx\right)^{1/p} , 1\leq p<\infty,\)

  • \(\|w\|_{L^\infty (\Omega )} \equiv \mathrm{ess} \sup_{x\in\Omega} |w(x)|, p = \infty .\)

Hilbert Space

Let \(m \in \mathbb{N}_0\), the space \(H^m(\Omega )\) is then defined as \(\[H^m(\Omega )\equiv \{w |\quad D^\sigma w \in L^2(\Omega ), \forall \sigma \in I^{d,m}\},]\) with associated inner product \(\[(w,v)_{H^m(\Omega )}\equiv \sum_{\sigma \in I^{d,m}}\int_\Omega D^\sigma w D^\sigma v dx,]\) and induced norm \(\[\|w\|_{H^m(\Omega )} \equiv \sqrt{(w, w)_{H^m(\Omega )}}.]\)

Special (most important) cases Since we only consider second-order PDEs, we require mostly

  • \(L^2(\Omega ) = H^0(\Omega )\): Lebesgue Space \(p = 2\) \(\[(w,v)_{L^2(\Omega)} = \int_\Omega w v \quad \forall w, v \in L^2(\Omega )]\) \(\[\|w\|_{L^2(\Omega)} = \sqrt{(w,w)_{L^2(\Omega)}} \forall w \in L^2(\Omega ),]\) \(\Rightarrow\) Space of all functions \(w : \Omega \rightarrow \mathbb{R}\) square-integrable over \(\Omega\) .

Special (most important) cases Since we only consider second-order PDEs, we require mostly

  • \(H^1(\Omega)\) \[H^1(\Omega ) \equiv \{w \in L^2(\Omega )\ |\ \frac{\partial w}{ \partial x_i} \in L^2(\Omega ),\ 1\leq i\leq d\}\] with inner product and induced norm \[(w,v)_{H^1(\Omega )} \equiv \int_\Omega \nabla w \cdot \nabla v + wv\quad \forall w,v \in H^1(\Omega ),\] \[\|w\|_{H^1(\Omega )} \equiv \sqrt{(w,w)_{H^1(\Omega)}}\quad \forall w \in H^1(\Omega ),\] and seminorm \[|w|_{H^1(\Omega )} \equiv \sqrt{\int_\Omega \nabla w \cdot \nabla w},\quad \forall w \in H^1(\Omega ).\]

Special (most important) cases Since we only consider second-order PDEs, we require mostly

  • the space \(H_0^1(\Omega )\)

    \(\[H^1_0(\Omega) \equiv \{v \in H^1(\Omega )|v_{|\partial \Omega}=0 \}]\) where \(v = 0\) on the boundary \(\partial \Omega .\)

    Note that, for any \(v \in H_0^1(\Omega )\), we have \[C_{PF} \|v\|_{H^1(\Omega )} \leq |v|_{H^1(\Omega )} \leq \|v\|_{H^1(\Omega )},\] and thus \[|v|_{H^1(\Omega)} = 0 \, \Rightarrow v = 0\] \(\Rightarrow |v|_{H^1(\Omega )}\) constitutes a norm for \(v \in H_0^1(\Omega ).\)

Projection

Given Hilbert Spaces \(Y\) and \(Z \subset Y\) , the projection, \(\Pi : Y \rightarrow Z\), of \(y \in Y\) onto \(Z\) is defined as \(\[(\Pi y,v)_Y = (y,v)_Y , \forall v \in Z]\)

Properties:

  • Orthogonality: \((y - \Pi y, v)_Y = 0\)

  • Idempotence: \(\Pi (\Pi y) = \Pi y\)

  • Best Approximation \(\|y-\Pi y\|^2_Y = \inf_{v \in Z} \|y-v\|^2_Y, \, \)

Given an orthonormal basis \(\{ \varphi_i\}_{i=1}^{N = \dim(Z)}\), then \[\Pi y= \sum_{i=1}^{\dim(Z)} ( \varphi_i,y)_Y\, \varphi_i, \quad \forall y \in Y\]
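As an illustration, the orthonormal-basis projection formula translates directly into a few lines (a sketch with an assumed SPD matrix representing \((\cdot,\cdot)_Y\); names are illustrative):

```python
# Sketch of  Pi y = sum_i (phi_i, y)_Y phi_i  for a Y-orthonormal basis.
import numpy as np

def project(y, Phi, Y):
    """Phi: columns phi_i with Phi.T @ Y @ Phi = I; Y: SPD inner-product matrix."""
    coeffs = Phi.T @ (Y @ y)    # (phi_i, y)_Y
    return Phi @ coeffs         # Pi y
```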

Notations

Notations and Definitions

Notations

  • \((\cdot)^\mathcal{N}\) finite element approximation

  • \((\cdot)_N\) reduced basis approximation

  • \(\mu\) input parameter (physical, geometrical,…​)

  • \(s(t;\mu) \approx s^\mathcal{N}(t;\mu)\approx s_N(t;\mu) \) output approximations

  • \(\mu \rightarrow s(t;\mu)\) input-output relationship

Definitions

  • \(\Omega \subset \mathbb{R}^d\) spatial domain

  • \(\mu\) \(P\)-tuple

  • \(\mathcal{D}^\mu \subset \mathbb{R}^P \) parameter space

  • \(s\) output, \(\ell, f\) functionals

  • \(u\) field variable

  • \(X\) function space \(H^1_0(\Omega)^\nu \subset X \subset H^1(\Omega)^\nu\) (\(\nu=1\) for simplicity)
    \((\cdot,\cdot)_X\) scalar product and \(\|\cdot\|_X\) norm associated to \(X\)

Problem Statement

Problem Statement

The formal problem statement reads: Given \(\mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\), evaluate \(\[s(\mu) = \ell(u(\mu);\mu)]\) where \(u(x;\mu) \in X\) satisfies \(\[a(u(\mu), v; \mu ) = f(v; \mu), \quad \forall v \in X]\)

[rem:problem-statement] We consider first the case of linear affine compliant elliptic problem and then complexify

Hypothesis: Reference Geometry In these notes \(\Omega\) is considered parameter independent

  • To apply the reduced basis methodology exposed later, we need to setup a reference spatial domain \(\Omega_{\mathrm{ref}}\)

  • We introduce an affine mapping \(\mathcal{T}(\cdot;\mu) : \Omega (\equiv \Omega_{\mathrm{ref}} = \Omega_o(\bar{\mu})) \rightarrow \Omega_o(\mu)\) such that \[a(u,v;\mu) = a_o(u_o \circ \mathcal{T}_\mu,v_o \circ \mathcal{T}_\mu;\mu)\]

Hypothesis: Continuity, stability, compliance We consider the following \(\mu\)-PDE:

  • \(a(\cdot,\cdot;\mu)\) bilinear, symmetric, continuous, coercive (\(\forall \mu \in \mathcal{D}^\mu\))

  • \(f(\cdot;\mu), \ell(\cdot;\mu)\) linear, bounded (\(\forall \mu \in \mathcal{D}^\mu\))

and in particular, to start, the compliant case

  • \(a\) symmetric

  • \(f(\cdot;\mu) = \ell(\cdot;\mu)\quad \forall \mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\)

Hypothesis: Affine dependence in the parameter We require for the RB methodology \[a(u,v;\mu) = \sum_{q=1}^{Q_a} \Theta^q_a(\mu)\ a^q( u, v ),\] where, for \(q=1,...,Q_a\), \(\Theta^q_a : \mathcal{D}^\mu \rightarrow \mathbb{R}\) are \(\mu\)-dependent functions and \(a^q : X \times X \rightarrow \mathbb{R}\) are \(\mu\)-independent bilinear forms.

  • similar decomposition is required for \(\ell(v;\mu)\) and \(f(v;\mu)\), and denote \(Q_\ell\) and \(Q_f\) the corresponding number of terms

  • applicable to a large class of problems including geometric variations

  • can be relaxed (see non affine/non linear problems)

Inner Products and Norms

  • \(\mu\)-dependent energy inner product and associated norm \[\begin{aligned} (((w,v)))_\mu &= a(w,v;\mu) &\ \forall w,v \in X\\ |||v|||_\mu &= \sqrt{a(v,v;\mu)} &\ \forall v \in X \end{aligned}\]

  • \(X\)-inner product and associated norm (\(\mu\)-independent) \[\begin{aligned} (w,v)_X &= (((w,v)))_{\bar{\mu}} \ (\equiv a(w,v;\bar{\mu})) &\ \forall w,v \in X\\ ||v||_X &= |||v|||_{\bar{\mu}} \ (\equiv \sqrt{a(v,v;\bar{\mu})}) & \ \forall v \in X \end{aligned}\]

Coercivity and Continuity Constants

We assume \(a\) coercive and continuous.

Recall that

  • coercivity constant \[(0 < ) \alpha(\mu) \equiv \inf_{v\in X}\frac{a(v,v;\mu)}{||v||^2_X}\]

  • continuity constant \[\gamma(\mu) \equiv \sup_{w\in X} \sup_{v\in X}\frac{a(w,v;\mu)}{\|w\|_X \|v\|_X} ( < \infty)\]

Example

Example Thermal Block: Heat Transfer

Figure: \(3\times 3\) thermal block \(\Omega\) with heat flux imposed on \(\Gamma_0\) (bottom), zero Dirichlet condition on \(\Gamma_{\mathrm{top}}\), insulated sides \(\Gamma_{\mathrm{sides}}\), and subdomain conductivities \(\mu_1,\ldots,\mu_P\).

Example Thermal Block: Problem statement Given \(\mu = (\mu_1,...,\mu_P) \in \mathcal{D}^\mu \equiv [\mu^{\text{min}},\mu^{\text{max}}]^P\), evaluate (recall that \(\ell = f\))
\[s(\mu) = f(u(\mu))\] where \(u(\mu) \in X \equiv \{ v \in H^1(\Omega), v|_{\Gamma_{\text{top}}} = 0\}\) satisfies \[a(u(\mu), v; \mu) = f(v;\mu) \quad \forall v \in X.\] We have \(P = 8\) and, given \(1 < \mu_r < \infty\), we set \[\mu^{\mathrm{min}} = 1/\sqrt{\mu_r},\quad \mu^{\mathrm{max}} = \sqrt{\mu_r}\] such that \(\mu^{\mathrm{max}}/\mu^{\mathrm{min}}=\mu_r.\)

Example Thermal Block Recall we are in the compliant case \(\ell = f\), we have \(\[f(v) = \int_{\Gamma_{0}} v\quad \forall v \in X]\) and \(\[a(u,v;\mu) = \sum_{i=1}^{P} \mu_i \int_{\Omega_i} \nabla u \cdot \nabla v + 1 \int_{\Omega_{P+1}} \nabla u \cdot \nabla v \quad\forall u,\ v\ \in X]\) where \(\Omega = \cup_{i=1}^{P+1} \Omega_i\).

Example Thermal Block The inner product is defined as follows \[(u,v)_X = \sum_{i=1}^P \bar{\mu}_i \int_{\Omega_i}\nabla u \cdot \nabla v + 1 \int_{\Omega_{P+1}} \nabla u \cdot \nabla v\] where \(\bar{\mu}_i\) is a reference parameter value. We readily have that \(a\) is coercive and continuous:

\[0 < \frac{1}{\sqrt{\mu_r}} \leq \mathrm{min}(\mu_1/\bar{\mu}_1, \ldots, \mu_P/\bar{\mu}_P,1) \leq \alpha(\mu)\] and \[\gamma(\mu) \leq \mathrm{max}(\mu_1/\bar{\mu}_1, \ldots, \mu_P/\bar{\mu}_P,1) \leq \sqrt{\mu_r} < \infty\]

and the linear form \(f\) is bounded.

Example Thermal Block: Affine decomposition We write \[a(u,v;\mu) = \sum_{q=1}^{P+1} \Theta^q(\mu) a^q(u,v)\] with \[\begin{aligned} \Theta^1(\mu) = \mu_1 & & a^1(u,v) = \int_{\Omega_1} \nabla u \cdot \nabla v\\ & \vdots & \\ \Theta^P(\mu) = \mu_P & & a^P(u,v) = \int_{\Omega_P} \nabla u \cdot \nabla v\\ \Theta^{P+1}(\mu) = 1 & & a^{P+1}(u,v) = \int_{\Omega_{P+1}} \nabla u \cdot \nabla v \end{aligned}\]

Example Thermal Block

(figures omitted)

``Truth'' FEM Approximation

Let \(\mu \in \mathcal{D}^{\mu}\), evaluate \[\displaystyle s^{\mathcal{N}} (\mu) = \ell (u^{\mathcal{N}} (\mu)) \ ,\] where \(u^{\mathcal{N}} (\mu) \in X^{\mathcal{N}}\) satisfies \[a (u^{\mathcal{N}} (\mu), v; \mu ) = f (v), \quad \forall \: v \in X^{\mathcal{N}} \ .\] Here \(X^{\mathcal{N}} \subset X\) is a finite element approximation space of dimension \(\mathcal{N}\), equipped with an inner product \((\cdot,\cdot)_X\) and induced norm \(||\cdot||_X\). Denote also \(X'\) the dual space of \(X\) with associated norm \[\ell \in X',\qquad\displaystyle ||\ell||_{X'} \equiv \operatorname{sup}_{v\in X}\frac{\ell(v)}{||v||_X}.\]

Purpose

  • \(u(\mu)\) and \(u^{\mathcal{N}}(\mu)\) are indistinguishable in the sense that \[||u(\mu)-u^{\mathcal{N}}(\mu)||_X \leq \mathrm{tol}\quad\forall \mu \in \mathcal{D}^\mu\]

  • Build the reduced basis approximation using the FEM approximation

  • Measure the error associated with the reduced basis approximation relative to the FEM approximation

\(\Rightarrow u^{\mathcal{N}} (\mu)\) is a calculable surrogate for \(u(\mu).\) \[\|u(\mu)-u_N(\mu)\|_{X} \leq \underbrace{\|u(\mu)-u^\mathcal{N}(\mu)\|_{X}}_{\leq \varepsilon^\mathcal{N}} + \underbrace{\|u^\mathcal{N}(\mu)-u_N(\mu)\|_X}_{\varepsilon_{\mathrm{tol,min}}}\]

with \(\varepsilon^\mathcal{N} << \varepsilon_{\mathrm{tol,min}}\)

Reduced Basis Approximation

Reduced Basis Objectives For given accuracy \(\epsilon\), evaluate \[\mu \in \mathcal{D}^\mu \rightarrow s_N(\mu)\ (\approx s^\mathcal{N}(\mu)) \text{ and } \Delta^s_N(\mu)\] that achieve the desired accuracy. Reliability: \[|s^\mathcal{N}(\mu)-s_N(\mu)| \leq \Delta^s_N(\mu) \leq \epsilon\] for a computational time \(t_{\textsc{comp}}\). Efficiency:
\[t_{\textsc{comp}} \text{ independent of } \mathcal{N} \text{ as } \mathcal{N} \rightarrow \infty\] where \(t_{\textsc{comp}}\) is the time to perform the input-output evaluation \[\mu \rightarrow (s_N(\mu),\Delta^s_N(\mu))\]

Reduced Basis Objective : Rapid Convergence Build a rapidly convergent approximation \[s_N(\mu) \in \mathbb{R} \text{ and } u_N(\mu) \in X_N \subset X^{\mathcal{N}} \subset X\] such that for all \(\mu\), we have \[s_N(\mu) \rightarrow s^{\mathcal{N}}(\mu) \text{ and } u_N(\mu) \rightarrow u^{\mathcal{N}}(\mu)\] rapidly as \(N = \operatorname{dim} X_N\) grows (typically \(N = 10\)–\(200\)), and independently of \(\mathcal{N}\).

Reduced Basis Objective : Reliability and Sharpness Provide error bound \(\Delta_N(\mu)\) and \(\Delta^s_N(\mu)\) : \(\[1 (\text{rigor}) \leq \frac{\Delta_N(\mu)}{\|u^{\mathcal{N}}(\mu) - u_N(\mu)\|_X} \leq \ E (\text{sharpness})]\) and \(\[1 (\text{rigor}) \leq \frac{\Delta^s_N(\mu)}{|s^{\mathcal{N}}(\mu) - s_N(\mu)|} \leq \ E (\text{sharpness})]\) for all \(N = 1 \ldots N_{\textsc{max}}\) and \(\mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\).

Reduced Basis Objective : Efficiency Develop a two stage strategy : Offline/Online

Offline

very expensive pre-processing, we have typically that for a given \(\mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\) \(\[t^{\textsc{offline}}_{\textsc{comp}} >> t^{\mu\rightarrow s^{\mathcal{N}}(\mu)}_{\textsc{comp}}]\)

Online

very rapid convergent certified reduced basis input-output relationship \(\[t^{\textsc{online}}_{\textsc{comp}} \text{ independent of } \mathcal{N}]\)

[rem:rbobjectives-efficiency] \(\mathcal{N}\) may/should be chosen very large, since the online cost is independent of \(\mathcal{N}\).

Parametric Manifold \(\mathcal{M}^\mathcal{N}\) We assume

  • the form \(a\) is continuous and coercive (or inf-sup stable); and

  • affine \(\mu\)-dependence; and

  • the \(\theta^q(\mu), 1 \leq q \leq Q\), are smooth (i.e., \(\theta^q \in C^\infty(\mathcal{D})\));

then \[\mathcal{M}^\mathcal{N} = \{ u^\mathcal{N}(\mu),\, \mu \in \mathcal{D}\}\] is a smooth \(P\)-dimensional manifold in \(X^\mathcal{N}\), since \[\| D_\sigma u^\mathcal{N}(\mu) \|_X \leq C_\sigma, \quad \forall \mu \in \mathcal{D}, \text{ for any order } |\sigma| \in \mathbb{N}_0\]

Approximation opportunities: Low-Dimensional Manifold

(sketch omitted: the parametric manifold \(\mathcal{M}^\mathcal{N} = \{u^\mathcal{N}(\mu)\}\) is a low-dimensional subset of \(Y \equiv H^1(\Omega \subset \mathbb{R}^d)\))

Spaces & Bases We define the RB approximation space \(\[X_N =\operatorname*{span}\{\xi^n, 1 \leq n \leq N \},\, 1 \leq N \leq N_{max}]\) with linearly independent basis functions \(\[\xi^n \in X,\, 1 \leq n \leq N_{max}]\) We thus obtain \(\[X_N \subset X, \, {\ensuremath{{\operatorname{dim}}}\xspace}(X_N) = N,\, 1 \leq N \leq N_{max}]\) and \(\[X_1 \subset X_2 \subset \ldots X_{N_{max}} (\subset X)]\) We denote non-hierarchical RB spaces as \(X^{nh}_N, 1 \leq N \leq Nmax,\) \(\[X^{nh}_N \subset X, \, {\ensuremath{{\operatorname{dim}}}\xspace}(X^{nh}_N) = N,\, 1 \leq N \leq N_{max}]\)

Spaces & Bases - Lagrangian

Parameter Samples: \(\[\mbox{\alert{Sample}}: \ \ S_N = \{ \mu_1 \in \mathcal{D}^{\mu}, \ldots, \mu_N \in \mathcal{D}^{\mu} \}\quad 1 \leq N \leq N_{\mathrm{max}},]\) with \(\[S_1 \subset S_2 \ldots S_{N_\mathrm{max}-1} \subset S_{N_\mathrm{max}} \subset {\ensuremath{\mathcal{D}^\mu}\xspace}]\) Lagrangian Hierarchical Space \(\[W_N = {\rm span} \: \{ \xi^n \equiv \underbrace{ u (\mu^n)}_{u^{\mathcal{N}} (\mu^n)}, n = 1, \ldots, N \}.]\) with \(\[W_1 \subset W_2 \ldots \subset W_{N_\mathrm{max}} \subset X^{\mathcal{N}} \subset{X}]\)

Sampling strategies?

  • Equidistributed points in \(\mathcal{D}^\mu\)(curse of dimensionality)

  • Log-random distributed points in \(\mathcal{D}^\mu\)

  • See later for more efficient, adaptive strategies

Space & Bases - Taylor & Hermite

  • Taylor reduced basis spaces: \(\[W^{Taylor}_N = \operatorname*{span}\{D_\sigma u(\mu), \forall \sigma \in I^{P,N-1} \}, 1 \leq N \leq N_{max},]\) field variable sensitivity derivatives

  • Hermite reduced basis spaces: \(\[W^{Hermite}_N ``='' W^{Lagrangian}_N \cup W^{Taylor}_N]\) field variable sensitivity derivatives

Note: We will exclusively use Lagrangian RB spaces in this course.

Space & Bases - Orthogonal Basis Given \(\xi^n = u(\mu^n), 1 \leq n \leq N_{max}\) (Lagrange case) we construct the basis set \(\{\zeta^n, 1 \leq n \leq N_{max}\}\), from

\(\zeta^1 = \xi^1/\|\xi^1\|_X\);
for \(n = 2 : N_{max}\)
  \(z^n = \xi^n - \sum_{m=1}^{n-1} (\xi^n,\zeta^m)_X\ \zeta^m\);
  \(\zeta^n = z^n/\|z^n\|_X\);
end.

Note: \((\zeta^n,\zeta^m)_X = \delta_{nm}, 1 \leq n,m \leq N_{max}\)
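The loop above is plain Gram-Schmidt in the \(X\)-inner product; a direct numpy transcription (a sketch assuming the snapshots and the matrix of \((\cdot,\cdot)_X\) are available as dense arrays, illustrative names only) might read:

```python
# Sketch: Gram-Schmidt orthonormalization of the Lagrange snapshots xi^n = u(mu^n)
# with respect to the X-inner product matrix X.
import numpy as np

def gram_schmidt(snapshots, X):
    """snapshots: columns xi^1,...,xi^Nmax;  X: SPD matrix of (.,.)_X."""
    Nmax = snapshots.shape[1]
    Z = np.zeros_like(snapshots, dtype=float)
    Z[:, 0] = snapshots[:, 0] / np.sqrt(snapshots[:, 0] @ X @ snapshots[:, 0])
    for n in range(1, Nmax):
        z = snapshots[:, n].astype(float).copy()
        for m in range(n):                           # subtract (xi^n, zeta^m)_X zeta^m
            z -= (snapshots[:, n] @ X @ Z[:, m]) * Z[:, m]
        Z[:, n] = z / np.sqrt(z @ X @ z)             # normalize in the X-norm
    return Z                                         # (zeta^n, zeta^m)_X = delta_{nm}
```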

Space & Bases - Orthogonal Basis Given reduced basis space \[X_N = {\rm span} \: \{ \zeta^n, n = 1, \ldots, N \}, 1 \leq N \leq N_{max}\] we can express any \(w_N \in X_N\) as \[w_N = \sum_{n=1}^N {w_N}_n\, \zeta^n\] for unique \({w_N}_n \in \mathbb{R}, 1 \leq n \leq N\)

Reduced basis ``matrices'' \(Z_N \in \mathbb{R}^{\mathcal{N}\times N} , 1 \leq N \leq N_{max}\): \[Z_N=[\zeta^1,\zeta^2,...,\zeta^N], \quad 1 \leq N \leq N_{max}\] where, from orthogonality, \(Z^T_{N_{max}} X Z_{N_{max}} = I_{N_{max}},\) and \(I_M\) is the identity matrix in \(\mathbb{R}^{M\times M}\).

Formulation (Linear Compliant Case): a Galerkin method

Galerkin Projection Given \(\mu \in \mathcal{D}^{\mu} \) evaluate

\(\[\label{eq:1} s_N (\mu) = f(u_N (\mu);\mu)]\)

where \(u_N (\mu) \in X_N\) satisfies

\(\[a (u_N (\mu), v; \mu) = f(v;\mu), \ \forall \: v \in X_N \ .]\)

Formulation (Linear Compliant Case): Optimality For any \(\mu \in \mathcal{D}^\mu\), we have the following optimality results (thanks to Galerkin) \[\begin{aligned} |||u(\mu) - u_N(\mu)|||_{\mu} &= \inf_{v_N \in X_N} |||u(\mu) - v_N|||_\mu,\\ ||u(\mu) - u_N(\mu)||_X &\leq \sqrt{\frac{\gamma(\mu)}{\alpha(\mu)}} \inf_{v_N \in X_N} ||u(\mu) - v_N||_X,\\ \end{aligned}\] and \[\begin{aligned} s(\mu)-s_N(\mu) &= |||u(\mu) - u_N(\mu)|||^{2}_\mu,\\ &= \inf_{v_N \in X_N} |||u(\mu) - v_N|||^{2}_\mu, \end{aligned}\] and finally \[0 \leq s(\mu)-s_N(\mu) \leq \gamma(\mu)\inf_{v_N \in X_N} ||u(\mu) - v_N||^{2}_X\]

Formulation (Linear Compliant Case): offline-online decomposition Expand the RB approximation: \[u_N(\mu)\ =\ \sum_{j=1}^N\ {u_N}_j(\mu)\ \zeta_j\]

\(s_N(\mu)\) \(\[\label{eq:5} \displaystyle s_N(\mu) = \displaystyle\sum_{j=1}^N {u_N}_j(\mu)\ \left\{ \sum_{q=1}^{Q_f}\ \Theta^q_f(\mu)\ f^q(\zeta_j)\right\}]\) where \({u_N}_i(\mu), 1 \leq i \leq N\) satisfies \(\[\begin{aligned} \sum_{j=1}^N \left\{ \sum_{q=1}^{Q_a}\ \Theta^q_a(\mu)\ a^q( \zeta_i, \zeta_{j}) \right\} {u_N}_j(\mu) =& \sum_{q=1}^{Q_f}\ \Theta^q_f(\mu)\ f^q(\zeta_i),\notag \\ & 1 \leq i \leq N \label{eq:6}\\ \end{aligned}]\)

Formulation (Linear Compliant Case): matrix form

Solve \(\[\label{eq:10} \underline{A}_N (\mu) \: \underline{u}_N (\mu) = \underline{F}_N]\)

where \[(A_N)_{ij} (\mu) = \sum_{q=1}^{Q_a}\ \Theta^q_a(\mu)\ a^q( \zeta_i, \zeta_{j}), \quad 1 \leq i,j \leq N, \qquad (F_N)_{i} = \sum_{q=1}^{Q_f}\ \Theta^q_f(\mu)\ f^q (\zeta_i), \quad 1 \leq i \leq N.\]
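Online, only the \(\Theta\)-weighted sums and an \(N\times N\) solve remain once the \(\mu\)-independent reduced blocks \(a^q(\zeta_i,\zeta_j)\) and \(f^q(\zeta_i)\) have been formed offline. A sketch of this online stage (illustrative names; compliant case, so \(s_N = F_N^T u_N\)):

```python
# Sketch of the online stage: AN_q[q] = [a^q(zeta_i, zeta_j)] and FN_q[q] = [f^q(zeta_i)]
# are precomputed offline; theta_a, theta_f are Theta^q_a(mu), Theta^q_f(mu).
import numpy as np

def online_solve(theta_a, theta_f, AN_q, FN_q):
    AN = sum(t * Aq for t, Aq in zip(theta_a, AN_q))   # assemble A_N(mu): O(Q_a N^2)
    FN = sum(t * Fq for t, Fq in zip(theta_f, FN_q))   # assemble F_N(mu): O(Q_f N)
    uN = np.linalg.solve(AN, FN)                       # solve: O(N^3)
    sN = FN @ uN                                       # compliant output s_N(mu) = F_N^T u_N
    return uN, sN
```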

Formulation (Linear Compliant Case): complexity analysis

Offline: independent of \(\mu\)

  • Solve: \(N\) FEM systems of size \(\mathcal{N}\)

  • Form and store: \(f^q (\zeta_i)\)

  • Form and store: \(a^q( \zeta_i, \zeta_{j})\)

Online: independent of \(\mathcal{N}\)

  • Given a new \(\mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\)

  • Form and solve \(A_N(\mu)\) : \(O(Q N^2)\) and \(O(N^3)\)

  • Compute \(s_N(\mu)\)

Online: \(N \ll \mathcal{N}\) Online, we often realize orders of magnitude computational economies relative to the FEM in the real-time or many-query context.

Formulation (Linear Compliant Case): Condition number

[prop:1] Thanks to the orthonormalization of the basis functions, we have that the condition number of \(A_N(\mu)\) is bounded by the ratio \(\gamma(\mu)/\alpha(\mu)\).

  • Write the Rayleigh Quotient \(\[\frac{v_N^T A_N(\mu) v_N}{v_N^T v_N}, \quad \forall v_N \in \mathbb{R}^N]\)

  • Express \(\[v_N = \sum_{n=1}^N v_{N_n} \zeta^n]\)

  • Use coercivity, continuity and orthonormality.

A Posteriori Error Estimation

Motivations & Preliminaries

``Truth'' Problem statement

Let \(\mu \in \mathcal{D}^{\mu}\), evaluate \(\[\displaystyle s (\mu) = \ell (u (\mu)) \ ,]\) where \(u (\mu) \in X\) satisfies \(\[a (u (\mu), v; \mu ) = f (v), \quad \forall \: v \in X \ .]\) Assumptions

  • linearity, coercivity, continuity

  • affine parameter dependence; and

  • compliance: \(\ell=f\), \(a\) symmetric

Reduced Basis Sample and Space

Parameter Samples: \(\[\mbox{\alert{Sample}}: \ \ S_N = \{ \mu_1 \in \mathcal{D}^{\mu}, \ldots, \mu_N \in \mathcal{D}^{\mu} \}\quad 1 \leq N \leq N_{\mathrm{max}},]\) with \(\[S_1 \subset S_2 \ldots S_{N_\mathrm{max}-1} \subset S_{N_\mathrm{max}} \subset {\ensuremath{\mathcal{D}^\mu}\xspace}]\) Lagrangian Hierarchical Space \(\[W_N = {\rm span} \: \{ \xi^n \equiv \underbrace{ u (\mu^n)}_{u^{\mathcal{N}} (\mu^n)}, n = 1, \ldots, N \}.]\) with \(\[W_1 \subset W_2 \ldots \subset W_{N_\mathrm{max}} \subset X^{\mathcal{N}} \subset{X}]\)

Reduced basis approximation Given \(\mu \in \mathcal{D}^{\mu} \) evaluate

\(\[\label{eq:1} s_N (\mu) = f(u_N (\mu);\mu)]\)

where \(u_N (\mu) \in X_N\) satisfies

\(\[a (u_N (\mu), v; \mu) = f(v;\mu), \ \forall \: v \in X_N \ .]\) Recall:

  • RB Space: \(X_N=``\text{Gram-Schmidt}''(W_N)\)

  • \(u_N(\mu)\) unique (coercivity, continuity, linear independence of the basis)

Coercivity and Continuity Constants

We assume \(a\) coercive and continuous.

Recall that

  • coercivity constant \[(0 < ) \alpha(\mu) \equiv \inf_{v\in X}\frac{a(v,v;\mu)}{||v||^2_X}\]

  • continuity constant \[\gamma(\mu) \equiv \sup_{w\in X} \sup_{v\in X}\frac{a(w,v;\mu)}{\|w\|_X \|v\|_X} ( < \infty)\]

Affine dependence and parametric coercivity We assume that \(a: X\times X \times \mathcal{D} \rightarrow \mathbb{R}\) is

  • affine in the parameter: \[a(u,v;\mu) = \sum_{q=1}^{Q_a} \Theta^q_a(\mu)\ a^q( u, v ),\, \forall u,v \in X\]

  • and parametrically coercive: \[\Theta^q_a(\mu) > 0\quad \forall \mu \in \mathcal{D}, \, 1 \leq q \leq Q_a\] and \[a^q(u,v) \geq 0\quad \forall u,v \in X, \, 1 \leq q \leq Q_a\]

Inner Products and Norms

  • \(\mu\)-dependent energy inner product and associated norm \[\begin{aligned} (((w,v)))_\mu &= a(w,v;\mu) &\ \forall w,v \in X\\ |||v|||_\mu &= \sqrt{a(v,v;\mu)} &\ \forall v \in X \end{aligned}\]

  • \(X\)-inner product and associated norm (\(\mu\)-independent) \[\begin{aligned} (w,v)_X &= (((w,v)))_{\bar{\mu}} \ (\equiv a(w,v;\bar{\mu})) &\ \forall w,v \in X\\ ||v||_X &= |||v|||_{\bar{\mu}} \ (\equiv \sqrt{a(v,v;\bar{\mu})}) & \ \forall v \in X \end{aligned}\]

Bound theorems

Questions

  • What is the accuracy of \(u_N(\mu)\) and \(s_N(\mu)\) ? Online \[\begin{aligned} \|u(\mu)-u_N(\mu)\|_X &\leq \epsilon_{\mathrm{tol}}, \quad \forall \mu \in \mathcal{D}^\mu\\ |s(\mu)-s_N(\mu)| &\leq \epsilon^s_{\mathrm{tol}}, \quad \forall \mu \in \mathcal{D}^\mu \end{aligned}\]

  • What is the best value for \(N\) ? Offline/Online

    • \(N\) too large \(\Rightarrow\) computational inefficiency

    • \(N\) too small \(\Rightarrow\) unacceptable error

  • How should we build \(S_N\) ? is there an optimal construction ? Offline

    • Good approximation of the manifold \(\mathcal{M}\) through the RB space, but

    • need for well conditioned RB matrices

A Posteriori Error Estimation: Requirements We shall develop the following error bounds \({\ensuremath{\Delta_N(\mu)}\xspace}\) and \(\Delta^s_N(\mu)\) with the following properties

  • Rigor: for \(1 \leq N \leq N_{\mathrm{max}}\) \[\begin{aligned} \|u(\mu)-u_N(\mu)\|_X &\leq \Delta_N(\mu), \quad \forall \mu \in \mathcal{D}^\mu\\ |s(\mu)-s_N(\mu)| &\leq \Delta^s_N(\mu), \quad \forall \mu \in \mathcal{D}^\mu\end{aligned}\]

  • Sharpness: for \(1 \leq N \leq N_{\mathrm{max}}\) \[\begin{gathered} \frac{\Delta_N(\mu)}{\|u(\mu)-u_N(\mu)\|_X} \leq C, \quad \frac{\Delta^s_N(\mu)}{|s(\mu)-s_N(\mu)|} \leq C,\\ C\approx 1 \end{gathered}\]

  • Online cost depend only on \(Q\) and \(N\)

\(u_N(\mu)\) : Error equation and residual dual norm Given our RB approximation \(u_N(\mu)\), we have \[e(\mu) \equiv u(\mu) - u_N(\mu)\] that satisfies \[a( e(\mu), v; \mu ) \ = \ r( u_N(\mu), v; \mu ), \quad \forall v \in X\] where \(r( u_N(\mu), v; \mu ) = f(v) - a( u_N(\mu), v; \mu )\) is the residual. We then have, from coercivity and the definitions above, that \[||e(\mu)||_{X} \ \leq\ \frac{||r( u_N(\mu), \cdot; \mu )||_{X'}}{\alpha(\mu)}\ =\ \frac{\varepsilon_N(\mu)}{\alpha(\mu)}\]

A Posteriori error estimation: Dual norm of the residual

[prop:1] Given \(\mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\), the dual norm of \(r(u_N(\mu),\cdot;\mu)\) is defined as follows \(\[\begin{aligned} ||r(u_N(\mu),\cdot;\mu)||_{X'} & \equiv \sup_{v\in X} \frac{r(u_N(\mu),v;\mu)}{||v||_X}\\ & = ||{\ensuremath{\Hat{e}(\mu)}\xspace}||_X \end{aligned}]\) where \({\ensuremath{\Hat{e}(\mu)}\xspace}\in X\) satisfies \(\[\begin{aligned} ({\ensuremath{\Hat{e}(\mu)}\xspace},v)_X = r(u_N(\mu),v;\mu) \end{aligned}]\)

The error residual equation can then be rewritten \(\[a( e(\mu), v; \mu ) \ = ({\ensuremath{\Hat{e}(\mu)}\xspace},v)_X, \quad \forall v \in X]\)

\(u_N(\mu)\) : Definitions of energy error bound and effectivity Given \(\alpha_{\mathrm{LB}}(\mu)\) a nonnegative lower bound of \(\alpha(\mu)\): \[\alpha(\mu)\geq \alpha_{\mathrm{LB}}(\mu)\geq \epsilon_{\alpha}\, \alpha(\mu),\quad \epsilon_{\alpha} \in\ ]0,1[,\ \forall \mu \in \mathcal{D}^\mu\] Denote \(\varepsilon_N(\mu) = \|\hat{e}(\mu)\|_X = \|r(u_N(\mu),\cdot;\mu)\|_{X'}\)

\(\[\label{eq:25} \Delta^{\mathrm{en}}_N(\mu) \ \equiv \ \frac{\varepsilon_N(\mu)}{\sqrt{{\ensuremath{{\alpha_{{\mathrm{LB}}}}(\mu)}\xspace}}}]\)

\(\[\label{eq:25} \eta^{\mathrm{en}}_N(\mu) \ \equiv \ \frac{\Delta^{\mathrm{en}}_N(\mu)}{|||e(\mu)|||_\mu}]\)

\(u_N(\mu)\) : energy error bounds

\(\[\label{eq:26} 1 \ \leq\ \eta^{\mathrm{en}}_N(\mu) \ \leq \sqrt{\frac{{\ensuremath{{\gamma_{{\mathrm{UB}}}}(\mu)}\xspace}}{{\ensuremath{{\alpha_{{\mathrm{LB}}}}(\mu)}\xspace}}}, \quad 1 \leq N \leq N_{\max}, \quad \forall \mu\ \in \ {\ensuremath{\mathcal{D}^\mu}\xspace}]\)

Remarks

  • Rigor: the left inequality ensures a rigorous upper bound measured in \(||\cdot||_{X}\), i.e. \(||e(\mu)||_{X} \leq \Delta_N(\mu),\ \forall \mu \in \mathcal{D}^\mu\)

  • Sharpness: the right inequality states that \(\Delta_N(\mu)\) overestimates the ``true'' error by at most \(\gamma(\mu) / \alpha_{\mathrm{LB}}(\mu)\)

  • for \(a\) parametrically coercive and symmetric \[\theta^{\bar{\mu}} \equiv \frac{\Theta^{\max,\bar{\mu}}_a(\mu)}{\Theta^{\min,\bar{\mu}}_a(\mu)} = \frac{\gamma_{\mathrm{ub}}(\mu)}{\alpha_{\mathrm{lb}}(\mu)}\]

\(s_N(\mu)\) : output error bounds

\(\[1 \ \leq\ \eta^s_N(\mu) \ \leq \frac{{\ensuremath{{\gamma_{{\mathrm{UB}}}}(\mu)}\xspace}}{{\ensuremath{{\alpha_{{\mathrm{LB}}}}(\mu)}\xspace}}, \quad 1 \leq N \leq N_{\max}, \quad \forall \mu\ \in \ {\ensuremath{\mathcal{D}^\mu}\xspace}]\)

where \(\[\label{eq:30} \Delta^s_N (\mu) = {\Delta_N^{\mathrm{en}}}(\mu)^2]\) and \(\[\eta^s_N(\mu)\equiv \frac{\Delta^s_N(\mu)}{s(\mu)-s_N(\mu)}]\)

Rapid convergence of the error in the output Note that the error in the output vanishes quadratically

Relative output error bounds We define

  • the relative output error bound \[\Delta^{s,rel}_N (\mu) \equiv \frac{\|\hat{e}(\mu)\|^2_X}{ \alpha_\mathrm{lb}(\mu)\, s_N(\mu)}= \frac{\Delta_N^{\mathrm{en}}(\mu)^2}{s_N(\mu)}\]

  • the relative output effectivity \[\eta^{s,rel}_N(\mu)\equiv \frac{\Delta^{s,rel}_N(\mu)}{(s(\mu)-s_N(\mu))/s(\mu)}\]

\(\[1 \ \leq\ \eta^{s,rel}_N(\mu) \ \leq 2 \frac{{\ensuremath{{\gamma_{{\mathrm{UB}}}}(\mu)}\xspace}}{{\ensuremath{{\alpha_{{\mathrm{LB}}}}(\mu)}\xspace}}, \quad 1 \leq N \leq N_{\max}, \quad \forall \mu\ \in \ {\ensuremath{\mathcal{D}^\mu}\xspace}]\)

for \(\Delta^{s,rel}_N \leq 1\)

\(X\)-norm error bounds We define

  • the \(X\)-norm error bound \[\Delta_N (\mu) \equiv \frac{\|\hat{e}(\mu)\|_X}{\alpha_\mathrm{lb}(\mu)}\]

  • the \(X\)-norm effectivity \[\eta_N(\mu)\equiv \frac{\Delta_N(\mu)}{\|e(\mu)\|_X}\]

\(\[1 \ \leq\ \eta_N(\mu) \ \leq \frac{{\ensuremath{{\gamma_{{\mathrm{UB}}}}(\mu)}\xspace}}{{\ensuremath{{\alpha_{{\mathrm{LB}}}}(\mu)}\xspace}}, \quad 1 \leq N \leq N_{\max}, \quad \forall \mu\ \in \ {\ensuremath{\mathcal{D}^\mu}\xspace}]\)

Remarks on error bounds Remarks:

  • The error bounds are rigorous upper bounds for the reduced basis error for any \(N = 1,\ldots,N_{max}\) and for all \(\mu \in \mathcal{D}\).

  • The upper bounds for the effectivities are

    • independent of \(N\) , and

    • independent of \(\mathcal{N}\) if \(\alpha_{\mathrm{lb}}(\mu)\) only depends on \(\mu\),

      and are thus stable with respect to RB and FEM refinement.

  • Results for energy norm (and \(X\)-norm) bound directly extend to noncompliant (& nonsymmetric) problems

    • if we choose an appropriate definition for the energy (and \(X\)) norm

Offline-Online decomposition

Offline-Online decomposition Denote \(\hat{e}(\mu)\in X\) such that \[||\hat{e}(\mu)||_X = \varepsilon_N(\mu) = ||r(u_N(\mu),\cdot;\mu)||_{X'}\] and \[(\hat{e}(\mu),v)_X = r(u_N(\mu),v;\mu), \quad \forall v \in X\]

And recall that \[r(u_N(\mu),v;\mu) = f(v) - \sum_{q=1}^Q \sum_{n=1}^N\ \Theta^q(\mu)\ {u_N}_n(\mu)\ a^q( \zeta_n,v), \quad \forall v\ \in\ X\]

Offline-Online decomposition

  • It follows that \(\hat{e}(\mu)\in X\) satisfies \[(\hat{e}(\mu),v)_X \ = \ f(v) - \sum_{q=1}^Q \sum_{n=1}^N\ \Theta^q(\mu)\ {u_N}_n(\mu)\ a^q( \zeta_n,v), \quad \forall v\ \in\ X\]

  • Observe then that the right-hand side is the sum of products of parameter-dependent functions and parameter-independent linear functionals; thus, invoking linear superposition, \[\hat{e}(\mu)\ = \ \mathcal{C} + \sum_{q=1}^Q \sum_{n=1}^N\ \Theta^q(\mu)\ {u_N}_n(\mu)\ \mathcal{L}^q_n\] where

    • \(\mathcal{C} \in X\) satisfies \[(\mathcal{C},v)_X = f(v), \quad \forall v \in X\]

    • \(\mathcal{L}^q_n \in X\) satisfies \[(\mathcal{L}^q_n,v)_X = -a^q(\zeta_n,v), \quad \forall v \in X, \, 1 \leq n \leq N,\ 1 \leq q \leq Q\] which are parameter-independent problems

Offline-Online decomposition: Error bounds From the expansion of \(\hat{e}(\mu)\) above we get \[\begin{aligned} ||\hat{e}(\mu)||_X^2\ =\ & (\mathcal{C},\mathcal{C})_X\ +\ \sum_{q=1}^Q \sum_{n=1}^N\ \Theta^q(\mu)\ {u_N}_n(\mu)\ \displaystyle \Bigg\{ 2 ( \mathcal{C}, \mathcal{L}^q_n)_X \notag\\ & + \sum_{q'=1}^{Q} \sum_{n'=1}^{N}\ \Theta^{q'}(\mu)\ {u_N}_{n'}(\mu)\ ( \mathcal{L}^{q}_{n}, \mathcal{L}^{q'}_{n'})_X \Bigg\} \label{eq:rbellipticlinear_error:37} \end{aligned}\]

Remark In ([eq:rbellipticlinear_error:37]), \(||\hat{e}(\mu)||_X^2\) is the sum of products of

  • parameter-dependent coefficients \(\Theta^q(\mu)\,{u_N}_n(\mu)\) and

  • parameter-independent inner products \((\mathcal{C},\mathcal{C})_X\), \((\mathcal{C},\mathcal{L}^q_n)_X\), \((\mathcal{L}^q_n,\mathcal{L}^{q'}_{n'})_X\),

so the offline-online decomposition for the error bounds is now clear.

Offline-Online decomposition: steps and complexity

Offline:

  • Solve for \(\mathcal{C}\) and \(\mathcal{L}^q_n,\ 1 \leq n \leq N,\ 1 \leq q \leq Q\)

  • Form and save \((\mathcal{C},\mathcal{C})_X\), \(( \mathcal{C}, \mathcal{L}^q_n)_X\) and \(( \mathcal{L}^{q}_{n}, \mathcal{L}^{q'}_{n'})_X\), \(1 \leq n,n' \leq N,\ 1 \leq q, q' \leq Q\)

Online

  • Given a new \(\mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\)

  • Evaluate the sum \(||{\ensuremath{\Hat{e}(\mu)}\xspace}||_X^2\) ([eq:rbellipticlinear_error:37]) in terms of \(\Theta^q(\mu)\) and \({u_N}_n(\mu)\)

  • Complexity in \(O(Q^2 N^2)\) independent of \(\mathcal{N}\)
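A sketch of this online evaluation of \(\|\hat{e}(\mu)\|_X^2\) (the offline quantities \((\mathcal{C},\mathcal{C})_X\), \((\mathcal{C},\mathcal{L}^q_n)_X\) and \((\mathcal{L}^q_n,\mathcal{L}^{q'}_{n'})_X\) are assumed stored as numpy arrays with illustrative names):

```python
# Sketch: online evaluation of ||e_hat(mu)||_X^2 from stored offline quantities:
# CC = (C,C)_X, CL[q,n] = (C,L^q_n)_X, LL[q,n,q',n'] = (L^q_n, L^{q'}_{n'})_X.
import numpy as np

def residual_dual_norm_sq(theta, uN, CC, CL, LL):
    w = np.outer(theta, uN)                       # w[q,n] = Theta^q(mu) * u_{N,n}(mu)
    val = CC + 2.0 * np.sum(w * CL)               # constant + cross terms
    val += np.einsum('qn,qnrm,rm->', w, LL, w)    # quadratic term, O(Q^2 N^2) work
    return val                                    # = ||e_hat(mu)||_X^2
```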

Sampling strategy: a Greedy algorithm

Offline-Online Scenarii

Offline Given a tolerance \(\tau\), build \(S_N\) and \(W_N\) s.t. \(\[\forall \ \mu\ \in \mathcal{P} \equiv \mathcal{D}^{\mu} \ , \ \Delta_N(\mu) < \tau]\)

Online Given \(\mu\) and a tolerance \(\tau\), find the smallest \(N^*\) and thus \(s_{N^*}(\mu)\) s.t. \[N^* = \min \{ N : \Delta_{N}(\mu) < \tau \}\]

or given \(\mu\) and a maximum execution time \(T\), find \(N^*\) and thus \(s_{N^*}(\mu)\) s.t. \[N^* = \operatorname{arg\,min}_N\ \Delta_{N}(\mu) \ \text{ s.t. execution time } < T\]

\(S_N\) and \(W_N\) Generation Strategies

Offline Generation

  • Given a tolerance \(\epsilon\), set \(N = 0\) and \(S_0 = \emptyset\)

  • While \({\ensuremath{\Delta_N^{\mathrm{max}}}\xspace}> \epsilon\)

  • \(N = N+1\)

  • If \(N = 1\), pick ((log-)randomly) \(\mu_1 \in \mathcal{D}^\mu\); else set \(\mu_N := \operatorname{arg\,max}_{\mu \in \mathcal{D}^\mu}\, \Delta_{N-1}(\mu)\)

  • Build \({\ensuremath{S_N}\xspace}:= \{ \mu_N \} \cup S_{N-1}\)

  • Build \({\ensuremath{W_N}\xspace}:= \{ \xi = u(\mu_N) \} \cup W_{N-1}\)

  • Compute \(\Delta_N^{\mathrm{max}} := \mathrm{max}_{\mu \in \mathcal{D}^\mu}\, \Delta_N(\mu)\)

  • End While
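A sketch of this offline greedy loop, with \(\mathcal{D}^\mu\) replaced by a finite training sample \(\Xi\) as in practice; `truth_solve` and `error_bound` are assumed callables standing for the FEM solve and for \(\Delta_N(\mu)\), not Feel++ functions:

```python
# Sketch of the offline greedy loop (training sample Xi replaces D^mu, as in practice).
import numpy as np

def greedy(Xi, truth_solve, error_bound, eps, rng=np.random.default_rng()):
    S, W = [], []                                  # S_N (parameters) and W_N (snapshots)
    mu = Xi[rng.integers(len(Xi))]                 # N = 1: pick (log-)randomly
    while True:
        S.append(mu)
        W.append(truth_solve(mu))                  # xi = u(mu_N); orthonormalization omitted
        deltas = [error_bound(W, m) for m in Xi]   # Delta_N(mu) over the training sample
        if max(deltas) <= eps:                     # Delta_N^max <= eps: stop
            break
        mu = Xi[int(np.argmax(deltas))]            # next mu: worst-case error bound
    return S, W
```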

Condition number Recall that the \(\zeta_n\) are orthonormalized with respect to the \(X\)-inner product; this ensures that the condition number of \(A_N(\mu)\) stays bounded by \(\gamma(\mu)/\alpha(\mu)\)

Online Algorithm I

\(\mu\) adaptive online

  • Given \(\mu \in \mathcal{D}^\mu\), compute \((s_{N^{*}}(\mu), \Delta_{N^{*}}(\mu))\) such that \(\Delta_{N^{*}}(\mu) < \tau.\)

  • \(N = 2\)

  • While \(\Delta_N(\mu) > \tau\)

  • Compute \((s_N(\mu), \Delta_N(\mu)) \mbox{ using } (S_N, W_N)\)

  • \(N = N \times 2\qquad\)
    use the (very) fast convergence properties of RB

  • End While
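A sketch of this \(\mu\)-adaptive online loop, assuming a callable `rb_eval(mu, N)` that returns \((s_N(\mu), \Delta_N(\mu))\) from the precomputed \((S_N, W_N)\) structures:

```python
# Sketch of the mu-adaptive online loop: double N until the certified bound meets tau.
def online_adaptive(mu, tau, rb_eval, Nmax):
    N = 2
    sN, dN = rb_eval(mu, N)
    while dN > tau and N < Nmax:
        N = min(2 * N, Nmax)          # exploit the fast RB convergence: double N
        sN, dN = rb_eval(mu, N)
    return N, sN, dN
```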

Online Algorithm II

Offline

  • While \(i <= \mathrm{Imax} >> 1\)

  • Pick log-randomly \(\mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\)

  • Store in table \(\mathcal{T}\) the values \(\Delta_N(\mu)\) for \(N=1,..., N^{\mathrm{max}}\)

  • \(i = i + 1\); End While

Online Algorithm II – \(\mu\) adaptive online – worst case

  • Given \(\mu \in \mathcal{D}^\mu\), compute \((s_{N^{*}}(\mu), \Delta_{N^{*}}(\mu))\) such that \(\Delta_{N^{*}}(\mu) < \tau.\)

  • \(N^{*} := \mathrm{arg}\,\mathrm{max}_{\mathcal{T}}\, \{ \Delta_N(\mu) \, < \, \tau \}\)

  • Use \(W_{N^{*}}\) to compute \((s_{N^{*}}(\mu),\Delta_{N^{*}}(\mu))\)

Inf-sup lower bound

Lower bound for coercivity constant We require a lower bound \(\alpha_{\mathrm{LB}}(\mu)\) for \(\alpha(\mu)= \alpha_c(\mu),\ \forall \mu \in \mathcal{D}^\mu\)

Two strategies are available:

  • ``Min \(\Theta\)'' approach if \(a\) is parametrically coercive (i.e. the coercivity constant depends solely on \(\mu\))

  • and more generally the Successive Constraint Method(SCM) which can also be applied in case of ``Inf-Sup'' stable problems (Stokes, Helmholtz,…​)

“Min \(\Theta\)“ Approach

``Min \(\Theta\)'' approach: Lower bound for \(\alpha(\mu)\)

For a parametrically coercive bilinear form

  • \(\Theta^q(\mu) > 0,\ \forall \mu \in {\ensuremath{\mathcal{D}^\mu}\xspace}\) and

  • \(a^q(v,v) \geq 0,\ \forall v \in X,\ 1 \leq q \leq Q\)

We have \(\[\label{eq:38} \Theta^{\mathrm{min},\Bar{\mu}}_a(\mu) = \alpha(\Bar{\mu})\min_{q=1...Q}\frac{\Theta_a^q(\mu)}{\Theta_a^q(\Bar{\mu})} \leq \alpha(\mu)]\) where \(\Bar{\mu} \in {\ensuremath{\mathcal{D}^\mu}\xspace}\) which was used to define the \(X\)-inner product and induced norm

Recall that \[\begin{aligned} (u,v)_X &= a(u,v;\bar{\mu}), \quad \forall u,v \in X\\ \|v\|_X &= \sqrt{(v,v)_X}, \quad \forall v \in X \end{aligned}\]

``Min \(\Theta\)'' approach: Upper bound for \(\gamma(\mu)\) Similarly we develop an upper bound \(\gamma_{\mathrm{UB}}(\mu)\) for \(\gamma(\mu)\). We define \[\infty > \Theta^{\mathrm{max},\bar{\mu}}_a(\mu) = \gamma(\bar{\mu}) \max_{q=1...Q}\frac{\Theta_a^q(\mu)}{\Theta_a^q(\bar{\mu})}\geq \gamma(\mu)\]

[rem:mintheta-rem] \({\ensuremath{{\gamma_{{\mathrm{UB}}}}(\mu)}\xspace}\) is actually not required in practice but relevant in the theory.

``Min \(\Theta\)'' approach: Summary if \(a\) is parametrically coercive we then choose

  • the coercivity constant lower bound to be \(\[{\ensuremath{{\alpha_{{\mathrm{LB}}}}(\mu)}\xspace}\equiv \Theta^{\mathrm{min},\Bar{\mu}}_a(\mu)]\)

  • and the continuity constant upper bound to be (\(a\) symmetric) \(\[{\ensuremath{{\gamma_{{\mathrm{UB}}}}(\mu)}\xspace}\equiv \Theta^{\mathrm{max},\Bar{\mu}}_a(\mu)]\)

  • Online cost to evaluate \({\ensuremath{{\alpha_{{\mathrm{LB}}}}(\mu)}\xspace}\) : \(O(Q_a)\)

  • Choice of inner product important: \((u,v)_X = a(u,v;\bar{\mu})\) (see the multiple inner products approach)

  • Extends to non-symmetric problems by considering the symmetric part \(\[a_s(u,v;\mu) = \frac{1}{2}( a(u,v;\mu)+a(v,u;\mu) )]\)
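The ``min \(\Theta\)''/``max \(\Theta\)'' bounds are a one-line computation online. A sketch (illustrative names; assuming \(\alpha(\bar{\mu})=\gamma(\bar{\mu})=1\) when the \(X\)-inner product is \(a(\cdot,\cdot;\bar{\mu})\)):

```python
# Sketch of the "min Theta" / "max Theta" bounds; theta_mu and theta_mubar are the
# values Theta^q_a at mu and at the reference mu_bar.
import numpy as np

def min_theta_bounds(theta_mu, theta_mubar, alpha_bar=1.0, gamma_bar=1.0):
    ratios = np.asarray(theta_mu, dtype=float) / np.asarray(theta_mubar, dtype=float)
    alpha_LB = alpha_bar * ratios.min()   # Theta_a^{min,mubar}(mu) <= alpha(mu)
    gamma_UB = gamma_bar * ratios.max()   # Theta_a^{max,mubar}(mu) >= gamma(mu)
    return alpha_LB, gamma_UB
```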

Successive constraint method (SCM)

Successive Constraint method: Stability estimates

We wish to compute \(\alpha_{\mathrm{LB}}: \mathcal{D} \rightarrow \mathbb{R}\) such that \[0 < \alpha_{\mathrm{LB}}(\mu) \leq \alpha^{\mathcal{N}}(\mu), \quad \mu \in \mathcal{D}\] and its computation is rapid, \(O(1)\) (independent of \(\mathcal{N}\)), where \[\alpha^{\mathcal{N}}(\mu)= \mathrm{inf}_{w \in X^{\mathcal{N}}} \frac{a(w,w;\mu)}{\|w\|_X^2}\]

Computation of \(\alpha^{\mathcal{N}}(\mu)\) \(\alpha^{\mathcal{N}}(\mu)\) is the minimum eigenvalue of the following generalized eigenvalue problem \(\[\label{eq:40} a(w,v;\mu) = \lambda(\mu)\ m(w,v;\mu), \quad (A w = \lambda B w)]\) where \(m(\cdot,\cdot)\) is the bi-linear form associated with \(\|\cdot\|_X\) and \(B\) is the associated matrix.

Successive Constraint method:Reformulation

The problem as a minimization one First recall \[a(w,v;\mu) = \sum_{q=1}^{Q_{a}}\ \theta_q(\mu)\ a_q(w,v)\] Hence we have \[\alpha^{\mathcal{N}}(\mu)= \mathrm{inf}_{w \in X^{\mathcal{N}}} \sum_{q=1}^{Q_a}\ \theta_q(\mu) \frac{a_q(w,w)}{\|w\|_X^2}\] and we note \[\mathcal{J}^{\mathrm{obj}}(w;\mu) = \sum_{q=1}^{Q_a}\ \theta_q(\mu) \frac{a_q(w,w)}{\|w\|_X^2}\]

Reformulation We have the following optimisation problem \(\[\label{eq:44} \alpha^{\mathcal{N}}(\mu)= \mathrm{inf}_{y \in \mathcal{Y}} \mathcal{J}^{\mathrm{obj}}(\mu; y)]\) where \(\[\label{eq:46} \mathcal{J}^{\mathrm{obj}}(\mu; y) \equiv \sum_{q=1}^{Q_a}\ \theta_q(\mu) y_q]\) and \(\[\label{eq:45} \mathcal{Y} = \Big\{ y \in \mathbb{R}^{Q_a} |\ \exists w \in X^{\mathcal{N}}\ \mathrm{s.t.}\ y_q = \frac{a_q(w,w)}{\|w\|_{X^{\mathcal{N}}}^2}, 1 \leq q \leq Q_a \Big\}]\)

We now need to characterize \(\mathcal{Y}\), to do this we construct two sets \(\mathcal{Y}_{\mathrm{LB}}\) and \(\mathcal{Y}_{\mathrm{UB}}\) such that \(\mathcal{Y}_{\mathrm{UB}} \subset \mathcal{Y} \subset \mathcal{Y}_{\mathrm{LB}}\) over which finding \(\alpha^{\mathcal{N}}(\mu)\) is feasible.

Successive Constraint method: Ingredients First we set the design space for the minimisation problem. We introduce \[\mathcal{B} = \prod_{q=1}^{Q_a} \Big[ \mathrm{inf}_{w\in X^{\mathcal{N}}} \frac{a_q(w,w)}{\|w\|_X^2}\ ;\ \mathrm{sup}_{w\in X^{\mathcal{N}}} \frac{a_q(w,w)}{\|w\|_X^2} \Big],\] \[\Xi = \Big\{ \mu_i \in \mathcal{D};\ i=1,...,J \Big\}\] and \[C_K = \Big\{ \mu_i \in \Xi;\ i=1,...,K \Big\} \subset \Xi\]

\(\Xi\) is constructed using a \(\frac{1}{2^p}\) division of \(\mathcal{D}\): in 1D, \(0, 1; \frac{1}{2}; \frac{1}{4}, \frac{3}{4};...\). \(C_K\) will be constructed using a greedy algorithm.

Finally we shall denote \(P_M(\mu;E)\) the set of \(M\) points closest to \(\mu\) in the set \(E\). We shall need this type of set to construct the lower bounds.

Successive Constraint method: Lower bounds \(\mathcal{Y}_{\mathrm{LB}}\)

Given \(M_\alpha, M_+ \in \mathbb{N}\) we are now ready to define \(\mathcal{Y}_{\mathrm{LB}}\) \[\begin{gathered} \mathcal{Y}_{\mathrm{LB}}(\mu; C_K) = \Big\{ y \in \mathbb{R}^{Q_a}\ |\ y \in \mathcal{B}, \\ \sum_{q=1}^{Q_a} \theta_q(\mu') y_q \geq \alpha^{\mathcal{N}}(\mu'),\ \forall \mu' \in P_{M_\alpha}(\mu;C_K), \\ \sum_{q=1}^{Q_a} \theta_q(\mu') y_q \geq \alpha_{\mathrm{LB}}(\mu';C_{K-1}),\ \forall \mu' \in P_{M_+}(\mu;\Xi\backslash C_K) \Big\} \end{gathered}\] We now set \[\alpha_{\mathrm{LB}}(\mu;C_K) = \mathrm{inf}_{y \in \mathcal{Y}_{\mathrm{LB}}(\mu;C_K)}\ \mathcal{J}^{\mathrm{obj}}(\mu;y)\]

Computing \(\alpha_{\mathrm{LB}}(\mu;C_K)\) is in fact a linear program with \(Q_a\) design variables, \(y_q\), and \(2 Q_a+M_\alpha+M_+\) constraints online. It requires the construction of \(C_K\) offline.

Successive Constraint method: Upper bounds \(\mathcal{Y}_{\mathrm{UB}}\)

Let \(\[\label{eq:26} \mathcal{Y}_{\mathrm{UB}}( C_K ) = \Big\{ y^*(\mu_k), 1 \leq k \leq K \Big\}]\) with \(\[\label{eq:27} y^*(\mu) = \mathrm{arg}\mathrm{min}_{y \in \mathcal{Y}}\ \mathcal{J}^{\mathrm{obj}}( \mu; y )]\) We set \(\[\label{eq:28} \alpha_{\mathrm{UB}}( \mu; C_K) = \mathrm{inf}_{y \in \mathcal{Y}_{\mathrm{UB}}(C_K)}\ \mathcal{J}^{\mathrm{obj}}(\mu;y)]\)

\(\mathcal{Y}_{\mathrm{UB}}\) requires \(K\) eigensolves to compute the eigenmode \(\eta_k\) associated with \(w_k, k=1,...,K\) and \(K Q \mathcal{N}\) inner products to compute the \(y^*_q(w_k)=\frac{a_q(\eta_k,\eta_k;\mu)}{\|\eta_k\|_{X^{\mathcal{N}}}^2}, k=1,...,K\) offline . Then computing \(\alpha_{\mathrm{UB}}( \mu; C_K)\) is a simple enumeration online.

Successive Constraint method: \(C_K\)

\(C_{K_\mathrm{max}} = \mathrm{Greedy}(\Xi, \epsilon)\): Given \(\Xi\) and \(\epsilon \in [0;1]\)

  • While \(\mathrm{max}_{\mu \in \Xi}\ \frac{\alpha_{\mathrm{UB}}( \mu; C_K) - \alpha_{\mathrm{LB}}( \mu; C_K)}{\alpha_{\mathrm{UB}}( \mu; C_K)} > \epsilon\)

    • \(\mu_{K+1} = \mathrm{arg} \mathrm{max}_{\mu \in \Xi}\ \frac{\alpha_{\mathrm{UB}}( \mu; C_K) - \alpha_{\mathrm{LB}}( \mu; C_K)}{\alpha_{\mathrm{UB}}( \mu; C_K)}\)

    • \(C_{K+1} = C_K \cup \{ \mu_{K+1} \}\)

    • \(K \leftarrow K+1\)

  • set \(K_{\mathrm{max}} = K\)

Successive Constraint method:Offline-Online

\(\mathrm{Offline}\)

  • \(2Q_a+M_\alpha+M_+\) eigensolves \(\alpha^{\mathcal{N}}(\mu), y^*(\mu_k)\)

  • \(n_\Xi K_{\mathrm{max}} LP(Q,M_\alpha,M_+)\) to build \(C_{K_{\mathrm{max}}}\)

  • \(K_{\mathrm{max}} Q\) inner products over \(X^{\mathcal{N}} \Rightarrow \mathcal{Y}_{\mathrm{UB}}\)

\(\alpha_{\mathrm{LB}}(\mu) = \mathrm{Online}( \mu, C_{K_{\mathrm{max}}}, M_\alpha, M_+ )\): Given \(\mu \in \mathcal{D}\)

  • sort over \(C_{K_{\mathrm{max}}} \Rightarrow P_{M_\alpha}(\mu;C_{K_{\mathrm{max}}})\) and \(P_{M_+}(\mu;\Xi\backslash C_{K_{\mathrm{max}}})\)

  • \((M_\alpha+M_++2) Q_a\) evaluation of \(\theta_q(\mu')\)

  • \(M_\alpha\) lookups to get \(\mu' \rightarrow \alpha^{\mathcal{N}}(\mu')\)

  • \(LP(Q_a,M_\alpha,M_+)\) to get \(\alpha_{\mathrm{LB}} (\mu)\) (see the sketch below)
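The online stage therefore reduces to a small linear program. A sketch with `scipy.optimize.linprog` (the box \(\mathcal{B}\), the \(\theta_q(\mu')\) rows and the corresponding right-hand sides for the \(M_\alpha + M_+\) stability constraints are assumed assembled beforehand; names are illustrative):

```python
# Sketch of the online LP for alpha_LB(mu; C_K).
import numpy as np
from scipy.optimize import linprog

def scm_online(theta_mu, box, theta_rows, rhs):
    """theta_mu: (Q,) Theta_q(mu);  box: list of (lower_q, upper_q) defining B;
       theta_rows: (M, Q) with rows Theta_q(mu');  rhs: (M,) alpha or alpha_LB values."""
    # minimize  sum_q theta_q(mu) y_q   s.t.   theta_rows @ y >= rhs  and  y in B
    res = linprog(c=np.asarray(theta_mu), A_ub=-np.asarray(theta_rows),
                  b_ub=-np.asarray(rhs), bounds=box, method="highs")
    return res.fun                         # = alpha_LB(mu; C_K)
```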

Numerical Experiments

Problem Statement

Example Thermal Block: Heat Transfer

Figure: \(3\times 3\) thermal block \(\Omega\) with heat flux imposed on \(\Gamma_0\) (bottom), zero Dirichlet condition on \(\Gamma_{\mathrm{top}}\), insulated sides \(\Gamma_{\mathrm{sides}}\), and subdomain conductivities \(\mu_1,\ldots,\mu_P\).

Example Thermal Block: Problem statement Given \(\mu = (\mu_1,...,\mu_P) \in \mathcal{D}^\mu \equiv [\mu^{\text{min}},\mu^{\text{max}}]^P\), evaluate (recall that \(\ell = f\))
\[s(\mu) = f(u(\mu))\] where \(u(\mu) \in X \equiv \{ v \in H^1(\Omega), v|_{\Gamma_{\text{top}}} = 0\}\) satisfies \[a(u(\mu), v; \mu) = f(v;\mu) \quad \forall v \in X.\] We have \(P = 8\) and, given \(1 < \mu_r < \infty\), we set \[\mu^{\mathrm{min}} = 1/\sqrt{\mu_r},\quad \mu^{\mathrm{max}} = \sqrt{\mu_r}\] such that \(\mu^{\mathrm{max}}/\mu^{\mathrm{min}}=\mu_r.\)

Example Thermal Block Recall we are in the compliant case \(\ell = f\), we have \(\[f(v) = \int_{\Gamma_{0}} v\quad \forall v \in X]\) and \(\[a(u,v;\mu) = \sum_{i=1}^{P} \mu_i \int_{\Omega_i} \nabla u \cdot \nabla v + 1 \int_{\Omega_{P+1}} \nabla u \cdot \nabla v \quad\forall u,\ v\ \in X]\) where \(\Omega = \cup_{i=1}^{P+1} \Omega_i\).

Example Thermal Block The inner product is defined as follows \[(u,v)_X = \sum_{i=1}^P \bar{\mu}_i \int_{\Omega_i}\nabla u \cdot \nabla v + 1 \int_{\Omega_{P+1}} \nabla u \cdot \nabla v\] where \(\bar{\mu}_i\) is a reference parameter value. We readily have that \(a\) is coercive and continuous:

\[0 < \frac{1}{\sqrt{\mu_r}} \leq \mathrm{min}(\mu_1/\bar{\mu}_1, \ldots, \mu_P/\bar{\mu}_P,1) \leq \alpha(\mu)\] and \[\gamma(\mu) \leq \mathrm{max}(\mu_1/\bar{\mu}_1, \ldots, \mu_P/\bar{\mu}_P,1) \leq \sqrt{\mu_r} < \infty\]

and the linear form \(f\) is bounded.

Example Thermal Block: Affine decomposition We write \[a(u,v;\mu) = \sum_{q=1}^{P+1} \Theta^q(\mu) a^q(u,v)\] with \[\begin{aligned} \Theta^1(\mu) = \mu_1 & & a^1(u,v) = \int_{\Omega_1} \nabla u \cdot \nabla v\\ & \vdots & \\ \Theta^P(\mu) = \mu_P & & a^P(u,v) = \int_{\Omega_P} \nabla u \cdot \nabla v\\ \Theta^{P+1}(\mu) = 1 & & a^{P+1}(u,v) = \int_{\Omega_{P+1}} \nabla u \cdot \nabla v \end{aligned}\]

Example Thermal Block

(figures omitted)

Thermal Block \(P=1\)

Example Thermal Block \(P=1\)


We assume \(\[1/\mu^{\min}_1=\mu^{\max}_1=\sqrt{\mu_r}=10]\) and choose \(\mathcal{N}=256\)

we set \(\bar{\mu}=1\) and have \(\[\Theta^1_a(\mu)=\mu_1,\, \Theta^2_a(\mu) = 1]\) Thus, \(\[\Theta_a^{\min,\bar{\mu}}(\mu_1)=\min(\mu_1,1)\\ \Theta_a^{\max,\bar{\mu}}(\mu_1)=\max(\mu_1,1)]\)

hence

\(\[\theta^{\bar{\mu}}(\mu_1) = \max(\frac{1}{\mu_1},\mu_1)]\)

and \(\[\theta^{\bar{\mu}}(\mu_1) \leq\sqrt{\mu_r} \forall \mu_1 \in \mathcal{D}]\)

Example: Thermal Block \(P=1\) in [RHP2008]

Table 1. Convergence results for \(P=1\)

| \(N\) | \(\Delta^s_{N,\mathrm{max}}(\mu)\) | \(\eta^s_{N,\mathrm{ave}}\) | \(\eta^s_{N,\mathrm{max}}\) |
|---|---|---|---|
| 1 | 7.2084E+00 | 2.3417 | 3.3305 |
| 2 | 4.5371E–01 | 2.4858 | 3.6850 |
| 3 | 6.9652E–04 | 6.2195 | 9.8551 |
| 4 | 1.3744E–07 | 3.3219 | 7.2632 |
| 5 | 3.1140E–11 | 6.0789 | 7.0453 |

Note that: \(\eta^s_{N,\mathrm{max}}(\mu_1) \leq \eta^s_{\mathrm{max,UB}} \equiv \sqrt{\mu_r} = 10\)

  • Maximum output error bound: \(\Delta^s_{N,\mathrm{max}} = \max_{\mu \in \Xi_{\mathrm{train}}} \Delta^s_N(\mu)\)

  • Average output effectivity: \(\eta^s_{N,\mathrm{ave}} = \frac{1}{|\Xi_\mathrm{train}|}\sum_{\mu \in \Xi_{\mathrm{train}}} \eta^s_N(\mu)\)

  • Maximum output effectivity: \(\eta^s_{N,\mathrm{max}} = \max_{\mu \in \Xi_{\mathrm{train}}} \eta^s_N(\mu)\)

Example Thermal Block \(P=8\)

Thermal Block \(P=8\)

  • Configuration :

    • 47 600 dofs;

    • Preconditioner : LU – Solver : MUMPS

    • \(\Xi : \) parameter sampling of dimension 1 000.

  • Plot \(\displaystyle \max_{\mu \in \Xi} \frac{ |s^{\mathcal{N}}(\mu)-s_N(\mu)|}{s^{\mathcal{N}}(\mu)} \)

(convergence plot: maximum relative output error vs. number of basis functions \(N\); figure omitted)

Thermal Block \(P=8\)

  • The more parameters there are, the richer the problem is;

  • Notations :

    • \(e^i(\mu)\) is the relative error on the output when \(i\) parameters vary.

(convergence plots of the relative output errors \(e^i(\mu)\), \(i=1,\ldots,8\), vs. \(N\); figures omitted)