6. Assembling and solving finite element problems¶
Having constructed functions in finite element spaces and integrated them over the domain, we now have the tools in place to actually assemble and solve a simple finite element problem. To avoid having to explicitly deal with boundary conditions, we choose in the first instance to solve a Helmholtz problem [1]: find \(u\) in some finite element space \(V\) such that:
where \(\Gamma\) is the domain boundary and \(\mathrm{n}\) is the outward pointing normal to that boundary. \(f\) is a known function which, for simplicity, we will assume lies in \(V\). Next, we form the weak form of this equation by multiplying by a test function in \(V\) and integrating over the domain. We integrate the Laplacian term by parts. The problem becomes, find \(u\in V\) such that:
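Assuming the standard positive-definite Helmholtz operator on a domain \(\Omega\) with boundary \(\Gamma\) and homogeneous Neumann conditions (a sketch of the intended problem, not a new derivation), the strong and weak forms can be written as:

\[-\nabla^2 u + u = f \quad \text{in } \Omega, \qquad \nabla u \cdot \mathrm{n} = 0 \quad \text{on } \Gamma,\]

\[\int_\Omega \nabla v \cdot \nabla u + v\, u \,\mathrm{d}x = \int_\Omega v\, f \,\mathrm{d}x \qquad \forall v \in V,\]

where the boundary term produced by integrating the Laplacian by parts vanishes because of the Neumann condition.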
If we write \(\{\phi_i\}_{i=0}^{n-1}\) for our basis for \(V\), and recall that it is sufficient to ensure that (6.64) is satisfied for each function in the basis, then the problem is now: find coefficients \(u_i\) such that:
Note that since (6.64) is linear in \(v = \sum_{i}v_i\phi_i\) we are able to drop the coefficients \(v_i\). Since the left hand side is linear in the scalar coefficients \(u_j\), we can move them out of the integral:
We can write this as a matrix equation:
where:
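Assuming the weak form sketched above, the entries take the usual Galerkin form:

\[A_{ij} = \int_\Omega \nabla\phi_i \cdot \nabla\phi_j + \phi_i\,\phi_j \,\mathrm{d}x, \qquad f_i = \int_\Omega f\,\phi_i \,\mathrm{d}x,\]

so that the system to be solved is \(A\mathbf{u} = \mathbf{f}\) for the vector \(\mathbf{u}\) of coefficients \(u_i\).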
6.1. Assembling the right hand side¶
The assembly of these integrals relies on the same decomposition property that we previously exploited to integrate functions in finite element spaces. For example, (6.70) can be rewritten as:
This has a practical impact once we realise that only a few basis functions are non-zero in each element. This enables us to write an efficient algorithm for right hand side assembly. Assume that at the start of our algorithm:
Now for each cell \(c\), we execute:
Where \(M\) is the cell-node map for the finite element space \(V\), \(N\) is the number of nodes per element in \(V\), and \(\{\Phi_{\hat{i}}\}_{\hat{i}=0}^{N-1}\) are the local basis functions. In other words, we visit each cell and conduct the integral for each local basis function, and add that integral to the total for the corresponding global basis function.
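In symbols, the update executed for each cell can be sketched as:

\[ l_{M(c,\hat{i})} \leftarrow l_{M(c,\hat{i})} + \int_c f \, \phi_{M(c,\hat{i})} \,\mathrm{d}x \qquad \text{for } \hat{i} = 0, \ldots, N-1, \]

where \(l\) is the global right hand side vector being assembled and the global basis function \(\phi_{M(c,\hat{i})}\) coincides with the local basis function \(\Phi_{\hat{i}}\) on cell \(c\).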
By choosing a suitable quadrature rule, \(\{X_q\}, \{w_q\}\), we can write this as:
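As a concrete illustration, a hedged Python sketch of such a right hand side assembly loop follows. The helper names used here (fs.cell_nodes, fs.element.tabulate, fs.mesh.jacobian, fs.node_count, fs.mesh.cell_count and the quadrature arrays) are assumptions standing in for whatever the skeleton code actually provides, not a prescription of its interface.

import numpy as np

def assemble_rhs(fs, f_coeffs, quad_points, quad_weights):
    """Sketch of right hand side assembly using quadrature.

    Hypothetical helpers assumed here:
      * fs.cell_nodes[c] gives the global node numbers of cell c,
      * fs.element.tabulate(points) returns basis function values with
        shape (num_quad_points, nodes_per_element),
      * fs.mesh.jacobian(c) returns the Jacobian matrix of cell c.
    """
    # Tabulate the local basis functions once at the quadrature points.
    phi = fs.element.tabulate(quad_points)                # shape (q, N)

    l = np.zeros(fs.node_count)
    for c in range(fs.mesh.cell_count):
        nodes = fs.cell_nodes[c]                          # global node numbers
        detJ = abs(np.linalg.det(fs.mesh.jacobian(c)))

        # Evaluate f at the quadrature points from its nodal coefficients
        # (recall that f is assumed to lie in V).
        f_q = phi @ f_coeffs[nodes]                       # shape (q,)

        # Quadrature: sum_q w_q f(X_q) Phi_ihat(X_q) |det J|.
        l[nodes] += np.einsum("q,q,qi->i", quad_weights, f_q, phi) * detJ
    return l

Tabulating the basis functions once, outside the cell loop, is the main optimisation worth making here.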
6.2. Assembling the left hand side matrix¶
The left hand side matrix follows a similar pattern; however, there are two new complications. First, we have two free indices (\(i\) and \(j\)), and second, the integral involves derivatives. We will address the question of derivatives first.
6.2.1. Pulling gradients back to the reference element¶
On element \(c\), there is a straightforward relationship between the local and global bases:
We can also, as we showed in Changing coordinates between reference and physical space, express the global coordinate \(x\) in terms of the local coordinate \(X\).
What about \(\nabla\phi\)? We can write the gradient operator in component form and apply (6.75):
However, the expression on the right involves the gradient of a local basis function with respect to the global coordinate variable \(x\). We employ the chain rule to express this gradient with respect to the local coordinates, \(X\):
Using the definition of the Jacobian, and using \(\nabla_x\) and \(\nabla_X\) to indicate the global and local gradient operators respectively, we can equivalently write this expression as:
where \(J^{-\mathrm{T}} = (J^{-1})^\mathrm{T}\) is the transpose of the inverse of the cell Jacobian matrix.
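In code this transformation is just a multiplication by \(J^{-\mathrm{T}}\). A minimal sketch, assuming the reference gradients are stored with the geometric dimension as the last index:

import numpy as np

def pull_back_gradients(J, local_grads):
    """Return physical gradients grad_x Phi = J^{-T} grad_X Phi.

    J:           the (dim, dim) Jacobian of the current cell.
    local_grads: array of shape (points, nodes, dim) holding grad_X Phi.
    (Both names are illustrative rather than part of any fixed interface.)
    """
    return np.einsum("ba,qib->qia", np.linalg.inv(J), local_grads)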
6.2.2. The assembly algorithm¶
We can start by transforming (6.68) to local coordinates (referred to as pulling back) and considering it in the algorithmic form used for the right hand side in (6.71) to (6.74):
We now employ a suitable quadrature rule, \(\{X_q\}, \{w_q\}\), to calculate the integral:
Some readers may find this easier to read using index notation over the geometric dimensions:
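Putting the gradient pull-back and the quadrature rule together, a hedged Python sketch of the left hand side assembly might look as follows. Again, the fs and tabulation helpers are assumed names, and a dense array is used only for brevity.

import numpy as np

def assemble_lhs(fs, quad_points, quad_weights):
    """Sketch of assembling A_ij = integral of grad(phi_i).grad(phi_j) + phi_i*phi_j."""
    phi = fs.element.tabulate(quad_points)                    # (q, N) basis values
    grad_phi = fs.element.tabulate(quad_points, grad=True)    # (q, N, dim) reference gradients

    A = np.zeros((fs.node_count, fs.node_count))              # use a sparse format in practice
    for c in range(fs.mesh.cell_count):
        nodes = fs.cell_nodes[c]
        J = fs.mesh.jacobian(c)
        detJ = abs(np.linalg.det(J))

        # Pull the reference gradients back to physical space: grad_x = J^{-T} grad_X.
        grads = np.einsum("ba,qib->qia", np.linalg.inv(J), grad_phi)

        # Local matrix: stiffness term plus mass term, weighted by the quadrature rule.
        local = (np.einsum("q,qia,qja->ij", quad_weights, grads, grads)
                 + np.einsum("q,qi,qj->ij", quad_weights, phi, phi)) * detJ

        # Add into the global matrix (see the note on matrix insertion below).
        A[np.ix_(nodes, nodes)] += local
    return A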
6.2.3. A note on matrix insertion¶
For each cell \(c\), the right hand sides of equations (6.80) and (6.81) have two free indices, \(\hat{i}\) and \(\hat{j}\). The equation therefore assembles a local \(N\times N\) matrix containing one integral for each pair of test and trial functions on the current element. This is then added into the global matrix at the rows and columns given by the cell-node map entries \(M(c, \hat{i})\) and \(M(c, \hat{j})\).
Hint

One might naïvely expect that if nodes is the vector of global node numbers for the current cell, m is the matrix of local integral values, and A is the global matrix, then the Python code might look like:

A[nodes, nodes] += m # DON'T DO THIS!

Unfortunately, numpy interprets this as an instruction to insert a vector into the diagonal of A, and will complain that the two-dimensional right hand side does not match the one-dimensional left hand side. Instead, one has to employ the numpy.ix_() function:

A[np.ix_(nodes, nodes)] += m # DO THIS!

No such problem exists for adding values into the global right hand side vector. If l is the global right hand side vector and v is the vector of local right hand side integrals, then the following will work just fine:

l[nodes] += v
6.2.4. Sparse matrices¶
Each row of the global matrix corresponds to a single global basis function. The number of non-zeros in this row is equal to the number of other basis functions which are non-zero in the elements where the original basis function is non-zero. The maximum number of non-zeros on a row may vary from a handful for a low degree finite element to a few hundred for a fairly high degree element. The important point is that it is essentially independent of the size of the mesh. This means that as the number of cells in the mesh increases, the proportion of the matrix entries on each row which have the value zero increases.
For example, a degree 4 Lagrange finite element space defined on a \(64\times 64\) unit square triangular mesh has about 66000 nodes. The full global matrix therefore has more than 4 billion entries and, at 8 bytes per matrix entry, will consume around 35 gigabytes of memory! However, there are actually only around 23 nonzeros per row, so more than 99.9% of the entries in the matrix are zeroes.
Instead of storing the complete matrix, sparse matrix formats store only those entries in the matrix which are nonzero. They also have to store some metadata to describe where in the matrix the non-zero entries are stored. There are various different sparse matrix formats available, which make different trade-offs between memory usage, insertion speed, and the speed of different matrix operations. However, if we make the (conservative) assumption that a sparse matrix takes 16 bytes to store each nonzero value, instead of 8 bytes, then we discover that in the example above, we would use less than 25 megabytes to store the matrix. The time taken to solve the matrix system will also be vastly reduced, since operations on zeros are avoided.
Hint

The scipy.sparse package provides convenient interfaces which enable Python code to employ a variety of sparse matrix formats using essentially identical operations to the dense matrix case. The skeleton code already contains commands to construct empty sparse matrices and to solve the resulting linear system. You may, if you wish, experiment with choosing other sparse formats from scipy.sparse, but it is very strongly suggested that you do not switch to a dense numpy array; unless, that is, you particularly enjoy running out of memory on your computer!
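As one possible pattern (a sketch only; the skeleton code may already make these choices for you), one can assemble into a scipy.sparse.lil_matrix, which supports cheap incremental insertion, and convert to compressed sparse row (CSR) format before solving:

import numpy as np
import scipy.sparse
import scipy.sparse.linalg

n = 5  # illustrative size; in practice this is the number of global nodes

# lil_matrix is convenient during assembly because inserting entries is cheap.
A = scipy.sparse.identity(n, format="lil")
A[np.ix_([0, 1], [0, 1])] += np.ones((2, 2))   # local matrix insertion as before

# CSR is efficient for matrix-vector products and for the solver.
l = np.ones(n)
u = scipy.sparse.linalg.spsolve(A.tocsr(), l)
print(u)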
6.3. The method of manufactured solutions¶
When the finite element method is employed to solve Helmholtz problems arising in science and engineering, the forcing function \(f\) will come from the application data. However, for the purpose of testing numerical methods and software, it is exceptionally useful to be able to find values of \(f\) such that an analytic solution to the partial differential equation is known. It turns out that there is a straightforward algorithm for this process, known as the method of manufactured solutions. It has but two steps:
Choose a function \(\tilde{u}\) which satisfies the boundary conditions of the PDE.
Substitute \(\tilde{u}\) into the left hand side of (6.63). Set \(f\) equal to the result of this calculation, and now \(\tilde{u}\) is a solution to (6.63).
To illustrate this algorithm, suppose we wish to construct \(f\) such that:
is a solution to (6.63) defined on a domain \(\Omega\) bounded by the unit square (a square with vertices at the points \((0,0)\), \((0,1)\), \((1,0)\) and \((1,1)\)). It is simple to verify that \(\tilde{u}\) satisfies the boundary conditions. We then note that:
If we choose:
then \(\tilde{u}\) is a solution to (6.63).
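As an illustration of the mechanics only, here is a small sympy sketch using a hypothetical candidate \(\tilde{u} = \cos(2\pi x)\) (not the \(\tilde{u}\) used above): its normal derivative vanishes on all four sides of the unit square, and substituting it into the assumed operator \(-\nabla^2 u + u\) manufactures the matching \(f\).

import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical manufactured solution: du/dx = -2*pi*sin(2*pi*x) vanishes at
# x = 0 and x = 1, and du/dy = 0, so grad(u).n = 0 on the whole boundary.
u_tilde = sp.cos(2 * sp.pi * x)

# Substitute into the (assumed) strong form -laplacian(u) + u to obtain f.
f = -(sp.diff(u_tilde, x, 2) + sp.diff(u_tilde, y, 2)) + u_tilde
print(sp.simplify(f))   # f equals (4*pi**2 + 1)*cos(2*pi*x)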
6.4. Errors and convergence¶
6.4.1. The \(L^2\) error¶
When studying finite element methods we are frequently concerned with convergence in the \(L^2\) norm. That is to say, if \(V\) and \(W\) are finite element spaces defined over the same mesh, and \(f\in V, g\in W\), then we need to calculate:
where \(M_V\) is the cell-node map for the space \(V\) and \(M_W\) is the cell-node map for the space \(W\). Likewise \(\{\Phi_i\}\) is the local basis for \(V\) and \(\{\Psi_j\}\) is the local basis for \(W\).
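In this notation, and decomposing the integral cell by cell exactly as before, the quantity to be computed can be sketched as:

\[ \|f - g\|_{L^2} = \left( \sum_c \int_c \Big( \sum_{\hat{i}} f_{M_V(c,\hat{i})} \, \Phi_{\hat{i}} - \sum_{\hat{j}} g_{M_W(c,\hat{j})} \, \Psi_{\hat{j}} \Big)^2 \,\mathrm{d}x \right)^{1/2}, \]

with \(f_k\) and \(g_k\) the global coefficient vectors of the two functions.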
A complete quadrature rule for this integral will, due to the square in the integrand, require a degree of precision equal to twice the greater of the polynomial degrees of \(V\) and \(W\).
6.4.2. Numerically estimating convergence rates¶
Using the approximation results from the theory part of the course, we know that the error term in the finite element solution of the Helmholtz equation is expected to have the form \(\mathcal{O}(h^{p+1})\) where \(h\) is the mesh spacing and \(p\) is the polynomial degree of the finite element space employed. That is to say if \(\tilde{u}\) is the exact solution to our PDE and \(u_h\) is the solution to our finite element problem, then for sufficiently small \(h\):
for some \(c>0\) not dependent on \(h\). Indeed, for sufficiently small \(h\), there is a \(c\) such that we can write:
Suppose we solve the finite element problem for two different (fine) mesh spacings, \(h_1\) and \(h_2\). Then we have:
or equivalently:
By taking logarithms and rearranging this equation, we can produce a formula which, given the analytic solution and two numerical solutions, produces an estimate of the rate of convergence:
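Concretely, if \(E_1\) and \(E_2\) are the measured errors on meshes with spacings \(h_1\) and \(h_2\), eliminating \(c\) between \(E_1 \approx c h_1^q\) and \(E_2 \approx c h_2^q\) gives \(q \approx \ln(E_1/E_2) / \ln(h_1/h_2)\). A short numpy sketch (variable names illustrative):

import numpy as np

def convergence_rate(e1, e2, h1, h2):
    """Estimate the rate q from errors e_i ~ c * h_i**q on two mesh spacings."""
    return np.log(e1 / e2) / np.log(h1 / h2)

# Halving the mesh spacing and seeing the error drop by a factor of 4
# indicates second order convergence:
print(convergence_rate(1e-2, 2.5e-3, 0.1, 0.05))   # 2.0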
6.5. Implementing finite element problems¶
fe_utils/solvers/helmholtz.py contains a partial implementation of the finite element method to solve (6.64) with \(f\) chosen as in (6.84). Your task is to implement the assemble() function using (6.74), and (6.80) or (6.81). The comments in the assemble() function provide some guidance as to the steps involved. You may also wish to consult the errornorm() function as a guide to the structure of the code required.
Run:

python fe_utils/solvers/helmholtz.py --help

for guidance on using the script to view the solution, the analytic solution, and the error in your solution. In addition, test/test_11_helmholtz_convergence.py contains tests that the Helmholtz solver converges at the correct rate for degree 1, 2 and 3 polynomials.
Warning

test/test_12_helmholtz_convergence.py may take many seconds or even a couple of minutes to run, as it has to solve on some rather fine meshes in order to check convergence.
Footnotes