janmr blog

Finite Difference Discretization of the 2D Laplace Operator

Let us now consider the discretization of the 2D Laplace operator using finite differences. Recall that the Laplace operator is defined as

$$\Delta u = \nabla \cdot \nabla u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2},$$

for some sufficiently smooth function $u: \mathbb{R}^2 \to \mathbb{R}$.

By following the same procedure as in the previous post, we get:

$$\begin{aligned} \frac{\partial^2 u(x, y)}{\partial x^2} &= \frac{u(x - h, y) - 2 u(x, y) + u(x + h, y)}{h^2} + \mathcal{O}(h^2), \\ \frac{\partial^2 u(x, y)}{\partial y^2} &= \frac{u(x, y - h) - 2 u(x, y) + u(x, y + h)}{h^2} + \mathcal{O}(h^2), \end{aligned}$$

and thus

$$\Delta u(x, y) = \frac{u(x - h, y) + u(x, y - h) - 4 u(x, y) + u(x + h, y) + u(x, y + h)}{h^2} + \mathcal{O}(h^2).$$
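To make the $\mathcal{O}(h^2)$ error behaviour concrete, here is a small numerical sketch. It uses a hypothetical test function $u(x, y) = \sin x \sin 2y$ (not from the post), whose exact Laplacian is $-5u$; halving $h$ should reduce the error of the five-point formula by roughly a factor of four.

```python
import numpy as np

# Five-point approximation of the Laplacian at a single point (x, y).
def five_point(u, x, y, h):
    return (u(x - h, y) + u(x, y - h) - 4 * u(x, y)
            + u(x + h, y) + u(x, y + h)) / h**2

# Hypothetical test function: u(x, y) = sin(x) sin(2y), with exact Δu = -5 u.
u = lambda x, y: np.sin(x) * np.sin(2 * y)
exact = -5 * u(0.7, 0.3)

for h in (0.1, 0.05, 0.025):
    print(f"h = {h:5.3f}  error = {abs(five_point(u, 0.7, 0.3, h) - exact):.2e}")
```

Each halving of $h$ should divide the printed error by about four, consistent with second-order accuracy.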

Let us now consider a uniform $(m+2) \times (n+2)$ grid with spacing $h$ in both the $x$ and $y$ directions. We set $x_i = i h$ for $i = 0, 1, \ldots, m+1$ and $y_j = j h$ for $j = 0, 1, \ldots, n+1$. We will assume that $u$ is zero on the boundary of the domain, i.e., $u(x_0, \cdot) = u(x_{m+1}, \cdot) = u(\cdot, y_0) = u(\cdot, y_{n+1}) = 0$.

Now let $D \in \mathbb{R}^{m \times n}$ be a matrix where $D_{i, j}$ is the finite difference approximation of $\Delta u(x_i, y_j)$, i.e.,

$$D_{i, j} = \frac{u(x_{i-1}, y_j) + u(x_i, y_{j-1}) - 4 u(x_i, y_j) + u(x_{i+1}, y_j) + u(x_i, y_{j+1})}{h^2}.$$

We now let $U \in \mathbb{R}^{m \times n}$ be the matrix with entries $U_{i, j} = u(x_i, y_j)$, and we seek an efficient way to compute $D$ from $U$.

One way to do this is to observe that $D$ can be obtained from $U$ by sweeping through the grid and applying a five-point stencil at each point:

Figure 1. A five-point stencil that describes the finite difference approximation (ignoring the 1/h² factor)

But how do we do this efficiently using matrix operations? We definitely want to avoid looping over the grid ourselves. Instead, we can split the five-point stencil into two parts, one horizontal and one vertical:

Figure 2. Splitting the five-point stencil into a sum of a vertical and a horizontal three-point stencil

This shows that we can compute $D$ from $U$ as

$$D = L_m U + U L_n^T,$$

where $L_k$ is the 1D Laplace matrix of order $k$:

$$L_k = \frac{1}{h^2} \begin{bmatrix} -2 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & \ddots & \vdots \\ 0 & 1 & -2 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & 1 \\ 0 & \cdots & 0 & 1 & -2 \end{bmatrix} \in \mathbb{R}^{k \times k}.$$

Note how the first term, $L_m U$, applies the three-point stencil to each column of $U$, while the second term, $U L_n^T$, applies it to each row of $U$ (transposing $L_n$ is not strictly necessary here, as $L_n$ is symmetric).
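As a quick sanity check, the matrix form can be compared against the five-point formula applied by hand. This is a sketch using SciPy: the tridiagonal $L_k$ is built with `scipy.sparse.diags`, and $U L_n^T$ is computed as $(L_n U^T)^T$ using the symmetry of $L_n$.

```python
import numpy as np
import scipy.sparse as sp

def laplace_1d(k, h):
    # Tridiagonal 1D Laplace matrix: -2 on the diagonal, 1 on both off-diagonals.
    return sp.diags([1, -2, 1], [-1, 0, 1], shape=(k, k), format="csr") / h**2

m, n, h = 4, 5, 0.1
rng = np.random.default_rng(0)
U = rng.standard_normal((m, n))  # arbitrary grid values for testing

Lm, Ln = laplace_1d(m, h), laplace_1d(n, h)
D = Lm @ U + (Ln @ U.T).T  # L_m U + U L_n^T, using that L_n is symmetric

# Compare one interior entry with the five-point stencil applied directly.
i, j = 2, 2
stencil = (U[i-1, j] + U[i, j-1] - 4*U[i, j] + U[i+1, j] + U[i, j+1]) / h**2
assert np.isclose(D[i, j], stencil)
```

Only interior entries match the full stencil; at the edges the matrix form implicitly uses the zero boundary values.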

Note also how $L_k$ contains 3 non-zero values per row (except for the first and last rows), so, assuming it is represented in a sparse (compressed row) format, the matrix-matrix product $L_m U$ requires roughly $6 m n$ floating point operations (3 multiplications and 3 additions per output entry), and likewise for $U L_n^T$. That is a total of $12 m n$ operations to compute $D$ from $U$.

Another approach is to use the identities related to the $\operatorname{vec}$ and Kronecker product operators, as seen in an earlier post. Using these, we get

$$\begin{aligned} \operatorname{vec}(D) &= \operatorname{vec}(L_m U + U L_n^T) \\ &= \operatorname{vec}(L_m U) + \operatorname{vec}(U L_n^T) \\ &= (I_n \otimes L_m) \operatorname{vec}(U) + (L_n \otimes I_m) \operatorname{vec}(U) \\ &= (I_n \otimes L_m + L_n \otimes I_m) \operatorname{vec}(U), \end{aligned}$$

where $I_k$ is the $k \times k$ identity matrix.

So the matrix operator here is

$$L_{m \times n} = I_n \otimes L_m + L_n \otimes I_m = L_n \oplus L_m,$$

where $\oplus$ denotes the Kronecker sum (available in SciPy as kronsum). It contains (mostly) 5 non-zero values in each of its $m n$ rows, so the matrix-vector product $L_{m \times n} \operatorname{vec}(U)$ requires roughly $10 m n$ floating point operations.
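This operator can be built directly with `scipy.sparse.kronsum`, and we can verify that it reproduces the matrix form. A small sketch; note that $\operatorname{vec}$ stacks columns, which corresponds to Fortran-order flattening of a NumPy array.

```python
import numpy as np
import scipy.sparse as sp

def laplace_1d(k, h):
    # Tridiagonal 1D Laplace matrix of order k.
    return sp.diags([1, -2, 1], [-1, 0, 1], shape=(k, k), format="csr") / h**2

m, n, h = 4, 5, 0.1
rng = np.random.default_rng(1)
U = rng.standard_normal((m, n))

Lm, Ln = laplace_1d(m, h), laplace_1d(n, h)
L2d = sp.kronsum(Lm, Ln)  # I_n ⊗ L_m + L_n ⊗ I_m

# vec(D) via the Kronecker sum versus the matrix form D = L_m U + U L_n^T.
vec_D = L2d @ U.flatten(order="F")  # vec stacks columns: Fortran order
D = Lm @ U + (Ln @ U.T).T
assert np.allclose(vec_D, D.flatten(order="F"))
```

Note that `kronsum(Lm, Ln)` matches the convention above: its first argument supplies the $I_n \otimes L_m$ term.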

Note that the approach of splitting the five-point stencil into two parts can also be done using the two-point stencil $1 \diamond 0 \diamond 1$ in one direction and the three-point stencil $1 \diamond {-4} \diamond 1$ in the other. This reduces the number of floating point operations to $4 m n + 6 m n = 10 m n$, just like the Kronecker sum approach.
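This asymmetric split can be checked numerically as well. A sketch, where the names `Pm` and `Tn` for the two stencil matrices are hypothetical, chosen just for this example:

```python
import numpy as np
import scipy.sparse as sp

m, n, h = 4, 5, 0.1
rng = np.random.default_rng(2)
U = rng.standard_normal((m, n))

# Two-point stencil 1 ⋄ 0 ⋄ 1 in the x-direction (zero diagonal) ...
Pm = sp.diags([1, 0, 1], [-1, 0, 1], shape=(m, m), format="csr") / h**2
# ... and the three-point stencil 1 ⋄ -4 ⋄ 1 in the y-direction.
Tn = sp.diags([1, -4, 1], [-1, 0, 1], shape=(n, n), format="csr") / h**2

D_split = Pm @ U + (Tn @ U.T).T

# Same result as the symmetric split D = L_m U + U L_n^T.
Lm = sp.diags([1, -2, 1], [-1, 0, 1], shape=(m, m), format="csr") / h**2
Ln = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n), format="csr") / h**2
assert np.allclose(D_split, Lm @ U + (Ln @ U.T).T)
```

The savings come from the zero diagonal of `Pm`: only 2 multiplications and 2 additions are needed per output entry of that product.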

Feel free to leave any question, correction or comment in this Mastodon thread.