From e2f6ff4487606f4052b9c912b9faa2c8d8f1ca10 Mon Sep 17 00:00:00 2001 From: Prefetch Date: Sun, 18 Jun 2023 17:59:42 +0200 Subject: Improve knowledge base --- .../know/concept/sturm-liouville-theory/index.md | 321 +++++++++++---------- 1 file changed, 164 insertions(+), 157 deletions(-) (limited to 'source/know/concept/sturm-liouville-theory/index.md') diff --git a/source/know/concept/sturm-liouville-theory/index.md b/source/know/concept/sturm-liouville-theory/index.md index bff57af..d7984b5 100644 --- a/source/know/concept/sturm-liouville-theory/index.md +++ b/source/know/concept/sturm-liouville-theory/index.md @@ -8,14 +8,15 @@ categories: layout: "concept" --- -**Sturm-Liouville theory** defines the analogue of Hermitian matrix -eigenvalue problems for linear second-order ODEs. +**Sturm-Liouville theory** extends +the concept of Hermitian matrix eigenvalue problems +to linear second-order ordinary differential equations. -It states that, given suitable boundary conditions, any linear -second-order ODE can be rewritten using the **Sturm-Liouville operator**, -and that the corresponding eigenvalue problem, known as a -**Sturm-Liouville problem**, will give real eigenvalues and a complete set -of eigenfunctions. +It states that, given suitable boundary conditions, +any such equation can be rewritten using the **Sturm-Liouville operator**, +and that the corresponding eigenvalue problem, +known as a **Sturm-Liouville problem**, +will give real eigenvalues and a complete set of eigenfunctions. @@ -23,18 +24,19 @@ of eigenfunctions. Consider the most general form of a second-order linear differential operator $$\hat{L}$$, where $$p_0(x)$$, $$p_1(x)$$, and $$p_2(x)$$ -are real functions of $$x \in [a,b]$$ which are nonzero for all $$x \in ]a, b[$$: +are real functions of $$x \in [a,b]$$ and are nonzero for all $$x \in \,\,]a, b[$$: $$\begin{aligned} - \hat{L} \{u(x)\} = p_0(x) u''(x) + p_1(x) u'(x) + p_2(x) u(x) + \hat{L} \{u(x)\} + \equiv p_2(x) \: u''(x) + p_1(x) \: u'(x) + p_0(x) \: u(x) \end{aligned}$$ -We now define the **adjoint** or **Hermitian** operator -$$\hat{L}^\dagger$$ analogously to matrices: +Analogously to matrices, +we now define its **adjoint** operator $$\hat{L}^\dagger$$ as follows: $$\begin{aligned} - \inprod{f}{\hat{L} g} - = \inprod{\hat{L}^\dagger f}{g} + \inprod{\hat{L}^\dagger f}{g} + \equiv \inprod{f}{\hat{L} g} \end{aligned}$$ What is $$\hat{L}^\dagger$$, given the above definition of $$\hat{L}$$? @@ -43,146 +45,155 @@ We start from the inner product $$\inprod{f}{\hat{L} g}$$: $$\begin{aligned} \inprod{f}{\hat{L} g} &= \int_a^b f^*(x) \hat{L}\{g(x)\} \dd{x} - = \int_a^b (f^* p_0) g'' + (f^* p_1) g' + (f^* p_2) g \dd{x} + = \int_a^b (f^* p_2) g'' + (f^* p_1) g' + (f^* p_0) g \dd{x} \\ - &= \big[ (f^* p_0) g' + (f^* p_1) g \big]_a^b - \int_a^b (f^* p_0)' g' + (f^* p_1)' g - (f^* p_2) g \dd{x} + &= \Big[ (f^* p_2) g' + (f^* p_1) g \Big]_a^b - \int_a^b (f^* p_2)' g' + (f^* p_1)' g - (f^* p_0) g \dd{x} \\ - &= \big[ f^* \big( p_0 g' \!+\! p_1 g \big) \!-\! (f^* p_0)' g \big]_a^b + \int_a^b \! \big( (f p_0)'' - (f p_1)' + (f p_2) \big)^* g \dd{x} + &= \Big[ f^* (p_2 g' + p_1 g) - (f^* p_2)' g \Big]_a^b + \int_a^b \! 
\Big( (f p_2)'' - (f p_1)' + (f p_0) \Big)^* g \dd{x} \\ - &= \big[ f^* \big( p_0 g' + (p_1 - p_0') g \big) - (f^*)' p_0 g \big]_a^b + \int_a^b \big( \hat{L}^\dagger\{f\} \big)^* g \dd{x} -\end{aligned}$$ - -We now have an expression for $$\hat{L}^\dagger$$, but are left with an -annoying boundary term: - -$$\begin{aligned} - \inprod{f}{\hat{L} g} - &= \big[ f^* \big( p_0 g' + (p_1 - p_0') g \big) - (f^*)' p_0 g \big]_a^b + \inprod{\hat{L}^\dagger f}{g} + &= \Big[ f^* \big( p_2 g' + (p_1 - p_2') g \big) - (f^*)' p_2 g \Big]_a^b + \int_a^b \Big( \hat{L}^\dagger\{f\} \Big)^* g \dd{x} \end{aligned}$$ -To fix this, -let us demand that $$p_1(x) = p_0'(x)$$ and that -$$[p_0(f^* g' - (f^*)' g)]_a^b = 0$$, leaving: +The newly-formed operator on $$f$$ must be $$\hat{L}^\dagger$$, +but there is an additional boundary term. +To fix this, we demand that $$p_1(x) = p_2'(x)$$ +and that $$\big[ p_2 (f^* g' - (f^*)' g) \big]_a^b = 0$$, leaving: $$\begin{aligned} \inprod{f}{\hat{L} g} - &= \big[ p_0 \big( f^* g' - (f^*)' g \big) \big]_a^b + \inprod{\hat{L}^\dagger f}{g} - = \inprod{\hat{L}^\dagger f}{g} + &= \Big[ f^* \big( p_2 g' + (p_1 - p_2') g \big) - (f^*)' p_2 g \Big]_a^b + \inprod{\hat{L}^\dagger f}{g} + \\ + &= \Big[ p_2 \big( f^* g' - (f^*)' g \big) \Big]_a^b + \inprod{\hat{L}^\dagger f}{g} + \\ + &= \inprod{\hat{L}^\dagger f}{g} \end{aligned}$$ -Using the aforementioned restriction $$p_1(x) = p_0'(x)$$, -we then take a look at the definition of $$\hat{L}^\dagger$$: +Let us look at the expression for $$\hat{L}^\dagger$$ we just found, +with the restriction $$p_1 = p_2'$$ in mind: $$\begin{aligned} \hat{L}^\dagger \{f\} - &= (p_0 f)'' - (p_1 f)' + (p_2 f) + &= (p_2 f)'' - (p_1 f)' + (p_0 f) \\ - &= p_0 f'' + (2 p_0' - p_1) f' + (p_0'' - p_1' + p_2) f + &= (p_2'' f + 2 p_2' f' + p_2 f'') - (p_1' f + p_1 f') + (p_0 f) \\ - &= p_0 f'' + p_0' f' + p_2 f + &= p_2 f'' + (2 p_2' - p_1) f' + (p_2'' - p_1' + p_0) f \\ - &= (p_0 f')' + p_2 f + &= p_2 f'' + p_1 f' + p_0 f + \\ + &= \hat{L}\{f\} \end{aligned}$$ -The original operator $$\hat{L}$$ reduces to the same form, -so it is **self-adjoint**: +So $$\hat{L}$$ is **self-adjoint**, i.e. $$\hat{L}^\dagger$$ is the same as $$\hat{L}$$! +Indeed, every such second-order linear operator is self-adjoint +if it satisfies the constraints $$p_1 = p_2'$$ and $$\big[ p_2 (f^* g' - (f^*)' g) \big]_a^b = 0$$. + +But what if $$p_1 \neq p_2'$$? +Let us multiply $$\hat{L}$$ by an unknown $$p(x) \neq 0$$ +and divide by $$p_2(x) \neq 0$$: $$\begin{aligned} - \hat{L} \{f\} - &= p_0 f'' + p_0' f' + p_2 f - = (p_0 f')' + p_2 f - = \hat{L}^\dagger \{f\} + \frac{p}{p_2} \hat{L} \{u\} + = p u'' + p \frac{p_1}{p_2} u' + p \frac{p_0}{p_2} u \end{aligned}$$ -Consequently, every such second-order linear operator $$\hat{L}$$ is self-adjoint, -as long as it satisfies the constraints $$p_1(x) = p_0'(x)$$ and $$[p_0 (f^* g' - (f^*)' g)]_a^b = 0$$. - -Let us ignore the latter constraint for now (it will return later), -and focus on the former: what if $$\hat{L}$$ does not satisfy $$p_0' \neq p_1$$? 
-We multiply it by an unknown $$p(x) \neq 0$$, and divide by $$p_0(x) \neq 0$$: +We now demand that the derivative $$p'(x)$$ of the unknown $$p(x)$$ satisfies: $$\begin{aligned} - \frac{p(x)}{p_0(x)} \hat{L} \{u\} = p(x) u'' + p(x) \frac{p_1(x)}{p_0(x)} u' + p(x) \frac{p_2(x)}{p_0(x)} u + p'(x) + = p(x) \frac{p_1(x)}{p_2(x)} + \quad \implies \quad + \frac{p_1(x)}{p_2(x)} \dd{x} + = \frac{1}{p(x)} \dd{p} \end{aligned}$$ -We now define $$q(x)$$, -and demand that the derivative $$p'(x)$$ of the unknown $$p(x)$$ satisfies: +Taking the indefinite integral of this differential equation +yields an expression for $$p(x)$$: $$\begin{aligned} - q(x) = p(x) \frac{p_2(x)}{p_0(x)} - \qquad - p'(x) = p(x) \frac{p_1(x)}{p_0(x)} + \int \frac{p_1(x)}{p_2(x)} \dd{x} + = \int \frac{1}{p} \dd{p} + = \ln\!\big( p(x) \big) + \quad \implies \quad + \boxed{ + p(x) + = \exp\!\bigg( \int \frac{p_1(x)}{p_2(x)} \dd{x} \bigg) + } \end{aligned}$$ -The latter is a differential equation for $$p(x)$$, which we solve by integration: +We define an additional function $$q(x)$$ +based on the last term of $$(p / p_2) \hat{L}$$ shown above: $$\begin{aligned} - \frac{p_1(x)}{p_0(x)} \dd{x} - &= \frac{1}{p(x)} \dd{p} - \\ - \implies \quad - \int \frac{p_1(x)}{p_0(x)} \dd{x} - &= \int \frac{1}{p} \dd{p} - = \ln\!\big( p(x) \big) - \\ - \implies \qquad\qquad - p(x) - &= \exp\!\bigg( \int \frac{p_1(x)}{p_0(x)} \dd{x} \bigg) + \boxed{ + q(x) + \equiv p(x) \frac{p_0(x)}{p_2(x)} + } + = \frac{p_0(x)}{p_2(x)} \exp\!\bigg( \int \frac{p_1(x)}{p_2(x)} \dd{x} \bigg) \end{aligned}$$ -Now that we have $$p(x)$$ and $$q(x)$$, we can define a new operator $$\hat{L}_p$$ as follows: +When rewritten using $$p$$ and $$q$$, +the modified operator $$(p / p_2) \hat{L}$$ looks like this: $$\begin{aligned} - \hat{L}_p \{u\} - = \frac{p}{p_0} \hat{L} \{u\} + \frac{p}{p_2} \hat{L} \{u\} = p u'' + p' u' + q u = (p u')' + q u \end{aligned}$$ This is the self-adjoint form from earlier! -So even if $$p_0' \neq p_1$$, any second-order linear operator with $$p_0(x) \neq 0$$ -can easily be put in self-adjoint form. - -This general form is known as the **Sturm-Liouville operator** $$\hat{L}_{SL}$$, -where $$p(x)$$ and $$q(x)$$ are nonzero real functions of the variable $$x \in [a,b]$$: +So even if $$p_1 \neq p_2'$$, any second-order linear operator +with $$p_2(x) \neq 0$$ can easily be made self-adjoint. +The resulting general form is called the **Sturm-Liouville operator** $$\hat{L}_\mathrm{SL}$$, +for nonzero $$p(x)$$: $$\begin{aligned} \boxed{ - \hat{L}_{SL} \{u(x)\} - = \frac{d}{dx}\Big( p(x) \frac{du}{dx} \Big) + q(x) u(x) - = \hat{L}_{SL}^\dagger \{u(x)\} + \begin{aligned} + \hat{L}_\mathrm{SL} \{u(x)\} + &= \hat{L}_\mathrm{SL}^\dagger \{u(x)\} + \\ + &= \Big( p(x) \: u'(x) \Big)' + q(x) \: u(x) + \end{aligned} } \end{aligned}$$ +Still subject to the constraint $$\big[ p (f^* g' - (f^*)' g) \big]_a^b = 0$$ +such that $$\inprod{f}{\hat{L}_\mathrm{SL} g} = \inprod{\hat{L}_\mathrm{SL}^\dagger f}{g}$$. + ## Eigenvalue problem -A **Sturm-Liouville problem** (SLP) is analogous to a matrix eigenvalue problem, -where $$w(x)$$ is a real weight function, $$\lambda$$ is the **eigenvalue**, -and $$u(x)$$ is the corresponding **eigenfunction**: +An eigenvalue problem of $$\hat{L}_\mathrm{SL}$$ +is called a **Sturm-Liouville problem** (SLP). 
+The goal is to find the **eigenvalues** $$\lambda$$ +and corresponding **eigenfunctions** $$u(x)$$ that fulfill: $$\begin{aligned} \boxed{ - \hat{L}_{SL}\{u(x)\} = - \lambda w(x) u(x) + \hat{L}_\mathrm{SL}\{u(x)\} = - \lambda \: w(x) \: u(x) } \end{aligned}$$ -Necessarily, $$w(x) > 0$$ except in isolated points, where $$w(x) = 0$$ is allowed; -the point is that any inner product $$\inprod{f}{w g}$$ may never be zero due to $$w$$'s fault. -Furthermore, the convention is that $$u(x)$$ cannot be trivially zero. +Where $$w(x)$$ is a real weight function satisfying $$w(x) > 0$$ for $$x \in \,\,]a, b[$$. +By convention, the trivial solution $$u = 0$$ is not valid. +Some authors have the opposite sign for $$\lambda$$ and/or $$w$$. -In our derivation of $$\hat{L}_{SL}$$, -we removed a boundary term to get self-adjointness. -Consequently, to have a valid SLP, the boundary conditions for -$$u(x)$$ must be as follows, otherwise the operator cannot be self-adjoint: +In our derivation of $$\hat{L}_\mathrm{SL}$$ above, +we imposed the constraint $$\big[ p (f^* g' - (f')^* g) \big]_a^b = 0$$ to ensure that +$$\inprod{\hat{L}_\mathrm{SL}^\dagger f}{g} = \inprod{f}{\hat{L}_\mathrm{SL} g}$$. +Consequently, to have a valid SLP, +the boundary conditions (BCs) on $$u$$ must be such that, +for any two (possibly identical) eigenfunctions $$u_m$$ and $$u_n$$, we have: $$\begin{aligned} - \Big[ p(x) \big( u^*(x) u'(x) - (u'(x))^* u(x) \big) \Big]_a^b = 0 + \Big[ p(x) \big( u_m^*(x) \: u_n'(x) - \big(u_m'(x)\big)^* u_n(x) \big) \Big]_a^b = 0 \end{aligned}$$ -There are many boundary conditions (BCs) which satisfy this requirement. -Some notable ones are listed here non-exhaustively: +There are many boundary conditions that satisfy this requirement. +Some notable ones are listed non-exhaustively below. +Verify for yourself that these work: + **Dirichlet BCs**: $$u(a) = u(b) = 0$$ + **Neumann BCs**: $$u'(a) = u'(b) = 0$$ @@ -190,108 +201,103 @@ Some notable ones are listed here non-exhaustively: + **Periodic BCs**: $$p(a) = p(b)$$, $$u(a) = u(b)$$, and $$u'(a) = u'(b)$$ + **Legendre "BCs"**: $$p(a) = p(b) = 0$$ -Once this requirement is satisfied, Sturm-Liouville theory gives us -some very useful information about $$\lambda$$ and $$u(x)$$. -From the definition of an SLP, we know that, given two arbitrary (and possibly identical) -eigenfunctions $$u_n$$ and $$u_m$$, the following must be satisfied: - -$$\begin{aligned} - 0 = \hat{L}_{SL}\{u_n\} + \lambda_n w u_n = \hat{L}_{SL}\{u_m^*\} + \lambda_m^* w u_m^* -\end{aligned}$$ - -We subtract these expressions, multiply by the eigenfunctions, and integrate: +If this is fulfilled, Sturm-Liouville theory gives us +useful information about $$\lambda$$ and $$u$$. 
+By definition, the following must be satisfied +for two arbitrary eigenfunctions $$u_m$$ and $$u_n$$: $$\begin{aligned} 0 - &= \int_a^b u_m^* \big(\hat{L}_{SL}\{u_n\} + \lambda_n w u_n\big) - u_n \big(\hat{L}_{SL}\{u_m^*\} + \lambda_m^* w u_m^*\big) \:dx + &= \hat{L}_\mathrm{SL}\{u_m^*\} + \lambda_m^* w u_m^* \\ - &= \int_a^b u_m^* \hat{L}_{SL}\{u_n\} - u_n \hat{L}_{SL}\{u_m^*\} + u_n u_m^* w (\lambda_n - \lambda_m^*) \:dx + &= \hat{L}_\mathrm{SL}\{u_n\} + \lambda_n w u_n \end{aligned}$$ -Rearranging this a bit reveals that these are in fact three inner products: +We multiply each by the other eigenfunction, +subtract the results, and integrate: $$\begin{aligned} - \int_a^b u_m^* \hat{L}_{SL}\{u_n\} - u_n \hat{L}_{SL}\{u_m^*\} \:dx - &= (\lambda_m^* - \lambda_n) \int_a^b u_n u_m^* w \:dx + 0 + &= \int_a^b u_m^* \big(\hat{L}_\mathrm{SL}\{u_n\} + \lambda_n w u_n\big) + - u_n \big(\hat{L}_\mathrm{SL}\{u_m^*\} + \lambda_m^* w u_m^*\big) \dd{x} \\ - \inprod{u_m}{\hat{L}_{SL} u_n} - \inprod{\hat{L}_{SL} u_m}{u_n} - &= (\lambda_m^* - \lambda_n) \inprod{u_m}{w u_n} + &= \int_a^b u_m^* \hat{L}_\mathrm{SL}\{u_n\} - u_n \hat{L}_\mathrm{SL}\{u_m^*\} + + (\lambda_n - \lambda_m^*) u_m^* w u_n \dd{x} + \\ + &= \inprod{u_m}{\hat{L}_\mathrm{SL} u_n} - \inprod{\hat{L}_\mathrm{SL} u_m}{u_n} + + (\lambda_n - \lambda_m^*) \inprod{u_m}{w u_n} \end{aligned}$$ -The operator $$\hat{L}_{SL}$$ is self-adjoint by definition, -so the left-hand side vanishes, leaving us with: +The operator $$\hat{L}_\mathrm{SL}$$ is self-adjoint of course, +so the first two terms vanish, leaving us with: $$\begin{aligned} 0 - &= (\lambda_m^* - \lambda_n) \inprod{u_m}{w u_n} + &= (\lambda_n - \lambda_m^*) \inprod{u_m}{w u_n} \end{aligned}$$ -When $$m = n$$, the inner product $$\inprod{u_n}{w u_n}$$ is real and positive -(assuming $$u_n$$ is not trivially zero, in which case it would be disqualified anyway). -In this case we thus know that $$\lambda_n^* = \lambda_n$$, -i.e. the eigenvalue $$\lambda_n$$ is real for any $$n$$. - -When $$m \neq n$$, then $$\lambda_m^* - \lambda_n$$ may or may not be zero, -depending on the degeneracy. If there is no degeneracy, we -see that $$\inprod{u_m}{w u_n} = 0$$, i.e. the eigenfunctions are orthogonal. +When $$m = n$$, we get $$\inprod{u_n}{w u_n} > 0$$, +so the equation is only satisfied if $$\lambda_n^* = \lambda_n$$, +meaning the eigenvalue $$\lambda_n$$ is real for any $$n$$. +When $$m \neq n$$, then $$\lambda_n - \lambda_m^*$$ +may or may not be zero depending on the degeneracy. +If there is no degeneracy, then $$\lambda_n - \lambda_m^* \neq 0$$, +meaning $$\inprod{u_m}{w u_n} = 0$$, i.e. the eigenfunctions are orthogonal. +In case of degeneracy, manual orthogonalization is needed, +which is guaranteed to be doable using the [Gram-Schmidt method](/know/concept/gram-schmidt-method/). -In case of degeneracy, manual orthogonalization is needed, but as it turns out, -this is guaranteed to be doable, using e.g. the [Gram-Schmidt method](/know/concept/gram-schmidt-method/). 
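Both properties are easy to check numerically. Below is a minimal Python/NumPy sketch (an illustration, not part of the original derivation; the grid size and choice of problem are arbitrary): it discretizes the textbook SLP $$u'' = -\lambda u$$ on $$[0, \pi]$$ with Dirichlet BCs, i.e. $$p = 1$$, $$q = 0$$ and $$w = 1$$, whose exact solutions are $$\lambda_n = n^2$$ and $$u_n = \sin(n x)$$. The discretized operator is a real symmetric matrix, the finite-dimensional counterpart of self-adjointness, so its eigenvalues come out real and its eigenvectors orthogonal:

```python
import numpy as np

# Finite-difference discretization of the SLP u'' = -lambda u on [0, pi]
# with Dirichlet BCs u(0) = u(pi) = 0, i.e. p = 1, q = 0, w = 1.
# Exact eigenvalues: n^2; exact eigenfunctions: sin(n x).
N = 500                # number of interior grid points (arbitrary choice)
h = np.pi / (N + 1)    # grid spacing

# Tridiagonal matrix representing -d^2/dx^2; it is symmetric,
# which is the discrete counterpart of the operator's self-adjointness.
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

evals, evecs = np.linalg.eigh(A)    # symmetric solver: real eigenvalues

print(evals[:4])                    # approximately [1, 4, 9, 16]

# Orthogonality with respect to the weight w = 1: the Gram matrix of the
# first few eigenvectors (h approximates the integral's dx) is diagonal.
G = h * evecs[:, :4].T @ evecs[:, :4]
print(np.round(G, 8))               # ~ h times the identity matrix
```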
- -In conclusion, **a Sturm-Liouville problem has real eigenvalues $$\lambda$$, -and all the corresponding eigenfunctions $$u(x)$$ are mutually orthogonal**: +In conclusion, an SLP has **real eigenvalues** +and **orthogonal eigenfunctions**: for all $$m$$, $$n$$: $$\begin{aligned} \boxed{ - \inprod{u_m(x)}{w(x) u_n(x)} - = \inprod{u_n}{w u_n} \delta_{nm} + \lambda_n \in \mathbb{R} + } + \qquad\qquad + \boxed{ + \inprod{u_m}{w u_n} = A_n \delta_{nm} } \end{aligned}$$ -When you're solving a differential eigenvalue problem, -knowing that all eigenvalues are real is a *huge* simplification, +When solving a differential eigenvalue problem, +knowing that all eigenvalues are real is a huge simplification, so it is always worth checking whether you are dealing with an SLP. -Another useful fact of SLPs is that they always -have an infinite number of discrete eigenvalues. -Furthermore, the eigenvalues always ascend to $$+\infty$$; -in other words, there always exists a *lowest* eigenvalue $$\lambda_0 > -\infty$$, -known as the **ground state**. +Another useful fact: +it turns out that SLPs always have an infinite number of *discrete* eigenvalues. +Furthermore, there always exists a *lowest* eigenvalue $$\lambda_0 > -\infty$$, +called the **ground state**. -## Completeness +## Complete basis -Not only are the eigenfunctions $$u_n(x)$$ of an SLP orthogonal, they -also form a **complete basis**, meaning that any well-behaved function $$f(x)$$ can be -expanded as a **generalized Fourier series** with coefficients $$a_n$$: +Not only are an SLP's eigenfunctions orthogonal, +they also form a **complete basis**, meaning any well-behaved $$f(x)$$ +can be expanded as a **generalized Fourier series** with coefficients $$a_n$$: $$\begin{aligned} \boxed{ f(x) = \sum_{n = 0}^\infty a_n u_n(x) - \quad \mathrm{for}\: x \in ]a, b[ + \quad \mathrm{for} \: x \in \,\,]a, b[ } \end{aligned}$$ -This series will converge significantly faster if $$f(x)$$ -satisfies the same BCs as $$u_n(x)$$. In that case the -expansion will even be valid for the inclusive interval $$x \in [a, b]$$. +This series converges faster if $$f$$ satisfies the same BCs as $$u_n$$; +in that case the expansion is also valid for the inclusive interval $$x \in [a, b]$$. 
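As a concrete illustration of this convergence behaviour (a sketch, not from the original text; the basis, grid and example functions are chosen for simplicity), consider the sine eigenfunctions $$u_n = \sin(n x)$$ of the SLP $$u'' = -\lambda u$$ on $$[0, \pi]$$ with Dirichlet BCs and $$w = 1$$. The coefficients are computed with the projection formula derived next, comparing a target that satisfies the BCs with one that does not:

```python
import numpy as np

# Generalized Fourier (sine) series on [0, pi]: u_n = sin(n x), w = 1.
x = np.linspace(0.0, np.pi, 2001)

def partial_sum(fx, N):
    """Truncated series with a_n = <u_n, w f> / <u_n, w u_n> (quadrature)."""
    s = np.zeros_like(x)
    for n in range(1, N + 1):
        u_n = np.sin(n * x)
        a_n = (u_n @ fx) / (u_n @ u_n)   # the dx factors cancel in this ratio
        s += a_n * u_n
    return s

f_good = x * (np.pi - x)    # satisfies the Dirichlet BCs: f(0) = f(pi) = 0
f_bad = np.ones_like(x)     # violates them: does not vanish at the endpoints

for N in (5, 20, 80):
    print(N,
          np.max(np.abs(partial_sum(f_good, N) - f_good)),   # decays quickly
          np.max(np.abs(partial_sum(f_bad, N) - f_bad)))     # stays ~1 near x = 0, pi
```

The error for the first target shrinks rapidly with $$N$$, while the second target's expansion never becomes valid at the endpoints, in line with the remark above.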
To find an expression for the coefficients $$a_n$$, -we multiply the above generalized Fourier series by $$w(x) u_m^*(x)$$ for an arbitrary $$m$$: +we multiply the above generalized Fourier series by $$u_m^* w$$ +and integrate it to get inner products on both sides: $$\begin{aligned} - f(x) w(x) u_m^*(x) - &= \sum_{n = 0}^\infty a_n u_n(x) w(x) u_m^*(x) -\end{aligned}$$ - -By integrating we get inner products on both the left and the right: - -$$\begin{aligned} - \int_a^b f(x) w(x) u_m^*(x) \dd{x} - &= \int_a^b \bigg( \sum_{n = 0}^\infty a_n u_n(x) w(x) u_m^*(x) \bigg) \dd{x} + u_m^* w f + &= \sum_{n = 0}^\infty a_n u_m^* w u_n + \\ + \int_a^b u_m^* w f \dd{x} + &= \int_a^b \bigg( \sum_{n = 0}^\infty a_n u_m^* w u_n \bigg) \dd{x} \\ \inprod{u_m}{w f} &= \sum_{n = 0}^\infty a_n \inprod{u_m}{w u_n} @@ -307,9 +313,9 @@ $$\begin{aligned} = a_m A_m \end{aligned}$$ -After isolating this for $$a_n$$, we see that +After isolating this for $$a_m$$, we see that the coefficients are given by the projection of the target -function $$f(x)$$ onto the normalized eigenfunctions $$u_n(x) / A_n$$: +function $$f$$ onto the normalized eigenfunctions $$u_m / A_m$$: $$\begin{aligned} \boxed{ @@ -326,19 +332,20 @@ after inserting the expression for $$a_n$$: $$\begin{aligned} f(x) &= \sum_{n = 0}^\infty \frac{1}{A_n} \inprod{u_n}{w f} u_n(x) - = \int_a^b \bigg(\sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) w(\xi) f(\xi) u_n(x) \bigg) \dd{\xi} \\ - &= \int_a^b f(\xi) \bigg(\sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) w(\xi) u_n(x) \bigg) \dd{\xi} + &= \int_a^b \bigg(\sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) \: w(\xi) \: f(\xi) \: u_n(x) \bigg) \dd{\xi} + \\ + &= \int_a^b f(\xi) \bigg(\sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) \: w(\xi) \: u_n(x) \bigg) \dd{\xi} \end{aligned}$$ Upon closer inspection, the parenthesized summation must be the [Dirac delta function](/know/concept/dirac-delta-function/) $$\delta(x)$$ for the integral to work out. -This is in fact the underlying requirement for completeness: +In fact, this is the underlying requirement for completeness: $$\begin{aligned} \boxed{ - \sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) w(\xi) u_n(x) = \delta(x - \xi) + \sum_{n = 0}^\infty \frac{1}{A_n} u_n^*(\xi) \: w(\xi) \: u_n(x) = \delta(x - \xi) } \end{aligned}$$ -- cgit v1.2.3
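To illustrate the completeness relation that closes the article (again a sketch with arbitrary choices, not part of the original text): for the sine eigenfunctions on $$[0, \pi]$$, where $$w = 1$$ and $$A_n = \pi / 2$$, integrating the truncated kernel $$\sum_{n \le N} u_n^*(\xi) \, w(\xi) \, u_n(x) / A_n$$ against a smooth test function should reproduce the sifting property of $$\delta(x - \xi)$$ ever more accurately as $$N$$ grows:

```python
import numpy as np

# Truncated completeness relation for u_n = sin(n x) on [0, pi], with w = 1
# and A_n = <u_n, w u_n> = pi / 2:
#   K_N(x, xi) = sum_{n=1}^{N} u_n(xi) w(xi) u_n(x) / A_n  ->  delta(x - xi)
xi = np.linspace(0.0, np.pi, 4001)      # integration grid for the dummy variable
dxi = xi[1] - xi[0]
x0 = 1.0                                # arbitrary point inside ]0, pi[
f = np.exp(-xi) * np.cos(3 * xi)        # arbitrary smooth test function f(xi)

for N in (10, 50, 250):
    n = np.arange(1, N + 1)[:, None]    # column shape (N, 1) for broadcasting
    K = np.sum(np.sin(n * xi) * np.sin(n * x0) / (np.pi / 2), axis=0)
    print(N, np.sum(f * K) * dxi)       # ~ integral of f(xi) K_N(x0, xi) dxi
print("f(x0) =", np.exp(-x0) * np.cos(3 * x0))   # value the integrals approach
```

The printed integrals approach $$f(x_0)$$ as $$N$$ increases, which is exactly the sifting behaviour expected of the Dirac delta function.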