Laplace's method

See also: Additive smoothing (Laplace smoothing), a method of smoothing a statistical estimator.

In mathematics, Laplace's method, named after Pierre-Simon Laplace, is a technique used to approximate integrals of the form

 \int_a^b\! e^{M f(x)} \, dx

where ƒ(x) is some twice-differentiable function, M is a large number, and the integral endpoints a and b could possibly be infinite. This technique was originally presented in Laplace (1774, pp. 366–367).

The idea of Laplace's method

File:Laplaces method.svg
The function e^{M f(x)}, in blue, shown for M = 0.5 (top) and M = 3 (bottom). Here, ƒ(x) = sin x/x, with a global maximum at x0 = 0. As M grows, the approximation of this function by a Gaussian function (shown in red) improves. This observation underlies Laplace's method.

Assume that the function ƒ(x) has a unique global maximum at x0. Then, the value ƒ(x0) will be larger than all other values ƒ(x). If we multiply this function by a large number M, the ratio between Mƒ(x0) and Mƒ(x) will stay the same (since Mƒ(x0)/Mƒ(x) = ƒ(x0)/ƒ(x)), but the gap grows exponentially in the function (see figure)

 e^{M f(x)}. \,

Thus, significant contributions to the integral of this function will come only from points x in a neighborhood of x0, which can then be estimated.

General theory of Laplace's method

To state and motivate the method, we need several assumptions. We will assume that x0 is not an endpoint of the interval of integration, that the values ƒ(x) cannot be very close to ƒ(x0) unless x is close to x0, and that the second derivative f''(x_0)<0.

We can expand ƒ(x) around x0 by Taylor's theorem,


f(x) = f(x_0) + f'(x_0)(x-x_0) + \frac{1}{2} f''(x_0)(x-x_0)^2 + R

where R = O\left((x-x_0)^3\right).

Since ƒ has a global maximum at x0, and since x0 is not an endpoint, it is a stationary point, so the derivative of ƒ vanishes at x0. Therefore, the function ƒ(x) may be approximated to quadratic order

 f(x) \approx f(x_0) - \frac{1}{2} |f''(x_0)| (x-x_0)^2

for x close to x0 (recall that the second derivative is negative at the global maximum ƒ(x0)). The assumptions made ensure the accuracy of the approximation

\int_a^b\! e^{M f(x)}\, dx\approx e^{M f(x_0)}\int_a^b e^{-M|f''(x_0)| (x-x_0)^2/2} \, dx

(see the picture on the right). This latter integral is a Gaussian integral if the limits of integration go from −∞ to +∞ (which can be assumed because the exponential decays very fast away from x0), and thus it can be calculated. We find

\int_a^b\! e^{M f(x)}\, dx\approx \sqrt{\frac{2\pi}{M|f''(x_0)|}}e^{M f(x_0)}  \text { as } M\to\infty. \,

A generalization of this method and extension to arbitrary precision is provided by Fog (2008).
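As a quick numerical sanity check of this formula (a minimal sketch, assuming NumPy and SciPy are available; the test function and the values of M are arbitrary choices), one can compare the two sides for ƒ(x) = sin x/x, the function used in the figure above. Factoring out e^{M f(x_0)} keeps the numbers in floating-point range:

import numpy as np
from scipy.integrate import quad

# f(x) = sin(x)/x with the removable singularity at x = 0 handled by np.sinc.
f = lambda x: np.sinc(x / np.pi)
x0, fpp = 0.0, -1.0 / 3.0           # global maximum and f''(x0) = -1/3

for M in (10, 100, 1000):
    # Integrate e^{M (f(x) - f(x0))}, i.e. both sides divided by e^{M f(x0)}.
    integral, _ = quad(lambda x: np.exp(M * (f(x) - f(x0))), -np.pi, np.pi)
    laplace = np.sqrt(2 * np.pi / (M * abs(fpp)))
    print(M, integral / laplace)    # the ratio tends to 1 as M grows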

Formal statement and proof:

Assume that  f(x) is a twice differentiable function on  [a,b] with  x_0 \in [a,b] the unique point such that  f(x_0) = \max_{[a,b]} f(x) . Assume additionally that  f''(x_0)<0 .

Then,


\lim_{n \to +\infty} \frac{\int_a^b e^{nf(x)} \, dx}{e^{nf(x_0)} \sqrt{\frac{2 \pi}{n (-f''(x_0))}}} =1


Proof

Lower bound:

Let  \varepsilon > 0 . Then by the continuity of  f'' there exists  \delta > 0 such that if  |x_0 - c| < \delta then  f''(c) \ge f''(x_0) - \varepsilon . By Taylor's theorem, for any  x \in (x_0 - \delta, x_0 + \delta) ,  f(x) \ge f(x_0) + \frac{1}{2}(f''(x_0) - \varepsilon)(x-x_0)^2 .

Then we have the following lower bound:


\begin{align}
\int_a^b e^{n f(x)} \, dx & \ge \int_{x_0 - \delta}^{x_0 + \delta} e^{n f(x)} \, dx
\\
& \ge e^{n f(x_0)} \int_{x_0 - \delta}^{x_0 + \delta} e^{\frac{n}{2} (f''(x_0) - \varepsilon)(x-x_0)^2} \, dx
\\
& = e^{n f(x_0)} \sqrt{\frac{1}{n (-f''(x_0) + \varepsilon)}} \int_{-\delta \sqrt{n (-f''(x_0) + \varepsilon)}}^{\delta \sqrt{n (-f''(x_0) + \varepsilon)}} e^{-\frac{1}{2}y^2} \, dy
\end{align}

where the last equality was obtained by the change of variables  y = \sqrt{n (-f''(x_0) + \varepsilon)}\,(x-x_0) . Recall that  f''(x_0) < 0 , so the quantity under the square root is positive.

If we divide both sides of the above inequality by  e^{nf(x_0)}\sqrt{\frac{2 \pi}{n (-f''(x_0))}} and take the limit, we get:


\begin{align}
\lim_{n \to +\infty} \frac{\int_a^b e^{nf(x)} \, dx}{e^{nf(x_0)} \sqrt{\frac{2 \pi}{n (-f''(x_0))}}} & \ge \lim_{n \to +\infty} \frac{1}{\sqrt{2 \pi}} \int_{-\delta \sqrt{n (-f''(x_0) + \varepsilon)}}^{\delta \sqrt{n (-f''(x_0) + \varepsilon)}} e^{-\frac{1}{2}y^2} \, dy \cdot \sqrt{\frac{-f''(x_0)}{-f''(x_0) + \varepsilon}}
\\
& = \sqrt{\frac{-f''(x_0)}{-f''(x_0) + \varepsilon}}
\end{align}

Since this is true for arbitrary  \varepsilon > 0 , we get the lower bound:


\lim_{n \to +\infty} \frac{\int_a^b e^{nf(x)} \, dx}{ e^{nf(x_0)}\sqrt{\frac{2 \pi}{n (-f''(x_0))}} } \ge 1

Note that this proof also works when  a = -\infty or  b = \infty (or both).

Upper bound:

The proof of the upper bound is similar to that of the lower bound, but there are a few inconveniences. Again we start by picking an  \varepsilon > 0 , but for the proof to work we need  \varepsilon small enough that  f''(x_0) + \varepsilon < 0 . Then, as above, by continuity of  f'' and Taylor's theorem we can find  \delta > 0 so that if  |x - x_0| < \delta , then  f(x) \le f(x_0) + \frac{1}{2} (f''(x_0) + \varepsilon)(x-x_0)^2 . Lastly, by our assumptions (assuming  a, b are finite) there exists an  \eta > 0 such that if  |x - x_0| \ge \delta , then  f(x) \le f(x_0) - \eta .

Then we can calculate the following upper bound:


\begin{align}
\int_a^b e^{n f(x)} \, dx & \le \int_a^{x_0-\delta} e^{n f(x)} \, dx + \int_{x_0-\delta}^{x_0 + \delta} e^{n f(x)} \, dx + \int_{x_0 + \delta}^b e^{n f(x)} \, dx
\\
& \le (b-a) e^{n (f(x_0) - \eta)} + \int_{x_0-\delta}^{x_0 + \delta} e^{n f(x)} \, dx
\\
& \le (b-a) e^{n (f(x_0) - \eta)} + e^{n f(x_0)} \int_{x_0-\delta}^{x_0 + \delta} e^{\frac{n}{2} (f''(x_0)+\varepsilon)(x-x_0)^2} \, dx
\\
& \le (b-a) e^{n (f(x_0) - \eta)} + e^{n f(x_0)} \int_{-\infty}^{+\infty} e^{\frac{n}{2} (f''(x_0)+\varepsilon)(x-x_0)^2} \, dx
\\
& \le (b-a) e^{n (f(x_0) - \eta)} + e^{n f(x_0)} \sqrt{\frac{2 \pi}{n (-f''(x_0) - \varepsilon)}}
\end{align}

If we divide both sides of the above inequality by  e^{nf(x_0)}\sqrt{\frac{2 \pi}{n (-f''(x_0))}} and take the limit, we get:


\begin{align}
\lim_{n \to +\infty} \frac{\int_a^b e^{nf(x)} \, dx}{e^{nf(x_0)} \sqrt{\frac{2 \pi}{n (-f''(x_0))}}} & \le \lim_{n \to +\infty} \left( (b-a) e^{-\eta n} \sqrt{\frac{n (-f''(x_0))}{2 \pi}} + \sqrt{\frac{-f''(x_0)}{-f''(x_0) - \varepsilon}} \right)
\\
& = \sqrt{\frac{-f''(x_0)}{-f''(x_0) - \varepsilon}}
\end{align}

Since  \varepsilon > 0 is arbitrary, we get the upper bound:


\lim_{n \to +\infty} \frac{\int_a^b e^{nf(x)} \, dx}{ e^{nf(x_0)}\sqrt{\frac{2 \pi}{n (-f''(x_0))}} } \le 1

And combining this with the lower bound gives the result.

Note that the above proof obviously fails when  a = -\infty or  b = \infty (or both). To deal with these cases, we need some extra assumptions. A sufficient (not necessary) assumption is that for  n = 1 , the integral  \int_a^b e^{nf(x)} \, dx is finite, and that the number  \eta as above exists (note that this must be an assumption in the case when the interval  [a,b] is infinite). The proof proceeds otherwise as above, but the integrals


\int_a^{x_0-\delta} e^{n f(x) } \, dx + \int_{x_0 + \delta}^b e^{n f(x) } \, dx

must be approximated by


\begin{align}
\int_a^{x_0-\delta} e^{n f(x)} \, dx + \int_{x_0 + \delta}^b e^{n f(x)} \, dx & \le \int_a^b e^{f(x)} e^{(n-1)(f(x_0) - \eta)} \, dx
\\
& = e^{(n-1)(f(x_0) - \eta)} \int_a^b e^{f(x)} \, dx
\end{align}

instead of  (b-a)e^{n (f(x_0) - \eta)} as above, so that when we divide by  e^{nf(x_0)}\sqrt{\frac{2 \pi}{n (-f''(x_0))}} , we get for this term


\frac{e^{(n-1)(f(x_0) - \eta)} \int_a^b e^{f(x)} \, dx}{e^{nf(x_0)}\sqrt{\frac{2 \pi}{n (-f''(x_0))}}} = e^{-(n-1)\eta} \sqrt{n}\, e^{-f(x_0)} \sqrt{\frac{-f''(x_0)}{2 \pi}} \int_a^b e^{f(x)} \, dx

whose limit as  n \rightarrow \infty is  0 . The rest of the proof (the analysis of the interesting term) proceeds as above.

The given condition in the infinite interval case is, as said above, sufficient but not necessary. However, the condition is fulfilled in many, if not most, applications: the condition simply says that the integral we are studying must be well defined (not infinite) and that the maximum of the function at  x_0 must be a "true" maximum (the number  \eta > 0 must exist). There is no need to demand that the integral is finite for  n = 1 ; it is enough to demand that the integral is finite for some  n = N .

This method relies on four basic concepts:


Concepts
1. Relative error

First of all, we need to understand that the "approximation" in this method is measured by the relative error, not the absolute error. Therefore, if we set

 s = \sqrt{\frac{2\pi}{M\left|f''(x_0)\right|}}

this integral can be written as


\begin{align}
\int_a^b\! e^{M f(x)} \, dx & = s e^{Mf(x_0)} \frac{1}{s} \int_a^b\! e^{M(f(x)-f(x_0))} \, dx
\\
& = s e^{Mf(x_0)} \int_{(a-x_0)/s}^{(b-x_0)/s}\! e^{M(f(sy+x_0)-f(x_0))} \, dy
\end{align}

where s is small when M is large, and the relative error is then


\left| \int_{(a-x_0)/s}^{(b-x_0)/s}  e^{M(f(sy+x_0)-f(x_0))} dy-1 \right|.

Now, let us separate this integral into two parts: the region  y \in [-D_y, D_y] and the rest.

2. The function  e^{M\left(f(sy+x_0)-f(x_0) \right)} tends to  e^{-\pi y^2} around the stationary point when M is large enough

Let us look at the Taylor expansion of  M\left( f(x)-f(x_0) \right) around x0 and translate x to y, because we do the comparison in y-space; we get


\begin{align}
M\left( f(x)-f(x_0) \right) & = \frac{Mf''(x_0)}{2}s^2y^2 + \frac{Mf'''(x_0)}{6}s^3y^3 + \cdots
\\
& = -\pi y^2 + O\left(\frac{1}{\sqrt{M}}\right).
\end{align}

Note that  f'(x_0) = 0 because x_0 is a stationary point. From this equation we see that the terms of higher order than the second derivative in this Taylor expansion are suppressed by powers of  \frac{1}{\sqrt{M}} , so that  \exp\left(M\left( f(x)-f(x_0) \right) \right) approaches the Gaussian function, as shown in the figure. Besides,

 \int_{-\infty}^{\infty}e^{-\pi y^2} dy =1.

File:For laplace method --- with different M.png
The figure of  e^{M[f(sy+x_0)-f(x_0)]} for  M = 1, 2 and 3; the red line is the curve of the function  e^{-\pi y^2} .
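The same comparison can be made numerically (a small sketch under the same assumptions as the earlier check; the function, the values of M, and the grid are arbitrary choices): tabulating the rescaled integrand against  e^{-\pi y^2} shows the pointwise gap shrinking as M grows.

import numpy as np

f = lambda x: np.sinc(x / np.pi)    # sin(x)/x
x0, fpp = 0.0, -1.0 / 3.0
y = np.linspace(-2.0, 2.0, 9)

for M in (1, 10, 100):
    s = np.sqrt(2 * np.pi / (M * abs(fpp)))
    rescaled = np.exp(M * (f(s * y + x0) - f(x0)))
    gaussian = np.exp(-np.pi * y ** 2)
    print(M, np.max(np.abs(rescaled - gaussian)))   # gap shrinks as M grows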
3. The larger  M is, the smaller the relevant range of  x becomes

Because we do the comparison in y-space,  y is fixed in  y \in [-D_y, D_y] , which corresponds to  x \in [x_0 - sD_y, x_0 + sD_y] ; since s is inversely proportional to  \sqrt{M} , the chosen region of x shrinks as M increases.

4. If the integral used in Laplace's method converges, the contribution to the relative error from the region away from the stationary point  x_0 will tend to zero as M grows.

Relying on the third concept, even if we choose a very large D_y, sD_y will eventually be a very small number when M is increased to a huge value. Then, how can we guarantee that the integral over the rest tends to 0 when  M is large enough?

The basic idea is to find a function  m(x) with  m(x) \ge f(x) on the rest region such that the integral of  e^{Mm(x)} tends to zero as M grows. Because  e^{Mm(x)} \ge e^{Mf(x)} > 0 , the integral of  e^{Mf(x)} over the rest region will then also tend to zero. For simplicity, choose  m(x) as the tangent through the point  x = sD_y , as shown in the figure:

File:For laplace method --- upper limit function m(x).gif
m(x) is denoted by the two tangent lines passing through  x = \pm sD_y + x_0 . When  sD_y gets smaller, the region covered gets larger.

If the interval of integration of this method is finite, then since  f(x) is continuous on the rest region, it will always be smaller than the  m(x) shown above when M is large enough. (It will be proved later that the integral of  e^{Mm(x)} tends to zero when  M is large enough.)

If the interval of integration of this method is infinite,  m(x) and  f(x) might cross each other repeatedly. If so, we cannot guarantee that the integral of  e^{Mf(x)} will finally tend to zero. For example, in the case of  f(x) = \frac{\sin x}{x} ,  \int^{\infty}_{0} e^{Mf(x)} \, dx always diverges. Therefore, we need to require that  \int^{\infty}_{d} e^{Mf(x)} \, dx converges in the infinite-interval case. If so, this integral will tend to zero when  d is large enough, and we can choose this  d as the crossing point of  m(x) and  f(x) .

One might ask why we do not require  \int^{\infty}_{d} e^{f(x)} \, dx to converge instead. An example shows the reason. Suppose the rest part of  f(x) is  -\ln x ; then  e^{f(x)} = \frac{1}{x} and its integral diverges, yet when  M = 2 the integral of  e^{Mf(x)} = \frac{1}{x^2} converges. So the integrals of some functions diverge when  M is not large, but converge when  M is large enough.

Based on these four concepts, we can derive the relative error of Laplace's method.
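The fourth concept can also be checked numerically (same hedged setup as the earlier sketches; the choice  D_y = 2 is arbitrary): the share of the integral coming from outside  x \in [x_0 - sD_y, x_0 + sD_y] vanishes as M grows.

import numpy as np
from scipy.integrate import quad

f = lambda x: np.sinc(x / np.pi)    # sin(x)/x, maximum f(0) = 1
fpp, Dy = -1.0 / 3.0, 2.0

for M in (10, 100, 1000):
    s = np.sqrt(2 * np.pi / (M * abs(fpp)))
    total, _ = quad(lambda x: np.exp(M * (f(x) - 1.0)), -np.pi, np.pi)
    inner, _ = quad(lambda x: np.exp(M * (f(x) - 1.0)), -s * Dy, s * Dy)
    print(M, (total - inner) / total)   # tail share tends to zero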

Other formulations

Laplace's approximation is sometimes written as


\int_a^b\! h(x) e^{M g(x)}\, dx \approx \sqrt{\frac{2\pi}{M|g''(x_0)|}} h(x_0) e^{M g(x_0)} \text{ as } M\to\infty

where h is positive.

Importantly, the accuracy of the approximation depends on the variable of integration, that is, on what stays in g(x) and what goes into h(x).[1]
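As a concrete check of this form (a hedged sketch assuming SciPy; the choice  g(x) = \ln x - x ,  h(x) = 1/x is ours): the maximum lies at  x_0 = 1 with  g''(1) = -1 , and the left-hand side equals  \Gamma(M)/M^M exactly, so the formula predicts  \Gamma(M)/M^M \approx \sqrt{2\pi/M}\, e^{-M} . Comparing logarithms avoids overflow:

import numpy as np
from scipy.special import gammaln

# g(x) = ln x - x has its maximum at x0 = 1 with g''(1) = -1; h(x) = 1/x > 0.
# Then the integral of h(x) e^{M g(x)} over (0, inf) equals Gamma(M)/M^M exactly.
for M in (10, 100, 1000):
    log_exact = gammaln(M) - M * np.log(M)            # log of Gamma(M)/M^M
    log_laplace = 0.5 * np.log(2 * np.pi / M) - M     # log of sqrt(2 pi/M) e^{-M}
    print(M, log_exact - log_laplace)                 # tends to 0 as M grows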


The derivation of its relative error

First of all, we place the global maximum at  x_0 = 0 , which simplifies the derivation without losing any important information; all the derivation inside this subsection is done under this assumption. Besides, what we want is the relative error  \left|R\right| , as shown below:


\int_a^b\! h(x) e^{M g(x)}\, dx = h(0) e^{Mg(0)} s \underbrace{\int_{a/s}^{b/s} \frac{h(sy)}{h(0)} e^{M\left[ g(sy)-g(0) \right]} \, dy}_{1+R},

where  s\equiv\sqrt{\frac{2\pi}{M\left| g'' (0) \right|}}. So, if we let  A\equiv \frac{h(sy)}{h(0)}e^{M\left[g(sy)-g(0) \right]} and  A_0\equiv e^{-\pi y^2} , we can get

 
\left| R\right| = \left| \int_{a/s}^{b/s}A\,dy -\int_{-\infty}^{\infty}A_0\,dy   \right|

since  \int_{-\infty}^{\infty}A_0\,dy =1. Now, let us find its upper bound.

Owing to  \left| A+B \right| \le |A|+|B| , we can separate this integral into five parts, of three different types (a), (b) and (c). Therefore,


|R| < \underbrace{\left| \int_{D_y}^{\infty}A_0 \, dy \right|}_{(a_1)} + \underbrace{\left| \int_{D_y}^{b/s}A \, dy \right|}_{(b_1)} + \underbrace{\left| \int_{-D_y}^{D_y}\left(A-A_0\right) \, dy \right|}_{(c)} + \underbrace{\left| \int_{a/s}^{-D_y}A \, dy \right|}_{(b_2)} + \underbrace{\left| \int_{-\infty}^{-D_y}A_0 \, dy \right|}_{(a_2)}

Since  (a_1) and  (a_2) are similar, we will only calculate  (a_1) ; likewise,  (b_1) and  (b_2) are similar, so we will only calculate  (b_1) .

For  (a_1) , after the substitution  z \equiv \pi y^2 , we get

 (a_1) = \left| \frac{1}{2\sqrt{\pi}}\int_{\pi D_y^2}^{\infty} e^{-z}z^{-1/2} dz\right| <\frac{e^{-\pi D_y^2}}{2\pi D_y}.

This means that as long as  D_y is large enough, it will tend to zero.

For (b_1), we can get

 (b_1)\le\left| \int_{D_y}^{b/s}\left[\frac{h(sy)}{h(0)}\right]_{\text{max}} e^{Mm(sy)}dy \right|

where

m(x) \ge \frac{1}{M}\ln{\frac{h(x)}{h(0)}}+g(x)-g(0) \,\,\text{for}\,\, x\in [sD_y,b]

and  h(x) should have the same sign as  h(0) on this region. Let us choose  m(x) as the tangent to  g(x)-g(0) at the point  x = sD_y , i.e.  m(sy) = g(sD_y)-g(0) + g'(sD_y)\left( sy-sD_y \right) , which is shown in the figure

File:For laplace method --- upper limit function m(x).gif
m(x) is the tangent line through the point  x = sD_y .

From this figure, observe that when s or D_y gets smaller, the region satisfying the above inequality gets larger. Therefore, if we want to find a suitable m(x) to cover the whole f(x) on the interval of (b_1), D_y has an upper limit. Besides, because the integral of  e^{-\alpha x} is simple, let us use it to estimate the relative error contributed by (b_1).

Based on Taylor expansion, we can get


\begin{align}
M\left[g(sD_y)-g(0)\right] & = M\left[ \frac{g''(0)}{2}s^2D_y^2 + \frac{g'''(\xi)}{6}s^3D_y^3 \right], \quad \text{for some } \xi\in[0,sD_y]
\\
& = -\pi D_y^2 + \frac{(2\pi)^{3/2}g'''(\xi)D_y^3}{6\sqrt{M}|g''(0)|^{3/2}},
\end{align}

and


\begin{align}
Msg'(sD_y) & = Ms\left(g''(0)sD_y + \frac{g'''(\zeta)}{2}s^2D_y^2\right), \quad \text{for some } \zeta\in[0,sD_y]
\\
& = -2\pi D_y + \sqrt{\frac{2}{M}}\left( \frac{\pi}{|g''(0)|} \right)^{3/2}g'''(\zeta)D_y^2,
\end{align}

and then substitute them back into the calculation of  (b_1) . The remainders of these two expansions are both inversely proportional to the square root of  M , so we drop them to simplify the calculation (keeping them gives a sharper bound but an uglier formula).


\begin{align}
(b_1)  & \le \left|\left[ \frac{h(sy)}{h(0)} \right]_{\text{max}} e^{-\pi D_y^2}\int_0^{b/s-D_y}e^{-2\pi D_y y} dy \right| 
\\
 & \le  \left|\left[ \frac{h(sy)}{h(0)} \right]_{\text{max}} e^{-\pi D_y^2}\frac{1}{2\pi D_y} \right|.
\end{align}

Therefore,  (b_1) tends to zero as  D_y gets larger, but remember that the upper bound of  D_y must be respected in this calculation.

For the integral near x = 0, part (c), we can also use Taylor's theorem. When  h'(0) \ne 0 ,


\begin{align}
(c) & \le \int_{-D_y}^{D_y} e^{-\pi y^2} \left| \frac{sh'(\xi)}{h(0)}y \right|\, dy 
\\
& < \sqrt{\frac{2}{\pi M |g''(0)|}} \left| \frac{h'(\xi)}{h(0)}y \right|_\text{max} \left( 1-e^{-\pi D_y^2} \right)
\end{align}

and it is inversely proportional to the square root of  M . In fact, (c) behaves the same way when h(x) is a constant.

In conclusion, the integral near the stationary point gets smaller as  \sqrt{M} gets larger, and the remaining parts tend to zero as long as  D_y is large enough; however,  D_y has an upper limit, determined by whether the function  m(x) is always larger than  g(x)-g(0) on the rest region. As long as we can find one  m(x) satisfying this condition, the upper bound of  D_y can be chosen as directly proportional to  \sqrt{M} , since m(x) is a tangent to g(x)-g(0) at x = sD_y. So, the bigger  M is, the bigger  D_y can be.

In the multivariate case where \mathbf{x} is a d-dimensional vector and f(\mathbf{x}) is a scalar function of \mathbf{x}, Laplace's approximation is usually written as:


\int e^{M f(\mathbf{x})}\, d\mathbf{x} \approx \left(\frac{2\pi}{M}\right)^{d/2} |-H(f)(\mathbf{x}_0)|^{-1/2} e^{M f(\mathbf{x}_0)} \text{ as } M\to\infty

where H(f)(\mathbf{x}_0) is the Hessian matrix of f evaluated at \mathbf{x}_0 and where |\cdot| denotes matrix determinant. Analogously to the univariate case, the Hessian is required to be negative definite.[2]

Although  \mathbf{x} denotes a d-dimensional vector here, the term  d\mathbf{x} denotes an infinitesimal volume element, i.e.  d\mathbf{x} := dx_1\,dx_2\cdots dx_d .
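A two-dimensional numerical sketch of the multivariate formula (assuming SciPy; the test function is our choice): for  f(x, y) = 2 - \cosh x - \cosh y the maximum sits at the origin with  f(\mathbf{x}_0) = 0 and Hessian  -I , so the formula predicts the integral to be close to  2\pi/M .

import numpy as np
from scipy.integrate import dblquad

M = 50.0
f = lambda x, y: 2.0 - np.cosh(x) - np.cosh(y)   # maximum at (0, 0), Hessian -I

# e^{M f} is negligible outside a small box, so finite limits suffice here.
integral, _ = dblquad(lambda y, x: np.exp(M * f(x, y)),
                      -5.0, 5.0, lambda x: -5.0, lambda x: 5.0)
laplace = 2.0 * np.pi / M   # (2 pi/M)^{d/2} |-H|^{-1/2} e^{M f(x0)} with d = 2, |-H| = 1
print(integral / laplace)   # close to 1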

Laplace's method extension: Steepest descent


In extensions of Laplace's method, complex analysis, and in particular Cauchy's integral formula, is used to find a contour of steepest descent for an (asymptotically with large M) equivalent integral, expressed as a line integral. In particular, if no point x0 where the derivative of ƒ vanishes exists on the real line, it may be necessary to deform the integration contour to an optimal one, where the above analysis will be possible. Again the main idea is to reduce, at least asymptotically, the calculation of the given integral to that of a simpler integral that can be explicitly evaluated. See the book of Erdelyi (1956) for a simple discussion (where the method is termed steepest descents).

The appropriate formulation for the complex z-plane is


\int_a^b\! e^{M f(z)}\, dz \approx \sqrt{\frac{2\pi}{-Mf''(z_0)}} e^{M f(z_0)} \text{ as } M\to\infty

for a path passing through the saddle point at z0. Note the explicit appearance of a minus sign to indicate the direction of the second derivative: one must not take the modulus. Also note that if the integrand is meromorphic, one may have to add residues corresponding to poles traversed while deforming the contour (see for example section 3 of Okounkov's paper Symmetric functions and random partitions).

Further generalizations

An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.

Given a contour C in the complex sphere, a function ƒ defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If ƒ and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution.

An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.

The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou.

The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.

Complex integrals

For complex integrals in the form:

  \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} g(s)e^{st} \,ds

with t >> 1, we make the substitution t = iu and the change of variable s = c + ix to get the bilateral Laplace transform:

 \frac{1}{2 \pi}\int_{-\infty}^\infty g(c+ix)e^{-ux}e^{icu} \, dx.

We then split g(c+ix) into its real and imaginary parts, after which we recover u = t/i. This is useful for inverse Laplace transforms, the Perron formula and complex integration.
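As a rough numerical illustration (a sketch assuming SciPy; the abscissa c, the truncation cutoff, and the test transform g(s) = 1/s^2 are arbitrary choices), the truncated contour integral recovers the inverse Laplace transform of 1/s^2, which is t:

import numpy as np
from scipy.integrate import quad

def bromwich(g, t, c=1.0, cutoff=400.0):
    # (1/2 pi) times the integral of g(c + ix) e^{(c + ix) t} dx over the
    # vertical line Re(s) = c, truncated at +/- cutoff; only the real part
    # survives for a real-valued inverse transform.
    integrand = lambda x: (g(c + 1j * x) * np.exp((c + 1j * x) * t)).real
    val, _ = quad(integrand, -cutoff, cutoff, limit=2000)
    return val / (2.0 * np.pi)

print(bromwich(lambda s: 1.0 / s ** 2, 2.0))   # expect roughly 2.0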

Example 1: Stirling's approximation

Laplace's method can be used to derive Stirling's approximation

N!\approx \sqrt{2\pi N} N^N e^{-N}\,

for a large integer N.

From the definition of the Gamma function, we have

N! = \Gamma(N+1)=\int_0^\infty e^{-x} x^N \, dx.

Now we change variables, letting

x = N z \,

so that

dx = N \, dz.

Plug these values back in to obtain


\begin{align}
N! & = \int_0^\infty e^{-N z} \left(N z \right)^N N \, dz \\
& = N^{N+1}\int_0^\infty e^{-N z} z^N \, dz \\
& = N^{N+1}\int_0^\infty e^{-N z} e^{N\ln z} \, dz \\
& = N^{N+1}\int_0^\infty e^{N(\ln z-z)} \, dz.
\end{align}

This integral has the form necessary for Laplace's method with

f \left( z \right) = \ln{z}-z

which is twice-differentiable:

f'(z) = \frac{1}{z}-1,\,
f''(z) = -\frac{1}{z^2}.\,

The maximum of ƒ(z) lies at z0 = 1, and the second derivative of ƒ(z) has the value −1 at this point. Therefore, we obtain

N! \approx N^{N+1}\sqrt{\frac{2\pi}{N}} e^{-N}=\sqrt{2\pi N} N^N e^{-N}.\,
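A quick numerical check of the approximation (standard library only):

import math

for N in (5, 10, 100):
    exact = math.factorial(N)
    stirling = math.sqrt(2 * math.pi * N) * N ** N * math.exp(-N)
    print(N, exact / stirling)   # approaches 1; the relative error is about 1/(12 N)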

Example 2: parameter estimation and probabilistic inference

Azevedo-Filho & Shachter (1994) review Laplace's method results (univariate and multivariate) and present a detailed example showing the method used in parameter estimation and probabilistic inference under a Bayesian perspective. Laplace's method is applied to a meta-analysis problem from the medical domain, involving experimental data, and compared to other techniques.

Notes

  1. Butler, Ronald W. (2007). Saddlepoint Approximations with Applications. Cambridge University Press. ISBN 978-0-521-87250-8.
  2. MacKay, David J. C. (2003). Information Theory, Inference and Learning Algorithms. Cambridge: Cambridge University Press. ISBN 978-0-521-64298-9.

References

  • Azevedo-Filho, A.; Shachter, R. (1994). "Laplace's Method Approximations for Probabilistic Inference in Belief Networks with Continuous Variables". In Uncertainty in Artificial Intelligence. San Francisco: Morgan Kaufmann.
  • Deift, P.; Zhou, X. (1993). "A steepest descent method for oscillatory Riemann–Hilbert problems. Asymptotics for the MKdV equation". Annals of Mathematics 137 (2): 295–368.
  • Erdelyi, A. (1956). Asymptotic Expansions. Dover.
  • Fog, A. (2008). "Calculation Methods for Wallenius' Noncentral Hypergeometric Distribution". Communications in Statistics, Simulation and Computation 37 (2): 258–273.
  • Kamvissis, S.; McLaughlin, K. T.-R.; Miller, P. (2003). Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation. Annals of Mathematics Studies 154. Princeton University Press.
  • Laplace, P. S. (1774). "Memoir on the probability of causes of events". Mémoires de Mathématique et de Physique, Tome Sixième. (English translation by S. M. Stigler 1986. Statist. Sci., 1(19): 364–378).
  • Okounkov, A. (2002). "Symmetric functions and random partitions". In Symmetric Functions 2001: Surveys of Developments and Perspectives. Kluwer.

This article incorporates material from saddle point approximation on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
