German tank problem

During World War II, production of German tanks such as the Panther was accurately estimated by Allied intelligence using statistical methods.

In the statistical theory of estimation, the problem of estimating the maximum of a discrete uniform distribution from sampling without replacement is known in English as the German tank problem, due to its application in World War II to the estimation of the number of German tanks.

This analysis shows the approach that was used and illustrates the difference between frequentist inference and Bayesian inference.

Estimating the population maximum based on a single sample yields divergent results, while estimation based on multiple samples is an instructive practical estimation question whose answer is simple but not obvious.


Example

Suppose an intelligence officer has spotted k = 4 tanks with serial numbers 2, 6, 7, and 14, the maximum observed serial number being m = 14. The unknown total number of tanks is called N.

The formula for estimating the total number of tanks suggested by the frequentist approach outlined below is

N \approx m + \frac{m}{k} - 1 = 14 + \frac{14}{4} - 1 = 16.5

The Bayesian analysis below, by contrast, yields (primarily) a probability mass function for the number of tanks:

\Pr(N=n) = \begin{cases}
   0 &\text{if } n < m \\
   \frac {k - 1}{k}\frac{\binom{m - 1}{k - 1}}{\binom n k} &\text{if } n \ge m
\end{cases}

from which we can estimate the number of tanks according to

\begin{align}
       N &\approx \mu \pm \sigma = 19.5 \pm 10 \\
     \mu &= (m - 1)\frac{k - 1}{k - 2} \\
  \sigma &= \sqrt{\frac{(k-1)(m-1)(m-k+1)}{(k-3)(k-2)^2}}
\end{align}

This distribution has positive skewness, reflecting the fact that there must be at least m = 14 tanks.
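These numbers can be reproduced with a few lines of Python (a sketch for illustration; the variable names are ours, and the closed-form posterior moments used here are the ones quoted above, valid for k ≥ 4):

  from math import sqrt

  serials = [2, 6, 7, 14]   # the observed serial numbers
  k = len(serials)          # sample size, k = 4
  m = max(serials)          # sample maximum, m = 14

  # Frequentist estimate: sample maximum plus the average gap
  print(m + m / k - 1)      # 16.5

  # Bayesian posterior mean and standard deviation (finite for k >= 4)
  mu = (m - 1) * (k - 1) / (k - 2)
  sigma = sqrt((k - 1) * (m - 1) * (m - k + 1) / ((k - 3) * (k - 2) ** 2))
  print(mu, sigma)          # 19.5, approx 10.36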

Historical problem

Panther tanks are loaded for transport to frontline units, 1943.

During the course of the war the Western Allies made sustained efforts to determine the extent of German production, and approached this in two major ways: conventional intelligence gathering and statistical estimation. In many cases, statistical analysis substantially improved on conventional intelligence. In some cases, conventional intelligence was used in conjunction with statistical methods, as was the case in estimation of Panther tank production just prior to D-Day.

The Allied command structure had thought the Panzer V (Panther) tanks seen in Italy, with their high-velocity, long-barreled 75 mm L/70 guns, were unusual heavy tanks and would be seen in northern France only in small numbers, much as the Tiger I had been seen in Tunisia. The US Army was confident that the Sherman tank would continue to perform well, as it had against the Panzer III and Panzer IV tanks in North Africa and Sicily. Shortly before D-Day, rumors indicated that large numbers of Panzer V tanks were being used.

To ascertain whether this was true, the Allies attempted to estimate the number of tanks being produced. To do this they used the serial numbers on captured or destroyed tanks. The principal numbers used were gearbox numbers, as these fell in two unbroken sequences. Chassis and engine numbers were also used, though their use was more complicated. Various other components were used to cross-check the analysis. Similar analyses were done on tires, which were observed to be sequentially numbered (i.e., 1, 2, 3, ..., N).[2][3][4]

The analysis of tank wheels yielded an estimate for the number of wheel molds that were in use. A discussion with British road wheel makers then estimated the number of wheels that could be produced from this many molds, which yielded the number of tanks that were being produced each month. Analysis of wheels from two tanks (32 road wheels each, 64 road wheels total) yielded an estimate of 270 produced in February 1944, substantially more than had previously been suspected.[5]

German records after the war showed production for the month of February 1944 was 276.[6] The statistical approach proved far more accurate than conventional intelligence methods, and the phrase "German tank problem" became accepted as a descriptor for this type of statistical analysis.

Estimating production was not the only use of this serial number analysis. It was also used to understand German production more generally, including the number of factories, the relative importance of factories, the length of the supply chain (based on the lag between production and use), changes in production, and the use of resources such as rubber.

Specific data

According to conventional Allied intelligence estimates, the Germans were producing around 1,400 tanks a month between June 1940 and September 1942. Applying the formula below to the serial numbers of captured tanks gave a calculated figure of 256 a month. After the war, captured German production figures from the ministry of Albert Speer showed the actual number to be 255.[3]

Estimates for some specific months are given as:[7]

Month          Statistical estimate   Intelligence estimate   German records
June 1940      169                    1,000                   122
June 1941      244                    1,550                   271
August 1942    327                    1,550                   342

Similar analyses

V-2 rocket production was accurately estimated by statistical methods.

Similar serial number analysis was used for other military equipment during World War II, most successfully for the V-2 rocket.[8]

During World War II, German intelligence analyzed factory markings on Soviet military equipment, and during the Korean War, factory markings on Soviet equipment were analyzed again. The Soviets also estimated German tank production during World War II.[9]

In the 1980s, some Americans were given access to the production line of Israel's Merkava tanks. The production numbers were classified, but the tanks had serial numbers, allowing estimation of production.[10]

The formula has been used in non-military contexts, for example to estimate the number of Commodore 64 computers built, where the result (12.5 million) matches the official figures quite well.[11]

Countermeasures

To prevent serial number analysis, serial numbers can be excluded, or usable auxiliary information reduced. Alternatively, serial numbers that resist cryptanalysis can be used. One effective method is to choose numbers randomly without replacement from a list that is much larger than the number of objects produced (compare the one-time pad). Another is to generate random numbers and check them against the list of already assigned numbers; collisions are likely unless the number of possible digits is more than twice the number of digits in the number of objects produced (where the serial number can be in any base); see birthday problem. A cryptographically secure pseudorandom number generator may be used for this. All these methods require a lookup table (or breaking the cipher) to map a serial number back to production order, which complicates the use of serial numbers: a range of serial numbers cannot be recalled at once, for instance; each must be looked up individually, or a list generated.
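As a sketch of the random-assignment countermeasure (hypothetical parameters; the 12-digit space and the run of 1,000 objects are arbitrary choices for illustration):

  import secrets

  issued = set()  # serial numbers already assigned

  def new_serial(digits=12):
      # Draw until an unused number is found; collisions are rare when the
      # space (10**digits) is much larger than the number of objects produced.
      while True:
          s = secrets.randbelow(10 ** digits)
          if s not in issued:
              issued.add(s)
              return s

  # Assign numbers to a production run of 1,000 objects; mapping a serial
  # number back to production order requires keeping a separate lookup table.
  serials = [new_serial() for _ in range(1000)]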

Alternatively, sequential serial numbers can be encrypted with a simple substitution cipher, which allows easy decoding but is also easily broken by a known-plaintext attack: even if the sequence starts at an arbitrary point, the plaintext follows a pattern (the numbers are in sequence). One example is given in Ken Follett's novel "Code to Zero", where the encryption of the Jupiter C rocket serial numbers is described as:

H U N T S V I L E X
1 2 3 4 5 6 7 8 9 0

The key here is the code word Huntsville with repeated letters omitted, padded with X to give a 10-letter key. Rocket number 13 was therefore "HN", and rocket number 24 was "UT".
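A minimal Python sketch of this cipher (the function name and key handling are ours; only the substitution table comes from the novel):

  # Substitution table from the 10-letter key shown above
  encode = dict(zip("1234567890", "HUNTSVILEX"))

  def encrypt_serial(n):
      # Encode each decimal digit of the sequential serial number
      return "".join(encode[d] for d in str(n))

  print(encrypt_serial(13))  # HN
  print(encrypt_serial(24))  # UT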

Strong encryption of serial numbers without expanding them can be achieved with format-preserving encryption. Instead of storing a truly random permutation on the set of all possible serial numbers in a large table, such algorithms will derive a pseudo-random permutation from a secret key. Security can then be defined as the pseudo-random permutation being indistinguishable from a truly random permutation to an attacker who doesn't know the key.

Frequentist analysis

Minimum-variance unbiased estimator

For point estimation (estimating a single value \hat{N} for the total N), the minimum-variance unbiased estimator (MVUE, or UMVU estimator) is given by:

\hat{N} = m\left(1 + k^{-1}\right) - 1

where m is the largest serial number observed (sample maximum) and k is the number of tanks observed (sample size).[10][12][13] Note that once a serial number has been observed, it is no longer in the pool and will not be observed again.

This has a variance of[10]

 \operatorname{var}(\hat{N}) = \frac{1}{k}\frac{(N-k)(N+1)}{(k+2)} \approx \frac{N^2}{k^2} \text{ for small samples } k \ll N

so a standard deviation of approximately N/k, the (population) average size of a gap between samples; compare m/k above.
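Both formulas are easy to check by simulation (a sketch; N, k, and the trial count are arbitrary illustrative choices):

  import random

  N, k, trials = 250, 4, 200_000
  estimates = []
  for _ in range(trials):
      sample = random.sample(range(1, N + 1), k)  # without replacement
      m = max(sample)
      estimates.append(m * (1 + 1 / k) - 1)       # the UMVU estimator

  mean = sum(estimates) / trials
  var = sum((e - mean) ** 2 for e in estimates) / trials
  print(mean)                                     # close to N = 250
  print(var, (N - k) * (N + 1) / (k * (k + 2)))   # empirical vs. exact variance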

Intuition

The formula may be understood intuitively as the sample maximum plus the average gap between observations in the sample. The sample maximum is chosen as the initial estimator because it is the maximum likelihood estimator; the gap is added to compensate for the negative bias of the sample maximum as an estimator for the population maximum. The formula can thus be written

\hat{N} = m + \frac{m - k}{k}= m + mk^{-1} - 1 = m\left(1 + k^{-1}\right) - 1

This can be visualized by imagining that the samples are evenly spaced throughout the range, with additional samples just outside the range at 0 and N + 1. Starting with an initial gap between 0 and the lowest sample (the sample minimum), the average gap between samples is (m - k)/k; the -k appears because the samples themselves are not counted in computing the gap between samples.

This philosophy is formalized and generalized in the method of maximum spacing estimation.

Derivation

The probability that the sample maximum equals m is \tbinom{m - 1}{k - 1}\big/\tbinom Nk, where \tbinom \cdot\cdot is the binomial coefficient.

Given the total number N and the sample size k, the expected value of the sample maximum is

\begin{align}
            \mu &= \sum_{m=k}^N m\frac{\tbinom{m - 1}{k - 1}}{\tbinom Nk} = \frac{k(N + 1)}{k + 1} 
\end{align}

From this the unknown quantity N can be expressed in terms of expectation and sample size as

\begin{align}
   N &= \mu\left(1 + k^{-1}\right) - 1
\end{align}

By the linearity of expectation, it follows that

\begin{align}
  \mu\left(1 + k^{-1}\right) - 1 &= E\left[m\left(1 + k^{-1}\right) - 1\right] 
\end{align}

and so an unbiased estimator of N is obtained by replacing the expectation with the observation:

\begin{align}
             \hat{N} &= m\left(1 + k^{-1}\right) - 1.
\end{align}
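The expectation identity, and the unbiasedness of the resulting estimator, can be verified by exhaustive enumeration for small populations (a sketch, with N = 10 and k = 3 chosen arbitrarily):

  from itertools import combinations
  from math import comb

  N, k = 10, 3
  maxima = [max(c) for c in combinations(range(1, N + 1), k)]

  # Expected sample maximum: k(N + 1)/(k + 1) = 3 * 11 / 4 = 8.25
  print(sum(maxima) / comb(N, k))

  # The estimator m(1 + 1/k) - 1 averages to exactly N = 10 (unbiasedness)
  print(sum(m * (1 + 1 / k) - 1 for m in maxima) / comb(N, k))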

To show that this is the UMVU estimator, note that the sample maximum m is a complete sufficient statistic for N when sampling without replacement from {1, ..., N}; by the Lehmann–Scheffé theorem, the unbiased estimator above, being a function of m alone, is the unique minimum-variance unbiased estimator.

Confidence intervals

Instead of, or in addition to, point estimation, interval estimation can be carried out, such as confidence intervals. These are easily computed, based on the observation that the probability that k samples will fall in an interval covering p of the range (0 ≤ p ≤ 1) is p^k (assuming in this section that draws are with replacement, to simplify computations; if draws are without replacement, this overstates the likelihood, and intervals will be overly conservative).

Thus the sampling distribution of the quantile of the sample maximum is the graph x^{1/k} from 0 to 1: the p-th to q-th quantile of the sample maximum m is the interval [p^{1/k}N, q^{1/k}N]. Inverting this yields the corresponding confidence interval for the population maximum of [m/q^{1/k}, m/p^{1/k}].

For example, taking the symmetric 95% interval p = 2.5% and q = 97.5% for k = 5 yields \scriptstyle 0.025^{1/5} \;\approx\; 0.48,\, \scriptstyle 0.975^{1/5} \;\approx\; 0.995, so a confidence interval of approximately \scriptstyle\left[1.005m,\, 2.08m\right]. The lower bound is very close to m, so more informative is the asymmetric confidence interval from p = 5% to 100%; for k = 5 this yields \scriptstyle 0.05^{1/5} \;\approx\; 0.55, so the interval [m, 1.82m].

More generally, the (downward biased) 95% confidence interval is \scriptstyle\left[m,\, m/0.05^{1/k}\right] \;=\; \left[m,\, m \cdot 20^{1/k}\right]. For a range of k, with the UMVU point estimator (plus 1 for legibility) for reference, this yields:

k     point estimate   confidence interval
1     2m               [m, 20m]
2     1.5m             [m, 4.5m]
5     1.2m             [m, 1.82m]
10    1.1m             [m, 1.35m]
20    1.05m            [m, 1.16m]

Immediate observations are:

  • For small sample sizes, the confidence interval is very wide, reflecting great uncertainty in the estimate.
  • The range shrinks rapidly, reflecting the exponentially decaying likelihood that all samples will be significantly below the maximum.
  • The confidence interval exhibits positive skew, as N can never be below the sample maximum, but can potentially be arbitrarily high above it.
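The point estimates and interval endpoints in the table follow directly from these formulas (a sketch):

  for k in (1, 2, 5, 10, 20):
      point = 1 + 1 / k         # UMVU point estimate plus 1, in units of m
      upper = 20 ** (1 / k)     # upper end of the interval [m, m * 20**(1/k)]
      print(f"k = {k:2d}   point = {point:.2f}m   CI = [m, {upper:.2f}m]")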

Note that m/k cannot be used naively (or rather (m + m/k − 1)/k) as an estimate of the standard error SE, as the standard error of an estimator is based on the population maximum (a parameter), and using an estimate to estimate the error in that very estimate is circular reasoning.

In some fields, notably futurology, estimation of confidence intervals in this way, based on a single sample – considering it as a randomly sampled quantile (by mediocrity principle) – is known as the Copernican principle. This is particularly applied to estimate lifetimes based on current age, notably in the doomsday argument, which applies it to estimate the expected survival time of the human race.

Bayesian analysis

The Bayesian approach to the German tank problem is to consider the credibility \scriptstyle (N=n\mid M=m, K=k) that the number of enemy tanks \scriptstyle N is equal to the number \scriptstyle n, when the number of observed tanks, \scriptstyle K is equal to the number \scriptstyle k, and the maximum serial number \scriptstyle M is equal to the number \scriptstyle m.

For brevity, \scriptstyle (N=n\mid M=m,K=k) is written \scriptstyle (n\mid m,k).

The rule for conditional probability gives

(n\mid m,k) = (m\mid n,k)\frac {(n\mid k)}{(m\mid k)}

The expression \scriptstyle (m\mid n,k)=(M=m\mid N=n,K=k) is the conditional probability that the maximum serial number observed is equal to \scriptstyle m, when the number of enemy tanks is known to be equal to \scriptstyle n, and \scriptstyle k enemy tanks have been observed. It is


  (m\mid n,k) =
  \begin{cases}
    \frac{\binom{m - 1}{k - 1}}{\binom{n}{k}} &\text{if } k \le m \le n\\
    0                                         &\text{otherwise}
  \end{cases}

where the binomial coefficient \scriptstyle \binom n k is the number of \scriptstyle k-sized samples from an \scriptstyle n-sized population.

The expression \scriptstyle (m\mid k)=(M=m\mid K=k) is the probability that the maximum serial number is equal to m once k tanks have been observed but before the serial numbers have actually been observed. \scriptstyle (m\mid k) can be re-written in terms of the other quantities by marginalizing over all possible \scriptstyle n.

\begin{align}
  (m\mid k)
    &=(m\mid k)\cdot 1 \\
    &=(m\mid k){\sum_{n=0}^\infty(n\mid m,k)} \\
    &=(m\mid k){\sum_{n=0}^\infty(m\mid n,k)\frac {(n\mid k)}{(m\mid k)}} \\
    &=\sum_{n=0}^\infty(m\mid n,k)(n\mid k)
\end{align}

The expression \scriptstyle (n\mid k)=(N=n\mid K=k) is the credibility that the total number of tanks is equal to n when k tanks have been observed but before the serial numbers have actually been observed. Assume that it is some discrete uniform distribution


  (n\mid k) = 
  \begin{cases}
    \frac 1{\Omega - k} &\text{if } k \le n < \Omega \\
    0                   &\text{otherwise}
  \end{cases}

The upper limit \Omega must be finite, because the function


  f(n)=\lim_{\Omega\to\infty}
  \begin{cases}
    \frac 1{\Omega - k} &\text{if } k \le n < \Omega \\
    0                   &\text{otherwise}
  \end{cases}

is \scriptstyle f(n)=0, which is not a probability mass distribution function.

Then


  (n\mid m,k) = 
  \begin{cases}
    \frac{(m\mid n,k)}{\sum_{n=m}^{\Omega - 1} (m\mid n,k)} &\text{if } m \le n < \Omega \\
    0                                                 &\text{otherwise}
  \end{cases}

If \scriptstyle \sum_{n=m}^\infty(m\mid n,k)<\infty, then the unwelcome variable \scriptstyle \Omega disappears from the expression.


  (n\mid m,k) =
  \begin{cases}
    0                                          &\text{if } n < m \\
    \frac{(m\mid n,k)}{\sum_{n=m}^\infty(m\mid n,k)} &\text{if } n\ge m
  \end{cases}

For k ≥ 1 the mode of the distribution of the number of enemy tanks is m.

For k ≥ 2, the credibility that the number of enemy tanks is equal to n, is


  (N=n\mid M=m\ge k,K=k\ge 2) =
  \begin{cases}
    0                                                      &\text{if } n < m \\
    \frac {k - 1}{k }\frac {\binom{m - 1}{k - 1}}{\binom n k} &\text{if } n \ge m
  \end{cases}

and the credibility that the number of enemy tanks, \scriptstyle N, is greater than \scriptstyle n, is


  (N > n\mid M = m \ge k , K = k \ge 2) = 
  \begin{cases}
    1                                              &\text{if } n < m \\
    \frac {\binom{m - 1}{k - 1}}{\binom n {k - 1}} &\text{if } n \ge m
  \end{cases}

For k ≥ 3, N has the finite mean value:

\frac{(m - 1)(k - 1)}{k - 2}

For k ≥ 4, \scriptstyle N has the finite standard deviation:

\sqrt{\frac{(m - 1)(k - 1)(m + 1 - k)}{(k - 2)^2(k - 3)}}

These formulas are derived below.
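The mass function and both closed-form moments can be checked numerically (a sketch; the truncation point is arbitrary and merely needs to make the neglected tail small):

  from math import comb, sqrt

  def credibility(n, m, k):
      # Pr(N = n | M = m, K = k) for k >= 2
      return (k - 1) / k * comb(m - 1, k - 1) / comb(n, k) if n >= m else 0.0

  m, k, cutoff = 14, 4, 100_000   # cutoff truncates the (convergent) tail
  ns = range(m, cutoff)
  mean = sum(n * credibility(n, m, k) for n in ns)
  second = sum(n * n * credibility(n, m, k) for n in ns)

  print(mean, (m - 1) * (k - 1) / (k - 2))   # both approx 19.5
  print(sqrt(second - mean ** 2),            # numerical vs. closed-form sigma,
        sqrt((k - 1) * (m - 1) * (m - k + 1) / ((k - 3) * (k - 2) ** 2)))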

Summation formula

The following binomial coefficient identity is used below to simplify series relating to the German tank problem.

\sum_{n=m}^\infty \frac 1 {\binom n k} = \frac k{k - 1}\frac 1 {\binom{m - 1}{k - 1}}

This sum formula is somewhat analogous to the integral formula

\int_{n=m}^\infty \frac {dn}{n^k} = \frac 1{k - 1}\frac 1{m^{k - 1}}

These formulas apply for k > 1.
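The identity can be checked numerically by truncating the infinite sum (a sketch with arbitrary m and k):

  from math import comb

  m, k = 14, 4
  lhs = sum(1 / comb(n, k) for n in range(m, 100_000))   # truncated tail
  rhs = k / ((k - 1) * comb(m - 1, k - 1))
  print(lhs, rhs)   # both approx 0.0046620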

One tank

Observing one tank randomly out of a population of n tanks gives the serial number m with probability 1/n for m ≤ n, and zero probability for m > n. Using Iverson bracket notation this is written

(M=m\mid N=n,K=1) = (m\mid n) = \frac{[m \le n]}{n}

This is the conditional probability mass distribution function of \scriptstyle m.

When considered a function of n for fixed m this is a likelihood function.

\mathcal{L}(n) = \frac{[n\ge m]}{n}

The maximum likelihood estimate for the total number of tanks is \hat{N} = m.

The total likelihood is infinite, being a tail of the harmonic series.

\sum_n \mathcal{L}(n) = \sum_{n=m}^\infty \frac{1}{n} = \infty

but

\begin{align}
  \sum_n \mathcal{L}(n)[n < \Omega]
    &= \sum_{n=m}^{\Omega - 1} \frac{1}{n} \\
    &= H_{\Omega-1} - H_{m - 1}
\end{align}

where H_n is the harmonic number.

The credibility mass distribution function depends on the prior limit \scriptstyle \Omega:

\begin{align}
       &(N=n\mid M=m,K=1) \\
  = {} &(n\mid m) = \frac{[m\le n]}{n} \frac{[n<\Omega]}{H_{\Omega - 1} - H_{m - 1}}
\end{align}

The mean value of \scriptstyle N is

\begin{align}
  \sum_n n\cdot(n\mid m) &= \sum_{n=m}^{\Omega - 1} \frac{1}{H_{\Omega - 1} - H_{m - 1}} \\
                     &= \frac{\Omega - m}{H_{\Omega - 1} - H_{m - 1}} \\
                     &\approx \frac{\Omega - m}{\log\left(\frac{\Omega - 1}{m - 1}\right)}
\end{align}
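A sketch of the one-tank posterior for an arbitrary prior limit (m = 14 and Ω = 1000 are illustrative choices):

  m, omega = 14, 1000
  H = lambda n: sum(1 / i for i in range(1, n + 1))   # harmonic number H_n
  norm = H(omega - 1) - H(m - 1)
  posterior = {n: 1 / (n * norm) for n in range(m, omega)}

  print(sum(posterior.values()))                      # 1.0 (sanity check)
  print(sum(n * p for n, p in posterior.items()),     # posterior mean ...
        (omega - m) / norm)                           # ... equals (Omega - m)/norm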

Two tanks

If two tanks rather than one are observed, then the probability that the larger of the two observed serial numbers is equal to m is

(M=m\mid N=n,K=2) = (m\mid n) = [m \le n]\frac{m - 1}{\binom{n}{2}}

When considered a function of n for fixed m this is a likelihood function

\mathcal{L}(n) = [n \ge m]\frac{m - 1}{\binom{n}{2}}

The total likelihood is

\begin{align}
  \sum_{n}\mathcal{L}(n) &= \frac{m - 1}{1} \sum_{n=m}^\infty \frac{1}{\binom n 2} \\
                         &= \frac{m - 1}{1} \cdot \frac{2}{2 - 1} \cdot \frac{1}{\binom {m - 1}{2 - 1}} \\
                         &= 2
\end{align}

and the credibility mass distribution function is

\begin{align}
       &(N=n\mid M=m,K=2) \\
  = {} &(n\mid m) \\
  = {} &\frac{\mathcal{L}(n)}{\sum_n \mathcal{L}(n)} \\
  = {} &[n \ge m]\frac{m - 1}{n(n - 1)}
\end{align}

The median \scriptstyle \tilde{N} satisfies

\sum_n [n \ge \tilde{N}](n\mid m) = \frac{1}{2}

so

\frac{m - 1}{\tilde N - 1} = \frac{1}{2}

and so the median is

\tilde{N} = 2m - 1

but the mean value of N is infinite

\mu = \sum_n n \cdot (n\mid m) = \frac{m - 1}1\sum_{n=m}^\infty \frac{1}{n - 1} = \infty
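A numerical check (a sketch with m = 14): the tail credibility (m - 1)/(n - 1) equals 1/2 exactly at the median 2m - 1, while partial sums of the mean keep growing with the truncation point:

  m = 14
  print((m - 1) / ((2 * m - 1) - 1))   # tail credibility at n = 2m - 1 is 0.5

  # Partial sums of the mean grow without bound, like (m - 1) * log(cutoff)
  for cutoff in (10**3, 10**5, 10**7):
      partial = sum((m - 1) / (n - 1) for n in range(m, cutoff))
      print(cutoff, round(partial, 1))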

Many tanks

Credibility mass distribution function

The conditional probability that the largest of k observations taken from the serial numbers {1, ..., n} is equal to m is

\begin{align}
       &(M=m\mid N=n,K=k\ge 2) \\
  = {} &(m\mid n,k) \\
  = {} &[m\le n]\frac{\binom{m - 1}{k - 1}}{\binom{n}{k}}
\end{align}

The likelihood function of n is the same expression

\mathcal{L}(n) = [n \ge m]\frac{\binom{m - 1}{k - 1}}{\binom{n}{k}}

The total likelihood is finite for k ≥ 2:

\begin{align}
  \sum_n \mathcal{L}(n)
    &= \frac{\binom{m - 1}{k - 1}}{1} \sum_{n=m}^\infty {1 \over \binom n k} \\
    &= \frac{\binom{m - 1}{k - 1}}{1} \cdot \frac{k}{k-1} \cdot \frac{1}{\binom{m - 1}{k - 1}} \\
    &= \frac k{k - 1}
\end{align}

The credibility mass distribution function is

\begin{align}
       &(N=n\mid M=m,K=k \ge 2) = (n\mid m,k) \\
  = {} &\frac{\mathcal{L}(n)}{\sum_n \mathcal{L}(n)} \\
  = {} &[n\ge m]\frac{k-1}{k} \frac{\binom{m - 1}{k - 1}}{\binom n k} \\
  = {} &[n\ge m]\frac{m-1}{n} \frac{\binom{m - 2}{k - 2}}{\binom{n - 1}{k - 1}} \\
  = {} &[n\ge m]\frac{m-1}{n} \frac{m - 2}{n - 1} \frac{k - 1}{k - 2}  \frac{\binom{m - 3}{k - 3}}{\binom{n-2}{k-2}}
\end{align}

The complementary cumulative distribution function is the credibility that N > x

\begin{align}
       &(N>x\mid M=m,K=k) \\
  = {} &\begin{cases}
                                    1 &\text{if }x   < m \\
          \sum_{n=x+1}^\infty (n\mid m,k) &\text{if }x \ge m
        \end{cases} \\
  = {} &[x<m] + [x \ge m]\sum_{n=x+1}^\infty \frac{k - 1}{k}\frac{\binom{m - 1}{k - 1}}{\binom{n}{k}} \\
  = {} &[x<m] + [x \ge m]\frac{k - 1}{k} \frac{\binom{m - 1}{k - 1}}{1} \sum_{n=x+1}^\infty \frac{1}{\binom{n}{k}} \\
  = {} &[x<m] + [x \ge m]\frac{k - 1}{k} \frac{\binom{m - 1}{k - 1}}{1} \cdot \frac{k}{k - 1} \frac{1}{\binom{x}{k - 1}} \\
  = {} &[x<m] + [x \ge m]\frac{\binom{m - 1}{k - 1}}{\binom{x}{k - 1}}
\end{align}

The cumulative distribution function is the credibility that N ≤ x

\begin{align}
       &(N\le x\mid M=m,K=k) \\
  = {} &1 - (N>x\mid M=m,K=k) \\
  = {} &[x \ge m]\left(1 - \frac{\binom{m - 1}{k - 1}}{\binom{x}{k - 1}}\right)
\end{align}
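The closed form of the tail credibility agrees with direct summation of the mass function (a sketch; m, k, x, and the truncation point are illustrative):

  from math import comb

  def credibility(n, m, k):
      return (k - 1) / k * comb(m - 1, k - 1) / comb(n, k) if n >= m else 0.0

  m, k, x = 14, 4, 25
  direct = sum(credibility(n, m, k) for n in range(x + 1, 100_000))
  closed = comb(m - 1, k - 1) / comb(x, k - 1)
  print(direct, closed)   # both approx 0.1243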

Order of magnitude

The order of magnitude of the number of enemy tanks is

\begin{align}
  \mu &= \sum_n n\cdot(N=n\mid M=m,K=k) \\
      &= \sum_n n[n\ge m]\frac{m-1}{n} \frac{\binom{m-2}{k-2}}{\binom{n-1}{k-1}} \\
      &= (m-1)\binom{m-2}{k-2}\sum_{n=m}^\infty \frac{1}{\binom{n-1}{k-1}} \\
      &= (m-1)\binom{m-2}{k-2} \cdot \frac{k-1}{k-2}\frac{1}{\binom{m-2}{k-2}} \\
      &= (m-1)\frac{k-1}{k-2}
\end{align}

Statistical uncertainty

The statistical uncertainty is the standard deviation σ, satisfying the equation

\sigma^2 + \mu^2 = \sum_n n^2 \cdot (N=n\mid M=m,K=k)

So

\begin{align}
  \sigma^2+\mu^2-\mu &= \sum_n n(n-1)\cdot(N=n\mid M=m,K=k) \\
    &= \sum_{n=m}^\infty n(n-1)\frac{m-1}{n} \frac{m-2}{n-1} \frac{k-1}{k-2} \frac{\binom{m-3}{k-3}}{\binom{n-2}{k-2}} \\
    &= (m-1)(m-2)\frac{k-1}{k-2} \binom{m-3}{k-3} \sum_{n=m}^\infty \frac{1}{\binom{n-2}{k-2}} \\
    &= (m-1)(m-2)\frac{k-1}{k-2} \binom{m-3}{k-3} \cdot \frac{k-2}{k-3} \frac{1}{\binom{m-3}{k-3}} \\
    &= (m-1)(m-2)\frac{k-1}{k-3}
\end{align}

and

\begin{align}
  \sigma &= \sqrt{(m-1)(m-2)\frac{k-1}{k-3}+\mu-\mu^2} \\
         &= \sqrt{\frac{(k-1)(m-1)(m-k+1)}{(k-3)(k-2)^2}}
\end{align}

The variance-to-mean ratio is simply

\frac{\sigma^2}\mu = \frac{m - k + 1}{(k - 3)(k - 2)}


References

Citations

  1. AGF policy statement. Chief of Staff AGF. November 1943. MHI.
  2. Ruggles & Brodie 1947, p. ?.
  6. Ruggles & Brodie 1947, pp. 82–83.
  7. Ruggles & Brodie 1947, p. 89.
  8. Ruggles & Brodie 1947, pp. 90–91.
  9. Volz 2008.
  10. Johnson 1994.

