| author    | Prefetch | 2022-10-20 18:25:31 +0200 |
|-----------|----------|---------------------------|
| committer | Prefetch | 2022-10-20 18:25:31 +0200 |
| commit    | 16555851b6514a736c5c9d8e73de7da7fc9b6288 (patch) | |
| tree      | 76b8bfd30f8941d0d85365990bcdbc5d0643cabc /source/know/concept/binomial-distribution | |
| parent    | e5b9bce79b68a68ddd2e51daa16d2fea73b84fdb (diff) | |
Migrate from 'jekyll-katex' to 'kramdown-math-sskatex'
Diffstat (limited to 'source/know/concept/binomial-distribution')
-rw-r--r-- | source/know/concept/binomial-distribution/index.md | 50
1 file changed, 25 insertions, 25 deletions
diff --git a/source/know/concept/binomial-distribution/index.md b/source/know/concept/binomial-distribution/index.md
index 14ba4cb..1193a93 100644
--- a/source/know/concept/binomial-distribution/index.md
+++ b/source/know/concept/binomial-distribution/index.md
@@ -9,11 +9,11 @@ layout: "concept"
 ---
 
 The **binomial distribution** is a discrete probability distribution
-describing a **Bernoulli process**: a set of independent $N$ trials where
+describing a **Bernoulli process**: a set of independent $$N$$ trials where
 each has only two possible outcomes, "success" and "failure",
-the former with probability $p$ and the latter with $q = 1 - p$.
+the former with probability $$p$$ and the latter with $$q = 1 - p$$.
 The binomial distribution then gives the probability
-that $n$ out of the $N$ trials succeed:
+that $$n$$ out of the $$N$$ trials succeed:
 
 $$\begin{aligned}
     \boxed{
@@ -22,8 +22,8 @@ $$\begin{aligned}
 \end{aligned}$$
 
 The first factor is known as the **binomial coefficient**, which describes the
-number of microstates (i.e. permutations) that have $n$ successes out of $N$ trials.
-These happen to be the coefficients in the polynomial $(a + b)^N$,
+number of microstates (i.e. permutations) that have $$n$$ successes out of $$N$$ trials.
+These happen to be the coefficients in the polynomial $$(a + b)^N$$,
 and can be read off of Pascal's triangle.
 It is defined as follows:
 
@@ -33,10 +33,10 @@ $$\begin{aligned}
     }
 \end{aligned}$$
 
-The remaining factor $p^n (1 - p)^{N - n}$ is then just the
+The remaining factor $$p^n (1 - p)^{N - n}$$ is then just the
 probability of attaining each microstate.
-The expected or mean number of successes $\mu$ after $N$ trials is as follows:
+The expected or mean number of successes $$\mu$$ after $$N$$ trials is as follows:
 
 $$\begin{aligned}
     \boxed{
@@ -49,7 +49,7 @@ $$\begin{aligned}
 <label for="proof-mean">Proof</label>
 <div class="hidden" markdown="1">
 <label for="proof-mean">Proof.</label>
-The trick is to treat $p$ and $q$ as independent until the last moment:
+The trick is to treat $$p$$ and $$q$$ as independent until the last moment:
 
 $$\begin{aligned}
     \mu
@@ -61,12 +61,12 @@ $$\begin{aligned}
     = N p (p + q)^{N - 1}
 \end{aligned}$$
 
-Inserting $q = 1 - p$ then gives the desired result.
+Inserting $$q = 1 - p$$ then gives the desired result.
 </div>
 </div>
 
-Meanwhile, we find the following variance $\sigma^2$,
-with $\sigma$ being the standard deviation:
+Meanwhile, we find the following variance $$\sigma^2$$,
+with $$\sigma$$ being the standard deviation:
 
 $$\begin{aligned}
     \boxed{
@@ -79,7 +79,7 @@ $$\begin{aligned}
 <label for="proof-var">Proof</label>
 <div class="hidden" markdown="1">
 <label for="proof-var">Proof.</label>
-We use the same trick to calculate $\overline{n^2}$
+We use the same trick to calculate $$\overline{n^2}$$
 (the mean squared number of successes):
 
 $$\begin{aligned}
@@ -96,7 +96,7 @@ $$\begin{aligned}
     &= N p + N^2 p^2 - N p^2
 \end{aligned}$$
 
-Using this and the earlier expression $\mu = N p$, we find the variance $\sigma^2$:
+Using this and the earlier expression $$\mu = N p$$, we find the variance $$\sigma^2$$:
 
 $$\begin{aligned}
     \sigma^2
@@ -105,11 +105,11 @@ $$\begin{aligned}
     = N p (1 - p)
 \end{aligned}$$
 
-By inserting $q = 1 - p$, we arrive at the desired expression.
+By inserting $$q = 1 - p$$, we arrive at the desired expression.
 </div>
 </div>
 
-As $N \to \infty$, the binomial distribution
+As $$N \to \infty$$, the binomial distribution
 turns into the continuous normal distribution,
 a fact that is sometimes called the **de Moivre-Laplace theorem**:
 
@@ -124,8 +124,8 @@ $$\begin{aligned}
 <label for="proof-normal">Proof</label>
 <div class="hidden" markdown="1">
 <label for="proof-normal">Proof.</label>
-We take the Taylor expansion of $\ln\!\big(P_N(n)\big)$
-around the mean $\mu = Np$:
+We take the Taylor expansion of $$\ln\!\big(P_N(n)\big)$$
+around the mean $$\mu = Np$$:
 
 $$\begin{aligned}
     \ln\!\big(P_N(n)\big)
@@ -134,7 +134,7 @@ $$\begin{aligned}
     D_m(n) = \dvn{m}{\ln\!\big(P_N(n)\big)}{n}
 \end{aligned}$$
 
-We use Stirling's approximation to calculate the factorials in $D_m$:
+We use Stirling's approximation to calculate the factorials in $$D_m$$:
 
 $$\begin{aligned}
     \ln\!\big(P_N(n)\big)
@@ -143,8 +143,8 @@ $$\begin{aligned}
     &\approx \ln(N!) - n \big( \ln(n)\!-\!\ln(p)\!-\!1 \big) - (N\!-\!n) \big( \ln(N\!-\!n)\!-\!\ln(q)\!-\!1 \big)
 \end{aligned}$$
 
-For $D_0(\mu)$, we need to use a stronger version of Stirling's approximation
-to get a non-zero result. We take advantage of $N - N p = N q$:
+For $$D_0(\mu)$$, we need to use a stronger version of Stirling's approximation
+to get a non-zero result. We take advantage of $$N - N p = N q$$:
 
 $$\begin{aligned}
     D_0(\mu)
@@ -161,7 +161,7 @@ $$\begin{aligned}
     = \ln\!\Big( \frac{1}{\sqrt{2\pi \sigma^2}} \Big)
 \end{aligned}$$
 
-Next, we expect that $D_1(\mu) = 0$, because $\mu$ is the maximum.
+Next, we expect that $$D_1(\mu) = 0$$, because $$\mu$$ is the maximum.
 This is indeed the case:
 
 $$\begin{aligned}
@@ -176,7 +176,7 @@ $$\begin{aligned}
     = 0
 \end{aligned}$$
 
-For the same reason, we expect that $D_2(\mu)$ is negative.
+For the same reason, we expect that $$D_2(\mu)$$ is negative.
 We find the following expression:
 
 $$\begin{aligned}
@@ -189,7 +189,7 @@ $$\begin{aligned}
     = - \frac{1}{\sigma^2}
 \end{aligned}$$
 
-The higher-order derivatives tend to zero for $N \to \infty$, so we discard them:
+The higher-order derivatives tend to zero for $$N \to \infty$$, so we discard them:
 
 $$\begin{aligned}
     D_3(n)
@@ -201,7 +201,7 @@ $$\begin{aligned}
     \cdots
 \end{aligned}$$
 
-Putting everything together, for large $N$,
+Putting everything together, for large $$N$$,
 the Taylor series approximately becomes:
 
 $$\begin{aligned}
@@ -210,7 +210,7 @@ $$\begin{aligned}
     = \ln\!\Big( \frac{1}{\sqrt{2\pi \sigma^2}} \Big)
     - \frac{(n - \mu)^2}{2 \sigma^2}
 \end{aligned}$$
 
-Taking $\exp$ of this expression then yields a normalized Gaussian distribution.
+Taking $$\exp$$ of this expression then yields a normalized Gaussian distribution.
 </div>
 </div>
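The commit above only changes math delimiters, so the formulas themselves are untouched. As an editor's sanity check (not part of the patch), the PMF from the first hunk can be evaluated directly with Python's standard-library `math.comb`:

```python
from math import comb

def binomial_pmf(n: int, N: int, p: float) -> float:
    """P_N(n) = C(N, n) p^n (1 - p)^(N - n), the PMF from the first hunk."""
    return comb(N, n) * p**n * (1 - p)**(N - n)

# The binomial coefficients are the coefficients of (a + b)^N,
# e.g. row N = 4 of Pascal's triangle:
print([comb(4, n) for n in range(5)])  # [1, 4, 6, 4, 1]

# Summing over all n gives (p + q)^N = 1^N, so the PMF is normalized:
total = sum(binomial_pmf(n, 10, 0.3) for n in range(11))
print(total)  # ~1.0
```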
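The boxed results $$\mu = Np$$ and $$\sigma^2 = Npq$$ proved in the hidden sections can likewise be checked numerically for a small example (again an editor's sketch; the values of N and p are arbitrary):

```python
from math import comb

N, p = 20, 0.35
q = 1 - p
pmf = [comb(N, n) * p**n * q**(N - n) for n in range(N + 1)]

# Mean: sum of n * P_N(n), which should equal mu = N p = 7
mu = sum(n * P for n, P in enumerate(pmf))

# Variance: sigma^2 = <n^2> - mu^2, which should equal N p q = 4.55
mean_sq = sum(n**2 * P for n, P in enumerate(pmf))
var = mean_sq - mu**2

print(mu, N * p)       # both ~7.0
print(var, N * p * q)  # both ~4.55
```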
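Finally, the de Moivre-Laplace limit in the last proof can be observed directly: for large N, the exact binomial PMF is compared against a Gaussian with the same mean and variance (editor's illustration; N = 1000 and p = 0.4 are arbitrary choices):

```python
from math import comb, exp, pi, sqrt

N, p = 1000, 0.4
q = 1 - p
mu, var = N * p, N * p * q  # mu = 400, sigma^2 = 240

def binom(n: int) -> float:
    """Exact binomial PMF P_N(n)."""
    return comb(N, n) * p**n * q**(N - n)

def gauss(n: float) -> float:
    """Normal density with the same mean mu and variance sigma^2."""
    return exp(-(n - mu)**2 / (2 * var)) / sqrt(2 * pi * var)

# Near the mean, the two agree to within about a percent at this N;
# the discarded higher-order terms D_3, D_4, ... account for the residual.
for n in (380, 400, 420):
    print(n, binom(n), gauss(n))
```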