Diffstat (limited to 'content/know/concept/renyi-entropy')
-rw-r--r-- content/know/concept/renyi-entropy/index.pdc | 114
1 file changed, 114 insertions, 0 deletions
diff --git a/content/know/concept/renyi-entropy/index.pdc b/content/know/concept/renyi-entropy/index.pdc
new file mode 100644
index 0000000..073d773
--- /dev/null
+++ b/content/know/concept/renyi-entropy/index.pdc
@@ -0,0 +1,114 @@
+---
+title: "Rényi entropy"
+firstLetter: "R"
+publishDate: 2021-04-11
+categories:
+- Cryptography
+
+date: 2021-04-11T15:13:07+02:00
+draft: false
+markup: pandoc
+---
+
+# Rényi entropy
+
+In information theory, the **Rényi entropy** is a measure
+(or family of measures) of the "surprise" or "information"
+contained in a random variable $X$.
+It is defined as follows:
+
+$$\begin{aligned}
+ \boxed{
+ H_\alpha(X)
+ = \frac{1}{1 - \alpha} \log\!\bigg( \sum_{i = 1}^N p_i^\alpha \bigg)
+ }
+\end{aligned}$$
+
+Where $\alpha \ge 0$ is a free parameter;
+the case $\alpha = 1$ is handled as a limit, as shown below.
+The logarithm is usually base-2, but variations exist.
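+
+To make this concrete, below is a minimal Python sketch (not part of the original notes)
+that evaluates $H_\alpha$ for a finite distribution;
+the function name `renyi_entropy` and the example distribution are purely illustrative assumptions.
+
+```python
+import math
+
+def renyi_entropy(p, alpha):
+    """Base-2 Rényi entropy of a finite distribution p, for a given order alpha >= 0."""
+    assert alpha >= 0 and abs(sum(p) - 1.0) < 1e-9
+    if alpha == 0:                    # Hartley / max-entropy: log of the number of possible events
+        return math.log2(sum(1 for q in p if q > 0))
+    if alpha == 1:                    # Shannon entropy, i.e. the limit alpha -> 1
+        return -sum(q * math.log2(q) for q in p if q > 0)
+    if math.isinf(alpha):             # min-entropy, i.e. the limit alpha -> infinity
+        return -math.log2(max(p))
+    return math.log2(sum(q**alpha for q in p if q > 0)) / (1 - alpha)
+
+p = [0.7, 0.1, 0.1, 0.1]              # hypothetical 4-outcome distribution
+print(renyi_entropy(p, 0.5))          # ≈ 1.67 bits
+```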
+
+The case $\alpha = 0$ is known as the **Hartley entropy** or **max-entropy**,
+and quantifies the "surprise" of an event from $X$,
+if $X$ is uniformly distributed:
+
+$$\begin{aligned}
+ \boxed{
+ H_0(X)
+ = \log N
+ }
+\end{aligned}$$
+
+Where $N$ is the cardinality of $X$: the number of different possible events.
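+
+As a quick illustration (with an assumed, purely hypothetical distribution):
+any distribution over four possible outcomes, no matter how biased,
+has Hartley entropy $\log_2 4 = 2$ bits.
+
+```python
+import math
+
+p = [0.7, 0.1, 0.1, 0.1]                        # hypothetical biased 4-outcome distribution
+print(math.log2(sum(1 for q in p if q > 0)))    # H_0 = log2(4) = 2 bits, independent of the p_i
+```
+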
+The most famous case, however, is $\alpha = 1$.
+Since $H_\alpha$ as defined above gives an indeterminate $0/0$ at $\alpha = 1$, we must take the limit:
+
+$$\begin{aligned}
+ H_1(X)
+ = \lim_{\alpha \to 1} H_\alpha(X)
+ = \lim_{\alpha \to 1} \frac{\log\!\left( \sum_i p_i^\alpha \right)}{1 - \alpha}
+\end{aligned}$$
+
+We then apply L'Hôpital's rule to evaluate this limit,
+and use the fact that all $p_i$ sum to $1$:
+
+$$\begin{aligned}
+ H_1(X)
+ = \lim_{\alpha \to 1} \frac{\dv{\alpha} \log\!\left( \sum_i p_i^\alpha \right)}{\dv{\alpha} (1 - \alpha)}
+ = \lim_{\alpha \to 1} \frac{\sum_i p_i^\alpha \log p_i}{- \sum_i p_i^\alpha}
+ = - \sum_{i = 1}^N p_i \log p_i
+\end{aligned}$$
+
+This quantity is the **Shannon entropy**,
+which is the best-known and most widely used measure of "surprise":
+
+$$\begin{aligned}
+ \boxed{
+ H_1(X)
+ = \lim_{\alpha \to 1} H_\alpha(X)
+ = - \sum_{i = 1}^N p_i \log p_i
+ }
+\end{aligned}$$
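+
+As a numerical sanity check (an illustrative sketch; the distribution is an arbitrary assumption),
+evaluating the general formula at an $\alpha$ slightly above $1$ indeed approaches the Shannon value:
+
+```python
+import math
+
+p = [0.7, 0.1, 0.1, 0.1]                               # hypothetical example distribution
+a = 1.001                                              # alpha close to 1
+near_one = math.log2(sum(q**a for q in p)) / (1 - a)   # general Rényi formula
+shannon = -sum(q * math.log2(q) for q in p)            # Shannon entropy
+print(near_one, shannon)                               # both ≈ 1.36 bits
+```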
+
+Next, for $\alpha = 2$, we get the **collision entropy**, which describes
+the surprise of two independent and identically distributed variables
+$X$ and $Y$ yielding the same event:
+
+$$\begin{aligned}
+ \boxed{
+ H_2(X)
+ = - \log\!\bigg( \sum_{i = 1}^N p_i^2 \bigg)
+ = - \log P(X = Y)
+ }
+\end{aligned}$$
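+
+The identity $P(X = Y) = \sum_i p_i^2$ can be checked directly
+by summing the joint distribution of two independent copies over its diagonal
+(again an illustrative sketch with an assumed distribution):
+
+```python
+import math
+
+p = [0.7, 0.1, 0.1, 0.1]                       # hypothetical example distribution
+# P(X = Y) for independent X, Y with identical distribution p: sum over the diagonal
+p_collision = sum(pi * pj for i, pi in enumerate(p)
+                          for j, pj in enumerate(p) if i == j)
+print(-math.log2(p_collision))                 # ≈ 0.94 bits
+print(-math.log2(sum(q**2 for q in p)))        # same value via the H_2 formula
+```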
+
+Finally, in the limit $\alpha \to \infty$,
+the largest probability dominates the sum,
+leading to the definition of the **min-entropy** $H_\infty$,
+describing the surprise of the most likely event:
+
+$$\begin{aligned}
+ \boxed{
+ H_\infty(X)
+    = \lim_{\alpha \to \infty} H_\alpha(X)
+ = - \log\!\big( \max_{i} p_i \big)
+ }
+\end{aligned}$$
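+
+In code form (same assumed example distribution as before),
+the min-entropy simply reflects the single largest probability:
+
+```python
+import math
+
+p = [0.7, 0.1, 0.1, 0.1]          # hypothetical example distribution
+h_inf = -math.log2(max(p))        # ≈ 0.51 bits
+print(2**(-h_inf))                # ≈ 0.7: recovers max(p), the probability of the likeliest event
+```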
+
+It is straightforward to convince yourself that $H_\alpha$ is non-increasing in $\alpha$,
+so that these entropies are ordered in the following way:
+
+$$\begin{aligned}
+ H_0 \ge H_1 \ge H_2 \ge H_\infty
+\end{aligned}$$
+
+In other words, from left to right,
+they go from permissive to conservative, roughly speaking.
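+
+This ordering can be verified numerically, here for the same assumed example distribution as above:
+
+```python
+import math
+
+p = [0.7, 0.1, 0.1, 0.1]                        # hypothetical example distribution
+h0 = math.log2(sum(1 for q in p if q > 0))      # Hartley:   2.0 bits
+h1 = -sum(q * math.log2(q) for q in p)          # Shannon:   ≈ 1.36 bits
+h2 = -math.log2(sum(q**2 for q in p))           # collision: ≈ 0.94 bits
+hinf = -math.log2(max(p))                       # min:       ≈ 0.51 bits
+print(h0 >= h1 >= h2 >= hinf)                   # True
+```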
+
+