---
title: "Rényi entropy"
firstLetter: "R"
publishDate: 2021-04-11
categories:
- Cryptography

date: 2021-04-11T15:13:07+02:00
draft: false
markup: pandoc
---

# Rényi entropy

In information theory, the **Rényi entropy** is a measure
(or family of measures) of the "surprise" or "information"
contained in a random variable $X$.
It is defined as follows:

$$\begin{aligned}
    \boxed{
        H_\alpha(X)
        = \frac{1}{1 - \alpha} \log\!\bigg( \sum_{i = 1}^N p_i^\alpha \bigg)
    }
\end{aligned}$$

Where $\alpha \ge 0$ is a free parameter, excluding $\alpha = 1$, which is treated below as a limit.
The logarithm is usually base-2, but variations exist.
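
To make the definition concrete, below is a minimal numerical sketch
(the function name `renyi_entropy`, the use of NumPy, and the example distribution
are illustrative choices; the logarithm is taken base-2):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Base-2 Rényi entropy of a discrete distribution p, for alpha >= 0 and alpha != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # events with zero probability do not contribute
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

# Example: a biased four-outcome distribution
print(renyi_entropy([0.5, 0.25, 0.125, 0.125], alpha=2))  # about 1.54 bits
```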

The case $\alpha = 0$ is known as the **Hartley entropy** or **max-entropy**,
and quantifies the "surprise" of an event from $X$,
if $X$ is uniformly distributed:

$$\begin{aligned}
    \boxed{
        H_0(X)
        = \log N
    }
\end{aligned}$$

Where $N$ is the cardinality of $X$, i.e. the number of different possible events.
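
For a uniform distribution this is easy to verify numerically
(a small sketch, again assuming base-2 logarithms and NumPy):

```python
import numpy as np

p = np.ones(8) / 8                    # uniform distribution over N = 8 events
print(np.log2(np.count_nonzero(p)))   # H_0 = log2(N) = 3 bits
print(np.log2(np.sum(p ** 0.0001)) / (1 - 0.0001))  # H_alpha for small alpha, also close to 3
```
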
The most famous case, however, is $\alpha = 1$.
There, the definition yields the indeterminate form $0/0$ (since $\sum_i p_i = 1$),
so we must take the limit:

$$\begin{aligned}
    H_1(X)
    = \lim_{\alpha \to 1} H_\alpha(X)
    = \lim_{\alpha \to 1} \frac{\log\!\left( \sum_i p_i^\alpha \right)}{1 - \alpha}
\end{aligned}$$

We then apply L'Hôpital's rule to evaluate this limit,
and use the fact that all $p_i$ sum to $1$:

$$\begin{aligned}
    H_1(X)
    = \lim_{\alpha \to 1} \frac{\dv{\alpha} \log\!\left( \sum_i p_i^\alpha \right)}{\dv{\alpha} (1 - \alpha)}
    = \lim_{\alpha \to 1} \frac{\sum_i p_i^\alpha \log p_i}{- \sum_i p_i^\alpha}
    = - \sum_{i = 1}^N p_i \log p_i
\end{aligned}$$

This quantity is the **Shannon entropy**,
which is the best-known measure of "surprise":

$$\begin{aligned}
    \boxed{
        H_1(X)
        = \lim_{\alpha \to 1} H_\alpha(X)
        = - \sum_{i = 1}^N p_i \log p_i
    }
\end{aligned}$$
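
The limit can also be checked numerically:
evaluating the general formula at $\alpha$ just below $1$
lands close to the Shannon entropy (a rough sketch):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
shannon  = -np.sum(p * np.log2(p))                    # H_1 computed directly: 1.75 bits
near_one = np.log2(np.sum(p ** 0.999)) / (1 - 0.999)  # H_alpha at alpha = 0.999
print(shannon, near_one)                              # both approximately 1.75
```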

Next, for $\alpha = 2$, we get the **collision entropy**, which describes
the surprise of two independent and identically distributed variables
$X$ and $Y$ yielding the same event:

$$\begin{aligned}
    \boxed{
        H_2(X)
        = - \log\!\bigg( \sum_{i = 1}^N p_i^2 \bigg)
        = - \log P(X = Y)
    }
\end{aligned}$$
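
In other words, $\sum_i p_i^2$ is exactly the probability that two independent draws collide,
which is easy to confirm (a minimal sketch with an illustrative distribution):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
collision_prob = np.sum(p ** 2)     # P(X = Y) for i.i.d. X and Y: 0.34375
print(-np.log2(collision_prob))     # H_2, about 1.54 bits
```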

Finally, in the limit $\alpha \to \infty$,
the largest probability dominates the sum,
leading to the definition of the **min-entropy** $H_\infty$,
describing the surprise of the most likely event:

$$\begin{aligned}
    \boxed{
        H_\infty(X)
        = \lim_{\alpha \to \infty} H_\alpha(X)
        = - \log\!\big( \max_{i} p_i \big)
    }
\end{aligned}$$
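
Numerically, the general formula at large $\alpha$ indeed approaches $-\log(\max_i p_i)$
(another small sketch):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
print(-np.log2(np.max(p)))                  # H_inf = 1 bit, since max p_i = 0.5
print(np.log2(np.sum(p ** 50)) / (1 - 50))  # H_alpha at alpha = 50, about 1.02 bits
```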

It is straightforward to convince yourself that these entropies
are ordered in the following way, since $H_\alpha$ is non-increasing in $\alpha$:

$$\begin{aligned}
    H_0 \ge H_1 \ge H_2 \ge H_\infty
\end{aligned}$$

In other words, from left to right,
they go from permissive to conservative, roughly speaking.
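
A quick numerical check of this ordering, using the same example distribution as above
(a sketch; the helper function simply restates the definition):

```python
import numpy as np

def renyi(p, alpha):
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

p = np.array([0.5, 0.25, 0.125, 0.125])
h0   = np.log2(len(p))           # Hartley:   2.00 bits
h1   = -np.sum(p * np.log2(p))   # Shannon:   1.75 bits
h2   = renyi(p, 2)               # collision: 1.54 bits
hinf = -np.log2(np.max(p))       # min:       1.00 bits
print(h0 >= h1 >= h2 >= hinf)    # True
```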

