Categories: Cryptography.

Rényi entropy

In information theory, the Rényi entropy is a measure (or family of measures) of the “surprise” or “information” contained in a random variable $X$. It is defined as follows:

$$\begin{aligned}
    \boxed{
        H_\alpha(X)
        = \frac{1}{1 - \alpha} \log\!\bigg( \sum_{i = 1}^N p_i^\alpha \bigg)
    }
\end{aligned}$$

Where $\alpha \ge 0$ is a free parameter. The logarithm is usually base-2, but variations exist.
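
As a concrete illustration, below is a minimal Python sketch of this formula (the function name `renyi_entropy`, the example distribution, and the use of NumPy are illustrative choices, not part of the definition); the special cases $\alpha = 1$ and $\alpha \to \infty$ are handled via the limits derived later in this note:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy (base-2 logarithm) of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability events do not contribute
    if np.isclose(alpha, 1.0):
        # alpha = 1 is taken as the limit: the Shannon entropy (derived below)
        return -np.sum(p * np.log2(p))
    if np.isinf(alpha):
        # alpha -> infinity gives the min-entropy (discussed below)
        return -np.log2(np.max(p))
    return np.log2(np.sum(p**alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]  # arbitrary example distribution
for alpha in [0, 0.5, 1, 2, np.inf]:
    print(alpha, renyi_entropy(p, alpha))
```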

The case $\alpha = 0$ is known as the Hartley entropy or max-entropy, and quantifies the “surprise” of an event from $X$, if $X$ is uniformly distributed:

$$\begin{aligned}
    \boxed{
        H_0(X) = \log N
    }
\end{aligned}$$

Where $N$ is the cardinality of $X$, i.e. the number of different possible events. The most famous case, however, is $\alpha = 1$. Since $H_\alpha$ is undefined at $\alpha = 1$, we must take the limit:

$$\begin{aligned}
    H_1(X)
    = \lim_{\alpha \to 1} H_\alpha(X)
    = \lim_{\alpha \to 1} \frac{\log\!\left( \sum_i p_i^\alpha \right)}{1 - \alpha}
\end{aligned}$$

We then apply L’Hôpital’s rule to evaluate this limit, and use the fact that all $p_i$ sum to $1$:

$$\begin{aligned}
    H_1(X)
    = \lim_{\alpha \to 1} \frac{\displaystyle \dv{}{\alpha} \log\!\left( \sum_i p_i^\alpha \right)}{\displaystyle \dv{}{\alpha} (1 - \alpha)}
    = \lim_{\alpha \to 1} \frac{\sum_i p_i^\alpha \log p_i}{- \sum_i p_i^\alpha}
    = - \sum_{i = 1}^N p_i \log p_i
\end{aligned}$$

This quantity is the Shannon entropy, which is the most general measure of “surprise”:

$$\begin{aligned}
    \boxed{
        H_1(X)
        = \lim_{\alpha \to 1} H_\alpha(X)
        = - \sum_{i = 1}^N p_i \log p_i
    }
\end{aligned}$$
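
As a quick numerical sanity check of this limit, one can evaluate the general formula at values of $\alpha$ close to $1$ and compare against the Shannon sum; the distribution below is an arbitrary example:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
shannon = -np.sum(p * np.log2(p))  # Shannon entropy of p

for alpha in [1.1, 1.01, 1.001, 1.0001]:
    H_alpha = np.log2(np.sum(p**alpha)) / (1 - alpha)
    print(alpha, H_alpha, shannon)  # H_alpha converges to the Shannon entropy
```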

Next, for $\alpha = 2$, we get the collision entropy, which describes the surprise of two independent and identically distributed variables $X$ and $Y$ yielding the same event:

$$\begin{aligned}
    \boxed{
        H_2(X)
        = - \log\!\bigg( \sum_{i = 1}^N p_i^2 \bigg)
        = - \log P(X = Y)
    }
\end{aligned}$$
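
The collision interpretation can be illustrated by sampling two i.i.d. copies of $X$ and estimating $P(X = Y)$ directly; a rough Monte Carlo sketch (the distribution, seed, and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.125, 0.125])
n = 1_000_000

x = rng.choice(len(p), size=n, p=p)  # samples of X
y = rng.choice(len(p), size=n, p=p)  # independent samples of Y

p_collision = np.mean(x == y)        # estimate of P(X = Y)
print(-np.log2(p_collision))         # Monte Carlo estimate of H_2
print(-np.log2(np.sum(p**2)))        # exact H_2 = -log2(sum of p_i^2)
```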

Finally, in the limit $\alpha \to \infty$, the largest probability dominates the sum, leading to the definition of the min-entropy $H_\infty$, describing the surprise of the most likely event:

$$\begin{aligned}
    \boxed{
        H_\infty(X)
        = \lim_{\alpha \to \infty} H_\alpha(X)
        = - \log\!\big( \max_{i} p_i \big)
    }
\end{aligned}$$
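
Numerically, this limit shows up as $H_\alpha$ flattening out for large $\alpha$; a small sketch of that convergence (again with an arbitrary example distribution):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
h_min = -np.log2(np.max(p))  # min-entropy: -log2(0.5) = 1 bit

for alpha in [2, 5, 10, 50, 100]:
    H_alpha = np.log2(np.sum(p**alpha)) / (1 - alpha)
    print(alpha, H_alpha, h_min)  # H_alpha decreases toward h_min
```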

It is straightforward to convince yourself that these entropies are ordered in the following way:

$$\begin{aligned}
    H_0 \ge H_1 \ge H_2 \ge H_\infty
\end{aligned}$$

In other words, from left to right, they go from permissive to conservative, roughly speaking.
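
A quick numerical spot check of this ordering, using a randomly drawn distribution (the seed and number of outcomes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
p = rng.random(8)
p /= p.sum()                       # random 8-outcome distribution

H0   = np.log2(len(p))             # Hartley / max-entropy
H1   = -np.sum(p * np.log2(p))     # Shannon entropy
H2   = -np.log2(np.sum(p**2))      # collision entropy
Hinf = -np.log2(np.max(p))         # min-entropy

print(H0 >= H1 >= H2 >= Hinf)      # True
print(H0, H1, H2, Hinf)
```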
