Rényi entropy
In information theory, the Rényi entropy is a measure
(or family of measures) of the “surprise” or “information”
contained in a random variable X.
It is defined as follows:
$$H_\alpha(X) = \frac{1}{1-\alpha} \log\left( \sum_{i=1}^{N} p_i^\alpha \right)$$
Where α≥0 (with α≠1) is a free parameter; the case α=1 is recovered as a limit below.
The logarithm is usually base-2, but variations exist.
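As a concrete illustration, here is a minimal Python sketch of this definition, using base-2 logarithms;
the function name and the example distribution are mine, not taken from the references:

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy (base 2) of a finite distribution p, for alpha >= 0, alpha != 1."""
    if alpha < 0 or alpha == 1:
        raise ValueError("alpha must be >= 0 and != 1 (alpha = 1 is the Shannon limit)")
    # Zero-probability events do not contribute to the sum.
    total = sum(pi ** alpha for pi in p if pi > 0)
    return math.log2(total) / (1 - alpha)

print(renyi_entropy([0.9, 0.1], 2))  # biased coin: about 0.286 bits
```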
The case α=0 is known as the Hartley entropy or max-entropy.
It depends only on the number of possible events,
and equals the “surprise” of an event from X when X is uniformly distributed:
$$H_0(X) = \log N$$
Where N is the cardinality of the support of X, i.e. the number of different possible events.
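For example, a six-sided die (fair or loaded, as long as every face has nonzero probability)
has N = 6 possible outcomes, so H0(X) = log 6 ≈ 2.58 bits;
the value depends only on the number of possible events, not on their probabilities.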
The most famous case, however, is α=1.
Since the prefactor 1/(1−α) in the definition diverges as α→1,
Hα must be defined there by taking the limit:
$$H_1(X) = \lim_{\alpha \to 1} H_\alpha(X) = \lim_{\alpha \to 1} \frac{\log\left( \sum_i p_i^\alpha \right)}{1-\alpha}$$
Since the pi sum to 1, the numerator of this fraction tends to log 1 = 0 as α→1,
while the denominator 1−α also tends to 0;
we can therefore apply L’Hôpital’s rule to evaluate this 0/0 limit:
$$H_1(X) = \lim_{\alpha \to 1} \frac{\frac{d}{d\alpha} \log\left( \sum_i p_i^\alpha \right)}{\frac{d}{d\alpha} (1-\alpha)} = \lim_{\alpha \to 1} \frac{\sum_i p_i^\alpha \log p_i}{-\sum_i p_i^\alpha} = -\sum_{i=1}^{N} p_i \log p_i$$
This quantity is the Shannon entropy,
which is the best-known measure of “surprise”:
$$H_1(X) = \lim_{\alpha \to 1} H_\alpha(X) = -\sum_{i=1}^{N} p_i \log p_i$$
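As a quick numerical check of this limit, the following sketch evaluates Hα for values of α close to 1
and compares them with the Shannon entropy of an arbitrary example distribution:

```python
import math

p = [0.5, 0.25, 0.125, 0.125]

shannon = -sum(pi * math.log2(pi) for pi in p)
print(shannon)  # 1.75 bits exactly for this distribution

# The Rényi entropy approaches the Shannon entropy as alpha -> 1.
for alpha in (0.9, 0.99, 0.999, 1.001, 1.01, 1.1):
    renyi = math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)
    print(alpha, renyi)  # above 1.75 for alpha < 1, below 1.75 for alpha > 1
```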
Next, for α=2, we get the collision entropy, which describes
the surprise of two independent and identically distributed variables
X and Y yielding the same event:
$$H_2(X) = -\log\left( \sum_{i=1}^{N} p_i^2 \right) = -\log P(X = Y)$$
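The collision interpretation can be verified with a small Monte Carlo sketch;
the example distribution and sample size below are arbitrary choices:

```python
import math
import random

p = [0.5, 0.25, 0.125, 0.125]
outcomes = range(len(p))

# Exact collision entropy: -log2 of the collision probability, i.e. the sum of p_i^2.
h2 = -math.log2(sum(pi ** 2 for pi in p))

# Empirical estimate of P(X = Y) from pairs of i.i.d. samples.
trials = 100_000
collisions = sum(
    random.choices(outcomes, weights=p)[0] == random.choices(outcomes, weights=p)[0]
    for _ in range(trials)
)

print(h2)                               # about 1.54 bits
print(-math.log2(collisions / trials))  # should be close to h2
```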
Finally, in the limit α→∞,
the largest probability dominates the sum,
leading to the definition of the min-entropy H∞,
describing the surprise of the most likely event:
$$H_\infty(X) = \lim_{\alpha \to \infty} H_\alpha(X) = -\log\left( \max_i p_i \right)$$
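The min-entropy also has a standard guessing interpretation: the best single guess of X succeeds
with probability max pi = 2^(-H∞). A minimal sketch with an arbitrary example distribution:

```python
import math

p = [0.7, 0.1, 0.1, 0.05, 0.05]

h_min = -math.log2(max(p))
print(h_min)        # about 0.515 bits
print(2 ** -h_min)  # 0.7: the probability of guessing X correctly in one attempt
```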
It is straightforward to convince yourself that these entropies
are ordered in the following way:
$$H_0 \geq H_1 \geq H_2 \geq H_\infty$$
In other words, from left to right they go from permissive to conservative:
H0 counts every possible event equally,
while H∞ is determined entirely by the single most likely one.
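To make the ordering concrete, here is a short sketch that checks it for one arbitrarily chosen
non-uniform distribution:

```python
import math

p = [0.5, 0.25, 0.125, 0.125]

h0 = math.log2(len(p))                      # Hartley / max-entropy
h1 = -sum(pi * math.log2(pi) for pi in p)   # Shannon entropy
h2 = -math.log2(sum(pi ** 2 for pi in p))   # collision entropy
h_inf = -math.log2(max(p))                  # min-entropy

assert h0 >= h1 >= h2 >= h_inf
print(h0, h1, h2, h_inf)  # 2.0, 1.75, ~1.54, 1.0
```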
References
- P.A. Bromiley, N.A. Thacker, E. Bouhova-Thacker,
Shannon entropy, Rényi entropy, and information,
2010, University of Manchester.
- J.B. Brask,
Quantum information: lecture notes,
2021, unpublished.