Entropic Uncertainty Relations and their Applications
Abstract
Heisenberg’s uncertainty principle forms a fundamental element of quantum mechanics. Uncertainty relations in terms of entropies were initially proposed to deal with conceptual shortcomings in the original formulation of the uncertainty principle and, hence, play an important role in quantum foundations. More recently, entropic uncertainty relations have emerged as the central ingredient in the security analysis of almost all quantum cryptographic protocols, such as quantum key distribution and two-party quantum cryptography. This review surveys entropic uncertainty relations that capture Heisenberg’s idea that the results of incompatible measurements are impossible to predict, covering both finite- and infinite-dimensional measurements. These ideas are then extended to incorporate quantum correlations between the observed object and its environment, allowing for a variety of recent, more general formulations of the uncertainty principle. Finally, various applications are discussed, ranging from entanglement witnessing to wave-particle duality to quantum cryptography.
Contents
 I Introduction
 II Relation to Standard Deviation Approach

III Uncertainty without a Memory System
 III.1 Entropy measures
 III.2 Preliminaries
 III.3 Measuring in two orthonormal bases
 III.4 Arbitrary measurements
 III.5 State-dependent measures of incompatibility
 III.6 Relation to guessing games
 III.7 Multiple measurements
 III.8 Fine-grained uncertainty relations
 III.9 Majorization approach to entropic uncertainty
 IV Uncertainty given a Memory System
 V Position-Momentum Uncertainty Relations
 VI Applications
 VII Miscellaneous Topics
 VIII Perspectives
 A Mutually unbiased bases
 B Proof of Maassen-Uffink’s relation
 C Rényi entropies for joint quantum systems
I Introduction
Quantum mechanics has revolutionized our understanding of the world. Relative to classical mechanics, the most dramatic change in our understanding is that the quantum world — our world — is inherently unpredictable.
By far the most famous statement of unpredictability is Heisenberg’s uncertainty principle Heisenberg (1927), which we treat here as a statement about preparation uncertainty. Roughly speaking, it states that it is impossible to prepare a quantum particle for which both position and momentum are sharply defined. Operationally, consider a source that consistently prepares copies of a quantum particle in the same way, as shown in Fig. 1. For each copy, suppose we randomly measure either its position or its momentum, but we never attempt to measure both quantities for the same particle (Sec. I.1 briefly notes other uncertainty principles that involve consecutive or joint measurements). We record the outcomes and sort them into two sequences associated with the two different measurements. The uncertainty principle states that it is impossible to predict both the outcome of the position and the momentum measurements: at least one of the two sequences of outcomes will be unpredictable. More precisely, the better such a preparation procedure allows one to predict the outcome of the position measurement, the more uncertain the outcome of the momentum measurement will be, and vice versa.
An elegant aspect of quantum mechanics is that it allows for simple quantitative statements of this idea, i.e., constraints on the predictability of observable pairs like position and momentum. These quantitative statements are known as uncertainty relations. It is worth noting that Heisenberg’s original argument, while conceptually enlightening, was heuristic. The first rigorously proven uncertainty relation for position and momentum is due to Kennard (1927). It establishes that (see also the work of Weyl (1928))
(1) $\sigma(Q)\,\sigma(P) \;\ge\; \frac{\hbar}{2}\,,$
where $\sigma(Q)$ and $\sigma(P)$ denote the standard deviation of the position and momentum, respectively, and $\hbar$ is the reduced Planck constant.
We now know that Heisenberg’s principle applies much more generally, not only to position and momentum. Other examples of pairs of observables obeying an uncertainty relation include the phase and excitation number of a harmonic oscillator, the angle and the orbital angular momentum of a particle, and orthogonal components of spin angular momentum. In fact, for arbitrary observables $A$ and $B$ (more precisely, Robertson’s relation refers to observables with bounded spectrum), Robertson (1929) showed that
(2) $\sigma(A)\,\sigma(B) \;\ge\; \frac{1}{2}\,\bigl|\langle\psi|[A,B]|\psi\rangle\bigr|\,,$
where $[A,B] := AB - BA$ denotes the commutator. Note a distinct difference between (1) and (2): the right-hand side of the former is a constant whereas that of the latter can be state-dependent, an issue that we will discuss more in Sec. II.
These relations have a beauty to them and also give conceptual insight. Equation (1) identifies $\hbar$ as a fundamental limit to our knowledge. More generally, (2) identifies the commutator as the relevant quantity for determining how large the knowledge tradeoff is for two observables. One could argue that a reasonable goal in our studies of uncertainty in quantum mechanics should be to find simple, conceptually insightful statements like these.
If this problem were only of fundamental importance, it would already be a well-motivated one. Yet in recent years there has been new motivation to study the uncertainty principle. The rise of quantum information theory has led to new applications of quantum uncertainty, for example in quantum cryptography. In particular, quantum key distribution is already commercially marketed and its security crucially relies on Heisenberg’s uncertainty principle. (We will discuss various applications in Sec. VI.) There is a clear need for uncertainty relations that are directly applicable to these technologies.
In the above uncertainty relations, (1) and (2), uncertainty has been quantified using the standard deviation of the measurement results. This is, however, not the only way to express the uncertainty principle. It is instructive to consider what preparation uncertainty means in the most general setting. Suppose we have prepared a state $\rho$ on which we can perform two (or more) possible measurements labeled by $\theta$. Let us use $x$ to label the outcomes of such a measurement. We can then identify a list of (conditional) probabilities
(3) $\bigl\{\, p(x|\theta) \,\bigr\}_{x,\theta}\,,$
where $p(x|\theta)$ denotes the probability of obtaining measurement outcome $x$ when performing the measurement labeled $\theta$ on the state $\rho$. Quantum mechanics predicts restrictions on the set of allowed conditional probability distributions that are valid for all or a large class of states $\rho$. Needless to say, there are many ways to formulate such restrictions on the set of allowed distributions.
In particular, information theory offers a very versatile, abstract framework that allows us to formalize notions like uncertainty and unpredictability. This theory is the basis of modern communication technologies and cryptography and has been successfully generalized to include quantum effects. The preferred mathematical quantity to express uncertainty in information theory is entropy. Entropies are functionals on random variables and quantum states that aim to quantify their inherent uncertainty. Amongst a myriad of such measures, we mainly restrict our attention to the Boltzmann–Gibbs–Shannon entropy Boltzmann (1872); Gibbs (1876); Shannon (1948) and its quantum generalization, the von Neumann entropy von Neumann (1932). Due to their importance in quantum cryptography, we will also consider Rényi entropic measures Rényi (1961) such as the min-entropy. Entropy is a natural measure of uncertainty, perhaps even more natural than the standard deviation, as we argue in Sec. II.
Can the uncertainty principle be formulated in terms of entropy? This question was first brought up by Everett (1957) and answered in the affirmative by Hirschman (1957) who considered the position and momentum observables, formulating the first entropic uncertainty relation. This was later improved by Beckner (1975); Białynicki-Birula and Mycielski (1975), who obtained the relation (more precisely, the right-hand side of (4) should be $\log\bigl(e\pi\hbar/(\delta_x\delta_p)\bigr)$, where $\delta_x$ and $\delta_p$ are length and momentum scales, respectively, chosen to make the argument of the logarithm dimensionless; throughout this review, all logarithms are base 2)
(4) $h(Q) + h(P) \;\ge\; \log(e\pi)\,,$
where $h$ is the differential entropy (defined in (7) below). Białynicki-Birula and Mycielski (1975) also showed that (4) is stronger than, and hence implies, Kennard’s relation (1).
The extension of the entropic uncertainty relation to observables with finite spectrum (more precisely, the relation applies to non-degenerate observables on a finite-dimensional Hilbert space; see Sec. III.2) was given by Deutsch (1983), and later improved by Maassen and Uffink (1988) following a conjecture by Kraus (1987). The result of Maassen and Uffink (1988) is arguably the most well-known entropic uncertainty relation. It states that
(5) $H(X) + H(Z) \;\ge\; \log\frac{1}{c}\,, \qquad c := \max_{x,z}\, \bigl|\langle X^x|Z^z\rangle\bigr|^2\,,$
where $H$ is Shannon’s entropy (see Sec. III.1 for the definition), and $c$ denotes the maximum overlap between any two eigenvectors of the $X$ and $Z$ observables. Just as (2) established the commutator as an important parameter in determining the uncertainty tradeoff for standard deviation, (5) established the maximum overlap $c$ as a central parameter in entropic uncertainty.
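The role of the overlap $c$ is easy to check numerically. The following sketch (using NumPy; the helper names are ours, not the review’s) samples random qubit states and verifies (5) for the eigenbases of the Pauli $Z$ and $X$ observables, for which $c = 1/2$ and hence $\log(1/c) = 1$ bit:

```python
import numpy as np

# Eigenbases of the Pauli Z and X observables for a qubit.
Z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
X_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

# Maximum overlap c = max_{x,z} |<X^x|Z^z>|^2; here every overlap is 1/2.
c = max(abs(np.vdot(u, v)) ** 2 for u in X_basis for v in Z_basis)

def shannon(probs):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
for _ in range(100):
    # Draw a random pure qubit state |psi>.
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    pz = [abs(np.vdot(v, psi)) ** 2 for v in Z_basis]
    px = [abs(np.vdot(u, psi)) ** 2 for u in X_basis]
    # Maassen-Uffink: H(X) + H(Z) >= log2(1/c) = 1 bit for these two bases.
    assert shannon(px) + shannon(pz) >= np.log2(1 / c) - 1e-9
```

Note that for an eigenstate of one basis, all of the 1 bit of uncertainty sits in the other measurement, which is why the bound is tight for these bases.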
While these articles represent the early history of entropic uncertainty relations, there has recently been an explosion of work on this topic. One of the most important recent advances concerns a generalization of the uncertainty paradigm that allows the measured system to be correlated to its environment in a nonclassical way. Entanglement between the measured system and the environment can be exploited to reduce the uncertainty of an observer (with access to the environment) below the usual bounds.
To explain this extension, let us introduce a modern formulation of the uncertainty principle as a so-called guessing game, which makes such extensions of the uncertainty principle natural and highlights their relevance for quantum cryptography. As outlined in Fig. 2, we imagine that an observer, Bob, can prepare an arbitrary state $\rho_A$ which he will send to a referee, Alice. Alice then randomly chooses to perform one of two (or more) possible measurements, where we will use $\theta$ to denote her choice of measurement. She records the outcome, $x$. Finally, she tells Bob the choice of her measurement, i.e., she sends him $\theta$. Bob’s task is to guess Alice’s measurement outcome $x$ (given $\theta$).
The uncertainty principle tells us that if Alice makes two incompatible measurements, then Bob cannot guess Alice’s outcome with certainty for both measurements. This corresponds precisely to the notion of preparation uncertainty. It is indeed intuitive why such uncertainty relations form an important ingredient in proving the security of quantum cryptographic protocols, as we will explore in detail in Sec. VI. In the cryptographic setting, the state $\rho_A$ will be sent by an adversary trying to break a quantum cryptographic protocol. If Alice’s measurements are incompatible, there is no way for the adversary to know the outcomes of both possible measurements with certainty, no matter what state he prepares.
The formulation of uncertainty relations as guessing games also makes it clear that there is an important twist to such games: What if Bob prepares a bipartite state $\rho_{AB}$ and sends only the $A$ part to Alice? That is, what if Bob’s system is correlated with Alice’s? Or, adopting the modern perspective of information, what if Bob has a nontrivial amount of side information about Alice’s system? Traditional uncertainty relations implicitly assume that Bob has only classical side information. For example, he may possess a classical description of the state or other details about the preparation. However, modern uncertainty relations—for example those derived by Berta et al. (2010) improving on work by Christandl and Winter (2005) and Renes and Boileau (2009)—allow Bob to have quantum rather than classical information about the state. As was already observed by Einstein et al. (1935), Bob’s uncertainty can vanish in this case (in the sense that he can correctly guess Alice’s measurement outcome in the game described above).
We will devote Sec. IV to such modern uncertainty relations. It is these relations that will be of central importance in quantum cryptography, where the adversary may have gathered quantum and not just classical information during the course of the protocol that may reduce his uncertainty.
I.1 Scope of this review
Two survey articles partially discuss the topic of entropic uncertainty relations. Białynicki-Birula and Rudnicki (2011) take a physics perspective and cover continuous variable entropic uncertainty relations and some discretized measurements. In contrast, Wehner and Winter (2010) take an information-theoretic perspective and discuss entropic uncertainty relations for discrete (finite) variables with an emphasis on relations that involve more than two measurements.
These reviews predate many recent advances in the field. For example, neither review covers entropic uncertainty relations that take into account quantum correlations with the environment of the measured system. Moreover, applications of entropic uncertainty relations are only marginally discussed in these reviews. Here, we discuss both physical and information-based applications. We therefore aim to give a comprehensive treatment of all of these topics in one reference, with the hope of benefiting some of the quickly emerging technologies that exploit quantum information.
There is an additional aspect of the uncertainty principle known as measurement uncertainty, see, e.g., Ozawa (2003); Hall (2004); Busch et al. (2007, 2014a). This includes (i) joint measurability, the concept that there exist pairs of observables that cannot be measured simultaneously, and (ii) measurement disturbance, the concept that there exist pairs of observables for which measuring one causes a disturbance of the other. Measurement uncertainty is a debated topic of current research. We focus our review article on the concept of preparation uncertainty, although we briefly mention entropic approaches to measurement uncertainty in Sec. VII.3.
II Relation to Standard Deviation Approach
Traditional formulations of the uncertainty principle, for example the ones due to Kennard and Robertson, measure uncertainty in terms of the standard deviation. In this section we argue why we think entropic formulations are preferable. For further discussion we refer to Uffink (1990).
II.1 Position and momentum uncertainty relations
For the case of position and momentum observables, the strength of the entropic formulation can be seen from the fact that the entropic uncertainty relation in (4) is stronger than, and in fact implies, the standard deviation relation (1). Following Białynicki-Birula and Mycielski (1975), we formally show that
(6) $h(Q) + h(P) \;\ge\; \log(e\pi) \;\;\Longrightarrow\;\; \sigma(Q)\,\sigma(P) \;\ge\; \frac{1}{2}$
for all states, where here and henceforth in this article we work in units such that $\hbar = 1$. Let us consider a random variable $Q$ governed by a probability density $\Gamma(q)$, and the differential entropy
(7) $h(Q) := -\int_{-\infty}^{\infty} \mathrm{d}q\; \Gamma(q)\,\log \Gamma(q)\,.$
In the following we assume that this quantity is finite. Gaussian probability distributions,
(8) $\Gamma(q) = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(q-\bar{q})^2}{2\sigma^2}\right),$
where $\bar{q}$ denotes the mean, are special in the following sense: for a fixed standard deviation $\sigma$, distributions of the form of (8) maximize the entropy in (7). It is a simple exercise to show this, e.g., using variational calculus with Lagrange multipliers.
It is furthermore straightforward to insert (8) into (7) to calculate the entropy of a Gaussian distribution
(9) $h(Q) = \log\bigl(\sigma\sqrt{2\pi e}\bigr)\,.$
Since Gaussians maximize the entropy, the following inequality holds in general
(10) $h(Q) \;\le\; \log\bigl(\sigma(Q)\sqrt{2\pi e}\bigr)\,.$
Now consider an arbitrary quantum state $\rho$ for a particle’s translational degree of freedom, which gives rise to random variables $Q$ and $P$ for the position and momentum, respectively. Let us insert the resulting relations into (4) to find
(11) $\log(e\pi) \;\le\; h(Q) + h(P)$
(12) $\phantom{\log(e\pi)} \;\le\; \log\bigl(\sigma(Q)\sqrt{2\pi e}\bigr) + \log\bigl(\sigma(P)\sqrt{2\pi e}\bigr)$
(13) $\phantom{\log(e\pi)} \;=\; \log\bigl(2\pi e\,\sigma(Q)\,\sigma(P)\bigr)\,.$
By comparing the left- and right-hand sides of (11) and noting that the logarithm is a monotonic function, we see that (11) implies (1), and hence so does (4).
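This chain of inequalities can be checked numerically. The sketch below (NumPy-based; function names are ours) evaluates the differential entropy of a Gaussian on a grid, compares it to the closed form (9), and confirms that a uniform density with the same standard deviation has strictly smaller entropy, as (10) requires:

```python
import numpy as np

def differential_entropy(pdf_vals, dx):
    """Differential entropy (base 2) of a density sampled on a uniform grid."""
    v = pdf_vals[pdf_vals > 0]
    return float(np.sum(-v * np.log2(v)) * dx)

sigma = 1.3
xs = np.linspace(-20.0, 20.0, 400001)
dx = xs[1] - xs[0]

# Gaussian density (8) with mean 0 and standard deviation sigma.
gauss = np.exp(-xs**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
h_gauss = differential_entropy(gauss, dx)

# Closed form (9): h = log(sigma * sqrt(2*pi*e)).
h_exact = float(np.log2(sigma * np.sqrt(2 * np.pi * np.e)))

# A uniform density with the same standard deviation (width sigma*sqrt(12))
# has strictly smaller entropy, illustrating inequality (10).
width = sigma * np.sqrt(12)
uniform = np.where(np.abs(xs) <= width / 2, 1.0 / width, 0.0)
h_uniform = differential_entropy(uniform, dx)
```

Here `h_gauss` agrees with `h_exact` up to discretization error, while `h_uniform` falls strictly below it.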
It is worth noting that (10) is a strict inequality if the distribution is non-Gaussian, and hence (4) is strictly stronger than (1) if the quantum state is non-Gaussian. While quantum mechanics textbooks often present (1) as the fundamental statement of the uncertainty principle, it is clear that (4) is stronger and yet not much more complicated. Furthermore, as discussed in Sec. IV the entropic formulation is more robust, allowing the relation to be easily generalized to situations involving correlations with the environment.
II.2 Finite spectrum uncertainty relations
As noted above, both the standard deviation and the entropy have been applied to formulate uncertainty relations for observables with a finite spectrum. However, it is largely unclear how the most popular formulations, Robertson’s (2) and Maassen-Uffink’s (5), are related. It remains an interesting open question whether there exists a formulation that unifies these two formulations. However, there is an important difference between (2) and (5) in that the former has a bound that depends on the state, while the latter only depends on the two observables.
Example 1.
Consider (2) for the case of a spin-1/2 particle, where $A = \sigma_x$ and $B = \sigma_y$, corresponding to the $x$- and $y$-axes of the Bloch sphere. Then the commutator is proportional to the Pauli operator $\sigma_z$ and the right-hand side of (2) reduces to $|\langle\sigma_z\rangle|$. Hence, (2) gives a trivial bound for all states that lie in the $x$-$y$ plane of the Bloch sphere. For the eigenstates of $\sigma_x$ and $\sigma_y$, this bound is tight since one of the two uncertainty terms is zero, and hence the trivial bound is a (perhaps undesirable) consequence of the fact that the left-hand side involves a product (rather than a sum) of uncertainties. However, for any other states in the $x$-$y$ plane, neither uncertainty is zero. This implies that (2) is not tight for these states.
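A short numerical check of Example 1 (a sketch with NumPy; the helper names are ours): for a state in the $x$-$y$ plane of the Bloch sphere, the Robertson bound vanishes even though the product of standard deviations does not:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def stdev(op, psi):
    """Standard deviation of observable `op` in pure state `psi`."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ (op @ psi)).real
    return float(np.sqrt(max(mean_sq - mean**2, 0.0)))

def robertson_rhs(psi):
    """(1/2)|<psi|[sx, sy]|psi>|; the commutator equals 2i*sz."""
    comm = sx @ sy - sy @ sx
    return float(0.5 * abs(np.vdot(psi, comm @ psi)))

# A state in the x-y plane of the Bloch sphere: equal-weight superposition
# with a relative phase.
psi = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

lhs = stdev(sx, psi) * stdev(sy, psi)   # product of uncertainties: 1/2
rhs = robertson_rhs(psi)                # |<sz>| = 0, so the bound is trivial
assert rhs < 1e-12 and lhs > 0.4
```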
This example illustrates a weakness of Robertson’s relation for finite-dimensional systems — it gives trivial bounds for certain states, even when the left-hand side is nonzero. Schrödinger (1930) slightly strengthened Robertson’s bound by adding an additional state-dependent term that helps to get rid of the artificial trivial bound discussed in Ex. 1. Likewise, Maccone and Pati (2014) recently proved a state-dependent bound on the sum (not the product) of the two variances, and this bound also removes the trivial behavior of Robertson’s bound. Furthermore, one still may be able to obtain a nonvanishing state-independent bound using standard deviation uncertainty measures in the finite-dimensional case. For example, Busch et al. (2014b) considered the qubit case and obtained a state-independent bound on the sum of the variances.
The statedependent nature of Robertson’s bound was noted, e.g., by Deutsch (1983) and used as motivation for entropic uncertainty relations, which do not suffer from this weakness. However, the above discussion suggests that this issue might be avoided while still using standard deviation as the uncertainty measure. On the other hand, there are more important issues that we now discuss.
II.3 Advantages of entropic formulation
From a practical perspective, a crucial advantage of entropic uncertainty relations is their application throughout quantum cryptography. However, let us now mention several other reasons why we think that the entropic formulation of the uncertainty principle is advantageous over the standard deviation formulation.
II.3.1 Counterintuitive behavior of standard deviation
While the standard deviation is, of course, a good measure of deviation from the mean, its interpretation as a measure of uncertainty has been questioned. It has been pointed out by several authors, for example by Białynicki-Birula and Rudnicki (2011), that the standard deviation behaves somewhat strangely for some simple examples.
Example 2.
Consider a spin-1 particle with equal probability to have each of the three possible values of angular momentum, $J_z \in \{-\hbar, 0, +\hbar\}$. The standard deviation of the angular momentum is $\sigma(J_z) = \sqrt{2/3}\,\hbar \approx 0.82\,\hbar$. Now suppose we gain information about the spin such that we now know that it definitely does not have the value $J_z = 0$. The new probability distribution is $p(J_z = +\hbar) = p(J_z = -\hbar) = 1/2$, $p(J_z = 0) = 0$. We might expect the uncertainty to decrease, since we have gained information about the spin, but in fact the standard deviation increases, the new value being $\sigma(J_z) = \hbar$.
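The arithmetic of Example 2 can be verified in a few lines (a NumPy sketch; helper names are ours), with the angular momentum expressed in units of $\hbar$:

```python
import numpy as np

def stdev(values, probs):
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    mean = np.sum(values * probs)
    return float(np.sqrt(np.sum(probs * (values - mean) ** 2)))

def shannon(probs):
    p = np.asarray([q for q in probs if q > 0], dtype=float)
    return float(-np.sum(p * np.log2(p)))

Jz = [-1, 0, 1]                  # angular momentum values in units of hbar
before = [1/3, 1/3, 1/3]         # all three values equally likely
after = [1/2, 0, 1/2]            # the value 0 has been ruled out

std_before = stdev(Jz, before)   # sqrt(2/3), roughly 0.82
std_after = stdev(Jz, after)     # 1.0: the standard deviation increased
H_before = shannon(before)       # log2(3), roughly 1.58 bits
H_after = shannon(after)         # 1.0 bit: the entropy decreased, as intuition demands
assert std_after > std_before and H_after < H_before
```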
We remark that the different behavior of standard deviation and entropy for spin angular momentum was recently highlighted by Dammeier et al. (2015), in the context of states that saturate the relevant uncertainty relation.
Białynicki-Birula and Rudnicki (2011) noted an example for a particle’s spatial position that is analogous to the above example.
Example 3.
Consider a long box of length $L$, centered at $x = 0$, with two small boxes of length $a$ attached to the two ends of the long box, as depicted in Fig. 3. Suppose we know that a classical particle is confined to the two small end boxes, i.e., with equal probability it is in one of the two small boxes. The standard deviation of the position is $\sigma(X) \approx L/2$, assuming that $a \ll L$. Now suppose the barriers that separate the end boxes from the middle box are removed, and the particle is allowed to move freely between all three boxes. Intuitively one might expect that the uncertainty of the particle’s position is now larger, since we now know nothing about where the particle is inside the three boxes. However, the new standard deviation is actually smaller: $\sigma(X) \approx L/\sqrt{12} \approx 0.29\,L$.
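The numbers in Example 3 are again easy to verify (a sketch; we use the closed-form second moment of a uniform interval, $\mathbb{E}[x^2] = c^2 + ca + a^2/3$ on $[c, c+a]$):

```python
import numpy as np

L, a = 1.0, 1e-3   # long box of length L, small end boxes of length a << L

# Particle confined to the two end boxes: uniform on [-L/2 - a, -L/2] and
# [L/2, L/2 + a]. By symmetry the mean is 0, and each end box contributes
# E[x^2] = c^2 + c*a + a^2/3 with c = L/2.
c = L / 2
std_ends = float(np.sqrt(c**2 + c * a + a**2 / 3))   # roughly L/2

# Barriers removed: uniform over the full length L + 2a.
std_free = float((L + 2 * a) / np.sqrt(12))          # roughly 0.29*L

assert std_free < std_ends   # the "more uncertain" situation has a smaller std
```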
Entropies, on the other hand, do not have this counterintuitive behavior, due to properties discussed below. Finally, let us note the somewhat obvious issue that, in some cases, a quantitative label (and hence the standard deviation) does not make sense, as illustrated in the following example.
Example 4.
Consider a neutrino’s flavor, which is often modeled as a three-outcome observable with outcomes “electron”, “muon”, or “tau”. As this is a non-quantitative observable, the standard deviation does not make sense in this context. Nevertheless, it is of interest to quantify the uncertainty about the neutrino flavor, i.e., how difficult it is to guess the flavor, which is naturally captured by the notion of entropy.
II.3.2 Intuitive entropic properties
Deutsch (1983) emphasized that the standard deviation can change under a simple relabeling of the outcomes. For example, if one were to assign quantitative labels to the outcomes in Ex. 4 and then relabel them, the standard deviation would change. In contrast, the entropy is invariant under relabeling of outcomes, because it naturally captures the amount of information about a measurement outcome.
Furthermore, there is a nice monotonic property of entropy in the following sense. Suppose one does a random relabeling of the outcomes. One can think of this as a relabeling plus added noise, which naturally tends to spread the probability distribution out over the outcomes. Intuitively, a relabeling with the injection of randomness should never decrease the uncertainty. This property — nondecreasing under random relabeling — was highlighted by Friedland et al. (2013) as a desirable property of an uncertainty measure. Indeed, entropy satisfies this property. On the other hand, the physical process in Ex. 3 can be modeled mathematically as a random relabeling. Hence, we see the contrast in behavior between entropy and standard deviation.
Monotonicity under random relabeling is actually a special case of an even more powerful property. Think of the random relabeling as due to the fact that the observer is denied access to an auxiliary register that stores the information about which relabeling occurred. If the observer had access to the register, their uncertainty would remain the same; without access, their uncertainty can potentially increase but can never decrease. More generally, this idea (that losing access to an auxiliary system cannot reduce one’s uncertainty) is a desirable and powerful property of uncertainty measures known as the data-processing inequality. It is arguably a defining property of entropy measures, or more precisely, conditional entropy measures as discussed in Sec. IV.2. Furthermore, this property is central in proving entropic uncertainty relations Coles et al. (2012).
II.3.3 Framework for correlated quantum systems
Entropy provides a robust mathematical framework that can be generalized to deal with correlated quantum systems. For example, the entropy framework allows us to discuss the uncertainty of an observable from the perspective of an observer who has access to part of the environment of the system, or to quantify quantum correlations like entanglement between two quantum systems. This requires measures of conditional uncertainty, namely conditional entropies. We highlight the utility of this framework in Sec. IV. A similar framework for standard deviation has not been developed.
II.3.4 Operational meaning and information applications
Perhaps the most compelling reason to consider entropy as the uncertainty measure of choice is that it has operational significance for various informationprocessing tasks. The standard deviation, in contrast, does not play a significant role in information theory. This is because entropy abstracts from the physical representation of information, as one can see from the following example.
Example 5.
Consider the two probability distributions in Fig. 4. They have the same standard deviation but different entropy. The distribution in Fig. 4(a) has one bit of entropy since only two events are possible and occur with equal probability. If we want to record data from this random experiment this will require exactly one bit of storage per run. On the other hand, the distribution in Fig. 4(b) has approximately 3 bits of entropy and the recorded data cannot be compressed to less than 3 bits per run. Clearly, entropy has operational meaning in this context while standard deviation fails to distinguish these random experiments.
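A concrete instance of Example 5 (a sketch with NumPy; the specific outcome values are ours, chosen only so that the two standard deviations coincide):

```python
import numpy as np

def stdev(values, probs):
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    mean = np.sum(values * probs)
    return float(np.sqrt(np.sum(probs * (values - mean) ** 2)))

def shannon(probs):
    p = np.asarray([q for q in probs if q > 0], dtype=float)
    return float(-np.sum(p * np.log2(p)))

# (a) two equally likely outcomes at +-1: one bit of entropy, std 1.
vals_a, p_a = [-1.0, 1.0], [0.5, 0.5]

# (b) eight equally likely outcomes, rescaled so the std also equals 1.
raw = np.array([-4, -3, -2, -1, 1, 2, 3, 4], dtype=float)
p_b = np.full(8, 1 / 8)
vals_b = raw / stdev(raw, p_b)

std_a, std_b = stdev(vals_a, p_a), stdev(vals_b, p_b)
H_a, H_b = shannon(p_a), shannon(p_b)
assert abs(std_a - std_b) < 1e-9   # identical standard deviations ...
assert H_a == 1.0 and H_b == 3.0   # ... but 1 bit vs 3 bits of entropy
```

The standard deviation cannot distinguish the two experiments, while the entropy directly reports the storage cost per run.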
Entropies have operational meaning for tasks such as randomness extraction (extracting perfect randomness from a partially random source) and data compression (sending minimal information to someone to help them guess the output of a partially random source). It is precisely these operational meanings that make entropic uncertainty relations useful for proving the security of quantum key distribution and other cryptographic tasks. We discuss such applications in Sec. VI.
The operational significance of entropy allows one to frame entropic uncertainty relations in terms of guessing games (see Sec. III.6 and IV.4.1). These are simple yet insightful tasks where, e.g., one party is trying to guess the outcome of another party’s measurements (see the description in Fig. 2). Such games make it clear that the uncertainty principle is not just abstract mathematics; rather it is relevant to physical tasks that can be performed in a laboratory.
III Uncertainty without a Memory System
Historically, entropic uncertainty relations were first studied for position and momentum observables. However, to keep the discussion mathematically simple we begin here by introducing entropic uncertainty relations for finitedimensional quantum systems, and we defer the discussion of infinite dimensions to Sec. V. It is worth noting that many physical systems of interest are finitedimensional, such as photon polarization, neutrino flavor, and spin angular momentum.
In this section, we consider uncertainty relations for a single system . That is, there is no memory system. We emphasize that all uncertainty relations with a memory system can also be applied to the situation without.
III.1 Entropy measures
Let us consider a discrete random variable $X$ distributed according to the probability distribution $P_X$. We assume that $X$ takes values in a finite set $\mathcal{X}$. For example, this set could be binary values $\{0,1\}$ or spin states $\{\uparrow,\downarrow\}$. In general, we will associate the random variable with the outcome of a particular measurement. This random variable can take values $x \in \mathcal{X}$, where $x$ is a specific instance of a measurement outcome that can be obtained with probability $p(x) := P_X(x)$. However, entropies only depend on the probability law $P_X$ and not on the specific labels of the elements in the set $\mathcal{X}$. Thus, we will in the following just assume this set to be of the form $[d] := \{1, 2, \ldots, d\}$, where $d = |\mathcal{X}|$ stands for the cardinality of the set.
III.1.1 Surprisal and Shannon entropy
Following Shannon (1948), we first define the surprisal of the event $x$ distributed according to $P_X$ as $-\log p(x)$, often also referred to as information content. As its name suggests, the information content of $x$ gets larger when the event is less likely, i.e., when $p(x)$ is smaller. In particular, deterministic events (those with $p(x) = 1$) have no information content at all, which is indeed intuitive since we learn nothing by observing an event that we are assured will happen with certainty. In contrast, the information content of very unlikely events can get arbitrarily large. Based on this intuition, the Shannon entropy is defined as
(14) $H(X) := \sum_x p(x)\,\log\frac{1}{p(x)}$
and quantifies the average information content of $X$. It is therefore a measure of the uncertainty of the outcome of the random experiment described by $P_X$. The Shannon entropy is by far the best-known measure of uncertainty, and it is the one most commonly used to express uncertainty relations.
III.1.2 Rényi entropies
However, for some applications it is important to consider other measures of uncertainty that give more weight to events with high or low information content, respectively. For this purpose we employ a generalization of the Shannon entropy to a family of entropies introduced by Rényi (1961). The family includes several important special cases which we will discuss individually. These entropies have found many applications in cryptography and information theory (see Sec. VI) and have convenient mathematical properties. (Another family of entropies that is often encountered are the Tsallis entropies Tsallis (1988); they have not found an operational interpretation in cryptography or information theory, and we thus defer their discussion until Sec. VII.1.)
The Rényi entropy of order $\alpha \in (0,1) \cup (1,\infty)$ is defined as
(15) $H_\alpha(X) := \frac{1}{1-\alpha}\,\log \sum_x p(x)^\alpha\,,$
and as the corresponding limit for $\alpha \in \{0, 1, \infty\}$. For $\alpha \to 1$ the limit yields the Shannon entropy (it is a simple exercise to apply L’Hôpital’s rule to (15) in the limit $\alpha \to 1$), and the Rényi entropies are thus a proper generalization of the Shannon entropy.
The Rényi entropies are monotonically decreasing as a function of $\alpha$. Entropies with $\alpha > 1$ give more weight to events with low surprisal, i.e., to the most likely outcomes. The collision entropy $H_2(X)$ is given by
(16) $H_2(X) = -\log p_{\mathrm{coll}}(X)\,, \qquad \text{where } p_{\mathrm{coll}}(X) := \sum_x p(x)^2$
is the collision probability, i.e., the probability that two independent instances of $X$ are equal. The min-entropy, $H_{\min}(X) := H_\infty(X)$, is of special significance in many applications. It characterizes the optimal probability of correctly guessing the value of $X$ in the following sense:
(17) $p_{\mathrm{guess}}(X) := \max_x p(x) = 2^{-H_{\min}(X)}\,.$
Clearly, the optimal guessing strategy is to bet on the most likely value of $X$, and the winning probability is then given by the maximum in (17). The min-entropy can also be seen as the minimum surprisal of $X$.
The Rényi entropies with $\alpha < 1$ instead give more weight to events with large surprisal, i.e., to unlikely outcomes. Noteworthy examples are the max-entropy, $H_{\max}(X) := H_{1/2}(X)$, and
(18) $H_0(X) = \log \bigl|\{x : p(x) > 0\}\bigr|\,,$
where the latter is simply the logarithm of the size of the support of $P_X$.
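The whole Rényi family, including the limiting cases, fits in one small function (a sketch; the function name is ours):

```python
import numpy as np

def renyi(probs, alpha):
    """Renyi entropy H_alpha in bits; the limits alpha = 0, 1, inf are explicit."""
    p = np.asarray([q for q in probs if q > 0], dtype=float)
    if alpha == 0:
        return float(np.log2(p.size))          # log of the support size
    if alpha == 1:
        return float(-np.sum(p * np.log2(p)))  # Shannon entropy (limit alpha -> 1)
    if alpha == np.inf:
        return float(-np.log2(p.max()))        # min-entropy
    return float(np.log2(np.sum(p ** alpha)) / (1 - alpha))

p = [0.5, 0.25, 0.125, 0.125]
values = [renyi(p, a) for a in (0, 0.5, 1, 2, np.inf)]
# Monotonically non-increasing in alpha: H_0 >= H_max >= H >= H_2 >= H_min.
assert all(x >= y - 1e-12 for x, y in zip(values, values[1:]))
```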
III.1.3 Examples and properties
For all the Rényi entropies, $H_\alpha(X) = 0$ if and only if the distribution is perfectly peaked, i.e., $p(x) = 1$ for some particular value $x$. On the other hand, for $\alpha > 0$ the distribution is uniform if and only if the entropy takes its maximal value $H_\alpha(X) = \log d$ (for $\alpha = 0$ the maximal value is attained by any distribution with full support).
The Rényi entropies can take on very different values depending on the parameter $\alpha$, as the following example, visualized in Fig. 5, shows.
Example 6.
Consider a distribution of the form
(19) $p(1) = \frac{1}{2} \qquad \text{and} \qquad p(x) = \frac{1}{2(d-1)} \quad \text{for } x \in \{2, \ldots, d\}\,,$
so that we have
(20) $H_{\min}(X) = 1\,, \qquad \text{whereas} \qquad H(X) = 1 + \tfrac{1}{2}\log(d-1)$
is arbitrarily large as $d$ increases. This is of particular relevance in cryptographic applications where $H_{\min}$ — and not $H$ — characterizes how difficult it is to guess a secret $X$. As we will see later, $H_{\min}$ determines precisely the number of random bits that can be obtained from $X$.
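Example 6 can be reproduced directly (a sketch; helper names are ours):

```python
import numpy as np

def shannon(probs):
    p = np.asarray([q for q in probs if q > 0], dtype=float)
    return float(-np.sum(p * np.log2(p)))

def min_entropy(probs):
    return float(-np.log2(max(probs)))

gaps = []
for d in (2, 8, 1024):
    # Distribution (19): one outcome with probability 1/2, the remaining
    # d - 1 outcomes sharing the other half uniformly.
    p = [0.5] + [0.5 / (d - 1)] * (d - 1)
    # H(X) = 1 + 0.5*log2(d - 1) grows without bound; H_min(X) stays at 1 bit.
    gaps.append(shannon(p) - min_entropy(p))

assert gaps == sorted(gaps)   # the gap between H and H_min grows with d
```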
Consider two probability distributions, $P_X$ and $P_Y$, and define $d := \max\{|\mathcal{X}|, |\mathcal{Y}|\}$. Now let us reorder the probabilities in $P_X$ into a vector $p^\downarrow = (p_1^\downarrow, \ldots, p_d^\downarrow)$ such that $p_1^\downarrow \ge p_2^\downarrow \ge \cdots \ge p_d^\downarrow$, padding with zeros if necessary. Analogously arrange the probabilities in $P_Y$ into a vector $q^\downarrow$. We say $p^\downarrow$ majorizes $q^\downarrow$ and write $p^\downarrow \succ q^\downarrow$ if
(21) 
Intuitively, the fact that majorizes means that is less spread out than . For example, the distribution majorizes every other distribution, while the uniform distribution is majorized by every other distribution.
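The partial-sum criterion (21) is straightforward to implement. A minimal Python sketch (the helper `majorizes` is ours):

```python
import numpy as np

def majorizes(p, q):
    """True if p majorizes q (q ≺ p): every partial sum of the sorted
    probabilities of p dominates the corresponding one of q, after
    zero-padding both vectors to a common length."""
    d = max(len(p), len(q))
    p = np.sort(np.pad(np.asarray(p, float), (0, d - len(p))))[::-1]
    q = np.sort(np.pad(np.asarray(q, float), (0, d - len(q))))[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - 1e-12))

peaked = [1.0, 0.0, 0.0]
uniform = [1 / 3, 1 / 3, 1 / 3]
middle = [0.5, 0.3, 0.2]
```

As the text states, `peaked` majorizes everything and `uniform` is majorized by everything.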
One of the most fundamental properties of the Rényi entropies is that they are Schur-concave Marshall et al. (2011), meaning that they satisfy

Q_X ≺ P_X ⟹ H_α(P_X) ≤ H_α(Q_X) for all α.   (22)
This has an important consequence. Let Y = f(X) for some (deterministic) function f. In other words, Y is obtained by processing X using the function f. The random variable Y is then governed by the pushforward of P_X, that is,

P_Y(y) = Σ_{x : f(x) = y} P_X(x).   (23)

Clearly P_X ≺ P_Y, and thus we have H_α(Y) ≤ H_α(X). This corroborates our intuition that the input of a function is at least as uncertain as its output. If f is just a reordering of the alphabet, or more generally if f is injective, then the two entropies are equal.
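The data-processing step can be illustrated numerically: pushing a distribution forward through a non-injective function merges probability mass and can only decrease the Shannon entropy, while an injective relabeling leaves it unchanged. A short Python sketch (helper names are ours):

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pushforward(p, f, n_out):
    """Distribution of Y = f(X): P_Y(y) = sum of P_X(x) over x with f(x) = y."""
    q = np.zeros(n_out)
    for x, px in enumerate(p):
        q[f(x)] += px
    return q

p_x = [0.4, 0.3, 0.2, 0.1]
p_y = pushforward(p_x, lambda x: x % 2, 2)          # merge outcomes pairwise
p_perm = pushforward(p_x, lambda x: (x + 1) % 4, 4)  # injective relabeling
```

`shannon(p_y)` cannot exceed `shannon(p_x)`, while `p_perm` has exactly the same entropy as `p_x`.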
Finally, we note that if two random variables X and Y are independent, we have

H_α(XY) = H_α(X) + H_α(Y).   (24)
This property is called additivity.
III.2 Preliminaries
III.2.1 Physical setup
The physical setup used throughout the remainder of this section is as follows. We consider a quantum system, A, that is measured in one of two (or more) bases. The initial state of the system is represented by a density operator, ρ_A, or, more formally, a positive semidefinite operator with unit trace acting on a finite-dimensional Hilbert space H_A. The measurements, for now, are given by two orthonormal bases of H_A. An orthonormal basis is a set of unit vectors in H_A that are mutually orthogonal and span the space. The two bases are denoted by sets of rank-one projectors,

𝕏 = {|X^x⟩⟨X^x|}_x and ℤ = {|Z^z⟩⟨Z^z|}_z.   (25)

We use projectors to keep the notation consistent, as we will later consider more general measurements. This induces two random variables, X and Z, corresponding to the measurement outcomes that result from measuring in the bases 𝕏 and ℤ, respectively. These are governed by the following probability laws, given by the Born rule. We have

P_X(x) = ⟨X^x|ρ_A|X^x⟩ and P_Z(z) = ⟨Z^z|ρ_A|Z^z⟩,   (26)

respectively. We also note that x, z ∈ {1, …, d}, where d is the dimension of the Hilbert space H_A.
III.2.2 Mutually unbiased bases (MUBs)
Before delving into uncertainty relations, let us consider pairs of observables such that perfect knowledge about one observable implies complete ignorance about the other. We say that such observables are unbiased, or mutually unbiased. For any finite-dimensional space there exist pairs of orthonormal bases that satisfy this property. More precisely, two orthonormal bases {|X^x⟩}_x and {|Z^z⟩}_z are mutually unbiased bases (MUBs) if

|⟨X^x|Z^z⟩|² = 1/d for all x and z.   (27)

In addition, a set of orthonormal bases is said to be a set of MUBs if each basis in the set is mutually unbiased to every other basis in the set.
Example 7.
For a qubit the eigenvectors of the Pauli operators σ_X, σ_Y, and σ_Z,

{ (|0⟩ + |1⟩)/√2, (|0⟩ − |1⟩)/√2 },   (28)

{ (|0⟩ + i|1⟩)/√2, (|0⟩ − i|1⟩)/√2 },   (29)

{ |0⟩, |1⟩ },   (30)

form a set of 3 MUBs.
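The MUB condition (27) for these three bases can be verified directly. A minimal Python sketch (the helper `mutually_unbiased` is ours):

```python
import numpy as np

s = 1 / np.sqrt(2)
Z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
X_basis = [np.array([s, s], dtype=complex), np.array([s, -s], dtype=complex)]
Y_basis = [np.array([s, 1j * s]), np.array([s, -1j * s])]

def mutually_unbiased(B1, B2, d=2):
    """Check the MUB condition |<b1|b2>|^2 = 1/d for all pairs of vectors."""
    return all(abs(abs(np.vdot(b1, b2)) ** 2 - 1 / d) < 1e-12
               for b1 in B1 for b2 in B2)
```

All three pairings pass the check, while a basis is of course not unbiased with respect to itself.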
III.3 Measuring in two orthonormal bases
III.3.1 Shannon entropy
Based on the pioneering work of Deutsch (1983) and following a conjecture of Kraus (1987), Maassen and Uffink (1988) formulated entropic uncertainty relations for measurements of two complementary observables. Their best-known relation uses the Shannon entropy to quantify uncertainty. It states that, for any state ρ_A,

H(X) + H(Z) ≥ log(1/c) =: q_MU,   (31)

where the measure of incompatibility q_MU is a function of the maximum overlap of the two measurements, namely

c := max_{x,z} |⟨X^x|Z^z⟩|².   (32)

Note that c is state-independent, i.e., independent of the initial state ρ_A. This is in contrast to Robertson's bound in (2).
The bound q_MU is nontrivial as long as the bases 𝕏 and ℤ do not have any vectors in common, i.e., as long as c < 1. In this case, (31) shows that for any input density matrix there is some uncertainty in at least one of the two random variables X and Z, quantified by the Shannon entropies H(X) and H(Z), respectively. In general we have

1/d ≤ c ≤ 1.   (33)

For the extreme case that 𝕏 and ℤ are MUBs, as defined in (27), the overlap matrix [ |⟨X^x|Z^z⟩|² ]_{x,z} is flat: |⟨X^x|Z^z⟩|² = 1/d for all x and z, and the lower bound on the uncertainty then becomes maximal,

H(X) + H(Z) ≥ log d.   (34)

Note that this is a necessary and sufficient condition: q_MU = log d if and only if the two bases are MUBs. Hence, MUBs uniquely give the strongest uncertainty bound here.
For general observables the overlap matrix is not necessarily flat, and the flatness of the matrix elements is quantified in (32) by taking the maximum over all x and z. To see why the maximum entry provides some (fairly coarse) measure of the flatness of the whole matrix, note that each column of the overlap matrix sums to one, so if the maximum entry of the overlap matrix is 1/d, then all d entries in each column must equal 1/d. Alternative measures of incompatibility will be discussed in Secs. III.3.5 and III.3.6.
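The Maassen–Uffink relation itself is easy to probe numerically. The following Python sketch samples random pure qubit states and checks that the entropy sum never drops below log(1/c) for the MUB pair formed by the computational and diagonal bases (all helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def shannon(p):
    p = p[p > 1e-15]
    return float(-(p * np.log2(p)).sum())

def probs(basis, rho):
    """Born-rule outcome distribution for a rank-one projective measurement."""
    return np.array([np.vdot(b, rho @ b).real for b in basis])

def random_pure_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

s = 1 / np.sqrt(2)
Xb = [np.array([1, 0], complex), np.array([0, 1], complex)]
Zb = [np.array([s, s], complex), np.array([s, -s], complex)]

c = max(abs(np.vdot(x, z)) ** 2 for x in Xb for z in Zb)  # = 1/2 for MUBs
violations = sum(
    shannon(probs(Xb, r)) + shannon(probs(Zb, r)) < np.log2(1 / c) - 1e-9
    for r in (random_pure_qubit() for _ in range(200))
)
```

No sampled state violates the bound, consistent with (31) and the MUB value log(1/c) = 1 bit.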
III.3.2 Rényi entropies
Maassen and Uffink (1988) also showed that the above relation (31) holds more generally in terms of Rényi entropies. For any α, β ≥ 1/2 with 1/α + 1/β = 2, we have

H_α(X) + H_β(Z) ≥ log(1/c).   (35)

It is easily checked that the relation (31) in terms of the Shannon entropy is recovered for α = β = 1. For α → ∞ and β → 1/2 we get another interesting special case of (35) in terms of the min- and max-entropy,

H_min(X) + H_max(Z) ≥ log(1/c).   (36)

Since the min-entropy characterizes the probability of correctly guessing the outcome X, it is this type of relation that becomes most useful for applications in quantum cryptography and quantum information theory (see Sec. VI).
III.3.3 Proof of Maassen–Uffink
The original proof of (35) by Maassen and Uffink makes use of the Riesz–Thorin interpolation theorem (see, e.g., Bergh and Löfström (1976)). Recently, an alternative proof was formulated by Coles et al. (2011, 2012) using the monotonicity of the relative entropy under quantum channels. The latter approach is illustrated in App. B, where we prove the special case of the Shannon entropy relation (31). The proof is simple and instructive, and we highly recommend that the interested reader study App. B. The Rényi entropy relation (35) follows from a more general line of argument given in App. C.3.
III.3.4 Tightness and extensions
Given the simple and appealing form of the Maassen–Uffink relations (35), a natural question to ask is how tight these relations are. It is easily seen that if 𝕏 and ℤ are MUBs, then the relations are tight for any of the basis states |X^x⟩ or |Z^z⟩. Thus, there cannot exist a better state-independent bound if 𝕏 and ℤ are MUBs. However, for general orthonormal bases the relations (35) are not necessarily tight. This issue is addressed in the following subsections, where we also note that (31) can be tightened for mixed states with a state-dependent bound.
III.3.5 Tighter bounds for qubits
Various attempts have been made to strengthen the Maassen–Uffink bound, particularly in the Shannon-entropy form (31). Let us begin by discussing improvements upon (31) in the qubit case and then move on to arbitrary dimensions.
For qubits the situation is fairly simple, since the overlap matrix only depends on a single parameter, which we can take to be the maximum overlap c ∈ [1/2, 1]. Hence, the goal is to find the largest function of c that still lower-bounds the entropic sum. Significant progress along these lines was made by Sánchez-Ruiz (1998), who noted that the Maassen–Uffink bound, q_MU = log(1/c), could be replaced by the stronger bound

q_SR := h( (1 + √(2c − 1)) / 2 ).   (37)

Here, h(p) := −p log p − (1 − p) log(1 − p) denotes the binary entropy.
Later work by Ghirardi et al. (2003) attempted to find the optimal bound. They simplified the problem to a single-parameter optimization as
(38) 
While it is straightforward to perform this optimization numerically, Ghirardi et al. (2003) noted that an analytical solution could only be found for sufficiently large overlap c. They showed that this analytical bound is given by
(39)  
(40) 
Fig. 6 shows a plot of q_MU, q_SR, and the bound of Ghirardi et al. In addition, this plot also shows the bound obtained from a majorization technique discussed in Sec. III.9.
III.3.6 Tighter bounds in arbitrary dimension
Extending the qubit result from (38), de Vicente and Sánchez-Ruiz (2008) found an analytical bound in the large-overlap (i.e., large c) regime,
(41) 
which is stronger than the MU bound over this range, and they also obtained a numerical improvement over the MU bound for the remaining range of c.
However, the situation for d > 2 is more complicated than the qubit case, since the overlap matrix then depends on more parameters than simply the maximum overlap c. Recent work has focused on exploiting these other overlaps to improve upon the MU bound. For example, Coles and Piani (2014b) derived a simple improvement on q_MU that captures the role of the second-largest entry of the overlap matrix, denoted c₂, with the bound

q_CP := log(1/c) + (1/2)(1 − √c) log(c/c₂) ≥ q_MU.   (42)
Consider the following qutrit example, where q_CP improves upon q_MU.
Example 8.
Let d = 3 and consider the two orthonormal bases 𝕏 and ℤ related by the unitary transformation,
(43) 
For this choice, q_CP is strictly larger than q_MU.
More recently, a bound similar in spirit to q_CP was obtained by Rudnicki et al. (2014), of the form
(44) 
Note that this bound also improves upon q_MU. However, there is no clear relation between it and q_CP.
III.3.7 Tighter bounds for mixed states
Notice that (31) can be quite loose for mixed states. For example, if ρ_A = 𝟙/d is maximally mixed, then the left-hand side of (31) is 2 log d, whereas the right-hand side is at most log d. This looseness can be addressed by introducing a state-dependent bound that gets larger as ρ_A becomes more mixed. The mixedness of ρ_A can be quantified by the von Neumann entropy, which we denote by H(ρ_A), defined by

H(ρ_A) := −Tr[ρ_A log ρ_A] = −Σ_x λ_x log λ_x,   (45)

where an eigenvalue decomposition of the state is given by ρ_A = Σ_x λ_x |λ_x⟩⟨λ_x|. Note that 0 ≤ H(ρ_A) ≤ log d, where H(ρ_A) = 0 for pure states and H(ρ_A) = log d for maximally mixed states. In the literature, the von Neumann entropy is sometimes also denoted by S(ρ_A). However, here we follow the more common convention in quantum information theory. We note that the entropy never decreases when applying a projective measurement to ρ_A, that is,

H(ρ_A) ≤ H(X) and H(ρ_A) ≤ H(Z).   (46)
Equation (31) was strengthened for mixed states by Berta et al. (2010), with the bound

H(X) + H(Z) ≥ log(1/c) + H(ρ_A).   (47)

A proof of (47) is given in App. B; see also Frank and Lieb (2012) for a direct matrix analysis proof. When 𝕏 and ℤ are MUBs, this bound is tight for any state ρ_A that is diagonal in either the 𝕏 or the ℤ basis.
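The mixed-state strengthening (47) can likewise be spot-checked numerically. A Python sketch for the qubit MUB pair, where log(1/c) = 1 bit (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def shannon(p):
    p = p[p > 1e-15]
    return float(-(p * np.log2(p)).sum())

def vn_entropy(rho):
    """Von Neumann entropy in bits, via the eigenvalue distribution."""
    ev = np.linalg.eigvalsh(rho)
    return shannon(ev[ev > 1e-15])

def random_mixed_qubit():
    g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = g @ g.conj().T          # positive semidefinite by construction
    return rho / np.trace(rho).real

def meas_probs(basis, rho):
    return np.array([np.vdot(b, rho @ b).real for b in basis])

s = 1 / np.sqrt(2)
Xb = [np.array([1, 0], complex), np.array([0, 1], complex)]
Zb = [np.array([s, s], complex), np.array([s, -s], complex)]
log_inv_c = 1.0  # MUBs in d = 2: c = 1/2

ok = all(
    shannon(meas_probs(Xb, r)) + shannon(meas_probs(Zb, r))
    >= log_inv_c + vn_entropy(r) - 1e-9
    for r in (random_mixed_qubit() for _ in range(200))
)
```

Every sampled mixed state satisfies the entropy-boosted bound.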
III.4 Arbitrary measurements
Many interesting measurements are not of the orthonormal basis form. For example, coarse-grained (degenerate) projective measurements are relevant for probing macroscopic systems. Also, there are measurements that are informationally complete, in the sense that their statistics allow one to reconstruct the density operator.
The most general description of a measurement in quantum mechanics is a positive operator-valued measure (POVM). A POVM on a system A is a set of positive semidefinite operators {M_x}_x that sum to the identity, Σ_x M_x = 𝟙. The number of POVM elements in the set can be much larger or much smaller than the Hilbert space dimension of the system. Physically, a POVM can be implemented as a projective measurement on an enlarged Hilbert space, e.g., as a joint measurement on the system of interest and an ancilla system.
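As an illustration, the following Python sketch checks the two defining POVM properties and evaluates the Born rule for the three-outcome "trine" POVM on a qubit, chosen here purely as an example (it is not taken from the text):

```python
import numpy as np

def is_povm(elements, d, tol=1e-12):
    """Check each element is positive semidefinite and all sum to the identity."""
    psd = all(np.linalg.eigvalsh(M).min() > -tol for M in elements)
    return psd and np.allclose(sum(elements), np.eye(d), atol=tol)

def trine():
    """Trine POVM: three sub-normalized rank-one elements (2/3)|phi_k><phi_k|,
    with the |phi_k> spaced 120 degrees apart in the real plane."""
    els = []
    for k in range(3):
        v = np.array([np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3)],
                     dtype=complex)
        els.append(2 / 3 * np.outer(v, v.conj()))
    return els

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # the state |0><0|
born = np.array([np.trace(M @ rho).real for M in trine()])
```

Note the number of outcomes (three) exceeds the Hilbert space dimension (two), which is impossible for orthonormal-basis measurements; the Born probabilities still sum to one.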
For two POVMs 𝕏 = {X_x}_x and ℤ = {Z_z}_z, the Born rule now induces the distributions

P_X(x) = Tr[X_x ρ_A] and P_Z(z) = Tr[Z_z ρ_A].   (48)
Krishna and Parthasarathy (2002) proposed an incompatibility measure for POVMs using the operator norm. Namely, they considered

c := max_{x,z} ‖√X_x √Z_z‖²_∞,   (49)

where ‖·‖_∞ denotes the operator norm (i.e., the maximal singular value). Using this measure they generalized (31) to the case of POVMs. That is, we still have

H(X) + H(Z) ≥ log(1/c),   (50)

but now using the generalized version of c in (49). More recently, Tomamichel (2012) noted that an alternative generalization to POVMs is obtained by replacing c with
(51) 
and the author conjectured that c′ always provides a stronger bound than c.
Indeed, this conjecture was proved by Coles and Piani (2014b):

c′ ≤ c.   (52)

Hence, log(1/c′) ≥ log(1/c), implying that c′ provides a stronger bound on entropic uncertainty than c.
Example 9.
Consider two POVMs given by
(53) 
For these POVMs, c′ is strictly smaller than c.
Interestingly, a general POVM can have a nontrivial uncertainty relation on its own. That is, for some POVM 𝕏, there may not exist any state with H(X) = 0. Krishna and Parthasarathy (2002) noted this and derived the single-POVM uncertainty relation

H(X) ≥ log( 1 / max_x ‖X_x‖_∞ ).   (54)

In fact, the proof is straightforward: simply apply (50) to the case where ℤ = {𝟙} is the trivial POVM. The relation (54) can be further strengthened by applying this approach to c′ in (51), instead of c.
III.5 State-dependent measures of incompatibility
In most of the uncertainty relations we have encountered so far, the measure of incompatibility, for example the overlap c, is a function of the measurements employed but is independent of the quantum state prior to measurement. The sole exception is the strengthened Maassen–Uffink relation in (47), where the lower bound is the sum of an ordinary, state-independent measure of incompatibility and the entropy of the state ρ_A. In the following, we review some uncertainty relations that use measures of incompatibility that are state-dependent.
It was shown by Tomamichel and Hänggi (2013) that the Maassen–Uffink relation (31) also holds when the overlap c is replaced by an effective overlap, denoted c*. Informally, c* is given by the average overlap of the two measurements on different subspaces of the Hilbert space, averaged over the probability of finding the state in each subspace. We refer the reader to the paper mentioned above for a formal definition of c*. Here, we discuss a simple example showing that state-dependent uncertainty relations can be significantly tighter.
Example 10.
Let us apply one out of two projective measurements, either in the orthonormal basis {|0⟩, |1⟩, |2⟩} or in the basis

{|+⟩, |−⟩, |2⟩},   (55)

where the diagonal states are |±⟩ = (|0⟩ ± |1⟩)/√2, on a state ρ_A which has the property that '2' is measured with probability at most ε. The Maassen–Uffink relation (31) gives a trivial bound, as the overlap of the two bases is c = 1 due to the vector |2⟩ that appears in both bases. Still, our intuitive understanding is that the uncertainty about the measurement outcome is high as long as ε is small. The effective overlap of Tomamichel and Hänggi (2013) captures this intuition:

c* ≤ (1 − ε)·(1/2) + ε·1.   (56)

This formula can be interpreted as follows: with probability 1 − ε we are in the subspace spanned by |0⟩ and |1⟩, where the overlap is 1/2, and with probability ε we measure '2' and have full overlap.
An alternative approach to state-dependent uncertainty relations was introduced by Coles and Piani (2014b). They showed that the factor log(1/c) in the Maassen–Uffink relation (31) can be replaced by the state-dependent factor

q(ρ_A) := Σ_z P_Z(z) log(1/c_z),   (57)

c_z := max_x |⟨X^x|Z^z⟩|²,   (58)

and q′(ρ_A) is defined analogously to q(ρ_A), but with 𝕏 and ℤ interchanged. Here, P_Z(z) and c are given by (26) and (32), respectively. Note that this strengthens the Maassen–Uffink bound, q(ρ_A) ≥ log(1/c), since averaging log(1/c_z) over all z is larger than minimizing it over all z. In many cases the state-dependent bound is significantly stronger than q_MU.
Recently, Kaniewski et al. (2014) derived entropic uncertainty relations in terms of the effective anticommutator of arbitrary binary POVMs 𝕏 = {X_0, X_1} and ℤ = {Z_0, Z_1}. Namely, they consider the quantity

ε* := (1/2) |Tr[ρ_A {A, B}]|,   (59)

where A = X_0 − X_1 and B = Z_0 − Z_1 are the binary observables corresponding to the POVMs 𝕏 and ℤ, respectively. In (59), we use the notation {A, B} = AB + BA to denote the anticommutator. We note that ε* ∈ [0, 1]. This results, for example, in the following uncertainty relation for the Shannon entropy:

H(X) + H(Z) ≥ h( (1 + √ε*) / 2 ).   (60)

We refer the reader to Kaniewski et al. (2014) for similar uncertainty relations in terms of Rényi entropies, as well as extensions to more than two measurements. Finally, for projective measurements acting on qubits we find that ε* = 2c − 1, and (60) hence reduces to the Sánchez-Ruiz bound (37).
III.6 Relation to guessing games
Let us now explain in detail how some of the relations above can be interpreted in terms of a guessing game. We elaborate on the brief discussion of guessing games in Sec. I, and we refer the reader back to Fig. 2 for an illustration of the game.
The game is as follows. Suppose that Bob prepares system A in a state ρ_A of his choosing. He then sends A to Alice, who randomly performs either the 𝕏 or the ℤ measurement. The measurement outcome is denoted K, and Bob's task is to guess K, given the basis choice, denoted Θ, that he receives from Alice.
We can rewrite the Maassen–Uffink relation (31) in the following way, such that the connection to the above guessing game becomes transparent. Denote the standard basis of the d-dimensional Hilbert space by {|k⟩}_k, and let U_𝕏 and U_ℤ respectively be the unitaries that map this basis to the 𝕏 and ℤ bases, i.e.,

|X^k⟩ = U_𝕏|k⟩ and |Z^k⟩ = U_ℤ|k⟩.   (61)
Then, we have

H(K|Θ = 𝕏) + H(K|Θ = ℤ) ≥ log(1/c),   (62)

with the conditional probability distribution

P_{K|Θ}(k|𝕏) = ⟨k| U_𝕏† ρ_A U_𝕏 |k⟩,   (63)

and similarly for Θ = ℤ.
and similarly for . Alternatively we can also write this as
(64) 
in terms of the conditional Shannon entropy

H(K|Θ) := Σ_θ P_Θ(θ) H(K|Θ = θ)   (65)
        = (1/2) [ H(K|Θ = 𝕏) + H(K|Θ = ℤ) ]   (66)

of the bipartite distribution

P_{KΘ}(k, θ) = (1/2) P_{K|Θ}(k|θ).   (67)
That is, each measurement labeled θ ∈ {𝕏, ℤ} is chosen with equal probability 1/2, and we condition on this choice. Notice that the form in (64) is connected to the guessing game in Fig. 2. Regardless of the state that Bob prepares, the uncertainty relation (64) implies that he will not be able to guess K perfectly whenever c < 1. In this sense, the Maassen–Uffink relation is a fundamental constraint on one's ability to win the guessing game.
Actually, in the context of guessing games, the min-entropy is more operationally relevant than the Shannon entropy. For example, a diligent reading of Deutsch (1983) reveals the relation

H_min(X) + H_min(Z) ≥ q_D   (68)

for orthonormal bases 𝕏 and ℤ, where q_D is defined in (40). This relation gives an upper bound on the product of the guessing probabilities (or, equivalently, a lower bound on the sum of the min-entropies) associated with X and Z. However, to make a more explicit connection to the guessing game described above, one would like to upper-bound the sum (or average) of the guessing probabilities, namely the quantity

p_guess(K|Θ) := (1/2) [ p_guess(X) + p_guess(Z) ].   (69)
Indeed, the quantity (69) can easily be upper-bounded as Schaffner (2007)

p_guess(K|Θ) ≤ (1/2) (1 + √c),   (70)

or, equivalently, in terms of the min-entropy,

H_min(K|Θ) ≥ log( 2 / (1 + √c) ).   (71)
Example 11.
For the Pauli qubit measurements σ_X and σ_Z we have c = 1/2, so the min-entropy uncertainty relation (71) becomes

H_min(K|Θ) ≥ log( 2 / (1 + 1/√2) ) ≈ 0.23, i.e., p_guess(K|Θ) ≤ (1/2)(1 + 1/√2) ≈ 0.85.   (72)
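The guessing-probability bound can be probed numerically: sampling many qubit states never beats a winning probability of about 0.85 for the computational/diagonal basis pair. A Python sketch (helper names are ours, and the bound (1 + √c)/2 reflects our reconstruction of (70)):

```python
import numpy as np

rng = np.random.default_rng(2)

s = 1 / np.sqrt(2)
Xb = [np.array([1, 0], complex), np.array([0, 1], complex)]
Zb = [np.array([s, s], complex), np.array([s, -s], complex)]

def p_guess(basis, rho):
    """Optimal guessing probability: bet on the most likely outcome."""
    return max(np.vdot(b, rho @ b).real for b in basis)

def random_pure_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

bound = (1 + np.sqrt(0.5)) / 2  # (1 + sqrt(c))/2 with c = 1/2
best = max(
    0.5 * (p_guess(Xb, r) + p_guess(Zb, r))
    for r in (random_pure_qubit() for _ in range(2000))
)
```

Even the best of 2000 randomly prepared states stays below the bound of roughly 0.854.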
We emphasize that p_guess(K|Θ) is precisely the probability of winning the game described in Fig. 2. Hence, the entropic uncertainty relation (71) gives the fundamental limit on winning the game. Finally, we remark that (71) is stronger than Deutsch's relation (68), due to the following argument. For the min-entropy, conditioning on the measurement choice is defined as

H_min(K|Θ) := −log p_guess(K|Θ) = −log( (1/2) Σ_θ 2^{−H_min(K|Θ = θ)} ),   (73)

in contrast to the Shannon entropy in (65). However, in analogy to (66), we have

H_min(K|Θ) ≤ (1/2) [ H_min(X) + H_min(Z) ],   (74)

due to the concavity of the logarithm. For a general discussion of conditional entropies we point to Sec. IV.2.
III.7 Multiple measurements
So far we have only considered entropic uncertainty relations quantifying the complementarity of two measurements. However, there is no fundamental reason to restrict to this setup, and in the following we discuss the more general case of n measurements. We mostly focus on special sets of measurements that generate strong uncertainty relations. This is of particular interest for various applications in quantum cryptography (see Sec. VI.3).
The notation introduced for guessing games in Sec. III.6 is particularly useful in the multiple-measurement setting. In this notation, for larger sets of measurements {𝕏_1, …, 𝕏_n} we are interested in finding lower bounds of the form

H(K|Θ) ≥ q(𝕏_1, …, 𝕏_n),   (75)

where, similarly to (65),

H(K|Θ) = (1/n) Σ_{θ=1}^{n} H(K|Θ = θ).   (76)
Again, the left-hand side of (75) might alternatively be written as

(1/n) Σ_{θ=1}^{n} H(X_θ),   (77)

where X_θ denotes the outcome of the θ-th measurement and the conditional probability distribution is defined analogously to (63).
III.7.1 Bounds implied by two measurements
It is important to realize that, e.g., the Maassen–Uffink relation (31) already implies bounds for larger sets of measurements. This is easily seen by applying (31) to all possible pairs of measurements and adding the corresponding lower bounds.
Example 12.
For the three qubit Pauli measurements we find, by an iterative application of the tightened Maassen–Uffink bound (47) to the measurement pairs (σ_X, σ_Y), (σ_X, σ_Z), and (σ_Y, σ_Z), that

H(X) + H(Y) + H(Z) ≥ (3/2) [ 1 + H(ρ_A) ].   (78)
The goal of this section is to find uncertainty relations that are stronger than any bounds that can be derived directly from relations for two measurements.
III.7.2 Complete sets of MUBs
A promising candidate for deriving strong uncertainty relations are complete sets of MUBs, i.e., sets of d + 1 MUBs (which we only know to exist in certain dimensions; see Appendix A for elaboration). Consider the qubit case in the following example.
Example 13.
For the three qubit Pauli measurements, we have from Sánchez-Ruiz (1995, 1998) that

H(X) + H(Y) + H(Z) ≥ 2.   (79)

Moreover, from Coles et al. (2011) we can add an entropy-dependent term on the right-hand side,

H(X) + H(Y) + H(Z) ≥ 2 + H(ρ_A).   (80)

Note that (80) is never a worse bound than (78), which just followed from the tightened Maassen–Uffink relation for two measurements (47). Moreover, the relation (79) becomes an equality for any eigenstate of the Pauli measurements, while (80) becomes an equality for any state that is diagonal in the eigenbasis of one of the Pauli measurements.
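Assuming the reconstruction of (79) above, i.e., that the three Pauli entropies sum to at least 2 bits, a numerical spot-check over random pure states is straightforward (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def shannon(p):
    p = p[p > 1e-15]
    return float(-(p * np.log2(p)).sum())

s = 1 / np.sqrt(2)
bases = {
    "Z": [np.array([1, 0], complex), np.array([0, 1], complex)],
    "X": [np.array([s, s], complex), np.array([s, -s], complex)],
    "Y": [np.array([s, 1j * s]), np.array([s, -1j * s])],
}

def entropy_sum(rho):
    """Sum of the Shannon entropies over the three Pauli eigenbases."""
    return sum(
        shannon(np.array([np.vdot(b, rho @ b).real for b in B]))
        for B in bases.values()
    )

def random_pure_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

worst = min(entropy_sum(random_pure_qubit()) for _ in range(500))
eigenstate = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
```

The worst case over 500 random pure states stays at or above 2 bits, and a σ_Z eigenstate saturates the bound, in line with the equality condition stated above.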
More generally, for a full set of d + 1 MUBs in dimension d, Larsen (1990); Ivanovic (1992); Sánchez-Ruiz (1993) showed that

H(K|Θ) ≥ log( (d + 1) / 2 ).   (81)

This is a strong bound, since the entropic term on the left-hand side can become at most log d for any number and choice of measurements. The relation (81) can be derived from an uncertainty equality for the collision entropy H_2. Namely, for any quantum state ρ_A on a d-dimensional system and a full set of d + 1 MUBs, we have Ivanovic (1992); Brukner and Zeilinger (1999); Ballester and Wehner (2007)

H_2(K|Θ) = log( (d + 1) / (1 + Tr[ρ_A²]) ),   (82)
where for the collision entropy the conditioning on the measurement choice is defined as