Discrete Layered Entropy, Conditional Compression and a Tighter Strong Functional Representation Lemma

Cheuk Ting Li (The Chinese University of Hong Kong)

Wed Feb 25, 16:00-17:15

Abstract: Given two pieces of information, represented by random variables X and Y, we can take the "union" of them (the joint random variable), but the natural definition of the difference between them (the information in X but not in Y) is less clear. The conditional entropy H(X|Y) is often interpreted as the amount of information in X but not in Y, but H(X|Y) cannot be regarded as the entropy of a "conditional random variable X|Y"; that is, conditional entropy is not a special case of entropy. In this talk, we discuss a notion of difference motivated by a conditional compression problem, and a quantity Lambda(X) called the discrete layered entropy. Lambda(X) has properties similar to the Shannon entropy H(X) and is within a logarithmic gap of H(X). Moreover, Lambda(X|Y) is indeed the discrete layered entropy of the "conditional random variable X|Y", so conditional discrete layered entropy is a special case of discrete layered entropy (unlike Shannon entropy). These properties make Lambda(X) useful for analyzing conditional compression problems such as channel simulation with common randomness. In particular, it yields a tighter bound in the strong functional representation lemma.
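Background, stated here from Li and El Gamal (2018) rather than from the abstract itself: the strong functional representation lemma says that for any jointly distributed (X, Y) there exists a random variable Z, independent of X, such that Y is a function of (X, Z) and

    H(Y|Z) \le I(X;Y) + \log(I(X;Y) + 1) + 4.

The Lambda-based analysis in the talk tightens this bound. The remark that conditional entropy is not a special case of entropy refers to the standard identity

    H(X|Y) = \sum_y p_Y(y) \, H(X | Y = y),

which averages the entropies of the conditional distributions p_{X|Y=y} over y rather than evaluating the entropy of any one fixed distribution, so there is no single "conditional random variable X|Y" whose Shannon entropy it equals.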

Computer science, Mathematics

Audience: researchers in the discipline


Seminar on Algorithmic Aspects of Information Theory

Series comments: This online seminar is a follow-up to the Dagstuhl Seminar 22301, www.dagstuhl.de/en/program/calendar/semhp/?semnr=22301.

Organizer: Andrei Romashchenko*
*contact for this listing
