# Nexus of Information and Computation Theories: Fundamental Inequalities and Lower Bounds Theme (February 15–26, 2016)

### Titles, Abstracts, and Videos

Salman Avestimehr (USC)   (Video)
Coded MapReduce

Abstract: We discuss the fundamental tradeoff between the “communication load” and the “computation load” in distributed computing.

Paul Beame (University of Washington)   (Video: Part 1, Part 2, Part 3)
Branching Programs and Lower Bounds

Abstract: Branching programs are clean and simple non-uniform models of computation that capture both time and space simultaneously. We present the best methods known for obtaining lower bounds on the size of (length-restricted) branching programs and the applications of these methods to derive general time-space tradeoff lower bounds for specific natural problems.

Nigel Boston (University of Wisconsin)   (Video)
Ingleton-violating Points in the Entropy Region

Abstract: After recalling the entropy region and Ingleton inequalities, I shall describe ways to obtain points in the entropy region for 4 discrete random variables that violate this inequality and describe measures of how far the inequality is violated.
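For reference, the Ingleton inequality for four jointly distributed random variables can be written in mutual-information form; every linearly representable entropy vector satisfies it, while the entropic points discussed in the talk do not:

```latex
I(X_1;X_2) \;\le\; I(X_1;X_2 \mid X_3) + I(X_1;X_2 \mid X_4) + I(X_3;X_4).
```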

Terence Chan (University of South Australia)   (Video: Part 1, Part 2, Part 3)
Fundamental aspects of information inequalities

Abstract: Information inequalities are very important tools and are often needed to characterise fundamental limits and bounds in many communications problems such as data transmission and compressions. They are also very closely related to different mathematical areas including matroid theory, group theory, linear algebra, algorithmic complexity, determinantal inequalities etc. In this talk, we will examine some interesting properties about entropies and information inequalities. Our focus will be on the interplay among codes, entropies and groups.

First, we will explore the one-to-one correspondences among information inequalities, inequalities for groups and subspace rank inequalities. Next, we will examine the relations between entropies and codes. By treating a code (either a classical error control code, a secret sharing code, or a network code) as a set of random variables, many properties of a code can be re-interpreted “information-theoretically”. One example is the Generalised Greene Theorem, which says that the weight enumerating polynomial of a code can be determined by the entropies of the codeword symbol random variables. By exploiting the link between codes and entropies, we have extended Delsarte's linear programming bound to a broader class of codes, including secret sharing codes. By constructing random variables (and hence the corresponding code) from subgroups, we obtained a new group inequality. Finally, we will consider their applications in areas of network coding and distributed data storage systems.

Abstract: Patrascu showed a connection between the problem of proving lower bounds on dynamic data structures and proving lower bounds for an intriguing communication problem with advice. He made tempting conjectures about the communication problem that, if true, would yield polynomial lower bounds for dynamic data structures. In this talk, we will show that the communication problem is more delicate than one may believe at first sight. In particular, the strongest form of Patrascu's conjecture turns out to be false!

Joint work with Jeff Edmonds, Faith Ellen and Toniann Pitassi.

Qi Chen (The Chinese University of Hong Kong)   (Video)
Partition-Symmetrical Entropy Functions

Abstract: Let $$N=\{1,\ldots,n\}$$. Let $$p=\{N_1,\ldots,N_t\}$$ be a $$t$$-partition of $$N$$. An entropy function $$h$$ is called $$p$$-symmetrical if for all $$A$$, $$B \subset N$$, $$h(A)=h(B)$$ whenever $$|A \cap N_i|=|B \cap N_i|$$, $$i=1,\ldots,t$$. We prove that the closure of the set of $$p$$-symmetrical entropy functions is completely characterized by Shannon-type information inequalities if and only if $$p$$ is the $$1$$-partition or a $$2$$-partition with one of its blocks being a singleton. The characterization of the partition-symmetrical entropy functions can be useful for solving some information theory and related problems where symmetry exists in the structure of the problems.
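As a small illustration (not from the talk), subset entropies can be computed directly from a joint pmf. For $$n$$ i.i.d. fair bits, $$h(A)=|A|$$ for every subset $$A$$, so the entropy function is symmetrical under the $$1$$-partition:

```python
import itertools
import math

def subset_entropy(pmf, A):
    """h(A): Shannon entropy (in bits) of the subvector indexed by A,
    computed from a joint pmf mapping outcome tuples to probabilities."""
    marginal = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in A)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marginal.values() if p > 0)

# Joint pmf of n = 3 i.i.d. fair bits: h(A) = |A| for every nonempty A,
# so h(A) = h(B) whenever |A| = |B| (symmetry under the 1-partition).
n = 3
pmf = {bits: 0.5 ** n for bits in itertools.product((0, 1), repeat=n)}
for r in range(1, n + 1):
    for A in itertools.combinations(range(n), r):
        assert abs(subset_entropy(pmf, A) - r) < 1e-9
```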

Fan Cheng (National University of Singapore)   (Video)
Beyond Shannon EPI: a Completely Monotone Conjecture

Abstract: Let $$X$$ be an arbitrary random variable and let $$Z \sim \mathcal{N}(0,1)$$ be an independent Gaussian random variable. The differential entropy $$h(X+\sqrt{t}Z)$$, $$t>0$$, plays a fundamental role in information theory. In Costa's entropy power inequality, $$e^{2h(X+\sqrt{t}Z)}$$ is shown to be concave in $$t$$. Note that, by de Bruijn's identity, the Fisher information $$I(X+\sqrt{t}Z)$$ is (up to a factor of $$1/2$$) the first derivative of $$h(X+\sqrt{t}Z)$$ in $$t$$. In our work, we introduce a conjecture on the signs of all the derivatives of $$I(X+\sqrt{t}Z)$$: $$(-1)^n \frac{\partial^n}{\partial t^n} I(X+\sqrt{t}Z) \geq 0$$ for $$n = 0,1,2,\ldots$$ That is, $$I(X+\sqrt{t}Z)$$ is completely monotone in $$t$$. The conjecture may date back to a problem studied by McKean in mathematical physics in 1966, which remained unknown to information theory until very recently. In this talk, we will introduce the background, the progress, and the implications of the completely monotone conjecture.
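As a sanity check (an illustration, not one of the stated results), the conjecture holds when $$X$$ is itself Gaussian, say $$X \sim \mathcal{N}(0,s)$$, since then $$X+\sqrt{t}Z \sim \mathcal{N}(0,s+t)$$:

```latex
I\bigl(X+\sqrt{t}Z\bigr) = \frac{1}{s+t},
\qquad
(-1)^n \frac{\partial^n}{\partial t^n}\,\frac{1}{s+t}
  = \frac{n!}{(s+t)^{n+1}} \;\geq\; 0 ,
```

so $$I(X+\sqrt{t}Z)$$ is completely monotone in $$t$$ in this case.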

Raphael Clifford (University of Bristol)   (Video: Part 1, Part 2, Part 3)
Unconditional lower bounds for streaming problems

Abstract: It has become possible in recent years to provide unconditional lower bounds on the time needed to perform a number of basic computational operations. I will discuss some of the main techniques involved and show how one in particular, the information transfer method, can be exploited to give unconditional time lower bounds for computation on streaming data.

I will then discuss the main open problem in this area that derives from the fact that existing techniques fail when the arriving stream consists of single bits. I will set out a recently developed method that allows to make some progress in this setting and then introduce the mathematical hurdles we still need to overcome.

László Csirmaz (Central European University, Budapest)   (Video: Part 1, Part 2, Part 3)
Geometry of the entropy region

Abstract: A three-lecture series covering some recent research on the geometry of the entropy region. The lectures will cover:

1. Shannon inequalities; the case of one, two and three variables. Reducing dimensions.
2. The closure of the entropy region is a closed convex cone.
3. Defining distributions: partition, vector, group based distributions. The “ringing bell” distribution.
4. Finding new entropy inequalities: the methods of Zhang-Yeung, Makarychev, Matus; the maximum entropy method. Using ITIP to check Shannon inequalities.
5. Private information, dimension reduction, symmetrization.
6. The case of four random variables. The Ingleton coordinate; structure of $$\Gamma^*_4$$. Generating new inequalities.
7. $$\Gamma^*_4$$ is not polyhedral (Matus' proof).
8. Systematic search for new inequalities: methods, algorithms, results.

Randall Dougherty (Center for Communications Research)   (Video)
Entropy inequalities and linear rank inequalities

Abstract: Entropy inequalities (Shannon and non-Shannon) have been used to obtain bounds on the solutions to a number of problems. When the problems are restricted to the linear case, these bounds can sometimes be improved using linear rank inequalities such as the Ingleton inequality, which hold for entropy vectors coming from linear variables (also known as representable polymatroids) but not necessarily for all entropy vectors. This talk will describe methods for producing linear rank inequalities, or finding counterexamples to putative linear rank inequalities, compare them to the methods for proving non-Shannon entropy inequalities, and show the differences between known results for the entropy region and for its linear analogue (actually analogues, since one has to divide into cases based on the characteristic of the underlying field).

Salim El Rouayheb (Illinois Institute of Technology)   (Video)
Secret Sharing in Distributed Storage Systems

Abstract: Distributed storage systems (DSSs) store large amounts of data and make them accessible online, anywhere and anytime. To protect against data loss, the data in DSSs is stored redundantly, either by replication or by erasure codes, as recently introduced in the Facebook and Microsoft clouds.

In this talk, I want to highlight the tension between security and reliability in DSSs, especially when erasure codes are used to save storage space. This tension is the consequence of the dynamic nature of these systems in which nodes are frequently leaving or joining the system. This leads to a continuous exchange of data in the DSS to maintain the same redundancy level. Unfortunately, this makes the DSS more vulnerable to eavesdropping and malicious attacks. Classical solutions, such as Shamir’s secret sharing, are broken here and cannot achieve security. I will focus on achieving information theoretic security of data in DSSs, and describe on-going efforts for characterizing the fundamental limits of security in DSS. I will present bounds on the secure capacity, i.e., the maximum amount of information that can be stored securely in the system, and describe secure code constructions that can achieve these bounds in certain regimes. I will conclude with a discussion on the numerous open problems and challenges in this area.

Faith Ellen (University of Toronto)
Data Structure Lower Bounds

Abstract: This introductory talk will discuss how asymmetric communication complexity can be used to obtain lower bounds for both static and dynamic data structure problems in the cell probe model.

Anna Gal (University of Texas at Austin)
Lower bound methods for the cell probe complexity of succinct data structures

Abstract: Succinct data structures use redundancy much smaller than the size of the actual data; that is, their space requirement is close to the information-theoretic minimum. In this talk we survey the known methods for proving lower bounds on the cell probe complexity of succinct data structures. We present lower bounds on query time that are superlinear with respect to query size. Such lower bounds cannot be obtained by the communication complexity approach.

Babak Hassibi (Caltech)    (Video)
Simple Algorithms and Guarantees for Low Rank Matrix Completion over $$GF(2)$$

Abstract: Let $$X$$ be an $$n_1$$-by-$$n_2$$ matrix with entries in $$GF(2)$$ and rank $$r<\min(n_1,n_2)$$. We consider the problem of reconstructing $$X$$ given only a subset of its entries. This problem has recently found numerous applications, most notably in network and index coding, where finding optimal linear codes (over some field $$GF(q)$$) can be reduced to finding the minimum rank completion of a matrix with a subset of revealed entries. The problem of matrix completion over the reals also has many applications, and in recent years several polynomial-time algorithms with provable recovery guarantees have been developed. However, to date, such algorithms do not exist in the finite-field case. We propose a linear algebraic algorithm, based on inferring low-weight relations among the rows and columns of $$X$$, to attempt to complete $$X$$ given a random subset of its entries. We establish conditions on the row and column spaces of $$X$$ under which the algorithm runs in polynomial time and can successfully complete $$X$$ with high probability from a vanishing fraction of its entries. We then propose a linear programming-based extension of our basic algorithm, and evaluate it empirically. Our simulations suggest that this linear programming-based method can successfully complete $$n$$-by-$$n$$ random rank-$$r$$ matrices (for $$n<100$$ and $$r<10$$) from at most $$3\log{N_{n,r}}$$ entries, where $$N_{n,r}$$ is the cardinality of the set of $$n$$-by-$$n$$ binary matrices of rank $$r$$.

Joint work with James Saunderson and Maryam Fazel.
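The abstract does not spell out the algorithm itself; as a minimal, hypothetical building block, the sketch below computes the rank of a binary matrix over $$GF(2)$$ by Gaussian elimination (rows encoded as Python integer bit masks), the kind of subroutine needed when testing candidate low-weight linear relations among revealed rows or columns:

```python
def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix whose rows are given as Python
    integers (bit masks); XOR of rows is row addition over GF(2)."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue  # zero row contributes nothing to the rank
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot row
        # Eliminate that bit from every remaining row.
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

# Rows (1,1,1), (1,1,1), (0,1,1): the first two coincide, so rank 2.
print(gf2_rank([0b111, 0b111, 0b011]))  # 2
```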

Yuchong Hu (Huazhong University of Science and Technology)   (Video)
Network Coding for Inter-cloud Storage

Abstract: The inter-cloud (or cloud of clouds) is viewed as the next revolution in the cloud computing paradigm, wherein the computational and data infrastructure for handling scientific, business and enterprise applications spans multi-type and multi-brand clouds; inter-cloud storage is one of its key parts. The high transmission cost among inter-cloud storage nodes hinders the adoption of inter-cloud storage, yet efficient schemes to reduce the bandwidth consumed by its system operations are lacking. Motivated by this, we study inter-cloud storage using the information flow graph and the product-matrix framework as the main modeling methodologies, adopt random linear network coding as the main technique, and address some of its key issues.

Tarik Kaced (Université de Paris-Est, LACL, UPEC)   (Video)
Dual Information Inequalities

Abstract: The duality for entropic matroids and polymatroids is discussed under the lens of information inequalities. We define the dual of an information inequality and ask whether the new dual inequality is a valid information inequality or not. We will argue the case of rank inequalities, Shannon-type inequalities and finally the case of non-Shannon-type inequalities (the most interesting case).

Young-Han Kim (UCSD)   (Video)
Fundamental Inequalities and Capacity Upper Bounds

Abstract: Index coding is a canonical problem in network information theory that provides a simple yet powerful platform to develop new coding techniques and capacity bounds. We discuss upper bounds on multiple-unicast capacity of a general multiple-unicast network through the lens of index coding.

P. Vijay Kumar (Indian Institute of Science, University of Southern California)   (Video)
On Outer Bounds for the Storage-Repair-Bandwidth Tradeoff of Exact-Repair Regenerating Codes

Abstract: It has been known for some time that it is not possible to achieve the storage-repair bandwidth tradeoff of regenerating codes under exact repair. This then gave rise to the question as to whether or not it is possible to asymptotically approach the tradeoff under exact repair. This has since been shown to be not possible through the derivation of tighter outer bounds on the tradeoff under exact repair. We will present an overview of these results.

Kasper Green Larsen (Aarhus University)   (Video: Part 1, Part 2, Part 3)
Data Structure Lower Bounds

Abstract: In this mini-course, we survey the various techniques developed for proving data structure lower bounds. On the dynamic data structures side, we cover the Chronogram technique of Fredman and Saks for proving $$\log(n)/\log\log(n)$$ lower bounds, the Information Transfer technique of Patrascu and Demaine for proving $$\log(n)$$ lower bounds, and the mixture of the Chronogram technique and Cell Sampling technique introduced by Larsen for proving $$(\log(n)/\log\log(n))^2$$ lower bounds. On the static data structures side, we first see Miltersen et al.'s reduction to communication complexity, resulting in $$\log(n)/\log(S)$$ lower bounds. We then see the refined reduction by Patrascu and Thorup for proving $$\log(n)/\log(S\log(n)/n)$$ lower bounds, and finally we see the Cell Sampling technique first introduced by Panigrahy et al. and later refined by Larsen to prove $$\log(n)/\log(S/n)$$ lower bounds.

Mokshay Madiman (University of Delaware)   (Video)
The Stam region, or the differential entropy region for sums of independent random vectors

Abstract: Define the Stam region as the subset of the positive orthant in $$\mathbb{R}^{2^N-1}$$ that arises from considering entropy powers of subset sums of $$N$$ independent random vectors taking values in some Euclidean space. It is shown that the fractionally superadditive set functions give an outer bound for the Stam region, but that the supermodular set functions do not. In addition, various structural properties of the Stam region are explored.

František Matúš (Institute of Information Theory and Automation)   (Video: Part 1, Part 2, Part 3)
Entropy region and convolution

Abstract: The entropy region is constructed from vectors of random variables by collecting Shannon entropies of all subvectors. We will review results on its shape using polymatroidal and matroidal constructions, esp. the convolution. For the region of four random variables, results of computer experiments that approximate the shape will be presented. (joint work with L. Csirmaz)

Frédérique Oggier (Nanyang Technological University)   (Video)
Abelian Group Representability and Fundamental Inequalities

Abstract: Group theory and information theory have unexpectedly met around the concept of entropic vectors. We will discuss the notion of abelian group representability, how it helps in identifying families of groups which produce interesting entropic vectors, and its connection to fundamental inequalities.

Søren Riis (Queen Mary University of London)   (Video: Part 1, Part 2, Part 3)
Finite Dynamical Systems, Causal Networks and Information Inequalities

Abstract: The properties of finite dynamical systems have been investigated in the context of coding theoretic problems, such as network coding and index coding, and in the context of hat guessing games played on graphs. A causal network is an acyclic directed graph that represents the dependency of a collection of stochastic variables. The dependencies are assumed to be unknown, and one important question is to determine local rules that maximise the entropy of the system as a whole. We show that this problem is directly linked to fix-point problems in finite dynamical systems.

Relying heavily on computer calculations we show that there is a deep link between fix points in finite dynamic systems, causal networks and information inequalities.

Benjamin Sach (University of Bristol)   (Video)
Cell-probe bounds for Hamming distance

Abstract: We give a tight cell-probe bound for the time to compute Hamming distance in a stream. The cell probe model is a particularly strong computational model and subsumes, for example, the popular word RAM model. The task is to output the Hamming distance between a fixed $$n$$-dimensional vector and a vector of the $$n$$ most recent values from a stream. One symbol of the stream arrives at a time, and each output must be computed before the next symbol arrives. In this talk we will give a lower bound of $$\Omega((\delta/(w+\log\log n))\log n)$$ time on average per output, where $$\delta$$ is the number of bits needed to represent an input symbol and $$w$$ is the cell or word size (which may be as small as a single bit). We will also argue that this bound is in fact tight within the cell probe model.
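To make the problem concrete, here is a naive baseline (an illustrative sketch, not the talk's construction) that recomputes the distance from scratch in $$O(n)$$ time per arriving symbol; the stated lower bound limits how far any algorithm can improve on this per-output cost:

```python
from collections import deque

def hamming_stream(fixed, stream):
    """After each arriving symbol, output the Hamming distance between
    `fixed` and the window of the len(fixed) most recent stream symbols.
    The window starts filled with zeros (an assumption of this sketch)."""
    n = len(fixed)
    window = deque([0] * n, maxlen=n)  # oldest symbol falls off the left
    outputs = []
    for sym in stream:
        window.append(sym)
        # Naive O(n) recomputation per output.
        outputs.append(sum(a != b for a, b in zip(fixed, window)))
    return outputs

print(hamming_stream([1, 0, 1], [1, 1, 1]))  # [1, 2, 1]
```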

Kenneth Shum (The Chinese University of Hong Kong)   (Video)
Bounds on secure regenerating codes

Abstract: Regenerating codes are coding schemes with applications in distributed storage systems, with the aim of repairing an erased storage node efficiently. For functional-repair regenerating codes, the fundamental tradeoff between the repair bandwidth and the amount of data stored in a storage node is characterized by formulating the problem as a single-source multicast problem in network coding. For exact-repair regenerating codes, the problem of characterizing the tradeoff between repair bandwidth and storage in general remains challenging, and some breakthroughs were obtained recently in small cases. In this talk, we consider exact-repair regenerating codes in the presence of eavesdroppers, who are capable of reading the bits stored in a storage node. A bound on the rate of regenerating codes such that the eavesdroppers can get no information about the data file is obtained. The bound is shown to be tight in some small cases. This is a joint work with Fangwei Ye.

T. S. Jayram (IBM Almaden)   (Video)
A Composition Theorem for Conical Juntas with Applications to Query and Communication Complexity

Abstract: We describe a general method of proving degree lower bounds for conical juntas (nonnegative combinations of conjunctions) that compute recursively defined boolean functions. Such lower bounds are known to carry over to communication complexity. We give two applications:

AND-OR trees: We show a near-optimal $$\Omega(n^{0.753\ldots})$$ randomized communication lower bound for the recursive NAND function (a.k.a. the AND-OR tree). This answers an open question posed by Beame and Lawry (1992, 1993).

Recursive Majority: We show an $$\Omega(2.59^k)$$ randomized communication lower bound for the 3-majority tree of height $$k$$. This improves over the state of the art even in the context of randomized decision tree complexity.

Joint work with Mika Goos (U. Toronto).

Mikkel Thorup (Copenhagen University)
Don't Rush Into a Union: Take Time to Find Your Roots

Abstract: We present a new threshold phenomenon in data structure lower bounds where slightly reduced update times lead to exploding query times. Consider incremental connectivity, letting $$t_u$$ be the time to insert an edge and $$t_q$$ be the query time. For $$t_u = \Theta(t_q)$$, the problem is equivalent to the well-understood union–find problem: $$\mathrm{InsertEdge}(s,t)$$ can be implemented by $$\mathrm{Union}(\mathrm{Find}(s),\mathrm{Find}(t))$$. This gives worst-case time $$t_u = t_q = O(\lg{n}/\lg\lg{n})$$ and amortized $$t_u = t_q = O(\alpha(n))$$. By contrast, we show that if $$t_u = o(\lg{n}/\lg\lg{n})$$, the query time explodes to $$t_q \geq n^{1-o(1)}$$. In other words, if the data structure doesn't have time to find the roots of each disjoint set (tree) during edge insertion, there is no effective way to organize the information! For amortized complexity, we demonstrate a new inverse-Ackermann type trade-off in the regime $$t_u = o(t_q)$$. A similar lower bound is given for fully dynamic connectivity, where an update time of $$o(\lg{n})$$ forces the query time to be $$n^{1-o(1)}$$. This lower bound allows for amortization and Las Vegas randomization, and comes close to the known $$O(\lg{n} \cdot (\lg\lg{n})^{O(1)})$$ upper bound.
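For reference, the union–find implementation of incremental connectivity described above can be sketched as follows (a standard textbook structure; path halving plus union by rank gives the amortized inverse-Ackermann bound):

```python
class UnionFind:
    """Incremental connectivity: InsertEdge(s, t) = Union(Find(s), Find(t)),
    with connectivity queries answered by comparing roots."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point every other node on the path at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def insert_edge(self, s, t):
        rs, rt = self.find(s), self.find(t)
        if rs == rt:
            return  # already connected
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rs] < self.rank[rt]:
            rs, rt = rt, rs
        self.parent[rt] = rs
        if self.rank[rs] == self.rank[rt]:
            self.rank[rs] += 1

    def connected(self, s, t):
        return self.find(s) == self.find(t)
```

The lower bound above says this root-finding work during insertion is essentially unavoidable: spending $$o(\lg{n}/\lg\lg{n})$$ per insertion forces near-linear query time.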

Chao Tian (The University of Tennessee Knoxville)   (Video)
Finding the fundamental limits of information systems: A computational approach

Abstract: Traditionally, information theoretic limits of information systems are identified and proved analytically. As such systems become more sophisticated, this approach becomes rather unwieldy. In this talk, we describe our recent effort in applying a computational approach to accomplish this task. In particular, we will present results obtained using this approach on exact-repair regenerating codes, multilevel diversity coding and caching problems, and also discuss in some depth the techniques used and the remaining challenges.

Suresh Venkatasubramanian (University of Utah)   (Video: Part 1, Part 2, Part 3)
From Pigeons to Fano, and beyond

Abstract: Fano's inequality can be viewed as capturing a deep interplay between information and computation. It links storage, reconstruction and transmission in one inequality, generalizing arguments as simple as the pigeonhole principle, and leading us to inequalities by Assouad and LeCam. In this presentation I'll explain how Fano's inequality gets used to reason about data structures, statistics and inference.

John Walsh (Drexel University)    (Video)
Rate Regions for Network Coding: Computation, Symmetry, and Hierarchy

Abstract: This talk identifies a number of methods and algorithms we have created for determining fundamental rate regions and efficient codes for network coding problems, demonstrates their use via novel software tools we have implemented, then discusses a computationally oriented research agenda we have followed building on this capability. The ability to train a computer to calculate fundamental rate regions for network coding and computing problems, previously determined primarily by hand, inspires several new research directions, both algorithmic and theoretical. First of all, while conceptually well-posed, the process of directly computing certain polyhedral outer and inner bounds to rate regions has a complexity that grows very rapidly in the size of the network. In light of this, in order to enable the computer to solve as many networks and as large networks as possible, we define an appropriate notion of problem minimality, as well as notions of symmetry and equivalence formalized via the language of group actions. Even after accounting for these reductions that exploit the known problem symmetries, only relatively small problems can be solved with the software tools. However, the variety in even these small problems is so great that one can rapidly generate lists of network coding problem equivalence classes far too vast for a human to exhaustively read and interpret. In order to learn from this combinatorial database, and to be able to harness it to understand networks at scale, we develop a structural theory inspired by the notion of graph and matroid minors. In particular, combination and embedding operators and associated theories are created that enable the rate region of a large network, and its properties, to be deduced directly from the small networks it contains.
The embedding operators enable us to learn from the massive database of solved network coding problems by identifying small numbers of minimal forbidden network minors for sufficiency of certain classes of codes to exhaust the capacity region. Additionally, the combinations and embedding operators coupled with the database enable the solution of certain network coding problems of arbitrary size and scale.

Distributed Extremization: Fundamental Limits and Practical Schemes   (Video)

Udi Wieder (VMware Research)   (Video)
How to Approximate a Set without Knowing its Size in Advance

Abstract: The dynamic approximate membership problem asks to represent a set $$S$$ of size $$n$$, whose elements are provided in an on-line fashion, supporting membership queries without false negatives and with a false positive rate at most $$\epsilon$$. That is, the membership algorithm must be correct on each $$x \in S$$, and may err with probability at most $$\epsilon$$ on each $$x \notin S$$. We study a well-motivated, yet insufficiently explored, variant of this problem where the size $$n$$ of the set is not known in advance. Existing optimal approximate membership data structures require that the size is known in advance, but in many practical scenarios this is not a realistic assumption. Moreover, even if the eventual size $$n$$ of the set is known in advance, it is desirable to have the smallest possible space usage also when the current number of inserted elements is smaller than $$n$$. We show a super-linear gap between the space complexity when the size is known in advance and the space complexity when the size is not known in advance. When the size is known in advance, it is well-known that $$\Theta(n\log(1/\epsilon))$$ bits of space are necessary and sufficient. However, when the size is not known in advance, we prove that at least $$n\log(1/\epsilon) + \Omega(n\log\log n)$$ bits of space must be used. In particular, the average number of bits per element must depend on the size of the set. If time allows we will show that this space lower bound is tight, and can be matched by an efficient data structure. The data structure supports membership queries in constant time in the worst case with high probability, and supports insertions in expected amortized constant time.
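For the known-$$n$$ case, a classical structure matching the $$\Theta(n\log(1/\epsilon))$$ bound up to constants is the Bloom filter. A minimal sketch follows, with salted SHA-256 standing in for the hash functions (an implementation choice for this sketch, not from the abstract):

```python
import hashlib
import math

class BloomFilter:
    """Approximate membership for a set of at most n elements, known in
    advance: no false negatives, false positive rate about eps, using
    roughly n * log2(1/eps) * log2(e) bits."""

    def __init__(self, n, eps):
        self.k = max(1, math.ceil(math.log2(1 / eps)))   # hash functions
        self.m = max(8, math.ceil(n * self.k / math.log(2)))  # bit count
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        # Never errs on inserted items; may err on non-members.
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))
```

Note that the construction hard-wires $$n$$: once the set outgrows it, the filter must be rebuilt, which is exactly the knowledge the talk's lower bound shows cannot be dispensed with for free.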

Advances in Characterizing Turnstile Streaming Algorithms as Linear Sketches

Abstract: I will cover recent developments in characterizing the space-optimal turnstile data stream algorithm for computing any relation of an underlying frequency vector as a linear sketch. I'll first discuss the original proof of this result, then several optimizations which have applications to obtaining optimal lower bounds for counting with additive error, dynamic graph matching, and frequency moments, which we do not know how to prove in another way. I'll also mention multi-pass extensions.

Based on joint works with Yuqing Ai, Wei Hu, Yi Li, Huy Nguyen, and Omri Weinstein

Raymond Yeung (The Chinese University of Hong Kong)   (Video: Part 1, Part 2, Part 3)
Shannon's Information Measures and Markov Structures

Abstract: Most studies of finite Markov random fields assume that the underlying probability mass function (pmf) of the random variables is strictly positive. With this assumption, the pmf takes the form of a Gibbs measure which possesses many nice properties.

In general, non-strictly positive pmf’s have very complicated conditional independence structure and are difficult to handle. An alleviation of this difficulty is to use conditional mutual information to characterize conditional mutual independencies. Specifically, for random variables $$X$$, $$Y$$, and $$Z$$, $$X$$ and $$Y$$ are independent conditioning on $$Z$$ if and only if $$I(X;Y|Z) = 0$$, regardless of whether the underlying pmf is strictly positive or not.
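This characterization is easy to check numerically; the sketch below (an illustration, not from the course) computes $$I(X;Y|Z)$$ in bits from an arbitrary finite joint pmf, with no positivity assumption needed:

```python
import math
from collections import defaultdict

def cond_mutual_info(pmf):
    """I(X;Y|Z) in bits from a joint pmf {(x, y, z): p}; zero-probability
    outcomes are simply skipped, so the pmf need not be strictly positive."""
    pz, pxz, pyz = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x, y, z), p in pmf.items():
        pz[z] += p
        pxz[x, z] += p
        pyz[y, z] += p
    return sum(p * math.log2(p * pz[z] / (pxz[x, z] * pyz[y, z]))
               for (x, y, z), p in pmf.items() if p > 0)

# X, Y independent fair bits and Z = X XOR Y: I(X;Y|Z) = 1 bit, since
# conditioning on Z makes Y a function of X (even though I(X;Y) = 0).
pmf = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
print(cond_mutual_info(pmf))  # 1.0
```

For three mutually independent bits the same function returns 0, matching the characterization of conditional independence by $$I(X;Y|Z)=0$$.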

In the 1990’s, the theory of the I-Measure was developed as a full-fledged set-theoretic interpretation of Shannon’s information measures. In this course, we first give an overview of this theory. Then we discuss a set of tools, built on the I-Measure, that are most suitable for studying a special Markov structure called full conditional mutual independence (FCMI), which turns out to be a building block for Markov random fields. One application of these tools is to show that the I-Measure of a Markov chain (a special case of a Markov random field) exhibits a very simple structure and is always nonnegative.

In the last part of the course, we discuss some recent results along this line: i. a characterization of the Markov structure of a subfield of a Markov random field; ii. the Markov chain being the only Markov random field such that the I-Measure is always nonnegative.

Yitong Yin (Nanjing University)   (Video)
Rectangle inequalities for data structure lower bounds

Abstract: The richness lemma is a classic rectangle-based technique for asymmetric communication complexity and cell-probe lower bounds. The technique was enhanced by the Patrascu-Thorup’s direct-sum approach to prove higher cell-probe lower bounds. In this talk, I will give an improved richness lemma which generalizes the previous ones and may support even higher cell-probe lower bounds. Combined with the isoperimetric inequalities, we give an $$\Omega(d)$$ cell-probe lower bound for the nearest neighbor search problem when the data structure is of linear size.

Qin Zhang (Indiana University Bloomington)   (Video)
Lower Bound Techniques for Multiparty Communication Complexity

Abstract: In this talk we will discuss multiparty communication complexity in the $$k$$-site model, where we have $$k$$ machines, each holding a piece of data, and they want to jointly compute some function defined on the union of the $$k$$ data sets via communication. The communication is point-to-point, and the goal is to minimize the total communication between the $$k$$ sites. This model captures all point-to-point distributed computational models. We will give an introduction to the new techniques developed in the last few years for proving communication lower bounds in the $$k$$-site model. We believe that these techniques will have wide applicability.