Nexus of Information and Computation Theories
Secrecy and Privacy Theme
March 21 – April 1, 2016

Titles, Abstracts, and Videos

Amos Beimel (Ben Gurion University)
Secret Sharing Schemes and Distribution Designs

Abstract: A secret-sharing scheme is a method by which a dealer distributes shares to parties such that only authorized subsets of parties can reconstruct the secret and all other sets get no information on the secret. Secret-sharing schemes are an important tool in cryptography and they are used as a building block in many secure protocols, e.g., general protocols for multiparty computation, Byzantine agreement, threshold cryptography, access control, attribute-based encryption, and generalized oblivious transfer.

In this talk, I will describe the most important constructions of secret-sharing schemes. In particular, I will describe linear secret sharing schemes, which are schemes where the shares are computed using a linear mapping. Most known secret-sharing schemes are linear.
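The canonical example of a linear scheme is Shamir's threshold scheme, where shares are evaluations of a random polynomial over a finite field. The following sketch (function names are ours, and the small illustrative prime is arbitrary) shows the idea, assuming this is the kind of construction covered in the talk:

```python
import random

P = 2**31 - 1  # a Mersenne prime; secret and shares live in GF(P)

def share(secret, k, n):
    """Shamir's k-of-n scheme: shares are evaluations of a random
    degree-(k-1) polynomial whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    k shares; fewer than k shares reveal nothing about it."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Because both sharing and reconstruction are linear maps over the field, this is a linear scheme in the sense described above.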

I will then describe distribution designs – a generalization of secret sharing schemes. The goal of distribution design is to find a joint distribution on n random variables that satisfies a given set of constraints on the marginal distributions.

Matthieu Bloch (Georgia Tech)   (Video)
Secure and covert communications over noisy channels

Abstract: The benefits offered by ubiquitous communication networks are now mitigated by the relative ease with which malicious users can interfere or tamper with sensitive data. The past decade has thus witnessed a growing concern for the issues of privacy, confidentiality, and integrity of communications. In particular, users in a communication network now often wish to communicate without being detected by others.

In this talk, we will present a framework to analyze the fundamental limits of covert communication over noisy channels based on the concepts of source and channel resolvability. Source and channel resolvability are canonical information-theoretic problems that exploit error-control codes as a means to shape the distribution of stochastic processes. This conceptual approach allows us to extend prior work by developing a complete characterization of the fundamental limits of covert communications for point-to-point channels. In particular, we show that, irrespective of the quality of the channels, it is possible to communicate \(O(\sqrt{n})\) reliable and covert bits over \(n\) channel uses if the transmitter and the receiver share a key of size \(O(\sqrt{n})\); this improves upon earlier results requiring a key of size \(O(\sqrt{n}\log n)\) bits. Second, we show that, under certain conditions, it is possible to communicate \(O(\sqrt{n})\) reliable and covert bits over \(n\) channel uses without a secret key; this generalizes an earlier result established for binary symmetric channels.

The main technical problem that we address is how to develop concentration inequalities for “low-weight” sequences; the crux of our approach is to define suitably modified typical sets that are amenable to concentration inequalities. We will also highlight how the conceptual approach allows us to analyze the fundamental limits of covert communication in multi-terminal problems.

Elette Boyle (IDC Herzliya)
Is There an Oblivious RAM Lower Bound?

Abstract: An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e. for every input the observed locations accessed are similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible.

We revisit the lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky. While the lower bound is fairly general, and includes the offline case (where the simulator is given the reads and writes ahead of time), it does assume that the simulator behaves in a “balls and bins” fashion. That is, the simulator must act by shuffling data items around, and is not allowed to use a sophisticated encoding of the data.

We prove that for the offline case, showing a lower bound without the above restriction is related to the size of circuits for sorting. Our proof is constructive, and uses a bit-slicing approach which manipulates the bit representations of data in the simulation. This implies that without first obtaining currently unknown superlinear lower bounds on the size of such circuits, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.

Kamalika Chaudhuri (UC San Diego)   (Video)
Privacy-preserving Analysis of Correlated Data

Abstract: Many modern machine learning applications involve private and sensitive data that are highly correlated. Examples are mining of time series of physical activity measurements, or mining user data from social networks. Unfortunately, differential privacy, the standard notion of privacy in statistical databases, does not apply directly to such data; as a result, there is a need for mechanisms that can still provide meaningful privacy guarantees for correlated data.

In this talk, we consider Pufferfish, a generalization of differential privacy, that applies to correlated data. We provide novel privacy mechanisms for Pufferfish, and establish performance guarantees on them. Finally we look at a case study – analyzing aggregate statistics of a time series of physical activity measurements – with Pufferfish privacy.

Imre Csiszár (Rényi Institute, Budapest)   (Video)
Secret Key Capacities for Source and Channel Models

Abstract: This talk surveys fundamental limits for the problem of generating secret key (SK) for two or several parties, i.e., common randomness concealed from an eavesdropper. The largest rate at which SK can be generated, using resources specified by a given model such as correlated randomness and permissible communication, is the SK capacity of that model.

Attention will be restricted to models in which the eavesdropper can listen to, but not interfere with, the communication of the legitimate parties. The (strong) SK capacity will be established for several two-party and multi-party models of this kind. We focus on models where the providers of correlated randomness are memoryless sources or channels, but give hints also to models involving general sources or channels. Conclusive results are available primarily for models where the adversary has no other information than the communication she observes.

Achievability results are proved via the familiar two-step approach: first, common randomness is generated without requiring secrecy, and in the next ‘‘privacy amplification’’ step a (deterministic) extractor is applied to yield a function of this common randomness that is concealed from the eavesdropper in a strong sense. Converse results are proved via formal manipulations of information quantities, using the Csiszár-Körner sum identity.
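A standard instantiation of the privacy-amplification step extracts key bits by applying a 2-universal hash with a publicly chosen seed to the shared common randomness. The sketch below is illustrative only (function names and the modulus-based hash family are our choices), assuming the leftover-hash-lemma style of extraction described above:

```python
import random

P = 2**61 - 1  # prime larger than the domain of the common randomness

def universal_hash(seed, x, m):
    """A 2-universal family h_{a,b}(x) = ((a*x + b) mod P) mod 2^m.
    Applied to common randomness whose min-entropy exceeds m (plus a
    security margin), the output is close to m uniform secret bits."""
    a, b = seed
    return ((a * x + b) % P) % (2 ** m)

def privacy_amplification(common_randomness, m):
    """Second step of the two-step approach: a hash seed is chosen and
    announced publicly; both legitimate parties apply the extractor to
    their (identical) common randomness to obtain the same key."""
    seed = (random.randrange(1, P), random.randrange(P))  # public
    key = universal_hash(seed, common_randomness, m)
    return seed, key
```

Since the seed is public, secrecy of the key rests entirely on the eavesdropper's uncertainty about the common randomness.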

Main reference: I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, Second Edition, Cambridge University Press, 2011.

Yevgeniy Dodis (NYU)   (Video)
Randomness in Cryptography

Abstract: Unlike many other fields in computer science, randomness is essential for cryptography: secrets must be uncertain to the attacker, and many cryptographic algorithms must be randomized (e.g., two stateless encryptions of the same message must look different). Traditionally, one assumes the existence of perfect randomness. However, this assumption is often unrealistic. In this talk I will survey what is known about basing cryptography on various (realistic) sources of randomness. We will ask the following questions:

1) Does Cryptography need nearly perfect (“extractable”) sources of randomness, or is entropy sufficient?
2) What if the secret key is imperfect but “local” (or public) perfect randomness is available?

As we will see, the answer to the first question is largely negative, while the second question leads to many positive answers, some of which have found applications beyond cryptography.

Cynthia Dwork (Microsoft Research)    (Video)
Differentially Private False Discovery Control

Stefan Dziembowski (University of Warsaw)   (Video)
Modelling Side-Channel Leakage

Abstract: Physical side-channel attacks that exploit leakage emitted by devices are an important threat to cryptographic implementations. A recent trend in cryptography is to construct cryptographic algorithms that are secure in a given leakage model. Over the past 15 years a number of such models have been proposed in the literature, starting with the probing model of Ishai et al. [CRYPTO 2003], where the computation is modelled as a Boolean circuit and the adversary can learn the values on a limited number of its wires. Other models studied in the theory community include the bounded-leakage paradigm [Dziembowski, TCC 2006], [Akavia et al., TCC 2009], the only-computation-leaks model [Micali and Reyzin, TCC 2004], the independent leakage model [Dziembowski and Pietrzak, FOCS 2008], the auxiliary input model [Dodis et al., TCC 2010], and many others.

Some of these models have been received with skepticism by practitioners, who often argued that it is much more realistic to model leakage as a noisy function of the secret data. The first model for such noisy leakage was proposed by Chari et al. [CRYPTO ’99], and fully formalized by Prouff and Rivain [Eurocrypt 2013]. Somewhat surprisingly, Duc, Dziembowski, and Faust [Eurocrypt 2014] have recently shown that the noisy leakage model of Prouff and Rivain can in fact be reduced to the probing model (i.e., every noisy leakage function can be simulated by a probing function), which, in particular, greatly simplifies several proofs in the noisy leakage model, and can be viewed as closing the gap between theory and practice in this area.

In this talk we will give an overview of the leakage models used in the literature and present the reduction from the Duc et al paper. If time permits we will also talk about the follow-up work of Dziembowski, Faust and Skórski [Eurocrypt 2015].

Elza Erkip (Polytechnic Institute of NYU)
Anonymity in Social Networks: Fundamental Bounds and Connections to Community Detection

Abstract: Social networks provide an abundant amount of data for research on population dynamics and for enhancing the user experience, e.g., through individualized advertisements. An important concern when analyzing network data is protecting the privacy of the users. Recent attacks on disclosed network data suggest that keeping the users anonymous is not enough to protect their identities, because other publicly available information can enable successful de-anonymization.

Social networks are known to follow a community structure, where nodes within a community have a higher chance of being connected among themselves than to nodes outside of their community. There has recently been significant interest in understanding the fundamental limits of and designing computationally efficient algorithms for detecting communities in social networks. Community detection problem has implications well beyond social networks, in areas including machine learning, genetics, and ecology, among others.

In the talk, we treat the de-anonymization problem to recover user identities in a social network as a matching problem between two samples of a random graph conforming to a community structure. We formalize the optimal estimate of the attacker and find fundamental bounds on de-anonymizability. The context we introduce allows us to explore the relationship between the problems of de-anonymization and community detection, and suggests a number of interesting future research directions.

Joint work with Efe Onaran and Siddharth Garg

Iftach Haitner (Tel Aviv University)   (Video: Part 1, Part 2, Part 3)
Leo Reyzin (Boston University)
Pseudoentropy and pseudorandomness

Abstract: If you see a cryptographic hash of my password, how can I quantify your uncertainty about the password? Entropy – a traditional measure of uncertainty – is of little help: conditioned on your knowledge of the hash value, the distribution of my passwords has small support, and yet your knowledge should not be of much help to you in guessing my password. Conversely, if you see a cryptographic hash of a long document, how can you quantify my certainty about the document? Entropy is again of little help: there are many possible documents that hash to the same value, and yet you can be fairly certain that I know at most one of them.

Computational analogues of entropy provide answers to these questions and enable a host of cryptographic applications. In fact, the right notion of entropy and a toolbox of lemmas can make for beautifully simple proofs. This talk will survey some of the notions, lemmas, and applications.

Masahito Hayashi (Nagoya University)
Shannon Theoretic Analysis for Classical and Quantum Information Security

Abstract: Generating a secure key from correlated random numbers is a fundamental and important task in information theory. In the i.i.d. setting, it is known that the asymptotically optimal secure key generation rate is the conditional entropy, whether the leaked information is classical or quantum. On the other hand, for source and channel coding problems, there are many results on the exponential decreasing rate of the decoding error probability.

In this talk, we adopt two kinds of security criteria. One is the modified mutual information, defined via the relative entropy between the ideal distribution and the true distribution; we show the validity of this criterion. The other is the L_1 norm criterion, defined as the variational distance between the same two distributions. The former is used in the information theory community and the latter in the cryptography community. As our result, we derive the optimal exponential decreasing rates for both criteria in the i.i.d. case when the leaked information is classical. Under the latter security criterion, the collision entropy plays a more important role than the minimum entropy. Further, in this case, we generalize the modified mutual information, and derive the asymptotically optimal secure key generation rate and the optimal exponential decreasing rates under this generalization.

Finally, we also consider the quantum case, i.e., the case when the leaked information is quantum, and discuss how our results can be partially generalized to it. We also discuss what kinds of hash functions are effective for our purpose, the extension to the Markovian case, and the relation with blind computation.

These results are based on collaborations with Vincent Tan, Shun Watanabe, Toyohiro Tsurumaru, and Tomoyuki Morimae.

Yuval Ishai (Technion)    (Video: Part 1, Part 2, Part 3, Part 4)
Manoj Prabhakaran (University of Illinois)
A minicourse on Secure Multiparty Computation - Part I

Abstract: Secure multiparty computation allows two or more parties to perform a distributed computation on their local inputs while hiding the inputs from each other. This part of the minicourse will give a broad overview of research in the area, covering definitions, known results, connections with other problems, and open questions.
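The simplest illustration of computing on hidden inputs is a secure-sum protocol based on additive secret sharing, a staple of the information-theoretic MPC literature surveyed here. The sketch below (function names ours, semi-honest setting assumed, all arithmetic centralized purely for illustration):

```python
import random

M = 2**32  # arithmetic modulus; all values are taken mod M

def additive_shares(x, n):
    """Split x into n random shares summing to x mod M; any n-1 of
    them are jointly uniform, so they reveal nothing about x."""
    parts = [random.randrange(M) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % M)
    return parts

def secure_sum(inputs):
    """Each party shares its input among all parties; party j adds up
    the shares it received, and the partial sums are then combined
    publicly. Only the total is revealed, not any individual input."""
    n = len(inputs)
    all_shares = [additive_shares(x, n) for x in inputs]
    partial = [sum(all_shares[i][j] for i in range(n)) % M
               for j in range(n)]
    return sum(partial) % M
```

In a real protocol the shares travel over pairwise channels; this centralized simulation only demonstrates why correctness and privacy hold.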

A minicourse on Secure Multiparty Computation - Part II

Abstract: In this part we will cover in greater detail certain information theoretic aspects of secure multiparty computation (MPC). Specifically we will discuss capacity questions related to MPC.

Delaram Kahrobaei (City University of New York)   (Video)
Cryptosystems Based on Group-Theoretic Problems: A Survey, New Results, Open Problems

Abstract: In this talk I will survey some of the cryptosystems based on group-theoretic problems and their computational complexity, such as the Conjugacy, Membership, Endomorphism, Word, Twisted Conjugacy, and Geodesic Length Problems. I will speak about some non-abelian groups that have been proposed as platforms for such cryptosystems: braid, polycyclic, metabelian, Grigorchuk, Thompson, matrix, hyperbolic, small cancellation, right-angled Artin groups, and free nilpotent p-groups. The focus of the talk will be on infinite polycyclic group-based cryptosystems as well as a cryptosystem based on semidirect products of (semi)groups. The latter is joint work with V. Shpilrain. At the end of the talk I will mention open problems related to both the computational complexity of group-theoretic problems and cryptographic problems.

Negar Kiyavash (UIUC)   (Video)
Privacy-preserving Data Analytics on Social Networks: Limits of De-anonymizability

Abstract: Social networks are an integral part of our lives. They provide a significant amount of explicit and implicit information about their users. On one hand, this data can be analyzed and harnessed for a multitude of purposes: conducting polls, market analysis, etc. On the other hand, its release could seriously compromise user privacy. Thus, before its release, the data must be processed in a manner that minimizes the risk of sharing private information of the users while still allowing the desired data analytics.

In this talk we study the limits of de-anonymizability of social networks by casting the problem as graph matching. Specifically, for the class of Erdős-Rényi random graph models, we ask when the correlation induced by the structural properties of the graph allows the users to be de-anonymized. We provide achievability and converse bounds that differ by a factor of two throughout the parameter space.

Joerg Kliewer (New Jersey Institute of Technology)    (Video)
Lossy Compression with Privacy Constraints: Optimality of Polar Codes

Yingbin Liang (Syracuse University)   (Video)
Kernel-based Detection of Anomalous Structures over Large Networks

Abstract: This talk will focus on a type of problems, the goal of which is to detect existence of an anomalous object over a network. An anomalous object, if it exists, corresponds to a cluster of nodes in the network that take data samples generated by an anomalous distribution q whereas all other nodes in the network receive samples generated by a distinct distribution p. Such a problem models a variety of applications such as detection of an anomalous intrusion via sensor networks and detection of an anomalous segment in a DNA sequence. All previous studies of this problem have taken parametric models, i.e., distributions p and q are known. Our work studies the nonparametric model, in which distributions can be arbitrary and unknown a priori.

In this talk, I will first introduce the approach that we apply, which is based on mean embedding of distributions into a reproducing kernel Hilbert space (RKHS). In particular, we adopt the quantity of maximum mean discrepancy (MMD) as a metric of distance between mean embeddings of two distributions. I will then present our construction of MMD-based tests for anomalous detection over networks and our analysis of performance guarantee of the proposed tests. I will finally present a number of numerical results to demonstrate our results. Towards the end of the talk, I will discuss some related problems and conclude with a few future directions.
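The MMD statistic described above has a simple unbiased estimator from samples. The following sketch (scalar data and a Gaussian kernel for concreteness; names and bandwidth are our illustrative choices, not the talk's construction) shows how a large estimate flags two sample sets as coming from different distributions:

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel inducing the mean embedding into an RKHS."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(xs, ys, k=gaussian_kernel):
    """Unbiased estimate of MMD^2 between the distributions of xs and
    ys: E[k(X,X')] + E[k(Y,Y')] - 2 E[k(X,Y)]. Near zero when the two
    samples share a distribution, bounded away from zero otherwise."""
    m, n = len(xs), len(ys)
    kxx = sum(k(a, b) for i, a in enumerate(xs)
              for j, b in enumerate(xs) if i != j) / (m * (m - 1))
    kyy = sum(k(a, b) for i, a in enumerate(ys)
              for j, b in enumerate(ys) if i != j) / (n * (n - 1))
    kxy = sum(k(a, b) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2 * kxy
```

A test for an anomalous cluster can then threshold this statistic between the samples inside a candidate cluster and those outside it.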

Huijia Lin (University of California, Santa Barbara)   (Video)
Zero Knowledge Protocols

Abstract: Zero-knowledge protocols, introduced by Goldwasser, Micali, and Rackoff [STOC 1985], are fascinating constructs in cryptography: They provide the paradoxical guarantee that a player, the prover, can convince another player, the verifier, of the validity of a mathematical statement, without revealing any information beyond the validity. Since their introduction, zero-knowledge protocols have played an important role in the development of many cryptographic tools and systems; most typically, zero-knowledge protocols are used to force malicious adversaries to follow the specified protocol without deviation.

In this talk, we will present the definition of zero-knowledge protocols, their basic constructions, and some recent developments on variants of zero-knowledge protocols.

Moni Naor (Weizmann Institute of Science)   (Video)
How to Share a Secret, Infinitely

Abstract: Secret sharing schemes allow a dealer to distribute a secret piece of information among several parties such that only qualified subsets of parties can reconstruct the secret. The collection of qualified subsets is called an access structure. The best known example is the \(k\)-threshold access structure, where the qualified subsets are those of size at least \(k\). When \(k=2\) and there are \(n\) parties, there are schemes where the size of the share each party gets is roughly \(\log n\) bits, and this is tight even for secrets of \(1\) bit. In these schemes, the number of parties \(n\) must be given in advance to the dealer.

We consider the case where the set of parties is not known in advance and could potentially be infinite. Our goal is to give the party arriving at step \(t\) as small a share as possible, as a function of \(t\). Our main result is such a scheme for the \(k\)-threshold access structure where the share size of party \(t\) is \((k-1) \log t\) plus \(o(\log t) \mathrm{poly}(k)\). For \(k=2\) we observe an equivalence to prefix codes and present matching upper and lower bounds of the form \(\log t + \log \log t + \log \log \log t + \cdots + O(1)\).
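The iterated-logarithm bound for \(k=2\) can be made concrete with a small computation; the helper below (our own illustrative function, not from the paper) sums the positive terms \(\log t + \log \log t + \cdots\) that govern party \(t\)'s share size up to the additive constant:

```python
import math

def iterated_log_length(t):
    """Sum of the iterated logarithms log t + log log t + ...,
    stopping once the argument drops to 1; this matches the
    prefix-code bound for the share size of party t when k = 2,
    up to an additive O(1)."""
    total, x = 0.0, float(t)
    while x > 1:
        x = math.log2(x)
        total += x
    return total
```

For example, the share of the 1024th party is bounded by roughly 16 bits rather than the naive 10, reflecting the lower-order terms.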

Finally, we show that for any access structure there exists such a secret sharing scheme with shares of size \(2^{t-1}\).

Joint work with Ilan Komargodski and Eylon Yogev

Aleksandar Nikolov (University of Toronto)
Optimal Mechanism for Small Databases Using Convex Programming

Abstract: We study the problem of answering a workload of linear queries Q on a database of size at most n drawn from a universe U under the constraint of (approximate) differential privacy. Nikolov, Talwar, and Zhang (STOC13) proposed an efficient mechanism that, for any given Q and n, answers the queries with average error that is at most a factor polynomial in log |Q| and log |U| worse than the best possible. We improve on this guarantee and give a mechanism whose competitiveness ratio is at most polynomial in log n and log |U|, and has no dependence on the number of queries. Our mechanism is based on the projection mechanism of Nikolov, Talwar, and Zhang, but in place of an ad-hoc noise distribution, we use a distribution which is in a sense optimal for the projection mechanism, and analyze it using convex duality and the restricted invertibility principle.

Kobbi Nissim (Ben-Gurion University)   (Video)
Accessing Data while Preserving Privacy

Sirin Nitinawarat (Qualcomm Technologies, Inc.)   (Video)
Duality in Combinatorial Optimization and Information Theoretic Secrecy

Abstract: We show that an old result from Nash-Williams and Tutte on the duality in maximal spanning tree packing in multigraphs carries an information-theoretic significance in the context of largest multiterminal secret key generation from pairwise independent secret keys. This information theoretic interpretation leads to a new form for the dual term in the result of Nash-Williams and Tutte. We then discuss a natural generalization of the multiterminal secret key generation with the presence of a helper. In an interesting case with the ‘‘weak’’ helper, we show that the duality generalizes to maximal Steiner tree packing, which is NP-hard.

Sewoong Oh (University of Illinois)   (Video)
Operational interpretation of differential privacy and its applications.

Abstract: Interactive querying of a database degrades the privacy level. In this talk we answer the fundamental question of characterizing the level of differential privacy degradation as a function of the number of adaptive interactions and the differential privacy levels maintained by the individual queries. Our solution is complete: the privacy degradation guarantee is true for every privacy mechanism and, further, we demonstrate a sequence of privacy mechanisms that do degrade in the characterized manner. The key innovation is the introduction of an operational interpretation (involving hypothesis testing) to differential privacy and the use of the corresponding data processing inequalities. Our result improves over the state of the art and has immediate applications to several problems studied in the literature.
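To see the kind of degradation being characterized, one can compare the naive linear bound with the standard "advanced composition" bound from the earlier literature; the talk's hypothesis-testing result tightens the latter further. This sketch (function names ours) computes both bounds, assuming \(k\) adaptive \(\varepsilon\)-DP queries:

```python
import math

def basic_composition(eps, k):
    """Naive bound: k adaptive eps-DP queries are (k*eps)-DP."""
    return k * eps

def advanced_composition(eps, k, delta):
    """Standard advanced composition bound: k adaptive eps-DP queries
    are (eps', k*delta_0 + delta)-DP with
    eps' = sqrt(2k ln(1/delta)) * eps + k * eps * (e^eps - 1),
    i.e. roughly sqrt(k)*eps growth instead of k*eps."""
    return (math.sqrt(2 * k * math.log(1 / delta)) * eps
            + k * eps * (math.exp(eps) - 1))
```

For 100 queries at eps = 0.1, the naive bound gives total privacy loss 10, while the advanced bound gives about 6.3 (at delta = 1e-6), illustrating the square-root-in-k behavior that the optimal characterization sharpens.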

Krzysztof Pietrzak (IST Austria)   (Video)
Recent Advances and Challenges in Pseudoentropy

Abstract: We present some recent results and open problems concerning computational notions of entropy, aka. pseudoentropy.

Manoj Prabhakaran (University of Illinois at Urbana-Champaign)   (Video)
Questions in Cryptographic Complexity

Abstract: I will briefly survey some of the results on measuring “cryptographic complexity” of (multi-party) functions, and describe some important open problems in this area.

Lalitha Sankar (Arizona State University)   (Video)
Information-theoretic Privacy: The Utility of an Average-Case Approach

Abstract: As information about individuals and enterprises moves to an entirely digital medium, keeping certain aspects of the data confidential (even) from the legitimate data users, i.e., information privacy, is becoming an important and immediate societal problem. While the benefits (utility) of electronic data are multi-fold, there is a need to provide precise guarantees and limits on the private data leaked and quantify the tradeoff between utility and privacy, irrespective of the application. In this talk, we introduce an information-theoretic (IT) framework to formulate and study the utility-privacy tradeoff problem. We illustrate the application of the framework to three broad classes of privacy problems: (i) database privacy, in which the privacy of the individuals in the database has to be preserved while ensuring statistical utility of the data; (ii) consumer privacy, in which we bound the inferences possible from streaming time-series data used to remotely monitor consumers; and (iii) competitive (interactive) privacy, in which distributed agents (with possibly competing interests) seek to exchange data minimally for a desired system performance while preserving the confidentiality/privacy of specific aspects of their data – for this interactive setting, we also present a composition rule. Finally, we compare the utility-privacy performance of IT-privacy with the worst-case privacy guarantees offered by differential privacy for a data set whose statistics are not precisely known but restricted to a class of distributions and for a utility constraint quantified by Hamming distortion.

Adam Smith (Penn State)   (Video: Part 1, Part 2, Part 3)
Tutorial on Differential Privacy

Abstract: The tutorial will introduce differential privacy, a widely used definition of privacy for statistical databases.

We will begin with the motivation for rigorous definitions of privacy in statistical databases, covering several examples of how seemingly aggregate, high-level statistics can leak information about individuals in a data set. We will then define differential privacy, illustrate the definition with several examples, and discuss its properties. The bulk of the tutorial will cover the principal techniques used for the design of differentially private algorithms. Time permitting, we will touch on applications of differential privacy to problems having no immediate connection to privacy, such as equilibrium selection in game theory and adaptive data analysis in statistics and machine learning.
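The most basic of the design techniques the tutorial covers is the Laplace mechanism: add noise calibrated to the query's sensitivity. A minimal sketch (function name ours; the noise is sampled as a difference of two exponentials, which is Laplace-distributed):

```python
import random

def laplace_mechanism(true_answer, sensitivity, eps):
    """Release true_answer + Lap(sensitivity/eps). This satisfies
    eps-differential privacy for any query whose value changes by at
    most `sensitivity` when a single record is added or removed."""
    scale = sensitivity / eps
    # Difference of two i.i.d. Exp(1/scale) variables is Laplace(scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_answer + noise
```

For a counting query (sensitivity 1), releasing the count plus Laplace(1/eps) noise is the textbook example the tutorial builds on.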

Stefano Tessaro (University of California, Santa Barbara)   (Video)
A Cryptographic Perspective on Information-theoretic Security

Abstract: Over the last few decades, the information theory and the cryptography communities have often studied related security problems. However, they have more often than not adopted very different approaches and techniques.

The main purpose of this talk is to contribute to bridging this gap, by providing a cryptographic viewpoint on some security problems considered in the information-theory and physical-layer communities. The hope is to convey how the cryptographic approach (both to security metrics and to proofs) can be beneficial to information theorists, but also, conversely, how information-theoretic insights can be helpful to cryptographers. I will start this talk with the classical wiretap setting, and then hopefully move on to other topics.

(The talk is partially based on joint works with Mihir Bellare and Alexander Vardy.)

Luca Trevisan (University of California, Berkeley)   (Video)
Quantum lower bound for inverting a permutation with advice

Abstract: Hellman proved that any permutation can be inverted non-trivially fast given pre-computed information. Any permutation \(f: [N] \to [N]\) can be inverted in time \(T\) given a pre-computed data structure of size \(N/T\), up to lower-order terms. In the black-box model, Yao proved the optimality of this trade-off. In particular, a data structure of size of the order of \(\sqrt{N}\) allows inversion time of the order of \(\sqrt{N}\).

Using Grover's algorithm, a quantum algorithm is able to invert any permutation \(f: [N] \to [N]\) in time of the order of \(\sqrt{N}\). What gain is possible given pre-computed data? Could we hope for time and data size to be both of the order of \(N^{1/4}\)?

In the black-box model, we show that, given data of size \(S\), the time has to be at least of the order of \(\sqrt{N/S}\), and so either the time or the data size has to be at least \(N^{1/3}\).

Our lower bound is not known to be tight, and probably it is not: we believe that, in the black-box model, there is no quantum algorithm that achieves time less than \(\sqrt{N}\) using data size less than \(\sqrt{N}\).

(Joint work with Nayebi, Aaronson, and Belovs)

Himanshu Tyagi (Indian Institute of Science, Bangalore)   (Video: Part 1, Part 2, Part 3)
Shun Watanabe (Tokyo University of Agriculture and Technology)
Information Theoretic Secrecy and Interactive Communication

Abstract: Information theoretic secrecy refers to unconditional secrecy against a computationally unbounded adversary with any statistical procedure at his or her disposal. As is well known, in the absence of any additional resources, information theoretic secrecy is not feasible from scratch. However, if the legitimate parties share some correlated randomness, many of the classical cryptographic primitives can be realized with information theoretic secrecy. Such correlated randomness can be accessed, for instance, from physical observations such as the fade of the wireless communication channel shared by the parties, or can be purchased from trusted servers as in commodity-based cryptography.

In this tutorial talk, we shall consider multiparty secret key agreement and secure multiparty (as well as two-party) computing problems, when the parties can access correlated randomness. In addition, the parties are allowed to communicate interactively over an authenticated, noiseless public channel. We will review the basic results that emerged over the last two decades in the information theory literature, which exhibit a close connection between secrecy generation and compression using interactive communication. The class of problems considered is very rich and connects to many basic problems in information theory, cryptography, and communication complexity.

The list of topics to be covered includes notions of secrecy, two-party secret key agreement (information reconciliation and privacy amplification), multiparty secret key agreement and common randomness decomposition, two-party secure computing, and multiparty secure computing with trusted parties. We shall also discuss some of our own results, namely a new upper bound for secret key length using interactive public communication and its role in characterising the minimum amount of communication needed for simulation of interactive protocols.

Jonathan Ullman (Northeastern University)   (Video)
Privacy and the Complexity of Simple Queries

Abstract: I will present some new, nearly-optimal lower bounds on the amount of data required to release differentially private statistics on high-dimensional datasets, both in information-theoretic and computational settings. These results show that there is a significant “price of differential privacy” in high-dimensional datasets. We prove these lower bounds using two closely related cryptographic primitives, fingerprinting codes (in the information-theoretic setting) and traitor-tracing schemes (in the computational setting), which we show are closely connected to differentially private data analysis. I will also discuss how these lower bounds are related to realistic attacks on released datasets.

Sennur Ulukus (University of Maryland)   (Video)
Physical Layer Security with No Eavesdropper Channel State Information

Abstract: While information-theoretic physical-layer security is a powerful technique that guarantees unconditional security, early work in this field developed secrecy capacity and secure degrees of freedom results based on perfect knowledge of the channel state of the adversary. However, eavesdropping adversaries are passive, and it is hard or impossible to obtain information about their channel state. In this talk, we will summarize recent results on optimal signaling schemes when no channel state information is available about the adversary’s channel.

Alex Vardy (University of California, San Diego)   (Video)
Private Information Retrieval without Storage Overhead: Coding Instead of Replication

Abstract: Private information retrieval protocols allow a user to retrieve a data item from a database without revealing any information about the identity of the item being retrieved. Specifically, in information-theoretic \(k\)-server PIR, the database is replicated among \(k\) non-communicating servers, and each server learns nothing about the item retrieved by the user. The effectiveness of PIR protocols is usually measured in terms of their communication complexity, which is the total number of bits exchanged between the user and the servers. However, another important cost parameter is the storage overhead, which is the ratio between the total number of bits stored on all the servers and the number of bits in the database. Since single-server information-theoretic PIR is impossible, the storage overhead of all existing PIR protocols is at least \(2\) (or \(k\), in the case of \(k\)-server PIR).

In this work, we show that information-theoretic PIR can be achieved with storage overhead arbitrarily close to the optimal value of \(1\), without sacrificing the communication complexity. Specifically, we prove that all known \(k\)-server PIR protocols can be efficiently emulated, while preserving both privacy and communication complexity but significantly reducing the storage overhead. To this end, we distribute the \(n\) bits of the database among \(s+r\) servers, each storing \(n/s\) coded bits (rather than replicas). Notably, our coding scheme remains the same, regardless of the specific \(k\)-server PIR protocol being emulated. For every fixed \(k\), the resulting storage overhead \((s+r)/s\) approaches \(1\) as \(s\) grows; explicitly we have \(r \le k \sqrt{s}\bigl(1 + o(1)\bigr)\). Moreover, in the special case \(k = 2\), the storage overhead is only \(1 + \frac{1}{s}\).
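For context, the classical replicated 2-server protocol that such coded schemes emulate can be sketched as follows (a toy version with illustrative variable names; the coded-storage emulation itself is not shown):

```python
import random
from functools import reduce

def xor_of(db, subset):
    # Server's answer: XOR of the requested positions of its database copy
    return reduce(lambda a, b: a ^ b, (db[j] for j in subset), 0)

def pir_query(n, i):
    # User: a uniformly random subset S, and S with position i toggled
    S = {j for j in range(n) if random.random() < 0.5}
    return S, S ^ {i}

random.seed(1)
db = [random.randint(0, 1) for _ in range(64)]  # replicated on both servers
i = 37                                          # index the user wants, privately
S, T = pir_query(len(db), i)
# Each server alone sees a uniformly random subset, hence learns nothing about i,
# but the two answers XOR to exactly the requested bit
answer = xor_of(db, S) ^ xor_of(db, T)
```

The replication is what drives the storage overhead of 2; the coded construction in the talk stores n/s bits per server while still letting the servers compute the answers such protocols require.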

In order to achieve these results, we introduce and study a new kind of binary linear codes, called here \(k\)-server PIR codes. We then show how such codes can be constructed from Steiner systems, from one-step majority-logic decodable codes, from constant-weight codes, and from certain locally recoverable codes. We also establish several bounds on the parameters of \(k\)-server PIR codes, and tabulate the results for all \(s \le 32\) and \(k \le 16\).

[Based on joint work with Arman Fazeli, Sankeerth Rao, and Eitan Yaakobi]

Ye Wang (Mitsubishi Electric Research Laboratories)   (Video)
On the Role of Common Information in Security and Privacy

Abstract: Common information aims to characterize the common randomness between two random variables. Two distinct, but related, definitions for common information were proposed in the 1970s by Gács and Körner and by Wyner, each with operational significance for particular coding problems. In this talk, we will show how these classical measures are useful tools in the characterization and derivation of several recent results in security and privacy. In particular, common information characterizes which channels are complete for secure two-party computation and plays a key role in the simplified converse proof. Common information also characterizes which joint distributions are trivial to securely sample, which has applications in realizing correlated equilibria for games preceded by communication. In the problem of privacy-preserving data release, common information characterizes whether the best privacy-utility tradeoffs attainable by an output perturbation mechanism will be generally optimal or strictly suboptimal.
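A small sketch may help fix ideas for one of the two notions: the Gács-Körner common information equals the entropy of the "common part" of the pair, which is obtained from the connected components of the support of the joint distribution (the union-find helper below is illustrative):

```python
from math import log2
from itertools import product

def gacs_korner(p):
    # p: dict (x, y) -> probability. Build the bipartite support graph and
    # find its connected components; the common part is the component index.
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    def union(a, b):
        parent.setdefault(a, a); parent.setdefault(b, b)
        parent[find(a)] = find(b)
    for (x, y), pr in p.items():
        if pr > 0:
            union(('x', x), ('y', y))
    # Probability mass of each component gives the law of the common part
    mass = {}
    for (x, y), pr in p.items():
        if pr > 0:
            root = find(('x', x))
            mass[root] = mass.get(root, 0.0) + pr
    return -sum(q * log2(q) for q in mass.values())

# X = (B, Z), Y = (B, W): a shared fair bit B plus independent noise on each
# side. The common part is exactly B, so the measure evaluates to 1 bit.
p = {((b, z), (b, w)): 0.125 for b, z, w in product((0, 1), repeat=3)}
```

For independent variables with full support the graph is a single component, so the measure collapses to zero, which is the well-known brittleness of the Gács-Körner notion.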

Hoeteck Wee (ENS, Paris)   (Video)
Functional Encryption and Information-Theoretic Cryptography

Abstract: Can we encrypt data while enabling fine-grained access control and selective computation? In this talk, we will survey how addressing this question led to new connections and questions in information-theoretic cryptography.

Stephanie Wehner (Delft University of Technology)   (Video)
Device-independence in quantum cryptography

Abstract: While quantum cryptography offers interesting security guarantees, it is challenging to build good quantum devices. In practice, we will therefore typically rely on devices built by someone else, whose properties are difficult to characterize and which exhibit inherent imperfections that may threaten security.

The goal of device independent quantum cryptography is to develop tests that allow (partially) unknown and uncharacterized quantum devices to be used safely. Specifically, we imagine that quantum devices act as black boxes, which we can only ask to perform certain quantum measurements and collect the resulting measurement outcomes. However, the quantum state or actual measurements performed by the device are unknown. Security guarantees should then be given purely as a function of such classical input/output behaviour.

This talk will be an introduction to the concept of device independence in quantum cryptography that does not assume any knowledge of quantum information. We briefly review results in device independent quantum key distribution, before exhibiting very recent work on device independent two-party quantum cryptography. Finally, we will discuss some of the many open challenges in certifying properties of quantum devices based on classically observed statistics.
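As a minimal illustration of certifying a device from classical statistics alone, the CHSH test computes a score from observed input/output correlators: any classical (local hidden variable) device obeys |S| ≤ 2, while quantum statistics can reach 2√2. The angles and singlet correlator formula below are the textbook choices, not specifics from the talk:

```python
from math import cos, pi, sqrt

# Measurement angles (radians) achieving the Tsirelson bound on a singlet state
alice = [0, pi / 2]
bob = [pi / 4, -pi / 4]

def correlator(a, b):
    # Ideal quantum prediction for the singlet: E(a, b) = -cos(a - b).
    # A device-independent test estimates these correlators from the
    # observed input/output behaviour alone, treating the device as a black box.
    return -cos(alice[a] - bob[b])

S = (correlator(0, 0) + correlator(0, 1)
     + correlator(1, 0) - correlator(1, 1))
# |S| > 2 certifies non-classical behaviour purely from the statistics
```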

Daniel Wichs (Northeastern)
Adaptively Secure Garbled Circuits from One-Way Functions

Abstract: A garbling scheme is used to garble a circuit C and an input x in a way that reveals the output C(x) but hides everything else. In many settings, the circuit can be garbled off-line without strict constraints on efficiency, but the input must be garbled very efficiently on-line, in time which is much smaller than the circuit size |C|. Yao's garbling scheme only achieves this under one-way functions in the selective security setting. It has remained an open problem to achieve the stronger notion of adaptive security, where the adversary can choose the input x adaptively after seeing the garbled circuit.
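The gate-by-gate idea behind Yao-style garbling can be sketched for a single AND gate. This toy version recognises the correct table row via appended zero bytes; real schemes use tricks like point-and-permute instead of trial decryption, and the hash-as-cipher is a simplifying assumption:

```python
import hashlib, os, random

def pad(ka, kb):
    # Hypothetical "encryption" pad derived from the two input-wire labels
    return hashlib.sha256(ka + kb).digest()  # 32 bytes

def enc(ka, kb, label):
    # Encrypt label || 0^16 so the evaluator can recognise the correct row
    msg = label + bytes(16)
    return bytes(a ^ b for a, b in zip(pad(ka, kb), msg))

def garble_and():
    # Two random 128-bit labels per wire, one for each possible bit value
    labels = {w: (os.urandom(16), os.urandom(16)) for w in ('a', 'b', 'out')}
    table = [enc(labels['a'][x], labels['b'][y], labels['out'][x & y])
             for x in (0, 1) for y in (0, 1)]
    random.shuffle(table)  # hide which row encodes which input pair
    return labels, table

def evaluate(ka, kb, table):
    # Holding one label per input wire, open the single row they decrypt
    for row in table:
        msg = bytes(a ^ b for a, b in zip(pad(ka, kb), row))
        if msg[16:] == bytes(16):   # redundancy check marks the right row
            return msg[:16]

labels, table = garble_and()
```

The evaluator learns one output label, and hence one bit of C(x) at the circuit's output wires, but the other labels, and thus all other gate values, stay hidden.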

In this work, we modify Yao's scheme in a way that maintains all of its desirable features, while allowing us to prove adaptive security in various parameter regimes under one-way functions. As our main instantiation, we get a scheme where the size of the garbled input is only proportional to the width of the circuit, which is related to the space complexity of the computation, but independent of the circuit's depth. More broadly, we develop a connection between adaptive security in our framework and a certain type of pebble complexity.

As our main tool, of independent interest, we develop a new notion of somewhere equivocal encryption, which allows us to efficiently equivocate on a small subset of the message bits.

Frans Willems (Eindhoven University of Technology)
Information Leakage in Security Systems Based on SRAM-PUFs

Abstract: In this lecture we discuss hardware-intrinsic security systems based on physical unclonable functions (PUFs) realised by static random-access memory (SRAM). We first focus on the fingerprinting behaviour of SRAM during power-up. We then connect this behaviour with the notion of secret-key capacity (Maurer (1993), Ahlswede & Csiszár (1993)). In the unbiased case the SRAM can be modelled as a binary symmetric channel, and the secret-key capacity can be achieved in principle with the XOR-method and linear error-correcting codes.
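The XOR-method (code-offset) can be sketched with a toy example; the [3,1] repetition code and the at-most-one-flip-per-block noise model are simplifying assumptions standing in for the linear codes and BSC analysed in the lecture:

```python
import random

# A [3,1] repetition code stands in for the linear error-correcting code
encode = lambda bit: [bit] * 3
decode = lambda blk: int(sum(blk) >= 2)   # majority vote corrects 1 flip

random.seed(7)
sram = [random.randint(0, 1) for _ in range(300)]   # power-up fingerprint
key = [random.randint(0, 1) for _ in range(100)]

# Enrolment: helper data = codeword XOR SRAM (the "XOR-method")
codeword = [b for k in key for b in encode(k)]
helper = [c ^ s for c, s in zip(codeword, sram)]

# Reconstruction from a noisy re-read of the same SRAM cells
noisy = sram[:]
for i in range(0, 300, 3):
    if random.random() < 0.3:             # at most one flip per 3-bit block
        noisy[i + random.randrange(3)] ^= 1
shifted = [h ^ s for h, s in zip(helper, noisy)]  # = codeword XOR noise
rec = [decode(shifted[3 * i:3 * i + 3]) for i in range(100)]
```

The leakage question in the talk is precisely what the public `helper` stream reveals about `key`, especially when several enrolments produce several helper streams from the same SRAM.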

We then study the multiple-observation statistics of SRAM and discuss a notion of symmetry in multiple-observation settings. This notion allows us to investigate, for the XOR-method, the information leakage, i.e., the information that the helper data contains about the secret key. We show that in multiple-enrolment scenarios with new keys there is no leakage and, provided the SRAM satisfies our notion of symmetry, the leakage is zero even if the same key is enrolled several times. Leakage can occur, however, if two keys are enrolled on the same SRAM with different codes, producing two helper data streams. We show this by an example.

Andreas Winter (Universitat Autonoma de Barcelona)   (Video)
Constructions and Limitations of Quantum Data Hiding

Abstract: Quantum data hiding, originally invented as a limitation on local operations and classical communications (LOCC) in distinguishing globally orthogonal states, is actually a phenomenon arising generically in statistics whenever a 'strong' set of measurements (i.e., decision rules) is compared with a 'weak' one. The classical statistical analogue of this would be secret sharing, in which two perfectly distinguishable multi-partite hypotheses appear to be indistinguishable when accessing only a marginal. The quantum versions are richer in that, for example, LOCC allows for state tomography, so the states cannot be made perfectly indistinguishable but only nearly so, and hence the question is one of efficiency. The talk will cover the following issues:

1. We will revisit a construction by Hayden/Leung/Smith for multi-party LOCC data hiding and, as a new result, show that it meets a universal bound on the information efficiency of any data hiding scheme.

2. Gaussian operations and classical computation (GOCC): Not very surprisingly, GOCC cannot distinguish optimally even two coherent states of a single mode (Takeoka & Sasaki, PRA 78:022320, 2008). But we can find states, each a mixture of multi-mode coherent states, which are almost perfectly distinguishable by suitable measurements, yet when restricted to GOCC, i.e., linear optics and postprocessing, the states appear almost identical. The construction is random and relies on coding arguments. Open questions include whether one can give a constructive version of the argument, whether for instance even thermal states can be used, and how efficient the hiding is.