### About Me

I will be joining George Mason University as an assistant professor in Fall 2015. From 2012 until 2015, I was a research scientist at Applied Communication Sciences (ACS), where I did research in cryptography and cybersecurity. Before that, I was a postdoc at Columbia University with Tal Malkin, as a recipient of the Computing Innovation Fellowship. I received my PhD in July 2010 under Jonathan Katz in the computer science department at the University of Maryland. Here's my curriculum vitae (PDF).

### Research Interests

I am primarily interested in cryptography, both in the (in)feasibility of constructing protocols that achieve desired cryptographic guarantees, and in developing methods for realizing such guarantees in today's computing environments. My thesis (PDF) is a good example of the former: there I explore whether it is possible to achieve fairness in secure computation, ensuring that neither party learns the output before the other. My work on performing secure computation in the RAM model is a good example of the latter (see below). There we studied the practical benefits of executing secure computations in the RAM model, demonstrating a performance improvement over circuit-based constructions when computing on just 20 megabytes of input.

### Publications

*Click to read the abstract and download the paper, if available.*

Multi-Input Functional Encryption
Eurocrypt 2014 (in submission)

Functional encryption (FE) is a powerful primitive enabling fine-grained access to encrypted data. In an FE scheme, secret keys (“tokens”) correspond to functions; a user in possession of a ciphertext ct = Enc(x) and a token TK_f for the function f can compute f(x) but learn nothing else about x. An active area of research over the past few years has focused on the development of ever more expressive FE schemes.
In this work we introduce the notion of multi-input functional encryption. Here, informally, a user in possession of a token TK_f for an n-ary function f and multiple ciphertexts ct_1 = Enc(x_1), ..., ct_n = Enc(x_n) can compute f(x_1, ..., x_n) but nothing else about the {x_i}. Besides introducing the notion, we explore the feasibility of multi-input FE in the public-key and symmetric-key settings, with respect to both indistinguishability-based and simulation-based definitions of security. Download the paper here.
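To fix ideas, here is a toy sketch of the multi-input FE syntax described above. All names (`setup`, `keygen`, `enc`, `dec`) are hypothetical, and this is emphatically not a secure scheme: the "ciphertexts" are just the plaintexts, so the sketch only illustrates the interface, with one token per n-ary function, one ciphertext per input, and decryption yielding f(x_1, ..., x_n).

```python
def setup():
    """Return a (public key, master secret key) pair. Trivial placeholder."""
    return "mpk", "msk"

def keygen(msk, f):
    """Derive a token for the n-ary function f."""
    assert msk == "msk"
    return f  # a real scheme would hide f's internals inside the token

def enc(mpk, x):
    """'Encrypt' a single input. A real scheme would hide x."""
    assert mpk == "mpk"
    return x

def dec(token, cts):
    """Jointly evaluate the function on all n ciphertexts."""
    return token(*cts)

mpk, msk = setup()
tk = keygen(msk, lambda a, b, c: max(a, b, c))   # a 3-ary function
cts = [enc(mpk, x) for x in (10, 42, 7)]
print(dec(tk, cts))  # -> 42, and nothing else about the inputs (in a real scheme)
```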

At TCC 2013, Choi et al. introduced the notion of multi-client verifiable computation in which a set of clients outsource to an untrusted server the computation of a function f over their collective inputs in a sequence of time periods. In that work, the authors defined and realized multi-client verifiable computation satisfying soundness against a malicious server and privacy against the semi-honest corruption of a single client.

We explore the possibility of achieving stronger security guarantees in this setting, in several respects. We begin by introducing a simulation-based notion of security in the universal composability framework, which provides a clean way of defining soundness and privacy in a single definition. We show the notion is impossible to achieve, even in the semi-honest case, if client-server collusion is allowed. Faced with this result, we explore several meaningful relaxations and give constructions realizing them.


On the Relationship between Functional Encryption, Obfuscation, and Fully Homomorphic Encryption
IMA Conference on Cryptography and Coding 2013

We investigate the relationship between Functional Encryption (FE) and Fully Homomorphic Encryption (FHE), demonstrating that, under certain assumptions, a Functional Encryption scheme supporting evaluation on two ciphertexts implies Fully Homomorphic Encryption. We first introduce the notion of Randomized Functional Encryption (RFE), a generalization of Functional Encryption dealing with randomized functionalities of interest in its own right, and show how to construct an RFE from a (standard) semantically secure FE. For this we define the notion of entropically secure FE and use it as an intermediary step in the construction. Finally we show that RFEs constructed in this way can be used to construct FHE schemes, thereby establishing a relation between the FHE and FE primitives. We conclude the paper by recasting the construction of RFE schemes in the context of obfuscation.

Multi-party Computation of Polynomials and Branching Programs without Simultaneous Interaction
Eurocrypt 2013

Halevi, Lindell, and Pinkas (CRYPTO 2011) recently proposed a model for secure computation that captures communication patterns that arise in many practical settings, such as secure computation on the web. In their model, each party interacts only once, with a single centralized server. Parties do not interact with each other; in fact, the parties need not even be online simultaneously.

In this work we present a suite of new, simple and efficient protocols for secure computation in this "one-pass" model. We give protocols that obtain optimal privacy for the following general tasks:

- Evaluating any multivariate polynomial $F(x_1, \ldots, x_n)$ (modulo a large RSA modulus N), where the parties each hold an input $x_i$.
- Evaluating any read-once branching program over the parties' inputs.

As a special case, these function classes include all previous functions for which an optimally private, one-pass computation was known, as well as many new functions, including variance and other statistical functions, string matching, second-price auctions, classification algorithms and some classes of finite automata and decision trees. Download the paper here.
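As a minimal illustration of the "one-pass" communication pattern, the sketch below computes the variance (one of the statistical functions mentioned above) with each party sending a single message to a central server and no party-to-party interaction. All cryptographic protection is omitted; in the actual protocols the server would learn nothing beyond the output.

```python
class Server:
    """Aggregates (count, sum, sum of squares); enough to derive the variance."""
    def __init__(self):
        self.n = 0
        self.s = 0.0
        self.ss = 0.0

    def receive(self, x):
        """The single message party i ever sends: its contribution x_i."""
        self.n += 1
        self.s += x
        self.ss += x * x

    def variance(self):
        mean = self.s / self.n
        return self.ss / self.n - mean * mean

server = Server()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:   # one visit per party
    server.receive(x)
print(server.variance())  # population variance -> 4.0
```

Parties need not be online simultaneously: each one visits the server once, in any order, which is exactly the pattern the one-pass model captures.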


Traditional approaches to generic secure computation begin by representing the function f being computed as a circuit. If f depends on each of its input bits, this implies a protocol with complexity at least linear in the input size. In fact, linear running time is inherent for non-trivial functions since each party must “touch” every bit of their input lest information about the other party’s input be leaked. This seems to rule out many applications of secure computation (e.g., database search) in scenarios where inputs are huge.

Adapting and extending an idea of Ostrovsky and Shoup, we present an approach to secure two-party computation that yields protocols running in sublinear time, in an amortized sense, for functions that can be computed in sublinear time on a random-access machine (RAM). Moreover, each party is required to maintain state that is only (essentially) linear in its own input size. Our protocol applies generic secure two-party computation on top of oblivious RAM (ORAM). We present an optimized version of our protocol using Yao's garbled-circuit approach and a recent ORAM construction of Shi et al.

We describe an implementation of this protocol, and evaluate its performance for the task of obliviously searching a database with over 1 million entries. Because of the cost of our basic steps, our solution is slower than Yao on small inputs. However, our implementation outperforms Yao already on DB sizes of 2^18 entries (a quite small DB by today's standards). Download PDF. Note that this proceedings version is considerably different from the ePrint version.
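The point that RAM-model computation can be sublinear is easy to see concretely: binary search over a sorted database touches only O(log n) entries. The sketch below (plain Python, no cryptography) counts those accesses; in the protocol above, each such access would instead be performed through ORAM, hiding which entries are touched.

```python
import math

class CountingDB:
    """Wraps a sorted list and counts how many entries are read."""
    def __init__(self, data):
        self.data = data
        self.reads = 0
    def __getitem__(self, i):
        self.reads += 1
        return self.data[i]
    def __len__(self):
        return len(self.data)

def binary_search(db, target):
    lo, hi = 0, len(db) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        v = db[mid]
        if v == target:
            return mid
        lo, hi = (mid + 1, hi) if v < target else (lo, mid - 1)
    return -1

db = CountingDB(list(range(0, 2**20, 2)))   # 2^19 sorted entries
idx = binary_search(db, 123456)
print(idx, db.reads)  # found after at most ~20 reads, not a scan of all 2^19
assert db.reads <= math.ceil(math.log2(len(db))) + 1
```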


A Group Signature Scheme From Lattice Assumptions
Asiacrypt 2010

Group signature schemes allow users to sign messages on behalf of a group while (1) maintaining anonymity (within that group) with respect to an observer, yet (2) ensuring traceability of a signer (by the group manager) when needed. In this work we give the first construction of a group signature scheme based on lattices (more precisely, the learning with errors assumption), in the random oracle model. Toward our goal, we construct a new algorithm for sampling a random superlattice of a given modular lattice together with a short basis, which may be of independent interest.

Partial Fairness in Secure Two-Party Computation
Eurocrypt 2010

A seminal result of Cleve (STOC '86) is that, in general, \emph{complete} fairness is impossible to achieve
in two-party computation. In light of this, various techniques for
obtaining \emph{partial} fairness have been suggested in the
literature. We propose a definition of partial fairness within the
standard real-/ideal-world paradigm that addresses deficiencies of
prior definitions. We also show broad feasibility results with
respect to our definition: partial fairness is possible for any
(randomized) functionality $f:X \times Y \rightarrow Z_1 \times Z_2$
at least one of whose domains or ranges is polynomial in size. Our
protocols are always private, and when one of the domains has
polynomial size our protocols also simultaneously achieve the usual
notion of security with abort. In contrast to some prior work, we
rely on standard assumptions only.
We also show that, as far as general feasibility is concerned, our results are \emph{optimal} (with respect to our definition).
Specifically, there exist functions with super-polynomial domain and range for which it is impossible to achieve our definition.
Download PDF.

On Complete Primitives for Fairness
TCC 2010

For secure two-party and multi-party computation with abort,
classification of which primitives are {\em complete} has been
extensively studied in the literature. However, for \emph{fair} secure
computation, where (roughly speaking) either all parties learn the
output or none do, the question of complete primitives has remained
largely unstudied.
In this work, we initiate a rigorous study of completeness for
primitives that allow fair computation. We show the following
results:
- \textbf{No ``short'' primitive is complete for fairness.}
In surprising contrast to other notions of security for secure
two-party computation, we show that for fair secure two-party
computation, no primitive of size $O(\log k)$ is complete, where $k$
is a security parameter. This is the case even if we can enforce
parallelism in calls to the primitives (i.e., the adversary does not
get output from any primitive in a parallel call until it sends input
to all of them). This negative result holds regardless of any
computational assumptions.
- \textbf{Coin Flipping and Simultaneous Broadcast are not
complete for fairness.} The above result rules out the completeness
of two natural candidates: coin flipping (for any number of coins) and
simultaneous broadcast (for messages of arbitrary length).
- \textbf{Positive results.} To complement the negative results,
we exhibit a $k$-bit primitive that \emph{is} complete for two-party
fair secure computation. This primitive implements a ``fair
reconstruction'' procedure for a secret sharing scheme with some
robustness properties. We show how to generalize this result to the
multi-party setting.
- \textbf{Fairness combiners.} We also introduce the question of
constructing a protocol for fair secure computation from primitives
that may be faulty. We show a simple functionality that is complete
for two-party fair computation when the majority of its instances are
honest. On the flip side, we show that this result is tight: no
functionality is complete for fairness if half (or more) of the
instances can be malicious.
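Since the complete primitive above implements a fair-reconstruction procedure over a secret sharing scheme, a minimal Shamir secret-sharing sketch may help fix ideas. The robustness properties and the fairness machinery of the actual primitive are entirely omitted here; this only shows threshold sharing and reconstruction over a prime field.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over the field Z_P

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice -> 123456789
```

Fewer than t shares reveal nothing about the secret information-theoretically, which is why reconstruction, not sharing, is where fairness must be enforced.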

We consider the following problem: can we construct constant-round
zero-knowledge proofs (with negligible soundness) for $\NP$ assuming
only the existence of one-way permutations? We answer the question
in the negative for fully black-box constructions (using only
black-box access to both the underlying primitive and the cheating
verifier) that satisfy a natural restriction on the ``adaptivity''
of the simulator's queries. Specifically, we show that only languages in $\coAM$ have
constant-round zero-knowledge proofs of this kind.

Authenticated Broadcast with a Partially Compromised Public-Key Infrastructure
Symposium on Stabilization, Safety and Security of Distributed Systems, 2010

Given a public-key
infrastructure (PKI) and digital signatures, it is possible to construct
broadcast protocols tolerating any number of corrupted parties.
Almost all existing protocols, however, do not distinguish between \emph{corrupted} parties (who do not follow the protocol),
and \emph{honest} parties whose secret (signing) keys have been compromised (but who continue to behave honestly).
We explore conditions under which it is possible to construct
broadcast protocols that still provide the usual guarantees (i.e., validity/agreement) to the latter.
Consider a network of $n$ parties, where an adversary
has compromised the secret keys of up to $t_c$ honest
parties and, in addition, fully controls the behavior of up to
$t_a$ other parties. We show that for any fixed $t_c > 0$, and any fixed $t_a$, there exists an efficient protocol for
broadcast if and only if $2t_a
+ \min(t_a, t_c) < n$. (When $t_c = 0$, standard results imply feasibility.)
We also show that if $t_c, t_a$ are not fixed, but are only guaranteed to satisfy the bound above, then
broadcast is impossible to achieve except for a few specific values of $n$; for these ``exceptional'' values of $n$,
we demonstrate a broadcast protocol.
Taken together, our results give a
complete characterization of this problem.
Invited to a special issue of Elsevier's Information and Computation journal.

Complete Fairness in Multi-Party Computation without an Honest Majority
Theory of Cryptography Conference, 2009

Gordon et al. recently showed that certain (non-trivial) functions
can be computed with complete fairness in the
\emph{two-party} setting. Motivated by their results, we
initiate a study of complete fairness in the \emph{multi-party} case and
demonstrate the first completely-fair protocols for non-trivial
functions in this setting. We also provide evidence
that achieving fairness is "harder" in the multi-party setting, at
least with regard to round complexity.

Complete Fairness in Secure Two-Party Computation
ACM Symposium on Theory of Computing (STOC) 2008

In the setting of secure two-party computation, two mutually
distrusting parties wish to compute some function of their inputs
while preserving, to the extent possible, various security
properties such as privacy, correctness, and more. One desirable
property is \emph{fairness}, which guarantees that if either party
receives its output, then the other party does too. Cleve
(STOC 1986) showed that complete fairness cannot be achieved
\emph{in general} in the two-party setting; specifically, he showed
(essentially) that it is impossible to compute Boolean XOR with
complete fairness. Since his work, the accepted folklore has been
that \emph{nothing} non-trivial can be computed with complete
fairness, and the question of complete fairness in secure two-party
computation has been treated as closed since the late '80s.
In this paper, we demonstrate that this widely held folklore belief
is \emph{false} by showing completely-fair secure protocols for
various non-trivial two-party functions including Boolean AND/OR as
well as Yao's ``millionaires' problem''. Surprisingly, we show that
it is even possible to construct completely-fair protocols for
certain functions containing an ``embedded XOR'', although in this
case we also prove a lower bound showing that a super-logarithmic
number of rounds are necessary. Our results demonstrate that the
question of completely-fair secure computation without an honest
majority is far from closed.

Rational Secret Sharing, Revisited
Security and Cryptography for Networks 2006

We consider the problem of secret sharing among $n$ rational
players. This problem was introduced by Halpern and Teague (STOC
2004), who claim that a solution is \emph{impossible} for $n=2$ but
show a solution for the case $n\geq 3$. Contrary to their claim, we
show a protocol for rational secret sharing among $n=2$
players; our protocol extends to the case $n\geq 3$, where it is
simpler than the Halpern-Teague solution and also offers a number
of other advantages. We also show how to avoid the continual involvement of the dealer,
in either our own protocol or that of Halpern and Teague.
Our techniques extend to the case of rational players trying to securely compute an arbitrary function, under certain
assumptions on the utilities of the players.
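One mechanism common to protocols in this area can be sketched concretely: the round in which the true secret is released is chosen geometrically at random, so a rational player who deviates by quitting after receiving a value most likely holds a fake and forfeits the real secret. The sketch below shows only this round-selection mechanism; the share exchange, utility assumptions, and equilibrium argument are omitted, and the parameter values are illustrative, not those of the paper.

```python
import random

def run_protocol(secret, p_real=0.25, rng=random):
    """Each round releases either a fake value or, with prob p_real, the secret."""
    round_no = 0
    while True:
        round_no += 1
        if rng.random() < p_real:
            return secret, round_no       # the real round: everyone learns the secret
        fake = rng.randrange(10**6)       # a fake value, indistinguishable a priori
        # cooperative players exchange `fake` and continue to the next round

rng = random.Random(7)
value, rounds = run_protocol(secret=424242, rng=rng)
print(value, rounds)  # terminates with the true secret after ~1/p_real rounds on average
```

Because a player cannot tell the real round from a fake one until after cooperating, quitting early yields the secret only with probability `p_real`, which (for suitable utilities) makes continued cooperation rational.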

### Teaching

- MATH 199 Math, Game Theory and the Theory of Games