Limits of Provable Security From Standard Assumptions

Rafael Pass∗
Cornell University

April 2, 2011

Abstract
We show that the security of some well-known cryptographic protocols, primitives and assumptions (e.g., the Schnorr identification scheme, commitments secure under adaptive selective decommitment, the "one-more" discrete logarithm assumption) cannot be based on any standard assumption using a Turing (i.e., black-box) reduction. These results follow from a general result showing that Turing reductions cannot be used to prove security of constant-round sequentially witness-hiding special-sound protocols for unique witness relations based on standard assumptions; we emphasize that this result holds even if the protocol makes non-black-box use of the underlying assumption.
∗ First version from November 4, 2010. This revision contains new results on blind signatures (Section 9). Pass is supported in part by an Alfred P. Sloan Fellowship, a Microsoft New Faculty Fellowship, NSF CAREER Award CCF-0746990, AFOSR Award FA9550-08-1-0197, and BSF Grant 2006317.
Contents

1 Introduction
  1.1 Our Results
  1.2 Other Related Work
  1.3 Proof Techniques
  1.4 On Non-black-box Reductions
  1.5 Overview
2 Preliminaries
  2.1 Notation
  2.2 Basic Definitions
  2.3 Black-box Reductions
1 Introduction

Modern cryptography relies on the principle that cryptographic schemes are proven secure based on mathematically precise assumptions; these can be general—such as the existence of one-way functions—or specific—such as the hardness of factoring products of large primes. The security proof is a reduction that transforms any attacker A of the scheme into a machine that breaks the underlying assumption (e.g., inverts an alleged one-way function). This approach has been extremely successful: during the past four decades many cryptographic tasks have been put under rigorous treatment, and numerous constructions realizing these tasks have been proposed under a number of well-studied complexity-theoretic intractability assumptions. But some well-known protocols, primitives, and assumptions have resisted security reductions under well-studied intractability assumptions.
The Schnorr identification scheme Schnorr's identification scheme [Sch91] is one of the most well-known and commonly-used identification schemes; for instance, recent usage includes the BlackBerry Router protocol. Schnorr showed security of this scheme under a passive (eavesdropper) attack based on the discrete logarithm assumption. But what about active (malicious) attacks? In particular, can security under active—even just sequential—attacks be based on the discrete logarithm assumption, or any other "standard" intractability assumption?1

The adaptive selective decommitment problem Assume a polynomial-time adversary receives a number of commitments and may then adaptively request openings of, say, half of them. Do the unopened commitments still remain hiding? This problem was first formalized by Dwork, Naor, Reingold and Stockmeyer [DNRS03], but according to them it arose over 25 years ago in the context of distributed computing.2 As noted by Dwork et al., random-oracle-based commitment schemes easily satisfy this property. Can we construct non-interactive (or even two-round) commitment schemes that provably satisfy this property based on any standard intractability assumption?3

One-more inversion assumptions Can a polynomial-time algorithm A, given a prime-order group G and a generator g for G, find the discrete logarithms (w.r.t. the generator g) of ℓ "target" points y1, ..., yℓ ∈ G if it may make ℓ − 1 queries to a discrete logarithm oracle (for the group G and generator g)? The "one-more" discrete-logarithm assumption states that no such algorithm exists. Assumptions of this "one-more inversion" type were introduced by Bellare, Namprempre, Pointcheval and Semanko [BNPS03] and have been used to prove the security of a number of practical schemes; for instance, Bellare and Palacio [BP02] have shown active security of Schnorr's identification scheme under the one-more discrete logarithm assumption.
But can the security of these types of one-more inversion assumptions be based on more standard types of assumptions?
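To make the game concrete, the one-more discrete-log experiment can be sketched in a few lines of Python. This is our own illustrative rendering, not code from the paper; the group is a toy (the order-11 subgroup of Z_23^*) and far too small to be secure, which is the only reason the brute-force steps below are feasible.

```python
import random

# Toy group: g = 2 generates the order-11 subgroup of Z_23^*.
p, q, g = 23, 11, 2
l = 3  # number of target points

def dlog(y):
    # Discrete-log by exhaustive search; feasible only because the group is tiny.
    return next(e for e in range(q) if pow(g, e, p) == y)

def one_more_dl_game(attacker):
    # Challenger: sample l targets, grant at most l-1 oracle queries,
    # and output 1 iff the attacker returns all l discrete logarithms.
    targets = [pow(g, random.randrange(q), p) for _ in range(l)]
    budget = [l - 1]
    def oracle(y):
        assert budget[0] > 0, "oracle budget exhausted"
        budget[0] -= 1
        return dlog(y)
    answers = attacker(targets, oracle)
    return int(all(pow(g, a % q, p) == y for a, y in zip(answers, targets)))

# An attacker that queries the oracle on l-1 targets and brute-forces the
# last one; this wins here only because exhaustive search is feasible.
def brute_force_attacker(targets, oracle):
    return [oracle(y) for y in targets[:-1]] + [dlog(targets[-1])]

print(one_more_dl_game(brute_force_attacker))  # → 1
```

The assumption asserts that in a cryptographically large group, no probabilistic polynomial-time attacker wins this game with non-negligible probability.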
1 As we shall discuss shortly, Bellare and Palacio [BP02] have demonstrated that Schnorr's scheme is secure under active attacks under a new type of "one-more inversion" assumption.
2 We remark that DNRS focused their treatment on a non-adaptive version of this game, where the adversary must select all the commitments to be opened in a single shot; the general adaptive version is considered in Remark 7.1 of [DNRS03].
3 Interestingly, the related question of constructing encryption schemes secure under selective decryption based on "standard-type" assumptions has recently been solved [BHY09].

Unique blind signatures In 1982, Chaum [Cha82] introduced the concept of a "blind signature"—roughly speaking, a signature scheme where a user may ask a signer S to sign a message m while keeping the content of m secret from S—and provided its first implementation. His scheme is non-interactive (just as traditional signature schemes) and has the desirable property that for every message there is a unique valid signature. Can the security of his scheme, or more generally, any "unique non-interactive blind signature" (i.e., a non-interactive blind signature scheme with unique signatures), be based on standard assumptions?
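A minimal sketch of the RSA-based blinding idea behind Chaum's construction may help; the parameters below are toy values we chose for illustration (utterly insecure), and the variable names are ours.

```python
# Toy RSA parameters: N = 3*11, phi(N) = 20, and e*d = 21 = 1 (mod 20).
N, e, d = 33, 3, 7
m = 5                # message, assumed already encoded into Z_N^*

# User blinds the message with a random unit r: m' = m * r^e mod N.
r = 2
blinded = (m * pow(r, e, N)) % N

# Signer signs the blinded value without ever learning m.
blind_sig = pow(blinded, d, N)

# User unblinds: sigma = blind_sig * r^{-1} = m^d mod N.
sigma = (blind_sig * pow(r, -1, N)) % N

assert sigma == pow(m, d, N)    # the signature on m
assert pow(sigma, e, N) == m    # standard RSA verification
```

Note that sigma = m^d mod N is a deterministic function of m, so every message has exactly one valid signature; this is the uniqueness property referred to above.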
Witness hiding of parallelized versions of classical zero-knowledge protocols It is well-known that parallelized versions of the classic three-round zero-knowledge protocols of [GMR89, GMW91, Blu86] are witness indistinguishable [FS90]. As shown by Feige and Shamir [FS90], for certain languages with multiple witnesses, they are also witness hiding under sequential (and even concurrent) composition. Can we prove that these protocols are also witness hiding under sequential composition for languages with unique witnesses, based on any standard assumption?
In this paper, we present negative answers to the above questions for a very liberal deﬁnition of “standard intractability assumptions”, if we restrict to Turing (i.e., black-box) reductions—that is, reductions that use the alleged attacker A as a black-box.
More precisely, following Naor [Nao03] (see also [DOP05, HH09, RV10]), we model an intractability assumption as an arbitrary game between a (potentially) unbounded challenger C and an attacker A.4 A is said to break the assumption C with respect to the threshold t if it can make C output 1 with probability non-negligibly higher than the threshold t. The only restriction we put on C is that it communicates with A in an a priori fixed polynomial number of rounds; we refer to such assumptions as polynomial-round intractability assumptions. All traditional cryptographic hardness assumptions (e.g., the hardness of factoring, the hardness of the discrete logarithm problem, the decisional Diffie-Hellman problem, etc.) can be modeled as 2-round challengers C with the threshold t being either 0 (in the case of the factoring or discrete logarithm problems) or 1/2 (in the case of the decisional Diffie-Hellman problem). But also the "one-more discrete logarithm assumption" for a fixed polynomial ℓ, or the assumption that a specific protocol (P, V) is witness-hiding under a single, or even an a priori bounded number of, interactions, can be modeled as polynomial-round challengers C (with the threshold t = 0).
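To illustrate the model, the discrete-logarithm assumption corresponds to a 2-round challenger with threshold t = 0. The Python sketch below is our own rendering with a toy group and a hypothetical interface; the assumption asserts that no probabilistic polynomial-time attacker wins with probability non-negligibly above 0 (the brute-force attacker below succeeds only because the group is tiny).

```python
import random

# Toy group: g = 2 generates the order-11 subgroup of Z_23^*.
p, q, g = 23, 11, 2

def challenger(attacker):
    # Round 1: C sends a random instance y = g^x.
    x = random.randrange(q)
    y = pow(g, x, p)
    # Round 2: A answers; C outputs 1 iff the answer is a discrete log of y.
    guess = attacker(p, q, g, y)
    return int(pow(g, guess % q, p) == y)

# Exhaustive-search attacker; exponential time in a real group.
def brute_force(p, q, g, y):
    return next(e for e in range(q) if pow(g, e, p) == y)

print(challenger(brute_force))  # → 1
```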
1.1 Our Results

Our main result shows that it is impossible to (Turing) reduce any polynomial-round intractability assumption to the witness hiding under sequential composition property of certain types of constant-round argument-of-knowledge protocols—called "computationally special-sound" protocols—for languages with unique witnesses. All our specific lower bounds follow as corollaries of this result.
4 In contrast to [Nao03], we do not insist that the challenger is efficient; since our aim is to prove lower bounds, this only strengthens our results.

Recall that a three-round public-coin interactive proof is said to be special-sound [CDS94] if a valid witness to the statement x can be efficiently computed from any two accepting proof transcripts of x which have the same first message but different second messages. We consider a relaxation of this notion—which we simply call computational special-soundness—where a) the number of communication rounds is any constant (instead of just three), b) the extractor may need a polynomial number of accepting transcripts (instead of just two), and c) extraction need only succeed if the transcripts are generated by communicating with a computationally-bounded prover. All traditional constant-round public-coin proof of knowledge protocols (such as [GMR89, GMW91, Blu86, Sch91], as well as instantiations of [GMW91, Blu86] using statistically-hiding commitments) satisfy this property, and continue to do so under parallel repetition.
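As a concrete instance of (perfect) special soundness, consider Schnorr's protocol [Sch91]: from two accepting transcripts that share the first message but have different challenges, the witness falls out by linear algebra over the exponent group. The toy sketch below (insecure parameters of our choosing) runs the verification equation and the extractor.

```python
# Toy group: g = 2 generates the order-11 subgroup of Z_23^*.
p, q, g = 23, 11, 2
x = 7                     # witness: discrete log of y
y = pow(g, x, p)          # statement: y = g^x

# Two accepting transcripts (a, c, z) with the same first message a
# but different challenges c1 != c2.
r = 5
a = pow(g, r, p)          # prover's first message a = g^r
c1, c2 = 3, 9
z1 = (r + c1 * x) % q     # responses z = r + c*x mod q
z2 = (r + c2 * x) % q

# Both transcripts satisfy the verification equation g^z == a * y^c (mod p).
assert pow(g, z1, p) == (a * pow(y, c1, p)) % p
assert pow(g, z2, p) == (a * pow(y, c2, p)) % p

# Special-soundness extractor: x = (z1 - z2) * (c1 - c2)^{-1} mod q.
extracted = (z1 - z2) * pow(c1 - c2, -1, q) % q
assert extracted == x
print(extracted)  # → 7
```

Computational special-soundness relaxes exactly this template: more rounds, polynomially many transcripts, and extraction guaranteed only against efficient provers.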
Theorem 1 (Main Theorem, informally stated). Let (P, V) be a constant-round computationally special-sound interactive argument with super-logarithmic-length verifier messages for a language L with unique witnesses. Assume there exists a polynomial-time Turing reduction R such that RA breaks the r(·)-round assumption C (where r is a polynomial) w.r.t. the threshold t for every A that breaks sequential witness hiding of (P, V). Then C can be broken with respect to the threshold t in polynomial time.
Our main theorem is most closely related to the work of Haitner, Rosen and Shaltiel [HRS09].
As we explain in more detail in Section 1.2, [HRS09] also present lower bounds for using Turing reductions to demonstrate witness hiding for constant-round public-coin protocols. They consider a more general class of protocols than we do, and they present a lower bound for "stand-alone" (as opposed to sequential) witness hiding. However, their lower bound only applies to restricted classes of Turing reductions, whereas our lower bound applies to arbitrary Turing reductions.
Our main theorem directly rules out using Turing reductions for demonstrating sequential witness hiding of parallelized versions of classical zero-knowledge protocols for languages with unique witnesses based on polynomial-round intractability assumptions. We next show that all the previously mentioned questions can be restated in the language of “sequential witness hiding for unique witness languages”, and we can thus use our main theorem to provide negative answers also to those questions.
Schnorr's identification scheme: To rule out Turing reductions for proving security of the Schnorr identification scheme, note that Schnorr's protocol is a special-sound proof for a unique witness language. Next, if Schnorr's protocol is not sequentially witness hiding, then it cannot be a secure identification scheme: the witness in this proof is the identifier's secret key, so if an attacker can recover it after hearing polynomially many proofs, we can trivially violate the security of the identification scheme. (We mention that, on the other hand, the related scheme by Okamoto [Oka92] can be proven secure based on the discrete logarithm assumption. However, the language considered in Okamoto's protocol does not have unique witnesses.)

Adaptive selective decommitment: To rule out solutions to the selective-decommitment problem, we extend one of the results from [DNRS03] to show that if the commitment scheme in GMW's Graph 3-Coloring protocol is implemented with non-interactive (or two-round) commitments that are secure under adaptive selective decommitment, then the resulting protocol is sequentially witness hiding for any efficiently samplable hard language with unique witnesses;5 so, assuming the existence of an efficiently samplable hard language with unique witnesses, Turing reductions cannot be employed to reduce adaptive selective-decommitment security of such commitments to any polynomial-round intractability assumption.
One-more inversion assumptions: As mentioned, Bellare and Palacio [BP02] have shown that the security of the Schnorr scheme can be based on the "one-more discrete log" assumption, so by their work, we directly get that polynomial-round intractability assumptions cannot be Turing-reduced to the one-more discrete log assumption. By directly constructing appropriate special-sound protocols, we can generalize this result to rule out even weaker types of "many-more" discrete logarithm assumptions (where we require that it is hard to find ℓ inverses when having access to only ℓ − k inverses, for some k > 0). Using the same approach, we get that polynomial-round intractability assumptions cannot be Turing-reduced to many-more variants of the RSA problem either (and, more generally, of any family of certified permutations that is additively homomorphic).

5 Dwork, Naor, Reingold and Stockmeyer [DNRS03] show that the GMW protocol instantiated with "plain" (i.e., non-adaptive) selective-decommitment secure commitments satisfies some weak notions of zero-knowledge which, in particular, imply single-instance witness hiding for unique witness relations.
Unique blind signatures: The notion of unforgeability for blind signature schemes [PS00] requires that no attacker having requested ℓ signatures (where ℓ is an arbitrary polynomial) can come up with ℓ + 1 signatures. We show that the existence of a unique non-interactive blind signature scheme implies that, for every polynomial ℓ, there exists a computationally special-sound protocol for a unique witness language that is witness hiding under sequential repetition;
this suﬃces for applying our main theorem to rule out using Turing reductions for basing the security of such blind signature schemes on polynomial-round intractability assumptions.