Papers updated in last 183 days (1504 results)

Last updated:  2024-03-29
Faster Amortized FHEW bootstrapping using Ring Automorphisms
Gabrielle De Micheli, Duhyeong Kim, Daniele Micciancio, and Adam Suhl
Amortized bootstrapping offers a way to simultaneously refresh many ciphertexts of a fully homomorphic encryption scheme, at a total cost comparable to that of refreshing a single ciphertext. An amortization method for FHEW-style cryptosystems was first proposed by Micciancio and Sorrell (ICALP 2018), who showed that the amortized cost of bootstrapping $n$ FHEW-style ciphertexts can be reduced from $O(n)$ basic cryptographic operations to just $O(n^{\epsilon})$, for any constant $\epsilon>0$. However, despite the promising asymptotic saving, the algorithm was rather impractical due to a large constant (exponential in $1/\epsilon$) hidden in the asymptotic notation. In this work, we propose an alternative amortized bootstrapping method with much smaller overhead, still achieving $O(n^\epsilon)$ asymptotic amortized cost, but with a hidden constant that is only linear in $1/\epsilon$, and with reduced noise growth. This is achieved following the general strategy of Micciancio and Sorrell, but replacing their use of the Nussbaumer transform with a much more practical Number Theoretic Transform, with multiplication by twiddle factors implemented using ring automorphisms. A key technical ingredient to do this is a new "scheme switching" technique proposed in this paper which may be of independent interest.
Last updated:  2024-03-29
Introducing Clapoti(s): Evaluating the isogeny class group action in polynomial time
Aurel Page and Damien Robert
In this short note, we present a simplified (but slower) version Clapoti of Clapotis, whose full description will appear later. Let 𝐸/𝔽_𝑞 be an elliptic curve with an effective primitive orientation by a quadratic imaginary order 𝑅 ⊂ End(𝐸). Let 𝔞 be an invertible ideal in 𝑅. Clapoti is a randomized algorithm that computes the class group action 𝐸 ↦ 𝐸_𝔞 ≃ 𝐸/𝐸[𝔞] in polynomial time, namely (log Δ_𝑅 + log 𝑞)^𝑂(1) operations.
Last updated:  2024-03-29
Certified Everlasting Secure Collusion-Resistant Functional Encryption, and More
Taiga Hiroka, Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, Tapas Pal, and Takashi Yamakawa
We study certified everlasting secure functional encryption (FE) and many other cryptographic primitives in this work. Certified everlasting security roughly means the following. A receiver possessing a quantum cryptographic object (such as ciphertext) can issue a certificate showing that the receiver has deleted the cryptographic object and information included in the object (such as plaintext) was lost. If the certificate is valid, the security is guaranteed even if the receiver becomes computationally unbounded after the deletion. Many cryptographic primitives are known to be impossible (or unlikely) to have information-theoretic security even in the quantum world. Hence, certified everlasting security is a nice compromise (intrinsic to quantum). In this work, we define certified everlasting secure versions of FE, compute-and-compare obfuscation, predicate encryption (PE), secret-key encryption (SKE), public-key encryption (PKE), receiver non-committing encryption (RNCE), and garbled circuits. We also present the following constructions: - Adaptively certified everlasting secure collusion-resistant public-key FE for all polynomial-size circuits from indistinguishability obfuscation and one-way functions. - Adaptively certified everlasting secure bounded collusion-resistant public-key FE for $\mathsf{NC}^1$ circuits from standard PKE. - Certified everlasting secure compute-and-compare obfuscation from standard fully homomorphic encryption and standard compute-and-compare obfuscation. - Adaptively (resp., selectively) certified everlasting secure PE from standard adaptively (resp., selectively) secure attribute-based encryption and certified everlasting secure compute-and-compare obfuscation. - Certified everlasting secure SKE and PKE from standard SKE and PKE, respectively. - Certified everlasting secure RNCE from standard PKE. - Certified everlasting secure garbled circuits from standard SKE.
Last updated:  2024-03-29
MicroSecAgg: Streamlined Single-Server Secure Aggregation
Yue Guo, Antigoni Polychroniadou, Elaine Shi, David Byrd, and Tucker Balch
This work introduces MicroSecAgg, a framework that addresses the intricacies of secure aggregation in the single-server landscape, specifically tailored to situations where distributed trust among multiple non-colluding servers presents challenges. Our protocols are purpose-built to handle situations featuring multiple successive aggregation phases among a dynamic pool of clients who can drop out during the aggregation. Our protocols target three distinct cases: secure aggregation over a small input domain; secure aggregation over a large input domain; and federated learning with moderately sized models. Compared to the prior works of Bonawitz et al. (CCS 2017), Bell et al. (CCS 2020), and the recent work of Ma et al. (S&P 2023), our approach significantly reduces the overheads. In particular, MicroSecAgg halves the round complexity to just 3 rounds, thereby offering substantial improvements in communication cost efficiency. Notably, it outperforms Ma et al. by a factor of $n$ on the user side, where $n$ represents the number of users. Furthermore, in MicroSecAgg the computation complexity of each aggregation per user exhibits a logarithmic growth with respect to $n$, contrasting with the linearithmic or quadratic growth observed in Ma et al. and Bonawitz et al., respectively. We also require linear (in $n$) computation work from the server as opposed to quadratic in Bonawitz et al., or linearithmic in Ma et al. and Bell et al. In the realm of federated learning, a delicate tradeoff comes into play: our protocols shine brighter as the number of participating parties increases, yet they exhibit diminishing computational efficiency as the sheer volume of weights/parameters increases significantly. We report an implementation of our system and compare the performance against prior works, demonstrating that MicroSecAgg significantly reduces the computational burden and the message size.
Last updated:  2024-03-28
On Computing the Multidimensional Scalar Multiplication on Elliptic Curves
Walid Haddaji, Loubna Ghammam, Nadia El Mrabet, and Leila Ben Abdelghani
A multidimensional scalar multiplication ($d$-mul) consists of computing $[a_1]P_1+\cdots+[a_d]P_d$, where $d$ is an integer ($d\geq 2$), $a_1, \cdots, a_d$ are scalars of size $l\in \mathbb{N}^*$ bits, and $P_1, P_2, \cdots, P_d$ are points on an elliptic curve $E$. This operation ($d$-mul) is widely used in cryptography, especially in elliptic curve cryptographic algorithms. Several methods in the literature allow computing the $d$-mul efficiently (e.g., the bucket method~\cite{bernstein2012faster}, the Karabina et al. method~\cite{hutchinson2019constructing, hisil2018d, hutchinson2020new}). This paper aims to present and compare the most recent and efficient methods in the literature for computing the $d$-mul operation in terms of complexity, memory consumption, and properties. We also present our progress on optimising $d$-mul with two methods. The first method is useful if $2^d-1$ points of $E$ can be stored. It is based on a simple precomputation function. The second method works efficiently when $d$ is large and $2^d-1$ points of $E$ cannot be stored. It performs the calculation on the fly without any precomputation. We show that the main operation of our first method is $100(1-\frac{1}{d})\%$ more efficient than that of previous works, while our second method exhibits a $50\%$ improvement in efficiency. These improvements are substantiated by assessing the number of operations and by a practical implementation.
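To make the $d$-mul operation concrete, here is a minimal Python sketch of the interleaved (Straus/Shamir-style) double-and-add evaluation of $[a_1]P_1+\cdots+[a_d]P_d$. It uses integers modulo a toy $N$ as a stand-in additive group; the `add`/`dbl` placeholders, the toy modulus, and the bit length are illustrative assumptions, not the methods compared or proposed in the paper.

```python
# Minimal sketch of d-dimensional scalar multiplication [a_1]P_1 + ... + [a_d]P_d
# via interleaved (Straus/Shamir-style) double-and-add. The group here is (Z_N, +)
# as a stand-in for an elliptic-curve group; `add`, `dbl` and `ZERO` are placeholders
# to be replaced by real point operations.

N = 10007            # toy group order (illustrative)
ZERO = 0             # identity element

def add(P, Q):       # placeholder for point addition
    return (P + Q) % N

def dbl(P):          # placeholder for point doubling
    return (2 * P) % N

def d_mul(scalars, points, ell):
    """One doubling per bit position, one addition per set scalar bit."""
    acc = ZERO
    for i in reversed(range(ell)):
        acc = dbl(acc)
        for a, P in zip(scalars, points):
            if (a >> i) & 1:
                acc = add(acc, P)
    return acc

# sanity check against the definition a_1*P_1 + ... + a_d*P_d in the toy group
scalars, points = [5, 12, 9], [3, 7, 11]
assert d_mul(scalars, points, ell=8) == sum(a * P for a, P in zip(scalars, points)) % N
```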
Last updated:  2024-03-28
Improving Generic Attacks Using Exceptional Functions
Xavier Bonnetain, Rachelle Heim Boissier, Gaëtan Leurent, and André Schrottenloher
Over the past ten years, the statistical properties of random functions have been particularly fruitful for generic attacks. Initially, these attacks targeted iterated hash constructions and their combiners, developing a wide array of methods based on internal collisions and on the average behavior of iterated random functions. More recently, Gilbert et al. (EUROCRYPT 2023) introduced a forgery attack on so-called duplex-based Authenticated Encryption modes which was based on exceptional random functions, i.e., functions whose graph admits a large component with an exceptionally small cycle. In this paper, we expand the use of such functions in generic cryptanalysis with several new attacks. First, we improve the attack of Gilbert et al. from $O(2^{3c/4})$ to $O(2^{2c/3})$, where $c$ is the capacity. This new attack uses a nested pair of functions with exceptional behavior, where the second function is defined over the cycle of the first one. Next, we introduce several new generic attacks against hash combiners, notably using small cycles to improve the complexities of the best existing attacks on the XOR combiner, Zipper Hash and Hash-Twice. Last but not least, we propose the first quantum second preimage attack against Hash-Twice, reaching a quantum complexity $O(2^{3n/7})$.
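As background on attacks that exploit the cycle structure of random functions, below is a generic Python sketch (not the paper's algorithm) that measures the tail and cycle length reached from a starting point using Floyd's cycle finding; the hash-based toy function `f` and the 20-bit domain are illustrative assumptions.

```python
# Generic measurement of the tail (mu) and cycle length (lambda) of the orbit of x0
# under a function f, via Floyd's cycle finding. Attacks based on "exceptional"
# functions look for graphs whose large component has an unusually small cycle;
# this is only the standard measurement tool, not the attacks from the paper.
import hashlib

N_BITS = 20                                   # toy domain size (illustrative)

def f(x):
    h = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big") % (1 << N_BITS)

def tail_and_cycle(f, x0):
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                   # phase 1: meet inside the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0
    while tortoise != hare:                   # phase 2: tail length mu
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    lam, hare = 1, f(tortoise)
    while tortoise != hare:                   # phase 3: cycle length lambda
        hare = f(hare)
        lam += 1
    return mu, lam

mu, lam = tail_and_cycle(f, 12345)
print(f"tail = {mu}, cycle = {lam}")          # both are O(sqrt(2^N_BITS)) on average
```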
Last updated:  2024-03-27
Predicting performance for post-quantum encrypted-file systems
Daniel J. Bernstein
Public-key cryptography is widely deployed for encrypting stored files. This paper uses microbenchmarks and purchase costs to predict the performance of various post-quantum KEMs in this application. In particular, this paper concludes that Classic McEliece is (1) the most efficient option and (2) easily affordable. As a quantitative example, the estimated five-year per-user cost of mceliece6960119f is 0.00024 dollars if 100000 files are encrypted and stored for an average user during those five years and each file is decrypted twice on average.
Last updated:  2024-03-27
Statistical testing of random number generators and their improvement using randomness extraction
Cameron Foreman, Richie Yeung, and Florian J. Curchod
Random number generators (RNGs) are notoriously hard to build and test, especially in a cryptographic setting. Although one cannot conclusively determine the quality of an RNG by testing the statistical properties of its output alone, running numerical tests is both a powerful verification tool and the only universally applicable method. In this work, we present and make available a comprehensive statistical testing environment (STE) that is based on existing statistical test suites. The STE can be parameterised to run lightweight (i.e. fast) all the way to intensive testing, which goes far beyond what is required by certification bodies. With it, we benchmark the statistical properties of several RNGs, comparing them against each other. We then present and implement a variety of post-processing methods, in the form of randomness extractors, which improve the RNG's output quality under different sets of assumptions and analyse their impact through numerical testing with the STE.
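As a toy illustration of post-processing RNG output with a randomness extractor, the sketch below applies the classic von Neumann extractor to a biased bit source; the paper's extractors are more general and come with different assumptions, and the bias model and sample size here are illustrative only.

```python
# Classic von Neumann extractor: turns independent but biased bits into unbiased
# ones by keeping the first bit of each unequal pair and discarding equal pairs.
# Only a toy illustration of the post-processing idea, not the extractors
# implemented and analysed in the paper.
import random

def von_neumann(bits):
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:            # pair 01 -> output 0, pair 10 -> output 1
            out.append(b0)
    return out

raw = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]   # biased toy source
ext = von_neumann(raw)
print(sum(raw) / len(raw), sum(ext) / len(ext))   # roughly 0.70 vs 0.50
```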
Last updated:  2024-03-27
Updatable Policy-Compliant Signatures
Christian Badertscher, Monosij Maitra, Christian Matt, and Hendrik Waldner
Policy-compliant signatures (PCS) are a recently introduced primitive by Badertscher et al. [TCC 2021] in which a central authority distributes secret and public keys associated with sets of attributes (e.g., nationality, affiliation with a specific department, or age) to its users. The authority also enforces a policy determining which senders can sign messages for which receivers based on a joint check of their attributes. For example, senders and receivers must have the same nationality, or only senders that are at least 18 years old can send to members of the computer science department. PCS further requires attribute-privacy – nothing about the users’ attributes is revealed from their public keys and signatures apart from whether the attributes satisfy the policy or not. The policy in a PCS scheme is fixed once and for all during the setup. Therefore, a policy update requires a redistribution of all keys. This severely limits the practicality of PCS. In this work, we introduce the notion of updatable policy-compliant signatures (UPCS) extending PCS with a mechanism to efficiently update the policy without redistributing keys to all participants. We define the notion of UPCS and provide the corresponding security definitions. We then provide a generic construction of UPCS based on digital signatures, a NIZK proof system, and a so-called secret-key two-input partially-hiding predicate encryption (2-PHPE) scheme. Unfortunately, the only known way to build the latter for general two-input predicates is using indistinguishability obfuscation. We show that the reliance on the heavy tool of 2-PHPE is inherent to build UPCS by proving that non-interactive UPCS implies 2-PHPE. To circumvent the reliance on 2-PHPE, we consider interactive UPCS, which allows the sender and receiver to interact during the message signing procedure. In this setting, we present two schemes: the first one requires only a digital signature scheme, a NIZK proof system, and secure two-party computation. This scheme works for arbitrary policies, but requires sender and receiver to engage in a two-party computation protocol for each policy update. Our second scheme additionally requires a (single-input) predicate-encryption scheme but, in turn, only requires a single interaction between sender and receiver, independent of the updates. In contrast to 2-PHPE, single-input predicate encryption for certain predicate classes is known to exist (e.g., from pairings) under more concrete and well-understood assumptions.
Last updated:  2024-03-27
Single Server PIR via Homomorphic Thorp Shuffles
Ben Fisch, Arthur Lazzaretti, Zeyu Liu, and Charalampos Papamanthou
Private Information Retrieval (PIR) is a two-player protocol where the client, given some query $x \in [N]$, interacts with the server, which holds an $N$-bit string $\textsf{DB}$, in order to privately retrieve $\textsf{DB}[x]$. In this work, we focus on the single server client-preprocessing model, initially idealized by Corrigan-Gibbs and Kogan (EUROCRYPT 2020), where the client and server first run some joint preprocessing algorithm, after which the client can retrieve elements of the server's string $\textsf{DB}$ privately in time sublinear in $N$. All known constructions of single server client-preprocessing PIR rely on one of the following two paradigms: (1) a linear-bandwidth offline phase where the client downloads the whole database from the server, or (2) a sublinear-bandwidth offline phase where, however, the server has to compute a large-depth ($O_\lambda (N)$) circuit under FHE in order to execute the preprocessing phase. In this paper, we construct a single server client-preprocessing PIR scheme which achieves both sublinear offline bandwidth (the client does not have to download the whole database offline) and a low-depth (i.e. $O_\lambda(1)$), highly parallelizable preprocessing circuit. We estimate that on a single thread, our scheme's preprocessing time should be more than $350\times$ faster than in prior single server client-preprocessing PIR constructions. Moreover, with parallelization, the latency reduction would be even more drastic. In addition, this construction also allows for updates in $O_\lambda (1)$ time, something not achieved before in this model.
Last updated:  2024-03-27
One Tree to Rule Them All: Optimizing GGM Trees and OWFs for Post-Quantum Signatures
Carsten Baum, Ward Beullens, Shibam Mukherjee, Emmanuela Orsini, Sebastian Ramacher, Christian Rechberger, Lawrence Roy, and Peter Scholl
The use of MPC-in-the-Head (MPCitH)-based zero-knowledge proofs of knowledge (ZKPoK) to prove knowledge of a preimage of a one-way function (OWF) is a popular approach towards constructing efficient post-quantum digital signatures. Starting with the Picnic signature scheme, many optimized MPCitH signatures using a variety of (candidate) OWFs have been proposed. Recently, Baum et al. (CRYPTO 2023) showed a fundamental improvement to MPCitH, called VOLE-in-the-Head (VOLEitH), which can generically reduce the signature size by at least a factor of two without decreasing computational performance or introducing new assumptions. Based on this, they designed the FAEST signature which uses AES as the underlying OWF. However, in comparison to MPCitH, the behavior of VOLEitH when using other OWFs is still unexplored. In this work, we improve a crucial building block of the VOLEitH and MPCitH approaches, the so-called all-but-one vector commitment, thus decreasing the signature size of VOLEitH and MPCitH signature schemes. Moreover, by introducing a small Proof of Work into the signing procedure, we can improve the parameters of VOLEitH (further decreasing signature size) without compromising the computational performance of the scheme. Based on these optimizations, we propose three VOLEitH signature schemes FAESTER, KuMQuat, and MandaRain based on AES, MQ, and Rain, respectively. We carefully explore the parameter space for these schemes and implement each, showcasing their performance with benchmarks. Our experiments show that these three signature schemes outperform MPCitH-based competitors that use comparable OWFs, in terms of both signature size and signing/verification time.
Last updated:  2024-03-27
Guess and Determine Analysis Based on Set Split
Zhe CEN, Xiutao FENG, Zhangyi WANG, Yamin ZHU, and Chunping CAO
The guess-and-determine attack is a common method in cryptanalysis. Its idea is to first find a set of variables from which all remaining variables of a cipher can be deduced, and then to traverse all values of these variables to find a solution. Exhaustive search is usually used to find such a set of variables, but it is no longer applicable when the number of variables is somewhat large. In this work we propose a guess-and-determine analysis based on set split to find as few variables as possible in the first step of the guess-and-determine attack; it is a kind of exhaustive search that trades space for time and is more effective than plain exhaustive search. First, we present the idea of set split in detail by introducing concepts such as the base set and the likely solution region. We then discuss how to use set split to carry out a guess-and-determine analysis and give a concrete implementation scheme. Finally, comparing it with the two other guess-and-determine analyses based on exhaustive search and on the MILP method, we illustrate the effectiveness of our method on the two ciphers SNOW 2.0 and Enocoro-128v2. Our method spends about 0.000103 seconds finding a best solution of 9 variables for the former and 0.13 seconds finding a best solution of 18 variables for the latter on a personal MacBook, which is better than both exhaustive search and the MILP method.
Last updated:  2024-03-27
Obfuscation of Evasive Algebraic Set Membership
Steven D. Galbraith and Trey Li
We define the membership function of a set as the function that determines whether an input is an element of the set. Canetti, Rothblum, and Varia showed how to obfuscate evasive membership functions of hyperplanes over a finite field of order an exponentially large prime, assuming the hardness of a modified decisional Diffie-Hellman problem. Barak, Bitansky, Canetti, Kalai, Paneth, and Sahai extended their work from hyperplanes to hypersurfaces of bounded degree, assuming multilinear maps. Both works are limited to algebraic sets over large fields of prime orders, and are based on less standard assumptions, although they prove virtual black-box security. In this paper, we handle much more general algebraic sets based on more standard assumptions, and prove input-hiding security, which is neither weaker nor stronger than virtual black-box security (i.e., the two notions are incomparable). Our first obfuscator handles affine algebraic sets over finite fields of order an arbitrary prime power. It is based on the preimage-resistance property of cryptographic hash function families. Our second obfuscator applies to both affine and projective algebraic sets over finite fields of order a polynomial-size prime power. It is based on the same hardness assumption(s) required by input-hiding small superset obfuscation. Our paper is the first to handle the obfuscation problem of projective algebraic sets over small finite fields.
Last updated:  2024-03-27
Accelerating BGV Bootstrapping for Large $p$ Using Null Polynomials Over $\mathbb{Z}_{p^e}$
Shihe Ma, Tairong Huang, Anyu Wang, and Xiaoyun Wang
The BGV scheme is one of the most popular FHE schemes for computing homomorphic integer arithmetic. The bootstrapping technique of BGV is necessary to evaluate arbitrarily deep circuits homomorphically. However, the BGV bootstrapping performs poorly for large plaintext prime $p$ due to its digit removal procedure exhibiting a computational complexity of at least $O(\sqrt{p})$. In this paper, we propose optimizations for the digit removal procedure with large $p$ by leveraging the properties of null polynomials over the ring $\mathbb{Z}_{p^e}$. Specifically, we demonstrate that it is possible to construct low-degree null polynomials based on two observations of the input to the digit removal procedure: 1) the support size of the input can be upper-bounded by $(2B+1)^2$; 2) the size of the lower digits to be removed can be upper-bounded by $B$. Here $B$ can be controlled within a narrow interval $[22,23]$ in our parameter selection, making the degree of these null polynomials much smaller than $p$ for large values of $p$. These low-degree null polynomials can significantly reduce the polynomial degrees during homomorphic digit removal, thereby decreasing both running time and capacity consumption. Theoretically, our optimizations reduce the computational cost of extracting a single digit from $O(\sqrt{pe})$ (by Chen and Han) or $O(\sqrt{p}\sqrt[4]{e})$ (by Geelen et al.) to $\min(2B+1,\sqrt{\lceil e/t\rceil(2B+1)})$ for some $t\ge 1$. We implement and benchmark our method on HElib with $p=17,127,257,8191$ and $65537$. With our optimized digit removal, we achieve a bootstrapping throughput $1.38\sim151$ times that in HElib, with the speedup increasing with the value of $p$. For $p=65537$, we accelerate the digit removal step by 80 times and reduce the bootstrapping time from more than 12 hours to less than 14 minutes.
Last updated:  2024-03-27
CCA Security with Short AEAD Tags
Mustafa Khairallah
The size of the authentication tag represents a significant overhead for applications that are limited by bandwidth or memory. Hence, some authenticated encryption designs have a smaller tag than the required privacy level, which was also suggested by the NIST lightweight cryptography standardization project. At ToSC 2022, two papers raised questions about the IND-CCA security of AEAD schemes in this situation. These papers show that (a) online AE cannot provide IND-CCA security beyond the tag length, and (b) it is possible to have IND-CCA security beyond the tag length in a restricted Encrypt-then-Encipher framework. In this paper, we address some of the remaining gaps in this area. Our main result is to show that, for a fixed stretch, Pseudo-Random Injection security implies IND-CCA security as long as the minimum ciphertext size is at least as large as the required IND-CCA security level. We also show that this bound is tight and that any AEAD scheme that allows empty plaintexts with a fixed stretch cannot achieve IND-CCA security beyond the tag length. Next, we look at the weaker notion of MRAE security, and show that two-pass schemes that achieve MRAE security do not achieve IND-CCA security beyond the tag size. This includes SIV and rugged PRPs.
Last updated:  2024-03-27
Divide and Surrender: Exploiting Variable Division Instruction Timing in HQC Key Recovery Attacks
Robin Leander Schröder, Stefan Gast, and Qian Guo
We uncover a critical side-channel vulnerability in the Hamming Quasi-Cyclic (HQC) round 4 optimized implementation arising due to the use of the modulo operator. In some cases, compilers optimize uses of the modulo operator with compile-time known divisors into constant-time Barrett reductions. However, this optimization is not guaranteed: for example, when a modulo operation is used in a loop the compiler may emit division (div) instructions which have variable execution time depending on the numerator. When the numerator depends on secret data, this may yield a timing side-channel. We name vulnerabilities of this kind Divide and Surrender (DaS) vulnerabilities. For processors supporting Simultaneous Multithreading (SMT) we propose a new approach called DIV-SMT which enables precisely measuring small division timing variations using scheduler and/or execution unit contention. We show that using only 100 such side-channel traces we can build a Plaintext-Checking (PC) oracle with above 90% accuracy. Our approach may also prove applicable to other instances of the DaS vulnerability, such as KyberSlash. We stress that exploitation with DIV-SMT requires co-location of the attacker on the same physical core as the victim. We then apply our methodology to HQC and present a novel way to recover HQC secret keys faster, achieving an 8-fold decrease in the number of idealized oracle queries when compared to previous approaches. Our new PC oracle attack uses our newly developed Zero Tester method to quickly determine whether an entire block of bits contains only zero-bits. The Zero Tester method enables the DIV-SMT powered attack on HQC-128 to complete in under 2 minutes on our targeted AMD Zen2 machine.
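For intuition, the sketch below shows the Barrett-reduction idea for a compile-time-known divisor: replacing `x % d` by a multiplication, a shift, and a conditional correction avoids a variable-latency division instruction. It is a generic Python illustration, not the HQC implementation or the compiler's emitted code; the divisor and the 32-bit precision are illustrative assumptions.

```python
# Barrett reduction for a fixed divisor d: x % d computed with a multiplication,
# a shift and one conditional correction, with no data-dependent division.
# This mirrors the constant-time pattern a compiler may emit for `x % d` with d
# known at compile time; it is not the HQC code or the emitted assembly.

def barrett_setup(d, k=32):
    return (1 << k) // d, k                # precomputed reciprocal m ~ 2^k / d

def barrett_mod(x, d, m, k):
    q = (x * m) >> k                       # approximate quotient, never exceeds x // d
    r = x - q * d
    if r >= d:                             # off by at most one for x < 2^k
        r -= d
    return r

d = 12323                                  # illustrative fixed modulus
m, k = barrett_setup(d)
assert all(barrett_mod(x, d, m, k) == x % d for x in range(0, 1 << 20, 997))
```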
Last updated:  2024-03-26
SoK: Public Randomness
Alireza Kavousi, Zhipeng Wang, and Philipp Jovanovic
Public randomness is a fundamental component in many cryptographic protocols and distributed systems and often plays a crucial role in ensuring their security, fairness, and transparency properties. Driven by the surge of interest in blockchain and cryptocurrency platforms and the usefulness of such a building block in those areas, designing secure protocols to generate public randomness in a distributed manner has received considerable attention in recent years. This paper presents a systematization of knowledge on the topic of public randomness with a focus on cryptographic tools providing public verifiability and key themes underlying these systems. We provide concrete insights on how state-of-the-art protocols achieve this task efficiently in an adversarial setting and present various research gaps that may be of interest for future research.
Last updated:  2024-03-26
Real-Valued Somewhat-Pseudorandom Unitaries
Zvika Brakerski and Nir Magrafta
We explore a very simple distribution of unitaries: random (binary) phase -- Hadamard -- random (binary) phase -- random computational-basis permutation. We show that this distribution is statistically indistinguishable from random Haar unitaries for any polynomial set of orthogonal input states (in any basis) with polynomial multiplicity. This shows that even though real-valued unitaries cannot be completely pseudorandom (Haug, Bharti, Koh, arXiv:2306.11677), we can still obtain some pseudorandom properties without giving up on the simplicity of a real-valued unitary. Our analysis shows that an even simpler construction: applying a random (binary) phase followed by a random computational-basis permutation, would suffice, assuming that the input is orthogonal and flat (that is, has high min-entropy when measured in the computational basis). Using quantum-secure one-way functions (which imply quantum-secure pseudorandom functions and permutations), we obtain an efficient cryptographic instantiation of the above.
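A small NumPy sketch of sampling from the stated distribution (random binary phase, then Hadamard, then random binary phase, then a random computational-basis permutation) is given below; the 3-qubit dimension is an illustrative assumption, and the check only confirms that the sampled matrix is real orthogonal, not any pseudorandomness property.

```python
# Sample from the distribution described above: random binary phase -> Hadamard ->
# random binary phase -> random computational-basis permutation, for 3 qubits.
# The product is a real orthogonal (hence real "unitary") matrix.
import numpy as np

def hadamard(n_qubits):
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H = np.array([[1.0]])
    for _ in range(n_qubits):
        H = np.kron(H, H1)
    return H

def sample_unitary(n_qubits, rng=None):
    rng = rng or np.random.default_rng()
    N = 2 ** n_qubits
    phase1 = np.diag(rng.choice([-1.0, 1.0], size=N))    # random binary phase
    phase2 = np.diag(rng.choice([-1.0, 1.0], size=N))    # random binary phase
    perm = np.eye(N)[rng.permutation(N)]                 # random basis permutation
    return perm @ phase2 @ hadamard(n_qubits) @ phase1   # applied right to left

U = sample_unitary(3)
assert np.allclose(U @ U.T, np.eye(8))                   # U is real orthogonal
```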
Last updated:  2024-03-26
Accountable Decryption made Formal and Practical
Rujia Li, Yuanzhao Li, Qin Wang, Sisi Duan, Qi Wang, and Mark Ryan
With the increasing scale and complexity of online activities, accountability, as an after-the-fact mechanism, has become an effective complementary approach to ensure system security. Decades of research have delved into the connotation of accountability. They fail, however, to achieve practical accountability of decryption. This paper seeks to address this gap. We consider the scenario where a client (called encryptor, her) encrypts her data and then chooses a delegate (a.k.a. decryptor, him) that stores data for her. If the decryptor initiates an illegitimate decryption on the encrypted data, there is a non-negligible probability that this behavior will be detected, thereby holding the decryptor accountable for his decryption. We make three contributions. First, we review key definitions of accountability known so far. Based on extensive investigations, we formalize new definitions of accountability specifically targeting the decryption process, denoted as accountable decryption, and discuss the (in)possibilities when capturing this concept. We also define the security goals in correspondence. Secondly, we present a novel Trusted Execution Environment(TEE)-assisted solution aligning with definitions. Instead of fully trusting TEE, we take a further step, making TEE work in the "trust, but verify" model where we trust TEE and use its service, but empower users (i.e., decryptors) to detect the potentially compromised state of TEEs. Thirdly, we implement a full-fledged system and conduct a series of evaluations. The results demonstrate that our solution is efficient. Even in a scenario involving $300,000$ log entries, the decryption process concludes in approximately $5.5$ms, and malicious decryptors can be identified within $69$ms.
Last updated:  2024-03-25
Anamorphic Encryption: New Constructions and Homomorphic Realizations
Dario Catalano, Emanuele Giunta, and Francesco Migliaro
The elegant paradigm of Anamorphic Encryption (Persiano et al., Eurocrypt 2022) considers the question of establishing a private communication in a world controlled by a dictator. The challenge is to allow two users, sharing some secret anamorphic key, to exchange covert messages without the dictator noticing, even when the latter has full access to the regular secret keys. Over the last year several works considered this question and proposed constructions, novel extensions and strengthened definitions. In this work we make progress on the study of this primitive in three main directions. First, we show that two general and well established encryption paradigms, namely hybrid encryption and the IBE-to-CCA transform, admit very simple and natural anamorphic extensions. Next, we show that anamorphism, far from being a phenomenon isolated to "basic" encryption schemes, extends also to homomorphic encryption. We show that some existing homomorphic schemes, (and most notably the fully homomorphic one by Gentry, Sahai and Waters) can be made anamorphic, while retaining their homomorphic properties both with respect to the regular and the covert message. Finally we refine the notion of anamorphic encryption by envisioning the possibility of splitting the anamorphic key into an encryption component (that only allows to encrypt covert messages) and a decryption component. This makes possible for a receiver to set up several, independent, covert channels associated with a single covert key.
Last updated:  2024-03-25
A Variation on Knellwolf and Meier's Attack on the Knapsack Generator
Florette Martinez
Pseudo-random generators are deterministic algorithms that take as input a random secret seed and output a stream of random-looking numbers. The Knapsack generator, presented by Rueppel and Massey in 1985, is one of the many attempts at designing a pseudo-random generator that is cryptographically secure. It is based on the subset-sum problem, a variant of the Knapsack optimization problem, which is considered computationally hard. In 2011, Simon Knellwolf and Willi Meier found a way to go around this hard problem and exhibited a weakness of this generator. In addition to being able to distinguish the outputs from the uniform distribution, they designed an algorithm that retrieves a large portion of the secret. We present here an alternate version of the attack, with similar costs, that works on the same range of parameters but retrieves a larger portion of the secret.
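For context, below is a heavily simplified toy model of a knapsack generator as it is usually described: an LFSR state selects which secret weights enter a subset sum modulo $2^n$, and only the leading bits of each sum are output. The register length, feedback taps, and number of discarded bits are illustrative assumptions, not the parameters of Rueppel and Massey's design or of the attacks discussed.

```python
# Heavily simplified toy model of a knapsack generator: an LFSR state selects which
# secret weights enter a subset sum modulo 2^n, and only the leading bits of each
# sum are output. Register length, feedback taps and the number of discarded bits
# are illustrative choices, not the original design's or the paper's parameters.
import random

n = 16                                        # register length / weight size (toy)
DISCARD = 4                                   # low-order bits dropped from each sum

def lfsr_step(state, taps=(0, 2, 3, 5)):      # toy feedback taps
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state >> 1) | (fb << (n - 1))) & ((1 << n) - 1)

def knapsack_keystream(state, weights, steps):
    out = []
    for _ in range(steps):
        selected = [w for j, w in enumerate(weights) if (state >> j) & 1]
        v = sum(selected) % (1 << n)          # subset sum of the selected weights
        out.append(v >> DISCARD)              # output only the leading bits
        state = lfsr_step(state)
    return out

weights = [random.getrandbits(n) for _ in range(n)]   # the secret
print(knapsack_keystream(0xACE1, weights, steps=8))
```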
Last updated:  2024-03-25
Cutting the GRASS: Threshold GRoup Action Signature Schemes
Michele Battagliola, Giacomo Borin, Alessio Meneghetti, and Edoardo Persichetti
Group actions are fundamental mathematical tools, with a long history of use in cryptography. Indeed, the action of finite groups at the basis of the discrete logarithm problem is behind a very large portion of modern cryptographic systems. With the advent of post-quantum cryptography, however, the method for building protocols shifted towards a different paradigm, centered on the difficulty of discerning 'noisy' objects, as is the case for lattices, codes, and multivariate systems. This method yields promising results for 'core' primitives such as encryption or signature, but can be less than ideal in the case when more advanced functionalities are required. In this work, we show that isomorphism problems which stem from cryptographic group actions, can be viable building blocks for threshold signature schemes. In particular, we construct a full $N$-out-of-$N$ threshold signature scheme, and discuss the efficiency issues arising from extending it to the generic $T$-out-of-$N$ case. To give a practical outlook on our constructions, we instantiate them with the LESS and MEDS frameworks, which are two flavors of code-based cryptographic group actions. Finally, we highlight some ideas that would allow for a more efficient and compact $(T,N)$ threshold variant of LESS, whose security relies on new hardness assumptions.
Last updated:  2024-03-25
Extremely Simple (Almost) Fail-Stop ECDSA Signatures
Mario Yaksetig
Fail-stop signatures are digital signatures that allow a signer to prove that a specific forged signature is indeed a forgery. After such a proof is published, the system can be stopped. We introduce a new simple ECDSA fail-stop signature scheme. Our proposal is based on the minimal assumption that an adversary with a quantum computer is not able to break the (second) preimage resistance of a cryptographically-secure hash function. Our scheme is as efficient as traditional ECDSA, does not limit the number of signatures that a signer can produce, and relies on minimal security assumptions. Using our construction, the signer has minimal computational overhead in the signature producing phase and produces a signature indistinguishable from a 'regular' ECDSA signature.
Last updated:  2024-03-25
Proteus: A Pipelined NTT Architecture Generator
Florian Hirner, Ahmet Can Mert, and Sujoy Sinha Roy
The Number Theoretic Transform (NTT) is a fundamental building block in emerging cryptographic constructions like fully homomorphic encryption, post-quantum cryptography and zero-knowledge proofs. In this work, we introduce Proteus, an open-source parametric hardware generator that produces pipelined architectures for the NTT. For a given parameter set including the polynomial degree and size of the coefficient modulus, Proteus can generate Radix-2 NTT architectures using Single-path Delay Feedback (SDF) and Multi-path Delay Commutator (MDC) approaches. We also present a detailed analysis of NTT implementation approaches and use several optimizations to achieve the best NTT configuration. Our evaluations demonstrate a performance gain of up to $1.8\times$ compared to SDF- and MDC-based NTT implementations in the literature. Our SDF and MDC architectures use $1.75\times$ and $6.5\times$ fewer DSPs, and $3\times$ and $10.5\times$ fewer BRAMs, respectively, compared to state-of-the-art SDF- and MDC-based NTT implementations.
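For readers who want a software reference of the computation being pipelined, here is a minimal radix-2 NTT over $\mathbb{Z}_q$ in Python; the toy parameters ($q=257$, $n=16$) and the naive-DFT check are illustrative assumptions and have nothing to do with the generated hardware architectures.

```python
# Software reference of an iterative radix-2 NTT over Z_q (Cooley-Tukey with an
# initial bit-reversal), shown only to make the pipelined computation concrete.
# Toy parameters: q = 257, n = 16, with a primitive n-th root of unity.

def ntt(a, q, root):
    n = len(a)
    a = list(a)
    j = 0
    for i in range(1, n):                     # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                        # butterfly stages
        w_len = pow(root, n // length, q)     # stage twiddle factor
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k], a[k + length // 2] = (u + v) % q, (u - v) % q
                w = w * w_len % q
        length <<= 1
    return a

q, n = 257, 16
root = pow(3, (q - 1) // n, q)                # 3 generates Z_257^*, so this has order 16
a_in = list(range(n))
out = ntt(a_in, q, root)
naive = [sum(a_in[j] * pow(root, j * k, q) for j in range(n)) % q for k in range(n)]
assert out == naive                           # matches the naive DFT definition
```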
Last updated:  2024-03-25
Harmonizing PUFs for Forward Secure Authenticated Key Exchange with Symmetric Primitives
Harishma Boyapally, Durba Chatterjee, Kuheli Pratihar, Sayandeep Saha, Debdeep Mukhopadhyay, and Shivam Bhasin
Physically Unclonable Functions (PUFs) have been a potent choice for enabling low-cost, secure communication. However, in most applications, one party holds the PUF, and the other securely stores the challenge-response pairs (CRPs). It does not remove the need for secure storage entirely, which is one of the goals of PUFs. This paper proposes a PUF-based construction called Harmonizing PUFs ($\textsf{H_PUF}$s), allowing two independent PUFs to generate the same outcome without storing any confidential data. As an application of $\textsf{H_PUF}$ construction, we present $\textsf{H-AKE}$: a low-cost authenticated key exchange protocol for resource-constrained nodes that is secure against replay and impersonation attacks. The novelty of the protocol is that it achieves forward secrecy without requiring to perform asymmetric group operations like elliptic curve scalar multiplications underlying traditional key-exchange techniques.
Last updated:  2024-03-25
Making Hash-based MVBA Great Again
Hanwen Feng, Zhenliang Lu, Tiancheng Mai, and Qiang Tang
Multi-valued Validated Asynchronous Byzantine Agreement ($\mathsf{MVBA}$) is an essential primitive for many distributed protocols, such as asynchronous Byzantine fault-tolerant scenarios like atomic broadcast ($\mathsf{ABC}$), asynchronous distributed key generation, and many others. Recent efforts (Lu et al., PODC '20) have pushed the communication complexity of $\mathsf{MVBA}$ to the optimal $O(\ell n + \lambda n^2)$, which, however, heavily rely on ``heavyweight'' cryptographic tools, such as non-interactive threshold signatures. The computational cost of algebraic operations, the susceptibility to quantum attacks, and the necessity of a trusted setup associated with threshold signatures present significant remaining challenges. There is a growing interest in information-theoretic or hash-based constructions (historically called signature-free constructions). Unfortunately, the state-of-the-art hash-based $\mathsf{MVBA}$ (Duan et al., CCS'23) incurs a large $O(\ell n^2 + \lambda n^3)$-bit communication, which in turn makes the hash-based $\mathsf{MVBA}$ inferior performance-wise compared with the ``classical'' ones. Indeed, this was clearly demonstrated in our experimental evaluations. To make hash-based $\mathsf{MVBA}$ actually realize its full potential, in this paper, we introduce an $\mathsf{MVBA}$ with adaptive security and $\widetilde{O}(\ell n + \lambda n^2)$ communication, exclusively leveraging conventional hash functions. Our new $\mathsf{MVBA}$ achieves nearly optimal communication, is devoid of heavy operations, and surpasses both threshold signature-based schemes and the hash-based scheme in many practical settings, as demonstrated in our experiments. For example, in scenarios with a network size of $n = 201$ and an input size of $1.75$ MB, our $\mathsf{MVBA}$ exhibits a latency that is 81\% lower than that of the existing hash-based $\mathsf{MVBA}$ and 47\% lower than the threshold signature-based $\mathsf{MVBA}$. Our new construction also achieves optimal parameters in other metrics such as $O(1)$ rounds and $O(n^2)$ message complexity, except for a sub-optimal resilience, tolerating up to $20\%$ Byzantine corruptions (instead of $33\%$). Given its practical performance advantages, our new hash-based $\mathsf{MVBA}$ naturally leads to better asynchronous distributed protocols, by simply plugging it into existing frameworks.
Last updated:  2024-03-25
Accumulation without Homomorphism
Benedikt Bünz, Pratyush Mishra, Wilson Nguyen, and William Wang
Accumulation schemes are a simple yet powerful primitive that enable highly efficient constructions of incrementally verifiable computation (IVC). Unfortunately, all prior accumulation schemes rely on homomorphic vector commitments whose security is based on public-key assumptions. It is an interesting open question to construct efficient accumulation schemes that avoid the need for such assumptions. In this paper, we answer this question affirmatively by constructing an accumulation scheme from *non-homomorphic* vector commitments which can be realized from solely symmetric-key assumptions (e.g. Merkle trees). We overcome the need for homomorphisms by instead performing spot-checks over error-correcting encodings of the committed vectors. Unlike prior accumulation schemes, our scheme only supports a bounded number of accumulation steps. We show that such *bounded-depth* accumulation still suffices to construct proof-carrying data (a generalization of IVC). We also demonstrate several optimizations to our PCD construction which greatly improve concrete efficiency.
Last updated:  2024-03-25
Lower data attacks on Advanced Encryption Standard
Orhun Kara
The Advanced Encryption Standard (AES) is one of the most commonly used and analyzed encryption algorithms. In this work, we present new combinations of some prominent attacks on AES, achieving new records in data requirements among attacks, utilizing only $2^4$ and $2^{16}$ chosen plaintexts (CP) for 6-round and 7-round AES-192/256 respectively. One of our attacks is a combination of a meet-in-the-middle (MiTM) attack with a square attack mounted on 6-round AES-192/256 while another attack combines an MiTM attack and an integral attack, utilizing key space partitioning technique, on 7-round AES-192/256. Moreover, we illustrate that impossible differential (ID) attacks can be viewed as the dual of MiTM attacks in certain aspects which enables us to recover the correct key using the meet-in-the-middle (MiTM) technique instead of sieving through all potential wrong keys in our ID attack. Furthermore, we introduce the constant guessing technique in the inner rounds which significantly reduces the number of key bytes to be searched. The time and memory complexities of our attacks remain marginal.
Last updated:  2024-03-24
PerfOMR: Oblivious Message Retrieval with Reduced Communication and Computation
Zeyu Liu, Eran Tromer, and Yunhao Wang
Anonymous message delivery, as in privacy-preserving blockchain and private messaging applications, needs to protect recipient metadata: eavesdroppers should not be able to link messages to their recipients. This raises the question: how can untrusted servers assist in delivering the pertinent messages to each recipient, without learning which messages are addressed to whom? Recent work constructed Oblivious Message Retrieval (OMR) protocols that outsource the message detection and retrieval in a privacy-preserving way, using homomorphic encryption. Their construction exhibits significant costs in computation per message scanned (${\sim}0.1$ second), as well as in the size of the associated messages (${\sim}1$kB overhead) and public keys (${\sim}132$kB). This work constructs more efficient OMR schemes, by replacing the LWE-based clue encryption of prior works with a Ring-LWE variant, and utilizing the resulting flexibility to improve several components of the scheme. We thus devise, analyze, and benchmark two protocols: The first protocol focuses on improving the detector runtime, using a new retrieval circuit that can be homomorphically evaluated $15$x faster than the prior work. The second protocol focuses on reducing the communication costs, by designing a different homomorphic decryption circuit that allows the parameter of the Ring-LWE encryption to be set such that the public key size is about $235$x smaller than the prior work, and the message size is roughly $1.6$x smaller. The runtime of this second construction is ${\sim}40.0$ms per message, still more than $2.5$x faster than prior works.
Last updated:  2024-03-24
Large Language Models for Blockchain Security: A Systematic Literature Review
Zheyuan He, Zihao Li, and Sen Yang
Large Language Models (LLMs) have emerged as powerful tools in various domains involving blockchain security (BS). Several recent studies are exploring LLMs applied to BS. However, there remains a gap in our understanding regarding the full scope of applications, impacts, and potential constraints of LLMs on blockchain security. To fill this gap, we conduct a literature review on LLM4BS. As the first review of LLMs' application to blockchain security, our study aims to comprehensively analyze existing research and elucidate how LLMs contribute to enhancing the security of blockchain systems. Through a thorough examination of scholarly works, we delve into the integration of LLMs into various aspects of blockchain security. We explore the mechanisms through which LLMs can bolster blockchain security, including their applications in smart contract auditing, identity verification, anomaly detection, vulnerability repair, and so on. Furthermore, we critically assess the challenges and limitations associated with leveraging LLMs for blockchain security, considering factors such as scalability, privacy concerns, and adversarial attacks. Our review sheds light on the opportunities and potential risks inherent in this convergence, providing valuable insights for researchers, practitioners, and policymakers alike.
Last updated:  2024-03-24
E2E near-standard and practical authenticated transciphering
Ehud Aharoni, Nir Drucker, Gilad Ezov, Eyal Kushnir, Hayim Shaul, and Omri Soceanu
Homomorphic encryption (HE) enables computation delegation to untrusted third parties while maintaining data confidentiality. Hybrid encryption (a.k.a transciphering) allows a reduction in the number of ciphertexts and storage size, which makes FHE solutions practical for a variety of modern applications. Still, modern transciphering has three main drawbacks: 1) lack of standardization or bad performance of symmetric decryption under FHE; 2) post-HE-evaluation is limited to small-size applications; 3) lack of input data integrity. Interestingly, modern-size secure inference applications were demonstrated using approximated FHE schemes such as CKKS. However, implementing transciphering using standard Authenticated Encryption (AE) over CKKS is challenging due to its approximated nature. In this paper, we aim to close these gaps. First, we report and demonstrate the first end-to-end process that uses transciphering for real-world applications i.e., running deep neural network (DNN) inference under encryption. For that, we discuss the concept of Authenticated Transciphering (AT), which like AE, provides some integrity guarantees for the transciphered data. Finally, to demonstrate the AT concept, we report on the first implementation of Ascon decryption under CKKS, and complete the picture with a detailed technical description of our AES-GCM implementation under CKKS.
Last updated:  2024-03-22
Scalable Private Signaling
Sashidhar Jakkamsetti, Zeyu Liu, and Varun Madathil
Private messaging systems that use a bulletin board, like privacy-preserving blockchains, have been a popular topic during the last couple of years. In these systems, typically a private message is posted on the board for a recipient and the privacy requirement is that no one can determine the sender and the recipient of the message. Until recently, the efficiency of these recipients was not considered, and the party had to perform a naive scan of the board to retrieve their messages. More recently, works like Fuzzy Message Detection (FMD), Private Signaling (PS), and Oblivious Message Retrieval (OMR) have studied the problem of protecting recipient privacy by outsourcing the message retrieval process to an untrusted server. However, FMD only provides limited privacy guarantees, and PS and OMR greatly lack scalability. In this work, we present a new construction for private signaling which is both asymptotically superior and concretely orders of magnitude faster than all prior works while providing full privacy. Our constructions make use of a trusted execution environment (TEE) and an Oblivious RAM to improve the computation complexity of the server. We also improve the privacy guarantees by keeping the recipient hidden even during the retrieval of signals from the server. Our proof-of-concept open-source implementation shows that for a server serving a million recipients and ten million messages, it only takes $< 60$ milliseconds to process a sent message, and $< 6$ seconds to process a retrieval (of 100 signals) request from a recipient.
Last updated:  2024-03-22
Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
Sebastian Angel, Eleftherios Ioannidis, Elizabeth Margolin, Srinath Setty, and Jess Woods
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Last updated:  2024-03-22
Watermarkable and Zero-Knowledge Verifiable Delay Functions from any Proof of Exponentiation
Charlotte Hoffmann and Krzysztof Pietrzak
A verifiable delay function $\texttt{VDF}(x,T)\rightarrow (y,\pi)$ maps an input $x$ and time parameter $T$ to an output $y$ together with an efficiently verifiable proof $\pi$ certifying that $y$ was correctly computed. The function runs in $T$ sequential steps, and it should not be possible to compute $y$ much faster than that. The only known practical VDFs use sequential squaring in groups of unknown order as the sequential function, i.e., $y=x^{2^T}$. There are two constructions for the proof of exponentiation (PoE) certifying that $y=x^{2^T}$, with the one by Wesolowski (Eurocrypt'19) having very short proofs, but being more expensive to compute and relying on stronger assumptions for soundness than the PoE proposed by Pietrzak (ITCS'19). A recent application of VDFs by Arun, Bonneau and Clark (Asiacrypt'22) is short-lived proofs and signatures, i.e., proofs and signatures which are only sound for some time $t$, but after that can be forged by anyone. For this they rely on "watermarkable VDFs", where the proof embeds a prover-chosen watermark. To achieve stronger notions of proofs/signatures with reusable forgeability, they rely on "zero-knowledge VDFs", where instead of the output $y$, one just proves knowledge of this output. The existing proposals for watermarkable and zero-knowledge VDFs all build on Wesolowski's PoE; for the watermarkable VDFs there is currently no security proof. In this work we give the first constructions that transform any PoEs in hidden order groups into watermarkable VDFs and into zkVDFs, solving an open question of Arun et al. Unlike our watermarkable VDF, the zkVDF (required for reusable forgeability) is not very practical, as the number of group elements in the proof is a security parameter. To address this, we introduce the notion of zero-knowledge proofs of sequential work (zkPoSW), a notion that relaxes zkVDFs by not requiring that the output is unique. We show that zkPoSW are sufficient to construct proofs or signatures with reusable forgeability, and construct efficient zkPoSW from any PoE, ultimately achieving short-lived proofs and signatures that improve upon Arun et al.'s construction in several dimensions (faster forging times, weaker assumptions). A key idea underlying our constructions is to not directly construct a (watermarked or zk) proof for $y=x^{2^T}$, but instead give a (watermarked or zk) proof for the more basic statement that $x',y'$ satisfy $x'=x^r,y'=y^r$ for some $r$, together with a normal PoE for $y'=(x')^{2^T}$.
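As background, the sketch below evaluates only the sequential part common to these VDFs, $y=x^{2^T} \bmod N$, by $T$ repeated squarings; the proof systems (Wesolowski's and Pietrzak's PoE) and the paper's watermarked and zero-knowledge constructions are not shown, and the toy RSA-style modulus and parameters are illustrative assumptions.

```python
# Only the sequential part of such VDFs: y = x^(2^T) mod N by T repeated squarings
# in a group of unknown order. The proofs of exponentiation and the paper's
# watermarked / zero-knowledge constructions are not shown. Toy parameters only:
# real deployments use an RSA modulus or a class group whose order nobody knows.

def vdf_eval(x, T, N):
    y = x % N
    for _ in range(T):                 # inherently sequential: each step needs the last
        y = y * y % N
    return y

p, q = 10007, 10009                    # toy primes; here we know the trapdoor
N, phi = p * q, (p - 1) * (q - 1)
x, T = 5, 1 << 15
y = vdf_eval(x, T, N)
# with the trapdoor (the group order), the same value is one modular exponentiation;
# an evaluator without the order has no such shortcut
assert y == pow(x, pow(2, T, phi), N)
```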
Last updated:  2024-03-22
Revisiting Oblivious Top-$k$ Selection with Applications to Secure $k$-NN Classification
Kelong Cong, Robin Geelen, Jiayi Kang, and Jeongeun Park
An oblivious Top-$k$ algorithm selects the $k$ smallest elements from $d$ elements while ensuring the sequence of operations and memory accesses do not depend on the input. In 1969, Alekseev proposed an oblivious Top-$k$ algorithm with complexity $\mathcal{O}(d\log^2{k})$, which was later improved by Yao in 1980 for small $k\ll \sqrt{d}$. In this paper, we revisit the literature on oblivious Top-$k$ and propose another improvement of Alekseev's method that outperforms both for large $k = \Omega(\sqrt{d})$. Our construction is equivalent to applying a new truncation technique to Batcher’s odd-even sorting algorithm. In addition, we propose a combined network to take advantage of both Yao's and our technique that achieves the best concrete performance, in terms of the number of comparators, for any $k$. To demonstrate the efficiency of our combined Top-$k$ network, we present a secure non-interactive $k$-nearest neighbors classifier using homomorphic encryption as an application. Compared with the work of Zuber and Sirdey (PoPETS 2021) where oblivious Top-$k$ was realized with complexity $\mathcal{O}(d^2)$, our experimental results show a speedup of up to 47 times (not accounting for difference in CPU) for $d = 1000$.
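For intuition about comparator networks, the following Python sketch implements Batcher's odd-even merge sort as a data-oblivious network for power-of-two input sizes and takes the first $k$ outputs as a naive oblivious Top-$k$; the paper's truncation technique, the combination with Yao's method, and the homomorphic instantiation are not reflected here.

```python
# Batcher's odd-even merge sort as a data-oblivious comparator network (input size
# must be a power of two). Running the full network and keeping the first k outputs
# is a naive oblivious Top-k; the paper's truncation of the network, the combination
# with Yao's construction, and the homomorphic instantiation are not reflected here.

def compare_exchange(a, i, j):
    # always touches positions i and j, independently of the data
    a[i], a[j] = min(a[i], a[j]), max(a[i], a[j])

def odd_even_merge(a, lo, n, r):
    m = r * 2
    if m < n:
        odd_even_merge(a, lo, n, m)           # merge the even subsequence
        odd_even_merge(a, lo + r, n, m)       # merge the odd subsequence
        for i in range(lo + r, lo + n - r, m):
            compare_exchange(a, i, i + r)
    else:
        compare_exchange(a, lo, lo + r)

def odd_even_merge_sort(a, lo=0, n=None):
    n = len(a) if n is None else n
    if n > 1:
        m = n // 2
        odd_even_merge_sort(a, lo, m)
        odd_even_merge_sort(a, lo + m, m)
        odd_even_merge(a, lo, n, 1)

def oblivious_top_k(values, k):
    a = list(values)
    odd_even_merge_sort(a)                    # O(d log^2 d) comparators
    return a[:k]                              # the k smallest elements

assert oblivious_top_k([9, 3, 7, 1, 8, 2, 6, 5], 3) == [1, 2, 3]
```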
Last updated:  2024-03-22
BRAKE: Biometric Resilient Authenticated Key Exchange
Pia Bauspieß, Tjerand Silde, Matej Poljuha, Alexandre Tullot, Anamaria Costache, Christian Rathgeb, Jascha Kolberg, and Christoph Busch
Biometric data are uniquely suited for connecting individuals to their digital identities. Deriving cryptographic key exchange from successful biometric authentication therefore gives an additional layer of trust compared to password-authenticated key exchange. However, biometric data are sensitive personal data that need to be protected on a long-term basis. Furthermore, efficient feature extraction and comparison components resulting in high intra-subject tolerance and inter-subject distinguishability, documented with good biometric performance, need to be applied in order to prevent zero-effort impersonation attacks. In this work, we present a novel protocol for Biometric Resilient Authenticated Key Exchange that fulfils the above requirements of biometric information protection compliant with the international ISO/IEC 24745 standard. In our protocol, we present a novel modification of unlinkable fuzzy vault schemes that allows their connection with oblivious pseudo-random functions to achieve resilient protection against offline attacks crucial for the protection of biometric data. Our protocol is independent of the biometric modality and can be implemented based on the security of discrete logarithms as well as lattices. We provide an open-source implementation of both instantiations of our protocol which achieve real-time efficiency with transaction times of less than one second from the image capture to the completed key exchange.
Last updated:  2024-03-22
Generalized Kotov-Ushakov Attack on Tropical Stickel Protocol Based on Modified Tropical Circulant Matrices
Sulaiman Alhussaini, Craig Collett, and Sergeĭ Sergeev
After the Kotov-Ushakov attack on the tropical implementation of the Stickel protocol, various attempts have been made to create a secure variant of such an implementation. Some of these attempts used a special class of commuting matrices resembling tropical circulants, and they have been proposed with claims of resilience against the Kotov-Ushakov attack, and even of being potential post-quantum candidates. This paper, however, reveals that a form of the Kotov-Ushakov attack remains applicable and, moreover, that there is a heuristic implementation of that attack which has polynomial time complexity and shows an overwhelmingly good success rate.
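As background on the underlying arithmetic, the sketch below implements tropical (min-plus) matrix multiplication and plain tropical circulants, which form a commuting family; the paper's modified circulants, the Stickel protocol itself, and the attack are not shown, and some works use the max-plus convention instead.

```python
# Tropical (min-plus) matrix arithmetic: "addition" is min and "multiplication" is +.
# Circulant matrices (each row a cyclic shift of the first) commute under this
# product, which is the kind of commuting family used in tropical Stickel-style key
# exchange. Only the arithmetic is shown, not the protocol, the paper's modified
# circulants, or the attack; some works use the max-plus convention instead.

def trop_matmul(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def circulant(first_row):
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

A = circulant([0, 3, 7, 1])
B = circulant([2, 5, 0, 4])
assert trop_matmul(A, B) == trop_matmul(B, A)   # tropical circulants commute
```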
Last updated:  2024-03-22
Functional Bootstrapping for FV-style Cryptosystems
Dongwon Lee, Seonhong Min, and Yongsoo Song
Fully Homomorphic Encryption (FHE) enables the computation of an arbitrary function over encrypted data without decrypting them. In particular, bootstrapping is a core building block of FHE which reduces the noise of a ciphertext thereby recovering the computational capability. This paper introduces a new bootstrapping framework for the Fan-Vercauteren (FV) scheme, called the functional bootstrapping, providing more generic and advanced functionality than the ordinary bootstrapping method. More specifically, the functional bootstrapping allows us to evaluate an arbitrary function while removing the error of an input ciphertext. Therefore, we achieve better depth consumption and computational complexity as the evaluation of a circuit can be integrated as part of the functional bootstrapping procedure. In particular, our approach extends the functionality of FV since it is even applicable to functions between different plaintext spaces. At the heart of our functional bootstrapping framework is a homomorphic Look-Up Table (LUT) evaluation method where we represent any LUT using only the operations supported by the FV scheme. Finally, we provide a proof-of-concept implementation and present benchmarks of the functional bootstrapping. In concrete examples, such as delta and sign functions, our functional bootstrapping takes about 46.5s or 171.4s for 9-bit or 13-bit plaintext modulus, respectively.
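One standard plaintext-level way to express a lookup table as arithmetic over a prime plaintext modulus $t$ is to interpolate it as a polynomial over $\mathbb{Z}_t$; the sketch below performs that interpolation in the clear as background only and is not the paper's homomorphic LUT evaluation method, whose details the abstract does not specify. The prime modulus and the sign-like table are illustrative assumptions.

```python
# Standard plaintext-level background: any lookup table on Z_t (t prime) equals a
# unique polynomial of degree < t over Z_t, obtained by Lagrange interpolation, and
# polynomial evaluation only uses additions and multiplications. This is NOT the
# paper's homomorphic LUT procedure; the modulus and table below are illustrative.

def lagrange_lut_poly(table, t):
    """Coefficients of the polynomial f over Z_t with f(x) = table[x] for all x in Z_t."""
    coeffs = [0] * t
    for x0, y0 in enumerate(table):
        basis, denom = [1], 1                       # build L_{x0}(X) = prod (X - x)/(x0 - x)
        for x in range(t):
            if x == x0:
                continue
            new = [0] * (len(basis) + 1)
            for i, c in enumerate(basis):           # multiply basis by (X - x)
                new[i] = (new[i] - x * c) % t
                new[i + 1] = (new[i + 1] + c) % t
            basis = new
            denom = denom * (x0 - x) % t
        inv = pow(denom, t - 2, t)                  # inverse mod prime t
        for i, c in enumerate(basis):
            coeffs[i] = (coeffs[i] + y0 * c * inv) % t
    return coeffs

def eval_poly(coeffs, x, t):                        # Horner evaluation over Z_t
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % t
    return acc

t = 17                                              # toy prime plaintext modulus
sign_table = [0] + [1] * (t // 2) + [t - 1] * (t // 2)   # a sign-like LUT on Z_t
coeffs = lagrange_lut_poly(sign_table, t)
assert all(eval_poly(coeffs, x, t) == sign_table[x] for x in range(t))
```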
Last updated:  2024-03-22
At Last! A Homomorphic AES Evaluation in Less than 30 Seconds by Means of TFHE
Daphné Trama, Pierre-Emmanuel Clet, Aymen Boudguiga, and Renaud Sirdey
Since the pioneering work of Gentry, Halevi, and Smart in 2012, the state of the art on transciphering has moved away from work on AES to focus on new symmetric algorithms that are better suited for a homomorphic execution. Yet, with recent advances in homomorphic cryptosystems, the question arises as to where we stand today, especially since AES execution is the application that may be chosen by NIST in the FHE part of its future call for threshold encryption. In this paper, we propose an AES implementation using TFHE programmable bootstrapping which runs in less than a minute on an average laptop. We detail the transformations carried out on the original AES code to obtain a more efficient homomorphic evaluation, and we also give several execution times on different machines, depending on the type of execution (sequential or parallelized). These times vary from 4.5 minutes (resp. 54 secs) for sequential (resp. parallel) execution on a standard laptop down to 28 seconds for a parallelized execution over 16 threads on a multi-core workstation.
Last updated:  2024-03-22
Folding-based zkLLM
Wilbert W
This paper introduces a new approach to construct zero-knowledge large language models (zkLLM) based on the Folding technique. We first review the concept of Incrementally Verifiable Computation (IVC) and compare the IVC constructions based on SNARK and Folding. Then we discuss the necessity of Non-uniform IVC (NIVC) and present several Folding schemes that support more expressive circuits, such as SuperNova, Sangria, Origami, HyperNova, and Protostar. Based on these techniques, we propose a zkLLM design that uses a RAM machine architecture with a set of opcodes. We define corresponding constraint circuits for each opcode and describe the workflows of the prover and verifier. Finally, we provide examples of opcodes to demonstrate the circuit construction methods. Our zkLLM design achieves high efficiency and expressiveness, showing great potential for practical applications.
Last updated:  2024-03-21
Feldman's Verifiable Secret Sharing for a Dishonest Majority
Yi-Hsiu Chen and Yehuda Lindell
Verifiable secret sharing (VSS) protocols enable parties to share secrets while guaranteeing security (in particular, that all parties hold valid and consistent shares) even if the dealer or some of the participants are malicious. Most work on VSS focuses on the honest majority case, primarily since it enables one to guarantee output delivery (e.g., a corrupted recipient cannot prevent an honest dealer from sharing their value). Feldman's VSS is a well known and popular protocol for this task and relies on the discrete log hardness assumption. In this paper, we present a variant of Feldman's VSS for the dishonest majority setting and formally prove its security. Beyond the basic VSS protocol, we present a publicly-verifiable version, as well as show how to securely add participants to the sharing and how to refresh an existing sharing (all secure in the presence of a dishonest majority). We prove that our protocols are UC secure, for appropriately defined ideal functionalities.
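For reference, the following Python sketch shows textbook Feldman VSS over a prime-order group, i.e., the honest-majority baseline that this work adapts; the group parameters are tiny toy values chosen only for illustration, and the sketch does not capture the paper's dishonest-majority variant.

import secrets

p, q, g = 23, 11, 4          # toy parameters: g generates the order-q subgroup of Z_p^*

def deal(secret, t, n):
    """Dealer: degree-(t-1) polynomial f with f(0) = secret, shares f(i),
    and public commitments g^{a_k} to the coefficients."""
    coeffs = [secret % q] + [secrets.randbelow(q) for _ in range(t - 1)]
    shares = {i: sum(a * pow(i, k, q) for k, a in enumerate(coeffs)) % q
              for i in range(1, n + 1)}
    commitments = [pow(g, a, p) for a in coeffs]
    return shares, commitments

def verify(i, share, commitments):
    """Party i checks g^{f(i)} == prod_k C_k^{i^k} (mod p)."""
    rhs = 1
    for k, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, k, q), p) % p
    return pow(g, share, p) == rhs

shares, comms = deal(secret=7, t=3, n=5)
assert all(verify(i, s, comms) for i, s in shares.items())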
Last updated:  2024-03-21
DiStefano: Decentralized Infrastructure for Sharing Trusted Encrypted Facts and Nothing More
Sofía Celi, Alex Davidson, Hamed Haddadi, Gonçalo Pestana, and Joe Rowell
We design DiStefano: an efficient, maliciously-secure framework for generating private commitments over TLS-encrypted web traffic, for a designated third-party. DiStefano provides many improvements over previous TLS commitment systems, including: a modular protocol specific to TLS 1.3, support for arbitrary verifiable claims over encrypted data, inherent ring privacy for client browsing history, and various optimisations to ensure fast online performance of the TLS 1.3 session. We build a permissive open-source implementation of DiStefano integrated into the BoringSSL cryptographic library (used by Chromium-based Internet browsers). We show that DiStefano is practical for committing to facts in arbitrary TLS traffic, requiring $< 1$ s and $\leq 5$ KB to execute the online phase in a LAN setting.
Last updated:  2024-03-21
Updatable Public Key Encryption with Strong CCA Security: Security Analysis and Efficient Generic Construction
Kyoichi Asano and Yohei Watanabe
With applications in secure messaging, Updatable Public Key Encryption (UPKE) was proposed by Jost et al. (EUROCRYPT '19) and Alwen et al. (CRYPTO '20). It is a natural relaxation of forward-secure public-key encryption. In UPKE, we can update secret keys by using update ciphertexts which any sender can generate. The UPKE schemes proposed so far that satisfy strong CCA security are Haidar et al.'s concrete construction (CCS '22) and Dodis et al.'s generic construction that uses Non-Interactive Zero-Knowledge (NIZK) arguments. Yet, despite the aid of random oracles, their concrete efficiency is quite far from that of the most efficient CPA-secure scheme. In this paper, we first demonstrate a simple and efficient attack against Dodis et al.'s strongly CCA-secure scheme, and show how to fix it. Then, based on the observations from the attack and the fix, we propose a new strongly CCA-secure generic construction for a UPKE scheme with random oracles and show that its instantiation is almost as concretely efficient as the most efficient CPA-secure one.
Last updated:  2024-03-21
The Security of SHA2 under the Differential Fault Characteristic of Boolean Functions
Weiqiong Cao, Hua Chen, Hongsong Shi, Haoyuan Li, Jian Wang, and Jingyi Feng
SHA2 has been widely adopted across various traditional public-key cryptosystems, post-quantum cryptography, personal identification, and network communication protocols. Hence, ensuring the robust security of SHA2 is of critical importance. There have been several differential fault attacks based on random word faults targeting SHA1 and SHACAL-2. However, extending such random word-based fault attacks to SHA2 proves significantly more difficult due to the heightened complexity of the Boolean functions in SHA2. In this paper, assuming random word faults, we find some distinctive differential properties within the Boolean functions in SHA2. Leveraging these findings, we propose a new differential fault attack methodology that can be effectively utilized to recover the final message block and its corresponding initial vector in SHA2, forge HMAC-SHA2 messages, extract the key of SHACAL-2, and extend our analysis to similar algorithms such as SM3. We validate the effectiveness of these attacks through rigorous simulations and theoretical deductions, revealing that they indeed pose substantial threats to the security of SHA2. In our simulation-based experiments, our approach necessitates guessing $T$ bits within a register, with $T$ being at most $5$, and having an approximate $95\%$ probability (for SHA512) of guessing just $1$ bit. Moreover, upon implementing a consecutive series of 15 fault injections, the success probability for recovering one register (excluding the guessed bits) approaches $100\%$. Ultimately, approximately 928 faulty outputs based on random word faults are required to carry out the attack successfully.
Last updated:  2024-03-21
Prouff & Rivain’s Formal Security Proof of Masking, Revisited: Tight Bounds in the Noisy Leakage Model
Loïc Masure and François-Xavier Standaert
Masking is a counter-measure that can be incorporated into software and hardware implementations of block ciphers to provably secure them against side-channel attacks. The security of masking can be proven in different types of threat models. In this paper, we are interested in directly proving the security in the most realistic threat model, the so-called noisy leakage adversary, that captures well how real-world side-channel adversaries operate. Direct proofs in this leakage model have been established by Prouff & Rivain at Eurocrypt 2013, Dziembowski et al. at Eurocrypt 2015, and Prest et al. at Crypto 2019. These proofs are complementary to each other, in the sense that the weaknesses of one proof are fixed in at least one of the others, and conversely. These weaknesses concerned in particular the strong requirements on the noise level and the security parameter to get meaningful security bounds, and some requirements on the type of adversary covered by the proof, i.e., chosen or random plaintexts. This suggested that the drawbacks of each security bound could actually be proof artifacts. In this paper, we solve these issues by revisiting Prouff & Rivain's approach.
Last updated:  2024-03-21
Massive Superpoly Recovery with a Meet-in-the-middle Framework -- Improved Cube Attacks on Trivium and Kreyvium
Jiahui He, Kai Hu, Hao Lei, and Meiqin Wang
The cube attack extracts the information of secret key bits by recovering the coefficient called superpoly in the output bit with respect to a subset of plaintexts/IV, which is called a cube. While the division property provides an efficient way to detect the structure of the superpoly, superpoly recovery could still be prohibitively costly if the number of rounds is sufficiently high. In particular, Core Monomial Prediction (CMP) was proposed at ASIACRYPT 2022 as a scaled-down version of Monomial Prediction (MP), which sacrifices accuracy for efficiency but ultimately gets stuck at 848 rounds of Trivium. In this paper, we provide new insights into CMP by elucidating the algebraic meaning of the core monomial trails. We prove that it is sufficient to recover the superpoly by extracting all the core monomial trails, an approach based solely on CMP, thus demonstrating that CMP can achieve perfect accuracy as MP does. We further reveal that CMP is still MP in essence, but with variable substitutions on the target function. Inspired by the divide-and-conquer strategy that has been widely used in previous literature, we design a meet-in-the-middle (MITM) framework, in which the CMP-based approach can be embedded to achieve a speedup. To illustrate the power of these new techniques, we apply the MITM framework to Trivium, Grain and Kreyvium. As a result, not only can the previous computational cost of superpoly recovery be reduced (e.g., 5x faster for superpoly recovery on 192-round Grain), but we also succeed in recovering superpolies for up to 851 rounds of Trivium and up to 899 rounds of Kreyvium. This surpasses the previous best results by respectively 3 and 4 rounds. Using the memory-efficient Möbius transform proposed at EUROCRYPT 2021, we can perform key recovery attacks on target ciphers, even though the superpoly may contain over $2^{40}$ monomials. This leads to the best cube attacks on the target ciphers.
Last updated:  2024-03-21
STIR: Reed–Solomon Proximity Testing with Fewer Queries
Gal Arnon, Alessandro Chiesa, Giacomo Fenzi, and Eylon Yogev
We present STIR (Shift To Improve Rate), an interactive oracle proof of proximity (IOPP) for Reed-Solomon codes that achieves the best known query complexity of any concretely efficient IOPP for this problem. For $\lambda$ bits of security, STIR has query complexity $O(\log d + \lambda \cdot \log \log d )$, while FRI, a popular protocol, has query complexity $O(\lambda \cdot \log d )$ (including variants of FRI based on conjectured security assumptions). STIR relies on a new technique for recursively improving the rate of the tested Reed-Solomon code. We provide an implementation of STIR compiled to a SNARK. Compared to a highly-optimized implementation of FRI, STIR achieves an improvement in argument size that ranges from $1.25\times$ to $2.46\times$ depending on the chosen parameters, with similar prover and verifier running times. For example, in order to achieve 128 bits of security for degree $2^{26}$ and rate $1/4$, STIR has argument size $114$ KiB, compared to $211$ KiB for FRI.
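A back-of-the-envelope reading of the quoted query complexities, ignoring all hidden constants (purely illustrative, not the paper's concrete numbers):

import math

lam, d = 128, 2**26
fri_queries  = lam * math.log2(d)                             # O(lambda * log d)
stir_queries = math.log2(d) + lam * math.log2(math.log2(d))   # O(log d + lambda * log log d)
print(round(fri_queries), round(stir_queries))                # ~3328 vs ~628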
Last updated:  2024-03-21
OPSA: Efficient and Verifiable One-Pass Secure Aggregation with TEE for Federated Learning
Zhangshuang Guan, Yulin Zhao, Zhiguo Wan, and Jinsong Han
In federated learning, secure aggregation (SA) protocols like Flamingo (S\&P'23) and LERNA (ASIACRYPT'23) have achieved efficient multi-round SA in the malicious model. However, each round of their aggregation requires at least three client-server round-trip communications and lacks support for aggregation result verification. Verifiable SA schemes, such as VerSA (TDSC'21) and Eltaras et al. (TIFS'23), provide verifiable aggregation results under the security assumption that the server does not collude with any user. Nonetheless, these schemes incur high communication costs and lack support for efficient multi-round aggregation. Executing SA entirely within a Trusted Execution Environment (TEE), as designed in SEAR (TDSC'22), guarantees both privacy and verifiable aggregation. However, the limited physical memory within the TEE poses a significant computational bottleneck, particularly when aggregating large models or handling numerous clients. In this work, we introduce OPSA, a multi-round one-pass secure aggregation framework based on TEE that achieves efficient communication, streamlined computation and verifiable aggregation all at once. OPSA employs a new strategy of revealing shared keys in the TEE and instantiates two types of masking schemes. Furthermore, a result verification module is designed to be compatible with any type of SA protocol instantiated under the OPSA framework with weaker security assumptions. Compared with the state-of-the-art schemes, OPSA achieves a 2$\sim$10$\times$ speedup in multi-round aggregation while also supporting result verification simultaneously. OPSA is more friendly to scenarios with high network latency and large-scale model aggregation.
Last updated:  2024-03-21
A General Framework of Homomorphic Encryption for Multiple Parties with Non-Interactive Key-Aggregation
Hyesun Kwak, Dongwon Lee, Yongsoo Song, and Sameer Wagh
Homomorphic Encryption (HE) is a useful primitive for secure computation, but it is not generally applicable when multiple parties are involved, as the authority is solely concentrated in a single party, the secret key owner. To solve this issue, several variants of HE have emerged in the context of multiparty setting, resulting in two major lines of work -- Multi-Party HE (MPHE) and Multi-Key HE (MKHE). In short, MPHEs tend to be more efficient, but all parties should be specified at the beginning to collaboratively generate a public key, and the access structure is fixed throughout the entire computation. On the other hand, MKHEs have relatively poor performance but provide better flexibility in that a new party can generate its own key and join the computation anytime. In this work, we propose a new HE primitive, called Multi-Group HE (MGHE). Stated informally, an MGHE scheme provides seamless integration between MPHE and MKHE, and has the best of both worlds. In an MGHE scheme, a group of parties jointly generates a public key for efficient single-key encryption and homomorphic operations similar to MPHE. However, it also supports computation on encrypted data under different keys, in the MKHE manner. We formalize the security and correctness notions for MGHE and discuss the relation with previous approaches. We also present a concrete instantiation of MGHE from the BFV scheme and provide a proof-of-concept implementation to demonstrate its performance. In particular, our MGHE construction has a useful property that the key generation is simply done by aggregating individual keys without any interaction between the parties, while all the existing MPHE constructions relied on multi-round key-generation protocols. Finally, we describe a method to design a general multi-party computation protocol from our MGHE scheme.
Last updated:  2024-03-21
CheckOut: User-Controlled Anonymization for Customer Loyalty Programs
Matthew Gregoire, Rachel Thomas, and Saba Eskandarian
To resist the regimes of ubiquitous surveillance imposed upon us in every facet of modern life, we need technological tools that subvert surveillance systems. Unfortunately, while cryptographic tools frequently demonstrate how we can construct systems that safeguard user privacy, there is limited motivation for corporate entities engaged in surveillance to adopt these tools, as they often clash with profit incentives. This paper demonstrates how, in one particular aspect of everyday life -- customer loyalty programs -- users can subvert surveillance and attain anonymity, without necessitating any cooperation or modification in the behavior of their surveillors. We present the CheckOut system, which allows users to coordinate large anonymity sets of shoppers to hide the identity and purchasing habits of each particular user in the crowd. CheckOut scales up and systematizes past efforts to subvert loyalty surveillance, which have been primarily ad-hoc and manual affairs where customers physically swap loyalty cards to mask their real identities. CheckOut allows increased scale while ensuring that the necessary computing infrastructure does not itself become a new centralized point of privacy failure. Of particular importance to our scheme is a protocol for loyalty programs that offer reward points, where we demonstrate how CheckOut can assist users in paying each other back for loyalty points accrued while using each others' loyalty accounts. We present two different mechanisms to facilitate redistributing rewards points, offering trade-offs in functionality, performance, and security.
Last updated:  2024-03-20
Sailfish: Towards Improving Latency of DAG-based BFT
Nibesh Shrestha, Aniket Kate, and Kartik Nayak
Existing DAG-based BFT protocols exhibit long latency to commit decisions. The primary reason for such a long latency is having a leader every 2 or more “rounds”. Even under honest leaders, these protocols require two or more reliable broadcast (RBC) instances to commit the proposal submitted by the leader (leader vertex), and additional RBCs to commit other proposals (non-leader vertices). In this work, we present Sailfish, the first DAG-based BFT that supports a leader vertex in each round. Under honest leaders, Sailfish maintains a commit latency of one RBC round plus $1\delta$ to commit the leader vertex (where $\delta$ is the actual transmission latency of a message) and only an additional RBC round to commit non-leader vertices.
Last updated:  2024-03-20
Knot-based Key Exchange protocol
Silvia Sconza and Arno Wildi
We propose a new key exchange protocol based on the Generalised Diffie-Hellman Key Exchange. In the latter, instead of using a group-action, we consider a semigroup action. In our proposal, the semigroup is the set of oriented knots in $\mathbb{S}^3$ with the operation of connected sum. As a semigroup action, we choose the action of the semigroup on itself through the connected sum. For the protocol to work, we need to use knot invariants, which allow us to create the shared secret key starting from the same knot represented in two different ways. In particular, we use finite type invariants. The security of the protocol is guaranteed by the hardness of decomposing knots in the semigroup.
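The protocol flow can be illustrated with a toy Python model in which knots are represented as multisets of prime-knot labels and the connected sum as multiset union; the hash below is a stand-in for a knot invariant. This sketch only mirrors the generalized Diffie-Hellman structure described above and says nothing about the security of the actual knot-based instantiation.

from collections import Counter
from hashlib import sha256

def connected_sum(k1: Counter, k2: Counter) -> Counter:
    """Toy connected sum: multiset union of prime-knot components."""
    return k1 + k2

def invariant(k: Counter) -> bytes:
    """Stand-in for a knot invariant: any canonical encoding of the knot."""
    return sha256(repr(sorted(k.items())).encode()).digest()

public = Counter({"trefoil": 1})                              # public knot
alice_secret = Counter({"figure-eight": 2, "cinquefoil": 1})  # Alice's secret knot
bob_secret = Counter({"trefoil": 1, "figure-eight": 1})       # Bob's secret knot

alice_msg = connected_sum(alice_secret, public)   # sent to Bob
bob_msg = connected_sum(bob_secret, public)       # sent to Alice

shared_alice = invariant(connected_sum(alice_secret, bob_msg))
shared_bob = invariant(connected_sum(bob_secret, alice_msg))
assert shared_alice == shared_bob                 # both obtain the same knot, hence the same invariant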
Last updated:  2024-03-20
Optimized Homomorphic Evaluation of Boolean Functions
Nicolas Bon, David Pointcheval, and Matthieu Rivain
We propose a new framework to homomorphically evaluate Boolean functions using the Torus Fully Homomorphic Encryption (TFHE) scheme. Compared to previous approaches focusing on Boolean gates, our technique can evaluate more complex Boolean functions with several inputs using a single bootstrapping. This allows us to greatly reduce the number of bootstrapping operations necessary to evaluate a Boolean circuit compared to previous works, thus achieving significant improvements in terms of performance. We define our approach theoretically, which consists in adding an intermediate homomorphic layer between the plain Boolean space and the ciphertext space. This layer relies on so-called $p$-encodings embedding bits into $\mathbb{Z}_p$. We analyze the properties of these encodings to enable the evaluation of a given Boolean function and provide a deterministic algorithm (as well as an efficient heuristic) to find valid sets of encodings for a given function. We also propose a method to decompose any Boolean circuit into Boolean functions which are efficiently evaluable using our approach. We apply our framework to homomorphically evaluate various cryptographic primitives, and in particular the AES cipher. Our implementation results show significant improvements compared to the state of the art.
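A plaintext-domain toy example of the $p$-encoding idea: each input bit is mapped into $\mathbb{Z}_p$ so that the (homomorphically cheap) sum of the encodings determines the Boolean output, which a single programmable bootstrapping can then read off from a table. The choice of $p$ and the encodings below are illustrative only and not the encodings produced by the paper's algorithm.

p = 3

def encode(bit: int) -> int:
    """Toy p-encoding: map bit 0/1 to 0/1 in Z_3."""
    return bit % p

def and_gate(b0: int, b1: int) -> int:
    s = (encode(b0) + encode(b1)) % p   # homomorphic additions are cheap
    lut = {0: 0, 1: 0, 2: 1}            # a single bootstrapping evaluates this table
    return lut[s]

assert all(and_gate(a, b) == (a & b) for a in (0, 1) for b in (0, 1))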
Last updated:  2024-03-20
Simple Three-Round Multiparty Schnorr Signing with Full Simulatability
Yehuda Lindell
In a multiparty signing protocol, also known as a threshold signature scheme, the private signing key is shared amongst a set of parties and only a quorum of those parties can generate a signature. Research on multiparty signing has been growing in popularity recently due to its application to cryptocurrencies. Most work has focused on reducing the number of rounds to two, and as a result: (a) are not fully simulatable in the sense of MPC real/ideal security definitions, and/or (b) are not secure under concurrent composition, and/or (c) utilize non-standard assumptions of different types in their proofs of security. In this paper, we describe a simple three-round multiparty protocol for Schnorr signatures and prove its security. The protocol is fully simulatable, secure under concurrent composition, and proven secure in the standard model or random-oracle model (depending on the instantiations of the commitment and zero-knowledge primitives). The protocol realizes an ideal Schnorr signing functionality with perfect security in the ideal commitment and zero-knowledge hybrid model (and thus the only assumptions needed are for realizing these functionalities). In our presentation, we do not assume that all parties begin with the message to be signed, the identities of the participating parties and a unique common session identifier, since this is often not the case in practice. Rather, the parties achieve consensus on these parameters as the protocol progresses.
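For reference, a schoolbook Python sketch of the single-party Schnorr signature that such threshold protocols emulate, over a generic prime-order group with toy parameters; this is standard background material, not the multiparty protocol of the paper.

import secrets
from hashlib import sha256

p, q, g = 23, 11, 4                      # toy parameters: g generates the order-q subgroup of Z_p^*

def keygen():
    x = secrets.randbelow(q - 1) + 1     # secret key
    return x, pow(g, x, p)               # (sk, pk = g^x)

def H(R, X, m):
    return int.from_bytes(sha256(f"{R}|{X}|{m}".encode()).digest(), "big") % q

def sign(x, X, m):
    r = secrets.randbelow(q - 1) + 1     # per-signature nonce
    R = pow(g, r, p)
    e = H(R, X, m)
    z = (r + e * x) % q
    return R, z

def verify(X, m, sig):
    R, z = sig
    return pow(g, z, p) == R * pow(X, H(R, X, m), p) % p

x, X = keygen()
assert verify(X, b"hello", sign(x, X, b"hello"))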
Last updated:  2024-03-20
Fast Secure Computations on Shared Polynomials and Applications to Private Set Operations
Pascal Giorgi, Fabien Laguillaumie, Lucas Ottow, and Damien Vergnaud
Secure multi-party computation aims to allow a set of players to compute a given function on their secret inputs without revealing any other information than the result of the computation. In this work, we focus on the design of secure multi-party protocols for shared polynomial operations. We consider the classical model where the adversary is honest-but-curious, and where the coefficients (or any secret values) are either encrypted using an additively homomorphic encryption scheme or shared using a threshold linear secret-sharing scheme. Our protocols terminate after a constant number of rounds and minimize the number of secure multiplications. In their seminal article at PKC 2006, Mohassel and Franklin proposed constant-rounds protocols for the main operations on (shared) polynomials. In this work, we improve the fan-in multiplication of nonzero polynomials, the multi-point polynomial evaluation and the polynomial interpolation (on secret points) to reach a quasi-linear complexity (instead of quadratic in Mohassel and Franklin's work) in the degree of shared input/output polynomials. Computing with shared polynomials is a core component of designing multi-party protocols for privacy-preserving operations on private sets, like private disjointness test or private set intersection. Using our new protocols, we are able to improve the complexity of such protocols and to design the first variant which always returns a correct result.
Last updated:  2024-03-20
Malicious Security for Sparse Private Histograms
Lennart Braun, Adrià Gascón, Mariana Raykova, Phillipp Schoppmann, and Karn Seth
We present a construction for secure computation of differentially private sparse histograms that aggregates the inputs from a large number of clients. Each client contributes a value to the aggregate at a specific index. We focus on the case where the set of possible indices is superpolynomially large. Hence, the resulting histogram will be sparse, i.e., most entries will have the value zero. Our construction relies on two non-colluding servers and provides security against malicious adversaries that may control one of the servers and any numbers of clients. It achieves communication and computation complexities linear in the input size, and achieves the optimal error $O\big(\frac{\log(1/\delta)}{\epsilon}\big)$, independent of the size of the domain of indices. We compute the communication cost of our protocol, showing its scalability. For a billion clients, the communication cost for each server is under 26 KiB per client. Our paper solves an open problem of the work of Bell et al. (CCS'22) which presented a solution for the semi-honest setting while incurring sublinear overhead in its efficiency. We formalize a proof approach for proving malicious security in settings where the output and possible additional information revealed during the execution need to provide differential privacy.
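A quick numeric reading of the stated error bound $O\big(\frac{\log(1/\delta)}{\epsilon}\big)$, ignoring hidden constants (illustrative only):

import math

eps, delta = 1.0, 1e-9
print(math.log(1 / delta) / eps)   # ~20.7: additive error independent of the domain size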
Last updated:  2024-03-20
Publicly Verifiable Secret Sharing over Class Groups and Applications to DKG and YOSO
Ignacio Cascudo and Bernardo David
Publicly Verifiable Secret Sharing (PVSS) allows a dealer to publish encrypted shares of a secret so that parties holding the corresponding decryption keys may later reconstruct it. Both dealing and reconstruction are non-interactive and any verifier can check their validity. PVSS finds applications in randomness beacons, distributed key generation (DKG) and in YOSO MPC (Gentry et al. CRYPTO'21), when endowed with suitable publicly verifiable re-sharing as in YOLO YOSO (Cascudo et al. ASIACRYPT'22). We introduce a PVSS scheme over class groups that achieves similar efficiency to state-of-the art schemes that only allow for reconstructing a function of the secret, while our scheme allows the reconstruction of the original secret. Our construction generalizes the DDH-based scheme of YOLO YOSO to operate over class groups, which poses technical challenges in adapting the necessary NIZKs in face of the unknown group order and the fact that efficient NIZKs of knowledge are not as simple to construct in this setting. Building on our PVSS scheme's ability to recover the original secret, we propose two DKG protocols for discrete logarithm key pairs: a biasable 1-round protocol, which improves on the concrete communication/computational complexities of previous works; and a 2-round unbiasable protocol, which improves on the round complexity of previous works. We also add publicly verifiable resharing towards anonymous committees to our PVSS, so that it can be used to efficiently transfer state among committees in the YOSO setting. Together with a recent construction of MPC in the YOSO model based on class groups (Braun et al. CRYPTO'23), this results in the most efficient full realization (i.e without assuming receiver anonymous channels) of YOSO MPC based on the CDN framework with transparent setup.
Last updated:  2024-03-20
Zero-Dimensional Gröbner Bases for Rescue-XLIX
Matthias Johann Steiner
Rescue-XLIX is an Arithmetization-Oriented Substitution-Permutation Network over prime fields $\mathbb{F}_p$ which in one full round first applies an SPN based on $x \mapsto x^d$ followed by an SPN based on the inverse power map $x \mapsto x^{\frac{1}{d}}$. In a recent work, zero-dimensional Gröbner bases for SPN and Poseidon sponge functions have been constructed by utilizing weight orders. Following this approach, we construct zero-dimensional Gröbner bases for Rescue-XLIX ciphers and sponge functions.
Last updated:  2024-03-20
A Zero-Dimensional Gröbner Basis for Poseidon
Matthias Johann Steiner
In this paper we construct dedicated weight orders $>$ so that a $>$-Gröbner bases of Poseidon can be found via linear transformations for the preimage as well as the CICO problem. In particular, with our Gröbner bases we can exactly compute the $\mathbb{F}_q$-vector space dimension of the quotient space for all possible Poseidon configurations. This in turn resolves previous attempts to assess the security of Poseidon against Gröbner basis attacks, since the vector space dimension quantifies the complexity of computing the variety of a zero-dimensional polynomial system.
Last updated:  2024-03-20
A Deep Analysis of two Glitch-Free Hardware Masking Schemes SESYM and LMDPL
Nicolai Müller, Daniel Lammers, and Amir Moradi
In the context of masking, which is the dominant technique for protecting cryptographic hardware designs against SCA attacks, the focus has long been on the design of masking schemes that guarantee provable security in the presence of glitches. Unfortunately, achieving this comes at the cost of increased latency, since registers are required to stop glitch propagation. Previous work has attempted to reduce latency by eliminating registers, but the exponential increase in area makes such approaches impractical. Some relatively new attempts have used DRP logic styles to avoid glitches in algorithmically masked circuits. Promising approaches in this area include LMDPL and SESYM, presented at CHES 2020 and CHES 2022 respectively. Both schemes allow masking of arbitrary functions with only one cycle latency. However, even if glitches no longer occur, there are other physical defaults that may violate the security of a glitch-free masked circuit. The imbalanced delay of dual rails is a known security problem for DRP logic styles such as WDDL, but is not covered by the known security models, e.g., robust probing model. In this work, we illustrate that imbalanced signal delays pose a threat to the security of algorithmically masked circuits implemented with DRP logic, both in theory and practice. Notably, we underscore the security of LMDPL even when delays are taken into account, contrasting with the vulnerability observed in SESYM under similar conditions. Consequently, our findings highlight the critical importance of addressing imbalanced delays in the design of masked circuits using DRP logic. In particular, our findings motivate the need for an appropriate security model, and imply that relying solely on the probing security model and avoiding glitches may be insufficient to construct secure circuits.
Last updated:  2024-03-20
SoK: Polynomial Multiplications for Lattice-Based Cryptosystems
Vincent Hwang
We survey various mathematical tools used in software implementations of polynomial multiplication in $\mathbb{Z}_q[x] / \langle x^n - \alpha x - \beta \rangle$. In particular, we survey implementation works targeting polynomial multiplications in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with the instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2. There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as Cooley–Tukey FFT, Bruun’s FFT, Rader’s FFT, Karatsuba, and Toom–Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including injections, Schönhage’s FFT, Nussbaumer’s FFT, and localization; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, Good–Thomas FFT, truncation, incomplete transformation, and Toeplitz matrix-vector product. For vectorization, we survey the relations between homomorphisms and the support of vector arithmetic. We then go through several case studies: we compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.
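As a small illustration of one of the surveyed modular-arithmetic tools, here is a minimal Python sketch of Montgomery reduction; the modulus $q = 3329$ (the Kyber modulus) and the radix $R = 2^{16}$ are illustrative choices, and this is a reference-style sketch rather than any of the optimized implementations discussed in the SoK.

Q = 3329                  # e.g. the Kyber modulus
R = 1 << 16               # Montgomery radix, R > Q, gcd(R, Q) = 1
Q_INV = pow(-Q, -1, R)    # Q_INV = -Q^{-1} mod R

def montgomery_reduce(t: int) -> int:
    """Given 0 <= t < Q*R, return t * R^{-1} mod Q."""
    m = (t * Q_INV) % R        # m = t * (-Q^{-1}) mod R
    u = (t + m * Q) // R       # exact division: the low 16 bits cancel
    return u - Q if u >= Q else u

def mont_mul(a: int, b: int) -> int:
    """Multiply a, b given in Montgomery form (both scaled by R mod Q)."""
    return montgomery_reduce(a * b)

# Usage: convert into Montgomery form, multiply, convert back.
a, b = 1234, 2345
a_m, b_m = (a * R) % Q, (b * R) % Q
assert montgomery_reduce(mont_mul(a_m, b_m)) == (a * b) % Q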
Last updated:  2024-03-20
Partially Non-Interactive Two-Round Lattice-Based Threshold Signatures
Rutchathon Chairattana-Apirom, Stefano Tessaro, and Chenzhi Zhu
This paper gives the first lattice-based two-round threshold signature based on lattice assumptions for which the first message is independent of the message being signed without relying on fully-homomorphic encryption. Our construction supports arbitrary thresholds and is scalable to support a large number of signers, say n = 1024. Our construction provides a careful instantiation of a generic threshold signature construction by Tessaro and Zhu (EUROCRYPT ’23) based on specific linear hash functions, which in turns can be seen as a generalization of the FROST scheme by Komlo and Goldberg (SAC ’20). Our reduction techniques are new in the context of lattice-based cryptography. Also, our scheme does not use any heavy tools, such as NIZKs or homomorphic trapdoor commitments.
Last updated:  2024-03-20
Algebraic isomorphic spaces of ideal lattices, reduction of Ring-SIS problem, and new reduction of Ring-LWE problem
Zhuang Shan, Leyou Zhang, Qing Wu, and Qiqi Lai
This paper mainly studies an open problem in modern cryptography, namely the Ring-SIS reduction problem. In order to prove the hardness of the Ring-SIS problem, this paper introduces the concepts of the one-dimensional SIS problem, the Ring-SIS$|_{x=0}$ problem, and the variant knapsack problem. The equivalence relations between the three are first established, on which the connection between the Ring-SIS$|_{x=0}$ problem and the Ring-SIS problem is built. This proves that the hardness of the Ring-SIS problem is no less than that of the variant knapsack problem and no more than that of the SIS problem. Additionally, we reduce the Ring-LWE problem to the Ring-SIS problem, which guarantees the security of encryption schemes based on Ring-LWE to a certain degree. Lastly, this article proves that the difficulty of the Ring-SIS problem and the Ring-LWE problem is moderate with respect to the spatial dimension or polynomial degree.
Last updated:  2024-03-20
Arctic: Lightweight and Stateless Threshold Schnorr Signatures
Chelsea Komlo and Ian Goldberg
Threshold Schnorr signatures are seeing increased adoption in practice, and offer practical defenses against single points of failure. However, one challenge with existing randomized threshold Schnorr signature schemes is that signers must carefully maintain secret state across signing rounds, while also ensuring that state is deleted after a signing session is completed. Failure to do so will result in a fatal key-recovery attack by re-use of nonces. While deterministic threshold Schnorr signatures that mitigate this issue exist in the literature, all prior schemes incur high complexity and performance overhead in comparison to their randomized equivalents. In this work, we seek the best of both worlds; a deterministic and stateless threshold Schnorr signature scheme that is also simple and efficient. Towards this goal, we present Arctic, a lightweight two-round threshold Schnorr signature that is deterministic, and therefore does not require participants to maintain state between signing rounds. As a building block, we formalize the notion of a Verifiable Pseudorandom Secret Sharing (VPSS) scheme, and define Shine, an efficient VPSS construction. Shine is secure when the total number of participants is at least 2t − 1 and the adversary is assumed to corrupt at most t − 1; i.e., in the honest majority model. We prove that Arctic is secure under the discrete logarithm assumption in the random oracle model, similarly assuming at minimum 2t − 1 number of signers and a corruption threshold of at most t − 1. For moderately sized groups (i.e., when n ≤ 20), Arctic is more than an order of magnitude more efficient than prior deterministic threshold Schnorr signatures in the literature. For small groups where n ≤ 10, Arctic is three orders of magnitude more efficient.
Last updated:  2024-03-19
Shorter VOLEitH Signature from Multivariate Quadratic
Dung Bui
The VOLE-in-the-head paradigm, recently introduced by Baum et al. (Crypto 2023), allows transforming zero-knowledge protocols in the designated verifier setting into public-coin protocols, which can be made non-interactive and publicly verifiable. Our transformation applies to a large class of ZK protocols based on vector oblivious linear evaluation (VOLE) and leads to resulting ZK protocols that have linear proof size and are simpler, smaller, and faster than related approaches based on MPC-in-the-head. We propose a new candidate post-quantum signature scheme from the Multivariate Quadratic (MQ) problem based on a new protocol for the VOLE-in-the-head paradigm, which significantly reduces the signature size compared to previous works. We achieve a signature size of 2.5 KB for a 128-bit security level. Compared to the state-of-the-art MQ-based signature schemes, our signature scheme achieves a factor of 3 to 4 improvement in terms of signature size while keeping the computational efficiency competitive.
Last updated:  2024-03-19
ON THE IMPLEMENTATION OF A LATTICE-BASED DAA FOR VANET SYSTEM
Doryan Lesaignoux and Mikael Carmona
Direct Anonymous Attestation (DAA) is a cryptographic protocol that enables users with a Trusted Platform Module (TPM) to authenticate without revealing their identity. Thus, DAA has emerged as a good privacy-enhancing solution. Current standards base their security on the factorization and discrete logarithm problems, making them vulnerable to quantum computer attacks. Recently, a number of lattice-based DAA schemes have been proposed in the literature to start the transition to quantum-resistant cryptography. In addition, DAA has been adapted to Vehicular Ad-hoc NETworks (VANETs) to offer secure vehicle-to-vehicle/infrastructure communication (V2V and V2I). In this paper, we provide an implementation of the most advanced post-quantum DAA scheme for VANETs. We explore the cryptographic foundations, construction methodology, and performance of this scheme, offering insights into its suitability for various real-world use cases.
Last updated:  2024-03-19
Security Guidelines for Implementing Homomorphic Encryption
Jean-Philippe Bossuat, Rosario Cammarota, Jung Hee Cheon, Ilaria Chillotti, Benjamin R. Curtis, Wei Dai, Huijing Gong, Erin Hales, Duhyeong Kim, Bryan Kumara, Changmin Lee, Xianhui Lu, Carsten Maple, Alberto Pedrouzo-Ulloa, Rachel Player, Luis Antonio Ruiz Lopez, Yongsoo Song, Donggeon Yhee, and Bahattin Yildiz
Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows performing arbitrary operations on encrypted data. Since the conception of the idea in [RAD78], it was considered a holy grail of cryptography. After the first construction in 2009 [Gen09], it has evolved to become a practical primitive with strong security guarantees. Most modern constructions are based on well-known lattice problems such as Learning with Errors (LWE). Besides its academic appeal, in recent years FHE has also attracted significant attention from industry, thanks to its applicability to a considerable number of real-world use-cases. An upcoming standardization effort by ISO/IEC aims to support the wider adoption of these techniques. However, one of the main challenges that standards bodies, developers, and end users usually encounter is establishing parameters. This is particularly hard in the case of FHE because the parameters are not only related to the security level of the system, but also to the type of operations that the system is able to handle. In this paper, we provide examples of parameter sets for LWE targeting particular security levels that can be used in the context of FHE constructions. We also give examples of complete FHE parameter sets, including the parameters relevant for correctness and performance, alongside those relevant for security. As an additional contribution, we survey the parameter selection support offered in open-source FHE libraries.
Last updated:  2024-03-19
Perfect Zero-Knowledge PCPs for #P
Tom Gur, Jack O'Connor, and Nicholas Spooner
We construct perfect zero-knowledge probabilistically checkable proofs (PZK-PCPs) for every language in #P. This is the first construction of a PZK-PCP for any language outside BPP. Furthermore, unlike previous constructions of (statistical) zero-knowledge PCPs, our construction simultaneously achieves non-adaptivity and zero knowledge against arbitrary (adaptive) polynomial-time malicious verifiers. Our construction consists of a novel masked sumcheck PCP, which uses the combinatorial nullstellensatz to obtain antisymmetric structure within the hypercube and randomness outside of it. To prove zero knowledge, we introduce the notion of locally simulatable encodings: randomised encodings in which every local view of the encoding can be efficiently sampled given a local view of the message. We show that the code arising from the sumcheck protocol (the Reed--Muller code augmented with subcube sums) admits a locally simulatable encoding. This reduces the algebraic problem of simulating our masked sumcheck to a combinatorial property of antisymmetric functions.
Last updated:  2024-03-19
Monomial Isomorphism for Tensors and Applications to Code Equivalence Problems
Giuseppe D'Alconzo
Starting from the problem of $d$-Tensor Isomorphism ($d$-TI), we study the relation between various Code Equivalence problems in different metrics. In particular, we show a reduction from the sum-rank metric (CE${}_{sr}$) to the rank metric (CE${}_{rk}$). To obtain this result, we investigate reductions between tensor problems. We define the Monomial Isomorphism problem for $d$-tensors ($d$-TI${}^*$), where, given two $d$-tensors, we ask if there are $d-1$ invertible matrices and a monomial matrix sending one tensor into the other. We link this problem to the well-studied $d$-TI and the TI-completeness of $d$-TI${}^*$ is shown. Due to this result, we obtain a reduction from CE${}_{sr}$ to CE${}_{rk}$. In the literature, a similar result was known, but it needs an additional assumption on the automorphisms of matrix codes. Since many constructions based on the hardness of Code Equivalence problems are emerging in cryptography, we analyze how such reductions can be taken into account in the design of cryptosystems based on CE${}_{sr}$.
Last updated:  2024-03-19
Atlas-X Equity Financing: Unlocking New Methods to Securely Obfuscate Axe Inventory Data Based on Differential Privacy
Antigoni Polychroniadou, Gabriele Cipriani, Richard Hua, and Tucker Balch
Banks publish daily a list of available securities/assets (axe list) to selected clients to help them effectively locate Long (buy) or Short (sell) trades at reduced financing rates. This reduces costs for the bank, as the list aggregates the bank's internal firm inventory per asset for all clients of long as well as short trades. However, this is somewhat problematic: (1) the bank's inventory is revealed; (2) trades of clients who contribute to the aggregated list, particularly those deemed large, are revealed to other clients. Clients conducting sizable trades with the bank and possessing a portion of the aggregated asset exceeding $50\%$ are considered to be concentrated clients. This could potentially reveal a concentrated client's trading activity to their competitors, thus providing them an unfair advantage in the market. Atlas-X Axe Obfuscation, powered by new differentially private methods, enables a bank to obfuscate its published axe list on a daily basis while under continual observation, thus maintaining an acceptable inventory Profit and Loss (P\&L) cost pertaining to the noisy obfuscated axe list while reducing the clients' trading activity leakage. Our main differential privacy innovation is a differentially private aggregator for streams (time series data) of both positive and negative integers under continual observation. For the last two years, the Atlas-X system has been live in production across three major regions (USA, Europe, and Asia) at J.P. Morgan, a major financial institution, facilitating significant profitability. To our knowledge, it is the first differential privacy solution to be deployed in the financial sector. We also report benchmarks of our algorithm based on (anonymous) real and synthetic data to showcase the quality of our obfuscation and its success in production.
Last updated:  2024-03-19
Mangrove: A Scalable Framework for Folding-based SNARKs
Wilson Nguyen, Trisha Datta, Binyi Chen, Nirvan Tyagi, and Dan Boneh
We present a framework for building efficient folding-based SNARKs. First we develop a new "uniformizing" compiler for NP statements that converts any poly-time computation to a sequence of identical simple steps. The resulting uniform computation is especially well-suited to be processed by a folding-based IVC scheme. Second, we develop two optimizations to folding-based IVC. The first reduces the recursive overhead of the IVC by restructuring the relation to which folding is applied. The second employs a "commit-and-fold" strategy to further simplify the relation. Together, these optimizations result in a folding-based SNARK that has a number of attractive features. First, the scheme uses a constant-size transparent common reference string (CRS). Second, the prover has (i) low memory footprint, (ii) makes only two passes over the data, (iii) is highly parallelizable, and (iv) is concretely efficient. Microbenchmarks indicate proving time is comparable to leading monolithic SNARKs, and is significantly faster than other streaming SNARKs. On a laptop, for $2^{24}$ ($2^{32}$) gates, the Mangrove prover is estimated to take $2$ minutes ($8$ hours) with peak memory usage approximately $390$ MB ($800$ MB).
Last updated:  2024-03-19
Multi-Party Replicated Secret Sharing over a Ring with Applications to Privacy-Preserving Machine Learning
Alessandro Baccarini, Marina Blanton, and Chen Yuan
Secure multi-party computation has seen significant performance advances and increasing use in recent years. Techniques based on secret sharing offer attractive performance and are a popular choice for privacy-preserving machine learning applications. Traditional techniques operate over a field, while designing equivalent techniques for a ring $\mathbb{Z}_{2^k}$ can boost performance. In this work, we develop a suite of multi-party protocols for a ring in the honest majority setting starting from elementary operations to more complex with the goal of supporting general-purpose computation. We demonstrate that our techniques are substantially faster than their field-based equivalents when instantiated with a different number of parties and perform on par with or better than state-of-the-art techniques with designs customized for a fixed number of parties. We evaluate our techniques on machine learning applications and show that they offer attractive performance.
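A minimal Python sketch of 3-party replicated secret sharing over $\mathbb{Z}_{2^k}$, the general idea underlying the ring-based techniques above; it shows sharing, reconstruction, and communication-free addition only, not the paper's full protocol suite or its security measures.

import secrets

K = 64
MASK = (1 << K) - 1

def share(x: int):
    """Split x into x1 + x2 + x3 mod 2^k; party i holds the pair (x_i, x_{i+1})."""
    x1, x2 = secrets.randbelow(1 << K), secrets.randbelow(1 << K)
    x3 = (x - x1 - x2) & MASK
    return [(x1, x2), (x2, x3), (x3, x1)]

def reconstruct(shares):
    (x1, _), (x2, _), (x3, _) = shares
    return (x1 + x2 + x3) & MASK

def add_local(sh_a, sh_b):
    """Addition of two shared values requires no communication."""
    return [((a1 + b1) & MASK, (a2 + b2) & MASK)
            for (a1, a2), (b1, b2) in zip(sh_a, sh_b)]

a, b = 20240318, 42
assert reconstruct(add_local(share(a), share(b))) == (a + b) & MASK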
Last updated:  2024-03-18
Atomic and Fair Data Exchange via Blockchain
Ertem Nusret Tas, István András Seres, Yinuo Zhang, Márk Melczer, Mahimna Kelkar, Joseph Bonneau, and Valeria Nikolaenko
We introduce a blockchain Fair Data Exchange (FDE) protocol, enabling a storage server to transfer a data file to a client atomically: the client receives the file if and only if the server receives an agreed-upon payment. We put forth a new definition for a cryptographic scheme that we name verifiable encryption under committed key (VECK), and we propose two instantiations for this scheme. Our protocol relies on a blockchain to enforce the atomicity of the exchange and uses VECK to ensure that the client receives the correct data (matching an agreed-upon commitment) before releasing the payment for the decrypting key. Our protocol is trust-minimized and requires only constant-sized on-chain communication, concretely $3$ signatures, $1$ verification key, and $1$ secret key, with most of the data stored and communicated off-chain. It also supports exchanging only a subset of the data, can amortize the server's work across multiple clients, and offers a general framework to design alternative FDE protocols using different commitment schemes. A prominent application of our protocol is the Danksharding data availability scheme on Ethereum, which commits to data via KZG polynomial commitments. We also provide an open-source implementation for our protocol with both instantiations for VECK, demonstrating our protocol's efficiency and practicality on Ethereum.
Last updated:  2024-03-18
Towards Verifiable FHE in Practice: Proving Correct Execution of TFHE's Bootstrapping using plonky2
Louis Tremblay Thibault and Michael Walter
In this work we demonstrate for the first time that a full FHE bootstrapping operation can be proven using a SNARK in practice. We do so by designing an arithmetic circuit for the bootstrapping operation and prove it using plonky2. We are able to prove the circuit on an AWS C6i.metal instance in about 20 minutes. Proof size is about 200kB and verification takes less than 10ms. As the basis of our bootstrapping operation we use TFHE's programmable bootstrapping and modify it in a few places to more efficiently represent it as an arithmetic circuit (while maintaining full functionality and security). In order to achieve our results in a memory-efficient way, we take advantage of the structure of the computation and plonky2's ability to efficiently prove its own verification circuit to implement a recursion-based IVC scheme.
Last updated:  2024-03-18
Encrypted Image Classification with Low Memory Footprint using Fully Homomorphic Encryption
Lorenzo Rovida and Alberto Leporati
Classifying images has become a straightforward and accessible task, thanks to the advent of Deep Neural Networks. Nevertheless, not much attention is given to the privacy concerns associated with sensitive data contained in images. In this study, we propose a solution to this issue by exploring an intersection between Machine Learning and cryptography. In particular, Fully Homomorphic Encryption (FHE) emerges as a promising solution, as it enables computations to be performed on encrypted data. We, therefore, propose a Residual Network implementation based on FHE which allows the classification of encrypted images, ensuring that only the user can see the result. We suggest a circuit which reduces the memory requirements by more than 85% compared to the most recent works, while maintaining a high level of accuracy and a short computational time. We implement the circuit using the well-known CKKS scheme, which enables approximate encrypted computations. We evaluate the results from three perspectives: memory requirements, computational time and calculations precision. We demonstrate that it is possible to evaluate an encrypted ResNet20 in less than five minutes on a laptop using approximately 15GB of memory, achieving an accuracy of 91.67% on the CIFAR-10 dataset, which is almost equivalent to the accuracy of the plain model (92.60%).
Last updated:  2024-03-18
On iterated punctured Grover
Cezary Pilaszewicz and Marian Margraf
Grover’s algorithm is a very versatile cryptanalytical tool. Even though it doesn’t provide an exponential speed-up, it still changed the cryptographic requirements all over the world. Usually, Grover’s algorithm is executed with a fixed well-defined function indicating good states. In this paper, we want to investigate what happens if the function is changed over time to mark less and less good states. We compute the amplitudes after $2^{s/2}$ steps of an adjusted Grover’s algorithm proposed by Zheng et al. in Nested Quantum Search Model on Symmetric Ciphers and Its Applications (2023). We use the amplitudes to reason that such an approach always leads to a worse run-time when compared to the naïve version. We also indicate at which point in Zheng et al. the counterintuitive nature of quantum computation leads to false assumptions.
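For reference, the standard (unpunctured) Grover amplitude evolution that the paper's analysis modifies by shrinking the set of marked states over time; the Python snippet below uses the textbook formula $\sin((2k+1)\theta)$ with $\sin\theta = \sqrt{M/N}$ and illustrative values of $N$ and $M$.

import math

def grover_good_amplitude(N: int, M: int, k: int) -> float:
    """Amplitude on the marked subspace after k Grover iterations,
    starting from the uniform superposition over N states with M marked."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * k + 1) * theta)

N, M = 2**20, 1
k_opt = round(math.pi / (4 * math.asin(math.sqrt(M / N))) - 0.5)
print(k_opt, grover_good_amplitude(N, M, k_opt))   # ~804 iterations, amplitude ~1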
Last updated:  2024-03-18
Isogeny problems with level structure
Luca De Feo, Tako Boris Fouotsa, and Lorenz Panny
Given two elliptic curves and the degree of an isogeny between them, finding the isogeny is believed to be a difficult problem---upon which rests the security of nearly any isogeny-based scheme. If, however, to the data above we add information about the behavior of the isogeny on a large enough subgroup, the problem can become easy, as recent cryptanalyses on SIDH have shown. Between the restriction of the isogeny to a full $N$-torsion subgroup and no ''torsion information'' at all lies a spectrum of interesting intermediate problems, raising the question of how easy or hard each of them is. Here we explore modular isogeny problems where the torsion information is masked by the action of a group of $2\times 2$ matrices. We give reductions between these problems, classify them by their difficulty, and link them to security assumptions found in the literature.
Last updated:  2024-03-18
Quantum Cryptanalysis of 5 rounds Feistel schemes and Benes schemes
Maya Chartouny, Jacques Patarin, and Ambre Toulemonde
In this paper, we provide new quantum cryptanalysis results on 5 rounds (balanced) Feistel schemes and on Benes schemes. More precisely, we give an attack on 5 rounds Feistel schemes in $\Theta(2^{2n/3})$ quantum complexity and an attack on Benes schemes in $\Theta(2^{2n/3})$ quantum complexity, where $n$ is the number of bits of the internal random functions.
Last updated:  2024-03-18
Classical and Quantum Generic Attacks on 6-round Feistel Schemes
Maya Chartouny, Benoit Cogliati, and Jacques Patarin
In this paper, we describe new quantum generic attacks on 6 rounds balanced Feistel networks with internal functions or internal permutations. In order to obtain our new quantum attacks, we revisit a result of Childs and Eisenberg that extends Ambainis' collision finding algorithm to the subset finding problem. In more detail, we continue their work by carefully analyzing the time complexity of their algorithm. We also use four-point structure attacks instead of two-point structure attacks, which leads to a complexity of $\mathcal{O}(2^{8n/5})$ instead of $\mathcal{O}(2^{2n})$. Moreover, we have also found an improved classical (i.e., non-quantum) attack on $6$ rounds with internal permutations. The complexity here will be in $\mathcal{O}(2^{2n})$ instead of the previously known $\mathcal{O}(2^{3n})$.
Last updated:  2024-03-18
Asymptotics and Improvements of Sieving for Codes
Léo Ducas, Andre Esser, Simona Etinski, and Elena Kirshanova
A recent work by Guo, Johansson, and Nguyen (Eprint'23) proposes a promising adaptation of Sieving techniques from lattices to codes, in particular, by claiming concrete cryptanalytic improvements on various schemes. The core of their algorithm reduces to a Near Neighbor Search (NNS) problem, for which they devise an ad-hoc approach. In this work, we aim for a better theoretical understanding of this approach. First, we provide an asymptotic analysis which is not present in the original paper. Second, we propose a more systematic use of well-established NNS machinery, known as Locality Sensitive Hashing and Filtering (LSH/F). LSH/F is an approach that has been applied very successfully in the case of sieving over lattices. We thus establish the first baseline for the sieving approach with a decoding complexity of $2^{0.117n}$ for the conventional worst parameters (full distance decoding, where complexity is maximized over all code rates). Our cumulative improvements eventually enable us to lower the hardest parameter decoding complexity for SievingISD algorithms to $2^{0.101n}$. This approach outperforms the BJMM algorithm (Eurocrypt'12) but falls behind the most advanced conventional ISD approach by Both and May (PQCrypto'18). As for lattices, we found the Random-Spherical-Code-Product (RPC) to give the best asymptotic complexity. Moreover, we also consider an alternative that seems specific to the Hamming Sphere, which we believe could be of practical interest as it plausibly hides less sub-exponential overheads than RPC.
Last updated:  2024-03-18
Studying Lattice-Based Zero-Knowledge Proofs: A Tutorial and an Implementation of Lantern
Lena Heimberger, Florian Lugstein, and Christian Rechberger
Lattice-based cryptography has emerged as a promising new candidate to build cryptographic primitives. It offers resilience against quantum attacks, enables fully homomorphic encryption, and relies on robust theoretical foundations. Zero-knowledge proofs (ZKPs) are an essential primitive for various privacy-preserving applications. For example, anonymous credentials, group signatures, and verifiable oblivious pseudorandom functions all require ZKPs. Currently, the majority of ZKP systems are based on elliptic curves, which are susceptible to attacks from quantum computers. This project presents the first implementation of Lantern, a state-of-the-art lattice-based ZKP system that can create compact proofs, which are a few dozen kilobytes large, for basic statements. We thoroughly explain the theory behind the scheme and give a full implementation in a Jupyter Notebook using SageMath to make Lantern more accessible to researchers. Our interactive implementation allows users to fully understand the scheme and its building blocks, providing a valuable resource to understand both ZKPs and lattice cryptography. Albeit not optimized for performance, this implementation allows us to construct a Module-LWE secret proof in 35s on a consumer laptop. Through our contributions, we aim to advance the understanding and practical utilization of lattice-based ZKP systems, particularly emphasizing accessibility for the broader research community.
Last updated:  2024-03-18
Compressed M-SIDH: An Instance of Compressed SIDH-like Schemes with Isogenies of Highly Composite Degrees
Kaizhan Lin, Jianming Lin, Shiping Cai, Weize Wang, and Chang-An Zhao
Recently, SIDH was broken by a series of attacks. To avoid these attacks, several new countermeasures, such as M-SIDH and binSIDH, have been developed. Unlike SIDH, the new SIDH-like schemes have relatively large public key sizes. Moreover, the orders of the torsion groups considered in the new SIDH-like schemes are products of many primes, so the key compression techniques of SIDH cannot be applied directly to these schemes. It remains an open problem to compress the public key in the new SIDH-like schemes. This paper takes M-SIDH as an instance to explore how to compress the public key in new SIDH-like schemes efficiently. We propose compressed M-SIDH, which is reminiscent of compressed SIDH. We show that our approach to compressing the public key of M-SIDH is valid and prove that compressed M-SIDH is secure as long as M-SIDH is secure. In addition, we present new algorithms to accelerate public-key compression in M-SIDH. We provide a proof-of-concept implementation of compressed M-SIDH in SageMath. Experimental results show that our approach fits well with compressed M-SIDH. The techniques proposed in this work also benefit public-key compression in other SIDH-like protocols, such as binSIDH and terSIDH. Furthermore, our method for torsion basis generation has the potential to improve the performance of SQALE and dCSIDH.
Last updated:  2024-03-18
Single trace HQC shared key recovery with SASCA
Guillaume Goy, Julien Maillard, Philippe Gaborit, and Antoine Loiseau
This paper presents practical single-trace attacks against the Hamming Quasi-Cyclic (HQC) Key Encapsulation Mechanism. These are the first Soft Analytical Side-Channel Attacks (SASCA) against code-based cryptography. We mount SASCA based on Belief Propagation (BP) on several steps of HQC's decapsulation process. First, we target the Reed-Solomon (RS) decoder involved in HQC's publicly known code. We perform simulated attacks under a Hamming weight leakage model and reach excellent accuracies (above $0.9$) up to a high noise level ($\sigma = 3$), thanks to a re-decoding strategy. In a real-world attack scenario on an STM32F407, this attack achieves a perfect success rate. Second, we conduct an analogous attack against the RS encoder used during the re-encryption step required by the Fujisaki-Okamoto-like transform. Both in simulation and in practice, the results are satisfactory, and this attack represents a threat to the security of HQC. Finally, we analyze the strength of countermeasures based on masking and shuffling strategies. In line with previous SASCA literature targeting Kyber, we show that masking HQC is a limited countermeasure against BP attacks, as are shuffling countermeasures adapted from Kyber. We evaluate the ``full shuffling'' strategy, which thwarts our attack by introducing sufficient combinatorial complexity. Lastly, we highlight the difficulty of protecting the current RS encoder with a shuffling strategy. A possible countermeasure would be to adopt another encoding algorithm for the scheme to support full shuffling; since the encoding subroutine is only a small part of the implementation, this would come at a small cost.
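The simulated setting can be reproduced in a few lines of Python: a Hamming-weight-plus-Gaussian-noise leakage model, and the per-byte likelihoods that a belief-propagation attack would consume as priors. The byte-level granularity and $\sigma = 3$ are illustrative assumptions matching the noise level quoted above, not the authors' attack code.

import math, random

def hamming_weight(x):
    return bin(x).count("1")

def simulate_leakage(intermediate_bytes, sigma):
    """Hamming-weight leakage model: each processed byte leaks HW(byte) + Gaussian noise."""
    return [hamming_weight(b) + random.gauss(0.0, sigma) for b in intermediate_bytes]

def byte_likelihoods(leak, sigma):
    """Prior for belief propagation: likelihood of each candidate byte value given one noisy observation."""
    scores = [math.exp(-(leak - hamming_weight(v)) ** 2 / (2 * sigma ** 2)) for v in range(256)]
    total = sum(scores)
    return [s / total for s in scores]

secret_bytes = [random.randrange(256) for _ in range(16)]
trace = simulate_leakage(secret_bytes, sigma=3.0)
print(byte_likelihoods(trace[0], sigma=3.0)[secret_bytes[0]])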
Last updated:  2024-03-18
On Tweakable Correlation Robust Hashing against Key Leakages
Chun Guo, Xiao Wang, Kang Yang, and Yu Yu
We continue the study of blockcipher-based (tweakable) correlation robust hash functions, which are central building blocks of circuit garbling and oblivious-transfer extension schemes. Motivated by Roy (CRYPTO 2022), we first enhance the multi-user tweakable correlation robust notion of Guo et al. (CRYPTO 2020) with a {\it key leaking oracle} that tells the adversary whether a certain user key satisfies the adversarially-chosen predicate. We then investigate the state-of-the-art hash construction of Guo et al. with respect to our new security definition, providing security proof as well as matching attacks. As an application, we exhibit an OT extension protocol with non-trivial multi-user security.
Last updated:  2024-03-18
Tight ZK CPU: Batched ZK Branching with Cost Proportional to Evaluated Instruction
Yibin Yang, David Heath, Carmit Hazay, Vladimir Kolesnikov, and Muthuramakrishnan Venkitasubramaniam
We explore Zero-Knowledge proofs (ZKP) of statements expressed as programs written in high-level languages, e.g., C or assembly. At the core of executing such programs in ZK is the repeated evaluation of a CPU step, achieved by branching over the CPU’s instruction set. This approach is general and covers traversal-execution of a program’s control flow graph (CFG): here CPU instructions are straight-line program fragments (of various sizes) associated with the CFG nodes. This highlights the usefulness of ZK CPUs with a large number of instructions of varying sizes. We formalize and design an efficient tight ZK CPU, where the cost (both computation and communication, for each party) of each step depends only on the instruction taken. This qualitatively improves over the state of the art, where cost scales with the size of the largest CPU instruction (largest CFG node). Our technique is formalized in the standard commit-and-prove paradigm, so our results are compatible with a variety of (interactive and non-interactive) general-purpose ZK. We implemented an interactive tight arithmetic (over $\mathbb{F}_{2^{61}-1}$) ZK CPU based on Vector Oblivious Linear Evaluation (VOLE) and compared it to the state-of-the-art non-tight VOLE-based ZK CPU Batchman (Yang et al. CCS’23). In our experiments, under the same hardware configuration, we achieve comparable performance when instructions are of the same size and a $5$-$18\times$ improvement when instructions are of varied sizes. Our VOLE-based ZK CPU can execute $100$K (resp. $450$K) multiplication gates per second in a WAN-like (resp. LAN-like) setting, and it requires at most $102$ bytes per multiplication gate. Our basic building block, ZK Unbalanced Read-Only Memory (ZK UROM), may be of independent interest.
Last updated:  2024-03-17
Anonymous Complaint Aggregation for Secure Messaging
Connor Bell and Saba Eskandarian
Private messaging platforms provide strong protection against platform eavesdropping, but malicious users can use privacy as cover for spreading abuse and misinformation. In an attempt to identify the sources of misinformation on private platforms, researchers have proposed mechanisms to trace back the source of a user-reported message (CCS '19,'21). Unfortunately, the threat model considered by initial proposals allowed a single user to compromise the privacy of another user whose legitimate content the reporting user did not like. More recent work has attempted to mitigate this side effect by requiring a threshold number of users to report a message before its origins can be identified (NDSS '22). However, the state of the art scheme requires the introduction of new probabilistic data structures and only achieves a "fuzzy" threshold guarantee. Moreover, false positives, where the source of an unreported message is identified, are possible. This paper introduces a new threshold source tracking technique that allows a private messaging platform, with the cooperation of a third-party moderator, to operate a threshold reporting scheme with exact thresholds and no false positives. Unlike prior work, our techniques require no modification of the message delivery process for a standard source tracking scheme, affecting only the abuse reporting procedure, and do not require tuning of probabilistic data structures.
Last updated:  2024-03-17
Perfectly-Secure Multiparty Computation with Linear Communication Complexity over Any Modulus
Daniel Escudero, Yifan Song, and Wenhao Wang
Consider the task of secure multiparty computation (MPC) among $n$ parties with perfect security and guaranteed output delivery, supporting $t<n/3$ active corruptions. Suppose the arithmetic circuit $C$ to be computed is defined over a finite ring $\mathbb{Z}/q\mathbb{Z}$, for an arbitrary $q\in\mathbb{Z}$. It is known that this type of MPC over such a ring is possible, with communication that scales as $O(n|C|)$, assuming that $q$ scales as $\Omega(n)$. However, for constant-size rings $\mathbb{Z}/q\mathbb{Z}$ where $q = O(1)$, the communication is actually $O(n\log n|C|)$ due to the need for so-called ring extensions. In most natural settings, the number of parties is variable but the ``datatypes'' used for the computation are fixed (e.g., 64-bit integers). In this regime, no protocol with linear communication exists. In this work we provide an MPC protocol in this setting: perfect security, guaranteed output delivery, and $t<n/3$ active corruptions, enjoying linear communication $O(n|C|)$ even for constant-size rings $\mathbb{Z}/q\mathbb{Z}$. This includes as important particular cases small fields such as $\mathbb{F}_2$, as well as the ring $\mathbb{Z}/2^k\mathbb{Z}$. The main difficulty in achieving this result is that widely used techniques such as linear secret sharing cannot work over constant-size rings; instead, one must make use of ring extensions that add $\Omega(\log n)$ overhead, while packing $\Omega(\log n)$ ring elements in each extension element in order to amortize this cost. We make use of reverse multiplication-friendly embeddings (RMFEs) for this packing, and adapt recent techniques in network routing (Goyal et al. CRYPTO'22) to ensure this can be used efficiently for non-SIMD circuits. Unfortunately, doing this naively results in a restriction on the minimum width of the circuit, which leads to an extra additive communication term of $\mathsf{poly}(n)\cdot\mathsf{depth}(C)$. One of our biggest technical contributions lies in designing novel techniques to overcome this limitation by packing elements that are distributed across different layers. To the best of our knowledge, all prior works that have a notion of packing (e.g., RMFE or packed secret sharing) group gates within the same layer; not doing so, as in our work, leads to a unique set of challenges and complications.
Last updated:  2024-03-17
Input Transformation Based Zero-Knowledge Argument System for Arbitrary Circuits
Frank Y.C. Lu
We introduce a new efficient, transparent, interactive zero-knowledge argument system based on a new input transformation concept introduced in this paper. The core of this concept is a mechanism that converts input parameters into a format that can be processed directly by the circuit, so that the circuit output can be verified through direct computation of the circuit. Our benchmark results show that our approach can significantly improve both prover and verifier runtime, by close to or more than one order of magnitude over the state of the art, while keeping the communication cost competitive with the state of the art. Specifically, when processing a deep circuit of $2^{20}$ sequential multiplication gates with 960 input bits on a single CPU thread, the BinaryBoost version of our protocol achieves a prover runtime of $0.8$ seconds, a verifier runtime of $17$ milliseconds, and a communication cost of $55$ kilobytes. Our approach also allows our protocol to be memory-efficient without requiring a designated verifier: its theoretical memory cost is at most $O({m_p}^{1/2})$.
Last updated:  2024-03-16
Toward A Practical Multi-party Private Set Union
Jiahui Gao, Son Nguyen, and Ni Trieu
This paper studies multi-party private set union (mPSU), a fundamental cryptographic problem that allows multiple parties to compute the union of their respective datasets without revealing any additional information. We propose an efficient mPSU protocol that is secure in the presence of any number of colluding semi-honest participants. Our protocol avoids computationally expensive homomorphic operations and generic multi-party computation, thus providing an efficient solution for mPSU. The crux of our protocol lies in the use of new cryptographic tools, namely Membership Oblivious Transfer (mOT) and Conditional Oblivious Pseudorandom Function (cOPRF). We believe that mOT and cOPRF may be of independent interest. We implement our mPSU protocol and evaluate its performance. Our protocol shows an improvement of up to $37.82\times$ in running time and up to $389.85\times$ in bandwidth cost compared to the existing state-of-the-art protocols.
Last updated:  2024-03-16
SECDSA: Mobile signing and authentication under classical ``sole control''
Eric Verheul
The 2014 European eIDAS regulation regulates strong electronic authentication and legally binding electronic signatures. Both require user "sole control". Historically, smartcards have been used, based on direct interaction between the user and the relying party. Here sole control is provided by giving users both physical possession and control of the cryptographic key used for signing/authentication through a PIN. Under some interpretations, such **classical** sole control is required by the 1999 electronic signature directive. The eIDAS regulation repeals the directive and explicitly relaxes its sole control requirements in a trade-off between security and usability. This allows user interaction to be outsourced to intermediary parties (authentication providers, signing services). It also allows mobile applications as user-friendly alternatives to smartcards. However, current mobile platforms are equipped only with limited cryptographic hardware that does not support secure knowledge factors (PINs) controlling keys. The eIDAS relaxation raises concerns about sole control: intermediary parties should not be able to act as a man-in-the-middle and impersonate users. In this paper we present a simple cryptographic design for signing and authentication on standard mobile platforms providing classical sole control. We argue that our design can meet the highest eIDAS requirements, effectively introducing a new signature category in a 2016 decision of the European Commission. We also sketch a SECDSA-based implementation of the European Digital Identity Wallet recently proposed by the European Commission as part of the eIDAS regulation update.
Last updated:  2024-03-16
The Systemic Errors of Banded Quantum Fourier Transformation
Zhengjun Cao and Zhenfu Cao
Quantum Fourier Transformation (QFT) requires constructing rotation gates with extremely tiny angles. Since it is impossible to physically manipulate such tiny angles (corresponding to extremely weak energies), those gates must be replaced by scaled, controllable gates. The resulting version of QFT is called banded QFT (BQFT) and can be mathematically specified using Kronecker products and binary fractions. However, the systemic errors of BQFT have never been heuristically estimated. In this paper, we generate programming code for BQFT and argue that its systemic errors are not negligible, which means that the physical implementation of QFT with a huge transform size remains a challenge. To the best of our knowledge, this is the first time such a result has been obtained.
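One way to estimate such errors numerically is sketched below in Python, assuming the standard truncation in which the exact QFT phase $jk/2^n = \sum_{p+q<n} j_p k_q 2^{p+q-n}$ is replaced by the sum of only those bit-pair contributions of magnitude at least $2^{-b}$ (equivalently, dropping the controlled rotations with angle below $2\pi/2^b$). The qubit count, band $b$, and the operator-norm error measure are illustrative choices, not necessarily those of the paper.

import numpy as np

def banded_qft_matrix(n, band=None):
    """Matrix of the (banded) QFT on n qubits.  The exact phase j*k/2^n decomposes into
    bit-pair terms j_p * k_q * 2^(p+q-n); the banded variant keeps only the terms of
    magnitude >= 2^(-band), i.e. it drops the rotation gates with the tiniest angles."""
    N = 2 ** n
    F = np.zeros((N, N), dtype=complex)
    for j in range(N):
        for k in range(N):
            phase = sum(((j >> p) & 1) * ((k >> q) & 1) * 2.0 ** (p + q - n)
                        for p in range(n) for q in range(n)
                        if p + q < n and (band is None or p + q >= n - band))
            F[j, k] = np.exp(2j * np.pi * phase) / np.sqrt(N)
    return F

n, band = 6, 3
error = np.linalg.norm(banded_qft_matrix(n) - banded_qft_matrix(n, band), 2)
print(f"operator-norm error of the band-{band} QFT on {n} qubits: {error:.4f}")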
Last updated:  2024-03-16
Verifiable Information-Theoretic Function Secret Sharing
Stanislav Kruglik, Son Hoang Dau, Han Mao Kiah, Huaxiong Wang, and Liang Feng Zhang
A function secret sharing (FSS) scheme (Boyle et al., Eurocrypt 2015) is a cryptographic primitive that enables additive secret sharing of functions from a given function family $\mathcal{F}$. FSS supports a wide range of cryptographic applications, including private information retrieval (PIR), anonymous messaging systems, private set intersection, and more. Formally, given positive integers $r \geq 2$ and $t < r$, and a class $\mathcal{F}$ of functions $f: [n] \to \mathbb{G}$ for an Abelian group $\mathbb{G}$, an $r$-party $t$-private FSS scheme for $\mathcal{F}$ allows a dealer to split each $f \in \mathcal{F}$ into $r$ function shares $f_1, \ldots, f_r$ among $r$ servers. The shares satisfy $f = f_1 + \cdots + f_r$, and the function remains hidden from any coalition of up to $t$ servers. FSS schemes for the multi-point functions $f_{\alpha_1,\ldots,\alpha_l,\beta_1,\ldots,\beta_l}$, with distinct $\alpha_i$ and $l<t$, that evaluate to $\beta_i$ on input $\alpha_i$ for all $i\in[l]$ and to zero on all other inputs are, in the case $l=1$, known as distributed point functions (DPF). FSS schemes for the special interval functions $f^{<}_{\alpha,\beta}$ that evaluate to $\beta$ on inputs less than $\alpha$ and to zero on all other inputs are known as distributed comparison functions (DCF). Most existing FSS schemes are based on the existence of one-way functions or pseudo-random generators, and as a result, the hiding of the function $f$ holds only against computationally bounded adversaries; protocols employing them as building blocks are therefore only computationally secure. The few exceptions mostly focus on DPFs for four, eight, or $d(t+1)$ servers for a positive integer $d$, and none of them provides verifiability. In this paper, we propose a DPF for $d(t+l-1)+1$ servers, where $d$ is a positive integer, offering a better key size compared to the previously proposed DPF for $d(t+1)$ servers, and a DCF for $dt+1$ servers, also for a positive integer $d$. We introduce their verifiable extensions, in which any set of servers holding $t$ keys cannot persuade the client to accept an incorrect value of the function. This verifiability notion differs from that of existing verifiable FSS schemes in the sense that we verify not only that the function belongs to the class $\mathcal{F}$ but also the correctness of the computation results. Our schemes provide a secret key size of $O(n^{1/d}\cdot s\log(p))$ for DPF and $O(n^{1/d}\cdot s\log(p))$ for DCF, where $p^s$ is the size of the group $\mathbb{G}$.
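For intuition about the syntax above, the toy Python sketch below additively shares a single point function among $r$ servers with information-theoretic privacy, but with keys as large as the whole truth table; the point of DPF/DCF constructions such as the ones proposed here is to obtain the same correctness with far shorter keys (and, in this paper, verifiability on top). All parameters below are illustrative.

import random

def share_point_function(n, alpha, beta, r, p):
    """Additively share the point function f(alpha) = beta, f(x) = 0 elsewhere, over Z_p
    among r servers.  Keys here have size O(n); real DPF constructions compress them."""
    truth_table = [beta if x == alpha else 0 for x in range(n)]
    shares = [[random.randrange(p) for _ in range(n)] for _ in range(r - 1)]
    last = [(truth_table[x] - sum(s[x] for s in shares)) % p for x in range(n)]
    return shares + [last]

def evaluate(shares, x, p):
    """Reconstruct f(x) by adding the servers' evaluations, as in f = f_1 + ... + f_r."""
    return sum(share[x] for share in shares) % p

p = 2 ** 31 - 1
keys = share_point_function(n=16, alpha=5, beta=7, r=3, p=p)
print(evaluate(keys, 5, p), evaluate(keys, 6, p))   # expected output: 7 0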
Last updated:  2024-03-16
LATKE: A Framework for Constructing Identity-Binding PAKEs
Jonathan Katz and Michael Rosenberg
Motivated by applications to the internet of things (IoT), Cremers, Naor, Paz, and Ronen (Crypto '22) recently considered a setting in which multiple parties share a common password and want to be able to securely authenticate to each other. They observed that using standard password-authenticated key exchange (PAKE) protocols in this setting allows for catastrophic impersonation attacks whereby compromise of a single party allows an attacker to impersonate any party to any other. To address this, they proposed the notion of identity-binding PAKE (iPAKE) and showed constructions of iPAKE protocols CHIP and CRISP. In this work we present LATKE, a new framework for iPAKE that allows us to construct protocols offering features beyond what CHIP and CRISP achieve. In particular, we can instantiate the components of our framework to yield an iPAKE protocol with post-quantum security and identity concealment, where one party hides its identity until it has authenticated the other. To our knowledge, this is the first iPAKE protocol with either property. We show that the iPAKEs produced by LATKE UC-realize a slightly weakened version of the original iPAKE functionality in the adaptive corruption model with erasure and programmable random oracles. To demonstrate the concrete efficiency of our framework, we implement various instantiations and compare the resulting protocols to CHIP when run on commodity hardware. We find some pre-quantum instantiations have computation cost within 5% of CHIP and with a communication overhead of 324B, and one post-quantum instantiation achieves computation cost within 3% of CHIP with a communication overhead of 3kB.
Last updated:  2024-03-16
Modeling Mobile Crash in Byzantine Consensus
Hans Schmiedel, Runchao Han, Qiang Tang, Ron Steinfeld, and Jiangshan Yu
Targeted Denial-of-Service (DoS) attacks have been a practical concern for permissionless blockchains. Potential solutions, such as random sampling, are adopted by blockchains, but the associated security guarantees have only been informally discussed in prior work. This is because existing adversary models either do not fully capture this attack, give up certain design choices (as in the sleepy model or the asynchronous network model), or are too strong to be practical (as in the mobile Byzantine adversary model). This paper provides theoretical foundations and desired properties for consensus protocols that resist targeted DoS attacks. In particular, we define the Mobile Crash Adaptive Byzantine (MCAB) model to capture such attacks. In addition, we identify and formalize two properties for consensus protocols under the MCAB model, and analyze their trade-offs. As case studies, we prove that Ouroboros Praos and Algorand are secure in our MCAB model, giving the first formal proofs supporting their security guarantees against targeted DoS attacks, which were previously only discussed informally. We also illustrate an application of our properties by securing a streamlined BFT protocol, chained Hotstuff, against targeted DoS attacks.
Last updated:  2024-03-15
PriDe CT: Towards Public Consensus, Private Transactions, and Forward Secrecy in Decentralized Payments
Yue Guo, Harish Karthikeyan, Antigoni Polychroniadou, and Chaddy Huussin
Anonymous Zether, proposed by Bunz et al. (FC, 2020) and subsequently improved by Diamond (IEEE S&P, 2021), is an account-based confidential payment mechanism that works by using a smart contract to achieve privacy (i.e., the identities of the receivers of transactions and the payloads are hidden). In this work, we look at simplifying the existing protocol while also achieving batching of transactions for multiple receivers, while ensuring consensus and forward secrecy. To the best of our knowledge, this work is the first to formally study the notion of forward secrecy in the blockchain setting, borrowing a very popular and useful idea from the world of secure messaging. Specifically, we introduce: - FUL-Zether, a forward-secure version of Zether (Bunz et al., FC, 2020). - PRIvate DEcentralized Confidential Transactions (PriDe CT), a much-simplified version of Anonymous Zether that achieves competitive performance and enables batching of transactions for multiple receivers. - PRIvate DEcentralized Forward-secure Until Last update Confidential Transactions (PriDeFUL CT), a forward-secure version of PriDe CT. We also present an open-source, Ethereum-based implementation of our system. PriDe CT uses linearly homomorphic encryption, as Anonymous Zether does, but with simpler zero-knowledge proofs. PriDeFUL CT uses an updatable public-key encryption scheme to achieve forward secrecy, via a new DDH-based construction in the standard model. In terms of transaction sizes, Quisquis (Asiacrypt, 2019), which is the only cryptocurrency that supports batchability (albeit in the UTXO model), has 15 times more group elements than PriDe CT. Meanwhile, for a ring of $N$ receivers, Anonymous Zether requires $6\log N$ more terms, even without accounting for the ability to batch in PriDe CT. Further, our implementation indicates that, for $N=32$, even if there were 7 intended receivers, PriDe CT outperforms Anonymous Zether in proving time and gas consumption.
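As a rough illustration of the linearly homomorphic encryption ingredient mentioned above, here is a toy additively homomorphic (exponential-ElGamal-style) sketch in Python. The tiny safe-prime group, brute-force decryption of small amounts, and absence of any zero-knowledge proofs are simplifying assumptions for exposition only and bear no relation to the actual parameters or proofs of PriDe CT.

import random

# Toy additively homomorphic encryption over the order-509 subgroup of Z_1019^*.
p, q, g = 1019, 509, 4        # p = 2q + 1 is a safe prime; g generates the order-q subgroup

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def encrypt(h, m):
    """Encrypt a small integer amount m as (g^r, h^r * g^m)."""
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(h, r, p) * pow(g, m, p)) % p

def add(c1, c2):
    """Component-wise multiplication adds the underlying plaintexts."""
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt(x, c):
    gm = (c[1] * pow(c[0], q - x, p)) % p      # c2 * c1^(-x), using c1^q = 1
    for m in range(q):                         # brute-force the small message space
        if pow(g, m, p) == gm:
            return m

sk, pk = keygen()
total = add(encrypt(pk, 7), encrypt(pk, 35))
print(decrypt(sk, total))                      # expected output: 42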
Last updated:  2024-03-15
The 2Hash OPRF Framework and Efficient Post-Quantum Instantiations
Ward Beullens, Lucas Dodgson, Sebastian Faller, and Julia Hesse
An Oblivious Pseudo-Random Function (OPRF) is a two-party protocol for jointly evaluating a Pseudo-Random Function (PRF), where a user has an input x and a server has an input k. At the end of the protocol, the user learns the evaluation of the PRF using key k at the value x, while the server learns nothing about the user's input or output. OPRFs are a prime tool for building secure authentication and key exchange from passwords, private set intersection, private information retrieval, and many other privacy-preserving systems. While classical OPRFs run as fast as a TLS Handshake, current *quantum-safe* OPRF candidates are still practically inefficient. In this paper, we propose a framework for constructing OPRFs from post-quantum multi-party computation. The framework captures a family of so-called "2Hash PRFs", which sandwich a function evaluation in between two hashes. The core of our framework is a compiler that yields an OPRF from a secure evaluation of any function that is key-collision resistant and one-more unpredictable. We instantiate this compiler by providing such functions built from Legendre symbols, and from AES encryption. We then give a case-tailored protocol for securely evaluating our Legendre-based function, built from oblivious transfer (OT) and zero-knowledge proofs (ZKP). Instantiated with lattice-based OT and ZKPs, we obtain a quantum-safe OPRF that completes in 0.57 seconds, with less than 1MB of communication.
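For concreteness, below is a minimal Python sketch of a Legendre-symbol-based PRF of the kind the 2Hash framework can wrap; the Mersenne-prime modulus, key, and output length are illustrative assumptions. Evaluating the function in the clear like this of course provides no obliviousness, which is precisely what the OT- and ZKP-based secure evaluation described above adds.

def legendre_bit(a, p):
    """Map the Legendre symbol of a mod p to a bit: 0 for a quadratic residue (or zero), 1 otherwise."""
    return 1 if pow(a % p, (p - 1) // 2, p) == p - 1 else 0

def legendre_prf(key, x, p, out_bits=128):
    """Toy Legendre PRF: output the Legendre bits of key + x, key + x + 1, ..."""
    return [legendre_bit(key + x + i, p) for i in range(out_bits)]

p = 2 ** 127 - 1                      # a Mersenne prime, chosen here purely for illustration
key = 123456789123456789              # the server's PRF key k
print(legendre_prf(key, 42, p)[:16])  # first bits of F_k(42)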
Last updated:  2024-03-15
Efficiently Testable Circuits without Conductivity
Mirza Ahad Baig, Suvradip Chakraborty, Stefan Dziembowski, Małgorzata Gałązka, Tomasz Lizurej, and Krzysztof Pietrzak
The notion of ``efficiently testable circuits'' (ETC) was recently put forward by Baig et al.~(ITCS'23). Informally, an ETC compiler takes as input any Boolean circuit $C$ and outputs a circuit/inputs tuple $(C',\mathbb{T})$ where (completeness) $C'$ is functionally equivalent to $C$ and (security) if $C'$ is tampered in some restricted way, then this can be detected as $C'$ will err on at least one input in the small test set $\mathbb{T}$. The compiler of Baig et al. detects tampering even if the adversary can tamper with \emph{all} wires in the compiled circuit. Unfortunately, the model requires a strong ``conductivity'' restriction: the compiled circuit has gates with fan-out up to 3, but wires can only be tampered in one way even if they have fan-out greater than one. In this paper, we solve the main open question from their work and construct an ETC compiler without this conductivity restriction. While Baig et al. use gadgets computing the AND and OR of particular subsets of the wires, our compiler computes inner products with random vectors. We slightly relax their security notion and only require that tampering is detected with high probability over the choice of the randomness. Our compiler increases the size of the circuit by only a small constant factor. For a parameter $\lambda$ (think $\lambda\le 5$), the number of additional input and output wires is $|C|^{1/\lambda}$, while the number of test queries to detect an error with constant probability is around $2^{2\lambda}$.
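The sketch below conveys only the linear-algebra intuition behind checking wire values against inner products with random vectors: any fixed nonzero difference between the honest and the tampered assignment is detected with probability 1/2 per random GF(2) vector, so a handful of checks suffices. It is not the compiler's gadget construction, and all names and parameters are illustrative.

import random

def gf2_inner(u, v):
    """Inner product over GF(2)."""
    return sum(a & b for a, b in zip(u, v)) & 1

def detect_tampering(wires_honest, wires_tampered, num_checks):
    """If the two wire assignments differ anywhere, a uniformly random vector r satisfies
    <honest, r> != <tampered, r> with probability 1/2, independently for each check."""
    m = len(wires_honest)
    for _ in range(num_checks):
        r = [random.randint(0, 1) for _ in range(m)]
        if gf2_inner(wires_honest, r) != gf2_inner(wires_tampered, r):
            return True
    return False

honest = [random.randint(0, 1) for _ in range(32)]
tampered = honest.copy()
tampered[7] ^= 1                                            # a single tampered wire
print(detect_tampering(honest, tampered, num_checks=10))    # True except with probability 2^-10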
Last updated:  2024-03-15
Practical Lattice-Based Distributed Signatures for a Small Number of Signers
Nabil Alkeilani Alkadri, Nico Döttling, and Sihang Pu
$n$-out-of-$n$ distributed signatures are a special type of threshold $t$-out-of-$n$ signatures. They are created collaboratively by a group of $n$ signers, each holding a share of the secret key. This kind of signature has been studied intensively in recent years, motivated by applications such as reducing the risk of compromising secret keys in cryptocurrencies. Towards maintaining security in the presence of quantum adversaries, Damgård et al. (J Cryptol 35(2), 2022) proposed lattice-based constructions of $n$-out-of-$n$ distributed signatures and multi-signatures following the Fiat-Shamir with aborts paradigm (ASIACRYPT 2009). Due to the inherent issue of aborts, the protocols either require increasing their parameters by a factor of $n$, or suffer from a number of restarts that grows with $n$. This has a significant impact on their efficiency, even if $n$ is small. Moreover, the protocols use trapdoor homomorphic commitments as a further cryptographic building block, making their deployment in practice not as easy as standard lattice-based Fiat-Shamir signatures. In this work, we present a new construction of $n$-out-of-$n$ distributed signatures. It is designed specifically for applications with a small number of signers. Our construction follows the Fiat-Shamir with aborts paradigm, but solves the problem of a large number of restarts without increasing the parameters by a factor of $n$ or utilizing any further cryptographic primitive. To demonstrate the practicality of our protocol, we provide a software implementation and concrete parameters aiming at 128 bits of security. Furthermore, we select concrete parameters for the construction by Damgård et al. and for the most recent lattice-based multi-signature scheme by Chen (CRYPTO 2023), and show that our approach provides a significant improvement in terms of all efficiency metrics. Our results also show that the multi-signature schemes by Damgård et al. and Chen, as well as a multi-signature variant of our protocol, produce signatures that are no smaller than a naive multi-signature derived from the concatenation of multiple standard signatures.
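A back-of-the-envelope Python illustration of the restart issue described above, assuming each signer's rejection-sampling step accepts independently with a fixed rate (the value 1/e is used only as a ballpark single-signer figure): a naive joint protocol restarts whenever any signer aborts, so the expected number of repetitions grows exponentially with $n$ unless the parameters are enlarged.

import math

def expected_joint_restarts(n_signers, per_signer_accept_rate):
    """A joint signing attempt succeeds only if all n signers accept simultaneously,
    so the expected number of attempts is (1 / accept_rate)^n."""
    return (1.0 / per_signer_accept_rate) ** n_signers

rate = 1 / math.e     # illustrative single-signer acceptance rate under Fiat-Shamir with aborts
for n in (1, 2, 3, 5):
    print(n, round(expected_joint_restarts(n, rate), 1))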
Last updated:  2024-03-15
Differential Cryptanalysis of a Lightweight Block Cipher LELBC
Manjeet Kaur, Tarun Yadav, Manoj Kumar, and Dhananjoy Dey
In this study, we investigate the newly developed low energy lightweight block cipher (LELBC), specifically designed for resource-constrained Internet of Things (IoT) devices in smart agriculture. The designers conducted a preliminary differential cryptanalysis of LELBC through mixed-integer linear programming (MILP). This paper further delves into LELBC’s differential characteristics in both single and related-key frameworks using MILP, identifying a nine-round differential characteristic with a probability of $2^{-60}$ in a single-key framework and a 12-round differential characteristic with a probability of $2^{-60}$ in a related-key framework.
Last updated:  2024-03-15
RLWE-based distributed key generation and threshold decryption
Ferran Alborch, Ramiro Martínez, and Paz Morillo
Ever since the appearance of quantum computers, cryptography based on prime factoring and discrete logarithms has been called into question, giving birth to so-called post-quantum cryptography. The most prominent field in post-quantum cryptography is lattice-based cryptography, whose protocols are proven to be as difficult to break as certain hard lattice problems such as Learning With Errors (LWE) or Ring Learning With Errors (RLWE). Furthermore, the application of cryptographic techniques to areas like electronic voting has also generated great interest in distributed cryptography. In this work we give two original threshold protocols based on the RLWE lattice problem: one for key generation and one for decryption. We prove both correct and secure under the assumed hardness of some well-known lattice problems, and we give a rough implementation of the protocols in C to provide tentative results about their viability.
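As a reference point for the RLWE objects the two protocols manipulate, the Python/NumPy sketch below generates a single RLWE sample $(a, b = a\cdot s + e)$ in $\mathbb{Z}_q[x]/(x^n+1)$. The parameters are toy values chosen only for illustration and are far below anything cryptographically meaningful; the paper's own implementation is in C.

import numpy as np

def negacyclic_mul(a, b, q):
    """Schoolbook polynomial multiplication modulo x^n + 1 (so x^n = -1) and modulo q."""
    n = len(a)
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                c[i + j] += int(a[i]) * int(b[j])
            else:
                c[i + j - n] -= int(a[i]) * int(b[j])
    return c % q

def rlwe_sample(n, q, sigma, s):
    """One RLWE sample (a, b = a*s + e mod q) with uniform a and small Gaussian error e."""
    a = np.random.randint(0, q, n)
    e = np.rint(np.random.normal(0.0, sigma, n)).astype(np.int64)
    b = (negacyclic_mul(a, s, q) + e) % q
    return a, b

n, q, sigma = 16, 12289, 3.2
s = np.random.randint(-1, 2, n)         # small ternary secret
a, b = rlwe_sample(n, q, sigma, s)
print(a[:4], b[:4])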