Papers updated in last 7 days (50 results)

Last updated:  2024-04-25
Memory adds no cost to lattice sieving for computers in 3 or more spatial dimensions
Samuel Jaques
The security of lattice-based cryptography (LWE, NTRU, and FHE) depends on the hardness of the shortest-vector problem (SVP). Sieving algorithms give the lowest asymptotic runtime to solve SVP, but require exponential memory. Memory access costs much more in reality than in the RAM model, so we consider a computational model where processors, memory, and meters of wire are in constant proportions to each other. While this adds substantial costs to route data during lattice sieving, we modify existing algorithms to amortize these costs and find that, asymptotically, a classical computer can achieve the previous RAM model cost of $2^{0.2925d+o(d)}$ to sieve a $d$-dimensional lattice for a computer existing in 3 or more spatial dimensions, and can reach $2^{0.3113d+o(d)}$ in 2 spatial dimensions, where "spatial dimensions" are the dimensions of the physical geometry in which the computer exists. Under some assumptions about the constant terms of memory access, we estimate increases in bit security between $7$ and $23$ bits for different Kyber parameter sets and $8$ to $22$ bits for Dilithium.
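As a rough sense of scale, the sketch below simply evaluates the two exponents quoted in the abstract for a few hypothetical sieving dimensions; the $o(d)$ terms and the memory-access constants that drive the paper's concrete 7-23 bit estimates are ignored, and the dimensions are not taken from the paper.

```python
# Toy evaluation of the two asymptotic sieving exponents from the abstract.
# o(d) terms and all constants are dropped; d is a hypothetical dimension.
for d in (300, 400, 500):
    bits_3d = 0.2925 * d  # RAM-model cost, matched in >= 3 spatial dimensions
    bits_2d = 0.3113 * d  # cost in 2 spatial dimensions
    print(f"d={d}: 3D ~ 2^{bits_3d:.1f}, 2D ~ 2^{bits_2d:.1f}, "
          f"gap {bits_2d - bits_3d:.1f} bits")
```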
Last updated:  2024-04-25
Tight Security of TNT and Beyond: Attacks, Proofs and Possibilities for the Cascaded LRW Paradigm
Ashwin Jha, Mustafa Khairallah, Mridul Nandi, and Abishanka Saha
Liskov, Rivest and Wagner laid the theoretical foundations for tweakable block ciphers (TBC). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several follow-up works consider cascading LRW-type TBCs to construct beyond-the-birthday-bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 can already give BBB security. Bao et al. undertook a similar exercise in the context of LRW1 with TNT --- a three-round cascading of LRW1 --- which has been shown to achieve BBB security as well. In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with $ O(2^{n/2}) $ queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation, coupled with experimental verification, that further supports our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack. Moving on to a more positive note, we show that adding just one more block cipher call, referred to as 4-LRW1, does not just re-establish BBB security, but also amplifies it up to $ 2^{3n/4} $ queries. As a side-effect of this endeavour, we propose a new abstraction of the cascaded LRW design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us provide a modular proof covering all cascaded LRW constructions with at least $ 2 $ rounds, including 4-LRW1 and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
Last updated:  2024-04-25
Sing a song of Simplex
Victor Shoup
We flesh out some details of the recently proposed Simplex atomic broadcast protocol, and modify it so that leaders disperse blocks in a more communication-efficient fashion. The resulting protocol, called DispersedSimplex, maintains the simplicity and excellent -- indeed, optimal -- latency characteristics of the original Simplex protocol. We also present several variations, including a variant that supports "stable leaders", variants that incorporate very recently developed data dissemination techniques that allow us to disperse blocks even more efficiently, and variants that are "signature free". We also suggest a number of practical optimizations and provide concrete performance estimates that take into account not just network latency but also network bandwidth limitations and computational costs. Based on these estimates, we argue that despite its simplicity, DispersedSimplex should, in principle, perform in practice as well as or better than any other state-of-the-art atomic broadcast protocol, at least in terms of common-case throughput and latency.
Last updated:  2024-04-25
Kronos: A Secure and Generic Sharding Blockchain Consensus with Optimized Overhead
Yizhong Liu, Andi Liu, Yuan Lu, Zhuocheng Pan, Yinuo Li, Jianwei Liu, Song Bian, and Mauro Conti
Sharding enhances blockchain scalability by dividing the network into shards, each managing specific unspent transaction outputs or accounts. As a newly introduced transaction type, cross-shard transactions pose a critical challenge to the security and efficiency of sharding blockchains. Currently, there is a lack of a generic sharding consensus pattern that achieves both security and low overhead. In this paper, we present Kronos, a secure sharding blockchain consensus achieving optimized overhead. In particular, we propose a new secure sharding consensus pattern, based on a buffer managed jointly by shard members. Valid transactions are transferred to the payee via the buffer, while invalid ones are rejected through happy or unhappy paths. Kronos is proved to achieve security with atomicity under malicious clients with optimal intra-shard overhead $k\mathcal{B}$ ($k$ for the number of involved shards and $\mathcal{B}$ for the cost of a Byzantine fault tolerance (BFT) instance). Efficient rejection requires no BFT execution in happy paths, and the cost in unhappy paths is still lower than that of a two-phase commit. Besides, we propose secure cross-shard certification methods based on batch certification and reliable cross-shard transfer. The former combines hybrid trees or vector commitments, while the latter integrates erasure coding. Handling $b$ transactions, Kronos is proved to achieve reliability with low cross-shard overhead $\mathcal{O}(n b \lambda)$ ($n$ for shard size and $\lambda$ for the security parameter). Notably, Kronos imposes no restrictions on the BFT protocol and does not rely on time assumptions, offering optional constructions in various modules. Kronos could serve as a universal framework for enhancing the performance and scalability of existing BFT protocols, supporting generic network models, including asynchronous networks, and increasing throughput by several orders of magnitude. We implement Kronos using two prominent BFT protocols: asynchronous Speeding Dumbo (NDSS'22) and partially synchronous HotStuff (PODC'19). Extensive experiments (over up to 1000 AWS EC2 nodes across 4 AWS regions) demonstrate that Kronos scales the consensus nodes to thousands, achieving a substantial throughput of 320 ktx/sec with 2.0 sec latency. Compared with past solutions, Kronos achieves up to a 12$\times$ improvement in throughput and a 50% reduction in latency when cross-shard transactions dominate the workload.
Last updated:  2024-04-25
Attribute-based Keyed (Fully) Homomorphic Encryption
Keita Emura, Shingo Sato, and Atsushi Takayasu
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption in which only users who have a homomorphic evaluation key can perform a homomorphic evaluation. KHPKE satisfies CCA2 security against users who do not have a homomorphic evaluation key, while it satisfies CCA1 security against users who have the key. Thus far, several KHPKE schemes have been proposed under standard Diffie-Hellman-type assumptions, and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices, although there are no KFHE schemes secure solely under the LWE assumption in the standard model. As a natural extension, there is an identity-based variant of KHPKE; however, its security is based on a $q$-type assumption and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes due to the complex design of the known KFHE schemes. In this paper, we provide two constructions of attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since its design is conceptually much simpler than that of known KFHE schemes, we replace its building blocks with attribute-based ones and obtain the proposed ABKFHE scheme. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefit of PES, we obtain various ABKHE schemes, including the first identity-based KHPKE scheme secure under the standard $k$-linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Last updated:  2024-04-24
A provably masked implementation of BIKE Key Encapsulation Mechanism
Loïc Demange and Mélissa Rossi
BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of the NIST’s standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and provides the most competitive performance among round 4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community, and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied in a 2016 paper by Cong Chen et al. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance. In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues, such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm, and keeping the benefit of bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, we discuss our different implementation choices, and we propose a full proof of masking in the Ishai, Sahai, and Wagner (Crypto 2003) model. More practically, we also provide an open C-code masked implementation of the key-generation, encapsulation and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than that of existing masked lattice-based algorithms, the scaling in the masking order is still encouraging, and no Boolean-to-arithmetic conversion has been used. We hope that this work can be a starting point for future analysis and optimization.
Last updated:  2024-04-24
Encryption Based Covert Channel for Large Language Models
Yongge Wang
Transformer neural networks have gained significant traction since their introduction, becoming pivotal across diverse domains. Particularly in large language models like Claude and ChatGPT, the transformer architecture has demonstrated remarkable efficacy. This paper provides a concise overview of transformer neural networks and delves into their security considerations, focusing on covert channel attacks and their implications for the safety of large language models. We present a covert channel utilizing encryption and demonstrate its efficacy in circumventing Claude.ai's security measures. Our experiment reveals that Claude.ai appears to log our queries and blocks our attack within two days of our initial successful breach. This raises two concerns within the community: (1) The extensive logging of user inputs by large language models could pose privacy risks for users. (2) It may deter academic research on the security of such models due to the lack of experiment repeatability.
Last updated:  2024-04-24
Simple constructions of linear-depth t-designs and pseudorandom unitaries
Tony Metger, Alexander Poremba, Makrand Sinha, and Henry Yuen
Uniformly random unitaries, i.e., unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that ``look'' sufficiently Haar random while also being efficient to implement. Two different notions of derandomisation have emerged: $t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random. In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the ``$PFC$ ensemble'', the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the $PFC$ ensemble to show the following: 1. Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their $2t$-wise independent counterparts. 2. Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e., we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts. 3. Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from $n$ to $n + \omega(\log n)$ qubits, a small modification of our PRU construction achieves adaptive security, i.e., even a distinguisher that can query the isometry adaptively in sequence cannot distinguish it from Haar random isometries. This gives the first construction of adaptive pseudorandom isometries. Under an additional conjecture, this proof also extends to adaptive PRUs.
Last updated:  2024-04-24
Rate-1 Fully Local Somewhere Extractable Hashing from DDH
Pedro Branco, Nico Döttling, Akshayaram Srinivasan, and Riccardo Zanotto
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at $m$ secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the $m$ locations from the digest. Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is $m + \mathsf{poly}(\lambda)$ (where $\lambda$ is the security parameter). The fully local property requires that for any index $i$, there is a "very short" opening showing that the $i$-th bit of the hashed input is equal to $b$ for some $b \in \{0,1\}$. The size of this opening is required to be independent of $m$ and in particular, this means that its size is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE). In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and BARGs. Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Last updated:  2024-04-24
Hadamard Product Argument from Lagrange-Based Univariate Polynomials
Jie Xie, Yuncong Hu, and Yu Yu
The Hadamard product is the point-wise product of two vectors. This paper presents a new scheme to prove the Hadamard-product relation as a sub-protocol for SNARKs based on univariate polynomials. The prover uses a linear number of cryptographic operations to generate a proof containing a logarithmic number of field elements. Verification takes a logarithmic number of cryptographic operations, with a constant number of pairings in a bilinear group. The construction of the scheme is based on Lagrange-based KZG commitments (Kate, Zaverucha, and Goldberg at Asiacrypt 2010) and the folding technique. We construct an inner-product protocol from the folding technique on univariate polynomials in Lagrange form, and by carefully choosing random polynomials suitable for the folding technique, we construct a Hadamard-product protocol from the inner-product protocol, giving an alternative way to prove linear algebra relations in linear time; the protocol also has a better concrete proof size than previous works.
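For readers unfamiliar with the terminology, the toy snippet below (plain integers, none of the scheme's field or commitment machinery) shows the Hadamard product and its relation to the inner product used in the reduction.

```python
# Hadamard (point-wise) product of two toy vectors, and the inner product
# obtained by summing its entries (integers here, field elements in the paper).
a = [2, 3, 5]
b = [7, 1, 4]
hadamard = [x * y for x, y in zip(a, b)]   # [14, 3, 20]
inner = sum(hadamard)                      # 37: inner product <a, b>
print(hadamard, inner)
```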
Last updated:  2024-04-23
Guarding the First Order: The Rise of AES Maskings
Amund Askeland, Siemen Dhooghe, Svetla Nikova, Vincent Rijmen, and Zhenda Zhang
We provide three first-order hardware maskings of the AES, each allowing for a different trade-off between the number of shares and the number of register stages. All maskings use a generalization of the changing of the guards method enabling the re-use of randomness between masked S-boxes. As a result, the maskings do not require fresh randomness while still allowing for a minimal number of shares and providing provable security in the glitch-extended probing model. The low-area variant has five cycles of latency and a serialized area cost of $8.13~kGE$. The low-latency variant reduces the latency to three cycles while increasing the serialized area by $67.89\%$ compared to the low-area variant. The maskings of the AES encryption are implemented on FPGA and evaluated with Test Vector Leakage Assessment (TVLA).
Last updated:  2024-04-23
A Characterization of AE Robustness as Decryption Leakage Indistinguishability
Ganyuan Cao
Robustness has emerged as an important criterion for authenticated encryption, alongside the requirements of confidentiality and integrity. We introduce a novel notion, denoted IND-rCCA, to formalize the robustness of authenticated encryption from the perspective of decryption leakage. This notion augments common notions defined for AEAD schemes by considering the indistinguishability of potential leakage due to decryption failure in the presence of multiple checks for errors. With this notion, we analyze the disparity between a single-error decryption function and the actual leakage incurred during decryption. We introduce the notion of error unity, which requires that only one error is disclosed, whether implicitly or explicitly, even when there are multiple checks for errors. We further extend this notion to IND-sf-rCCA to formalize stateful security involving out-of-order ciphertexts. Additionally, we present a modification to the Encode-then-Encrypt-then-MAC (EEM) paradigm to boost its robustness and provide a concrete security proof for the modification.
Last updated:  2024-04-23
A note on "Tweakable HCTR: A BBB Secure Tweakable Enciphering Scheme"
Mustafa Khairallah
Tweakable HCTR is a tweakable enciphering scheme proposed by Dutta and Nandi at Indocrypt 2018. It provides beyond-birthday-bound security when each tweak value is not used too frequently. More importantly for this note, its security bound is claimed to degrade linearly with the maximum input length. We show in this note that this is not the case, by exhibiting a single-query distinguisher with advantage $O(l^2/2^n)$, where $l$ is the length of that query. The distinguisher does not break the beyond-birthday-bound claim, but it achieves a higher advantage than the claimed bound.
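To see what the gap means concretely, here is a back-of-the-envelope evaluation (constants in the $O(\cdot)$ dropped; the values of $n$ and $l$ are hypothetical, not taken from the note).

```python
# log2 advantages: the distinguisher's l^2/2^n versus a linear-in-l bound l/2^n.
n = 128
for l_log in (20, 30, 40):          # one query of length l = 2^l_log blocks
    attack_adv = 2 * l_log - n      # log2( l^2 / 2^n )
    linear_bound = l_log - n        # log2( l / 2^n )
    print(f"l = 2^{l_log}: attack ~ 2^{attack_adv}, linear bound ~ 2^{linear_bound}")
```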
Last updated:  2024-04-23
Quantum Implementation and Analysis of SHA-2 and SHA-3
Kyungbae Jang, Sejin Lim, Yujin Oh, Hyunjun Kim, Anubhab Baksi, Sumanta Chakraborty, and Hwajeong Seo
Quantum computers have the potential to solve hard problems that are nearly impossible to solve by classical computers; this has sparked a surge of research applying quantum technology and algorithms to cryptographic systems in order to evaluate their quantum resistance. In the process of selecting post-quantum standards, NIST categorizes security levels based on the complexity that quantum computers would require to crack AES encryption (levels 1, 3 and 5) and SHA-2 or SHA-3 (levels 2 and 4). In assessing the security strength of cryptographic algorithms against quantum threats, accurate predictions of quantum resources are crucial. Following the work of Jaques et al. at Eurocrypt 2020, NIST estimated security levels 1, 3, and 5 via the quantum circuit size for finding the key of AES-128, AES-192, and AES-256, respectively. This work has recently been followed up by Huang et al. (Asiacrypt'22) and Liu et al. (Asiacrypt'23), among others, though the most up-to-date results are available in the work by Jang et al. (ePrint'22). However, for levels 2 and 4, which relate to collision finding for the SHA-2 and SHA-3 hash functions, quantum attack complexities have not been as well studied. In this paper, we present novel techniques for optimizing the quantum circuit implementations of the SHA-2 and SHA-3 algorithms in all the categories specified by NIST. After that, we evaluate the quantum circuits of the target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. We present new concepts to optimize the quantum circuits at both the component level and the architecture level.
Last updated:  2024-04-23
SQIAsignHD: SQIsignHD Adaptor Signature
Farzin Renan and Péter Kutas
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes in which a secret randomness is hidden within a signature. Adaptor signatures are a recent cryptographic primitive and are becoming an important tool for blockchain applications such as cryptocurrencies, where they reduce on-chain costs, improve fungibility, and enable off-chain forms of payment in payment-channel networks, payment-channel hubs, and atomic swaps. However, currently used adaptor signature constructions are vulnerable to quantum adversaries due to Shor's algorithm. In this work, we introduce $\mathsf{SQIAsignHD}$, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol, SIDH, as the underlying hard relation. Furthermore, we show that our scheme is secure in the Quantum Random Oracle Model (QROM).
Last updated:  2024-04-22
Constrained Pseudorandom Functions for Inner-Product Predicates from Weaker Assumptions
Sacha Servan-Schreiber
In this paper, we build a framework for constructing Constrained Pseudorandom Functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security. Our framework can be instantiated using a random oracle or any suitable Related-Key-Attack (RKA) secure pseudorandom function. We provide three instantiations of our framework: 1. an adaptively-secure construction in the random oracle model; 2. a selectively-secure construction under the DDH assumption; and 3. a selectively-secure construction with a polynomial domain under the assumption that one-way functions exist. All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.
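As a reminder of what an inner-product constraint predicate is (a standard definition; the toy values below are hypothetical and use plain integers rather than the rings of the actual constructions): a key constrained to a vector z permits PRF evaluation exactly on the inputs x orthogonal to z.

```python
# Inner-product constraint predicate: evaluation is allowed iff <x, z> == 0.
z = [1, -2, 3]   # hypothetical constraint vector embedded in the key

def allowed(x):
    return sum(xi * zi for xi, zi in zip(x, z)) == 0

print(allowed([2, 1, 0]))   # True:  2*1 + 1*(-2) + 0*3 == 0
print(allowed([1, 1, 1]))   # False: 1 - 2 + 3 == 2
```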
Last updated:  2024-04-22
Worst-Case to Average-Case Hardness of LWE: A Simple and Practical Perspective
Divesh Aggarwal, Leong Jin Ming, and Alexandra Veliche
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems, specifically the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem, to LWE. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best achievable success probability by any probabilistic polynomial-time algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing its practical security implications.
Last updated:  2024-04-22
flookup: Fractional decomposition-based lookups in quasi-linear time independent of table size
Ariel Gabizon and Dmitry Khovratovich
We present a protocol for checking that the values of a committed polynomial $\phi(X)$ over a multiplicative subgroup $H\subset \mathbb{F}$ of size $m$ are contained in a table $T\in \mathbb{F}^N$. After an $O(N \log^2 N)$ preprocessing step, the prover algorithm runs in *quasilinear* time $O(m\log ^2 m)$. We improve upon the recent breakthrough results Caulk [ZBK+22] and Caulk+ [PK22], which were the first to achieve complexity sublinear in the full table size $N$, with prover time being $O(m^2+m\log N)$ and $O(m^2)$, respectively. We pose further improving this complexity to $O(m\log m)$ as the next important milestone for efficient zk-SNARK lookups.
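The snippet below plugs illustrative sizes into the quoted prover costs (constants dropped; the values of m and N are hypothetical, not the paper's benchmarks) to show where the quasilinear bound pulls ahead.

```python
# Rough growth comparison of the prover costs quoted in the abstract.
from math import log2

N = 2**24                          # full table size (hypothetical)
for m in (2**10, 2**14, 2**18):    # subgroup / lookup size (hypothetical)
    caulk      = m**2 + m * log2(N)    # Caulk prover,   O(m^2 + m log N)
    caulk_plus = m**2                  # Caulk+ prover,  O(m^2)
    flookup    = m * log2(m)**2        # flookup prover, O(m log^2 m)
    print(f"m=2^{int(log2(m))}: Caulk~{caulk:.1e}, "
          f"Caulk+~{caulk_plus:.1e}, flookup~{flookup:.1e}")
```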
Last updated:  2024-04-22
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
Benedikt Bünz and Jessica Chen
We construct two new accumulation schemes. The first one is for checking that $\ell$ read and write operations were performed correctly from a memory of size $T$. The prover time is entirely independent of $T$ and only requires committing to $6\ell$ field elements, which is an over $100\times$ improvement over prior work. The second one is for deterministic computations. It does not require committing to the intermediate wires of the computation but only to the input and output. This is achieved by building an accumulation scheme for a modified version of the famous GKR protocol. We show that these schemes are highly compatible and that the accumulation for GKR can further reduce the cost of the memory-checking scheme. Using the BCLMS compiler (Crypto 2021), these protocols yield an efficient, incrementally verifiable computation (IVC) scheme that is particularly useful for machine computations with large memories and deterministic steps.
Last updated:  2024-04-22
WESP: An encryption method that, as the key size increases, requires an exponentially growing time to break
Sam Widlund
WESP is a new encryption algorithm based on equation systems, in which the equations are generated using the values of tables that act as the encryption key, and in which the equations have features making them suitable for cryptographic use. The algorithm is defined and its properties are discussed. Besides describing the algorithm, we also present reasons why the algorithm works the way it works. The key size in WESP can be altered and has no upper limit, and typically the key size is bigger than currently commonly used keys. A calculation is presented of how many bytes can be securely encrypted before the algorithm might start to repeat its sequence of encrypting bytes, and it is shown that this period can be adjusted to be arbitrarily large. It is also shown that within the period the resulting stream of encrypting bytes is statistically uniformly distributed. It is also shown that if the encryption tables are not known, the equations in the system of equations cannot be known, and it is demonstrated that the system of equations cannot be solved if the equations are not known, and thus the encryption cannot be broken in closed form. Then, we calculate, for all symbols used in the algorithm, the minimum number of trials needed in order to be able to verify the trials. Since the algorithm is constantly updating key values, verification becomes impossible if equations are not evaluated in order. The calculation shows that the minimum number of trials required is such that the number of trials, i.e., the time required to break the encryption, increases exponentially as the key size grows. Since there is no upper limit on the key size, there is likewise no upper limit on the time required to break the encryption.
Last updated:  2024-04-22
Secure Integrated Sensing and Communication Under Correlated Rayleigh Fading
Martin Mittelbach, Rafael F. Schaefer, Matthieu Bloch, Aylin Yener, and Onur Gunlu
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
Last updated:  2024-04-22
On the {\sf P/poly} Validity of the Agr17 FE Scheme
Yupu Hu, Siyue Dong, Baocang Wang, Jun Liu, Yanbin Pan, and Xingting Dong
Functional encryption (FE) is a cutting-edge research topic in cryptography. The Agr17 FE scheme is a major scheme in the FE area. This scheme had the novelty of “being applied for the group of general functions (that is, {\sf P/poly} functions) without IO”. It took the BGG+14 ABE scheme as a bottom structure, which was upgraded into a “partially hiding attribute” scheme and combined with a fully homomorphic encryption (FHE) scheme. However, the Agr17 FE scheme had a strange operation. For noise cancellation at the FHE decryption stage, it used bulky “searching noise” rather than elegant “filtering”. It searched the whole modulus interval, so that the FHE modulus should be polynomially large. In this paper we discuss the {\sf P/poly} validity of the Agr17 FE scheme. First, we obtain the result that the Agr17 FE scheme is {\sf P/poly} invalid. In more detail, when the Agr17 FE scheme is applied to the group of randomly chosen {\sf P/poly} Boolean functions, the FHE modulus at the “searching” stage cannot be polynomially large. Our analysis is based on three restrictions of the BGG+14 ABE scheme: (1) The modulus of the BGG+14 ABE should be adapted to be super-polynomially large if it is applied to the group of randomly chosen {\sf P/poly} functions. (2) The modulus of the BGG+14 ABE cannot be switched. (3) If the BGG+14 ABE is upgraded into a “partially hiding attribute” scheme, the permitted operations on the hidden part of the attribute can only be affine operations. Then, to check whether {\sf P/poly} validity can be obtained by modifying the scheme, we consider two modified versions. The first modified version controls the FHE noise by repeatedly applying bootstrapping and replaces a modular inner product with an arithmetic inner product. The second modified version replaces the search over the modulus interval with a search over a public noise interval, hoping that such a noise interval is polynomially large and tolerating a modulus which may be super-polynomially large. The first modified version may be {\sf P/poly} valid, but it is weaker. There is no evidence to support the {\sf P/poly} validity of the second modified version. Finally, we present an additional conclusion: there is no evidence to support the {\sf P/poly} validity of the GVW15 PE scheme.
Last updated:  2024-04-22
Practical Delegatable Attribute-Based Anonymous Credentials with Chainable Revocation
Min Xie, Peichen Ju, Yanqi Zhao, Zoe L. Jiang, Junbin Fang, Yong Yu, and Xuan Wang
Delegatable Anonymous Credentials (DAC) are an enhanced Anonymous Credentials (AC) system that allows credential owners to use credentials anonymously, as well as anonymously delegate them to other users. In this work, we introduce a new concept called Delegatable Attribute-based Anonymous Credentials with Chainable Revocation (DAAC-CR), which extends the functionality of DAC by allowing 1) fine-grained attribute delegation, 2) issuers to restrict the delegation capabilities of the delegated users at a fine-grained level, including the depth of delegation and the sets of delegable attributes, and 3) chainable revocation, meaning if a credential within the delegation chain is revoked, all subsequent credentials derived from it are also invalid. We provide a practical DAAC-CR instance based on a novel primitive that we identify as structure-preserving signatures on equivalence classes on vector commitments (SPSEQ-VC). This primitive may be of independent interest, and we detail an efficient construction. Compared to traditional DAC systems that rely on non-interactive zero-knowledge (NIZK) proofs, the credential size in our DAAC-CR instance is constant, independent of the length of delegation chain and the number of attributes. We formally prove the security of our scheme in the generic group model and demonstrate its practicality through performance benchmarks.
Last updated:  2024-04-22
Zero-Knowledge Proof Vulnerability Analysis and Security Auditing
Xueyan Tang, Lingzhi Shi, Xun Wang, Kyle Charbonnet, Shixiang Tang, and Shixiao Sun
Zero-Knowledge Proof (ZKP) technology marks a revolutionary advancement in the field of cryptography, enabling the verification of certain information ownership without revealing any specific details. This technology, with its paradoxical yet powerful characteristics, provides a solid foundation for a wide range of applications, especially in enhancing the privacy and security of blockchain technology and other cryptographic systems. As ZKP technology increasingly becomes a part of the blockchain infrastructure, its importance for security and completeness becomes more pronounced. However, the complexity of ZKP implementation and the rapid iteration of the technology introduce various vulnerabilities, challenging the privacy and security it aims to offer. This study focuses on the completeness, soundness, and zero-knowledge properties of ZKP to meticulously classify existing vulnerabilities, and deeply explores multiple categories of vulnerabilities, including completeness issues, soundness problems, information leakage, and non-standardized cryptographic implementations. Furthermore, we propose a set of defense strategies that include a rigorous security audit process and a robust distributed network security ecosystem. This audit strategy employs a divide-and-conquer approach, segmenting the project into different levels, from the application layer down to the platform infrastructure layer, using threat modelling, line-by-line audit, and internal cross-review, among other means, aimed at comprehensively identifying vulnerabilities in ZKP circuits, revealing design flaws in ZKP applications, and accurately identifying inaccuracies in the integration process of ZKP primitives.
Last updated:  2024-04-22
Quantum copy-protection of compute-and-compare programs in the quantum random oracle model
Andrea Coladangelo, Christian Majenz, and Alexander Poremba
Copy-protection allows a software distributor to encode a program in such a way that it can be evaluated on any input, yet it cannot be "pirated" - a notion that is impossible to achieve in a classical setting. Aaronson (CCC 2009) initiated the formal study of quantum copy-protection schemes, and speculated that quantum cryptography could offer a solution to the problem thanks to the quantum no-cloning theorem. In this work, we introduce a quantum copy-protection scheme for a large class of evasive functions known as "compute-and-compare programs" - a more expressive generalization of point functions. A compute-and-compare program $\mathsf{CC}[f,y]$ is specified by a function $f$ and a string $y$ within its range: on input $x$, $\mathsf{CC}[f,y]$ outputs $1$, if $f(x) = y$, and $0$ otherwise. We prove that our scheme achieves non-trivial security against fully malicious adversaries in the quantum random oracle model (QROM), which makes it the first copy-protection scheme to enjoy any level of provable security in a standard cryptographic model. As a complementary result, we show that the same scheme fulfils a weaker notion of software protection, called "secure software leasing", introduced very recently by Ananth and La Placa (eprint 2020), with a standard security bound in the QROM, i.e. guaranteeing negligible adversarial advantage. Finally, as a third contribution, we elucidate the relationship between unclonable encryption and copy-protection for multi-bit output point functions.
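The compute-and-compare functionality itself is easy to state in code; the f and y below are hypothetical toy choices, and the copy-protection of course lies in the quantum encoding, not in this classical description.

```python
# CC[f, y](x) = 1 if f(x) == y else 0, as defined in the abstract.
def make_cc(f, y):
    return lambda x: 1 if f(x) == y else 0

f = lambda x: (7 * x + 3) % 97    # hypothetical function
cc = make_cc(f, y=10)
print(cc(1), cc(2))               # prints "1 0": f(1) == 10, f(2) == 17
```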
Last updated:  2024-04-22
On the Two-sided Permutation Inversion Problem
Gorjan Alagic, Chen Bai, Alexander Poremba, and Kaiyan Shi
In the permutation inversion problem, the task is to find the preimage of some challenge value, given oracle access to the permutation. This is a fundamental problem in query complexity, and appears in many contexts, particularly cryptography. In this work, we examine the setting in which the oracle allows for quantum queries to both the forward and the inverse direction of the permutation---except that the challenge value cannot be submitted to the latter. Within that setting, we consider two options for the inversion algorithm: whether it can get quantum advice about the permutation, and whether it must produce the entire preimage (search) or only the first bit (decision). We prove several theorems connecting the hardness of the resulting variations of the inversion problem, and establish a number of lower bounds. Our results indicate that, perhaps surprisingly, the inversion problem does not become significantly easier when the adversary is granted oracle access to the inverse, provided it cannot query the challenge itself.
Last updated:  2024-04-22
Generic MitM Attack Frameworks on Sponge Constructions
Xiaoyang Dong, Boxin Zhao, Lingyue Qin, Qingliang Hou, Shun Zhang, and Xiaoyun Wang
This paper proposes general meet-in-the-middle (MitM) attack frameworks for preimage and collision attacks on hash functions based on (generalized) sponge construction. As the first contribution, our MitM preimage attack framework covers a wide range of sponge-based hash functions, especially those with lower claimed security level for preimage compared to their output size. Those hash functions have been very widely standardized (e.g., Ascon-Hash, PHOTON, etc.), but are rarely studied against preimage attacks. Even the recent MitM attack framework on sponge construction by Qin et al. (EUROCRYPT 2023) cannot attack those hash functions. As the second contribution, our MitM collision attack framework shows a different tool for the collision cryptanalysis on sponge construction, while previous collision attacks on sponge construction are mainly based on differential attacks. Most of the results in this paper are the first third-party cryptanalysis results. If cryptanalysis previously existed, our new results significantly improve the previous results, such as improving the previous 2-round collision attack on Ascon-Hash to the current 4 rounds, improving the previous 3.5-round quantum preimage attack on SPHINCS$^+$-Haraka to our 4-round classical preimage attack, etc.
Last updated:  2024-04-22
LLRing: Logarithmic Linkable Ring Signatures with Transparent Setup
Xiangyu Hui and Sid Chi-Kin Chau
Linkable ring signatures are an important cryptographic primitive for anonymized applications, such as e-voting, e-cash and confidential transactions. To eliminate backdoor and overhead in a trusted setup, transparent setup in the discrete logarithm or pairing settings has received considerable attention in practice. Recent advances have improved the proof sizes and verification efficiency of linkable ring signatures with a transparent setup to achieve logarithmic bounds. Omniring (CCS '19) and RingCT 3.0 (FC '20) proposed linkable ring signatures in the discrete logarithm setting with logarithmic proof sizes with respect to the ring size, whereas DualDory (ESORICS '22) achieves logarithmic verifiability in the pairing setting. We make three novel contributions in this paper to improve the efficiency and soundness of logarithmic linkable ring signatures: (1) We identify an attack on DualDory that breaks its linkability. (2) To eliminate such attacks, we present a new linkable ring signature scheme in the pairing setting with logarithmic verifiability. (3) We also improve the verification efficiency of linkable ring signatures in the discrete logarithm setting, by a technique of reducing the number of group exponentiations for verification in Omniring by 50%. Furthermore, our technique is applicable to general inner-product relation proofs, which might be of independent interest. Finally, we empirically evaluate our schemes and compare them with the extant linkable ring signatures in concrete implementation.
Last updated:  2024-04-21
FHERMA: Building the Open-Source FHE Components Library for Practical Use
Gurgen Arakelov, Nikita Kaskov, Daria Pianykh, and Yuriy Polyakov
Fully Homomorphic Encryption (FHE) is a powerful Privacy-Enhancing Technology (PET) that enables computations on encrypted data without having access to the secret key. While FHE holds immense potential for enhancing data privacy and security, creating its practical applications is associated with many difficulties. A significant barrier is the absence of easy-to-use, standardized components that developers can utilize as foundational building blocks. Addressing this gap requires constructing a comprehensive library of FHE components, a complex endeavor due to multiple inherent problems. We propose a competition-based approach for building such a library. More concretely, we present FHERMA, a new challenge platform that introduces black-box and white-box challenges, and fully automated evaluation of submitted FHE solutions. The initial challenges on the FHERMA platform are motivated by practical problems in machine learning and blockchain. The winning solutions get integrated into an open-source library of FHE components, which is available to all members of the PETs community under the Apache 2.0 license.
Last updated:  2024-04-21
Ponyta: Foundations of Side-Contract-Resilient Fair Exchange
Hao Chung, Elisaweta Masserova, Elaine Shi, and Sri AravindaKrishnan Thyagarajan
This paper is subsumed by Rapidash (https://eprint.iacr.org/2022/1063). Please use Rapidash for the citation. Fair exchange is a fundamental primitive for blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the users as strategic players, and assume honest miners. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be completely broken in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat — a phenomenon also referred to as Miner Extractable Value (MEV). We provide the first formal treatment of side-contract-resilient fair exchange. We propose a new fair exchange protocol called Ponyta, and we prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Ponyta satisfies a coalition-resistant Nash equilibrium. Further, we show how to use Ponyta to realize a cross-chain coin swap application, and prove that our coin swap protocol also satisfies coalition-resistant Nash equilibrium. Our work helps to lay the theoretical groundwork for studying side-contract-resilient fair exchange. Finally, we present practical instantiations of Ponyta in Bitcoin and Ethereum with minimal overhead in terms of costs for the users involved in the fair exchange, thus showcasing instantiability of Ponyta with a wide range of cryptocurrencies.
Last updated:  2024-04-21
Maximizing Miner Revenue in Transaction Fee Mechanism Design
Ke Wu, Elaine Shi, and Hao Chung
Transaction fee mechanism design is a new decentralized mechanism design problem where users bid for space on the blockchain. Several recent works showed that the transaction fee mechanism design fundamentally departs from classical mechanism design. They then systematically explored the mathematical landscape of this new decentralized mechanism design problem in two settings: in the plain setting where no cryptography is employed, and in a cryptography-assisted setting where the rules of the mechanism are enforced by a multi-party computation protocol. Unfortunately, in both settings, prior works showed that if we want the mechanism to incentivize honest behavior for both users as well as miners (possibly colluding with users), then the miner revenue has to be zero. Although adopting a relaxed, approximate notion of incentive compatibility gets around this zero miner-revenue limitation, the scaling of the miner revenue is nonetheless poor. In this paper, we show that if we make a mildly stronger reasonable-world assumption than prior works, we can circumvent the known limitations on miner revenue, and design auctions that generate optimal miner revenue. We also systematically explore the mathematical landscape of transaction fee mechanism design under the new reasonable-world assumption and demonstrate how such assumptions can alter the feasibility and infeasibility landscape.
Last updated:  2024-04-21
A Security Analysis of Restricted Syndrome Decoding Problems
Ward Beullens, Pierre Briaud, and Morten Øygarden
Restricted syndrome decoding problems (R-SDP and R-SDP($G$)) provide an interesting basis for post-quantum cryptography. Indeed, they feature in CROSS, a submission in the ongoing process for standardizing post-quantum signatures. This work improves our understanding of the security of both problems. Firstly, we propose and implement a novel collision attack on R-SDP($G$) that provides the best attack under realistic restrictions on memory. Secondly, we derive precise complexity estimates for algebraic attacks on R-SDP that are shown to be accurate by our experiments. We note that neither of these improvements threatens the updated parameters of CROSS.
Last updated:  2024-04-21
Large Language Models for Blockchain Security: A Systematic Literature Review
Zheyuan He, Zihao Li, and Sen Yang
Large Language Models (LLMs) have emerged as powerful tools in various domains involving blockchain security (BS). Several recent studies are exploring LLMs applied to BS. However, there remains a gap in our understanding regarding the full scope of applications, impacts, and potential constraints of LLMs on blockchain security. To fill this gap, we conduct a literature review on LLM4BS. As the first review of LLMs' applications to blockchain security, our study aims to comprehensively analyze existing research and elucidate how LLMs contribute to enhancing the security of blockchain systems. Through a thorough examination of scholarly works, we delve into the integration of LLMs into various aspects of blockchain security. We explore the mechanisms through which LLMs can bolster blockchain security, including their applications in smart contract auditing, identity verification, anomaly detection, vulnerability repair, and so on. Furthermore, we critically assess the challenges and limitations associated with leveraging LLMs for blockchain security, considering factors such as scalability, privacy concerns, and adversarial attacks. Our review sheds light on the opportunities and potential risks inherent in this convergence, providing valuable insights for researchers, practitioners, and policymakers alike.
Last updated:  2024-04-21
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Enrico Bottazzi
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which can mess up the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring-Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 1 or a 0). Greco uses zero-knowledge proofs to let a user prove that their RLWE ciphertext is well-formed, or, in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subjected to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted, or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and the benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco
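To see why the well-formedness proof matters, here is a toy additive tally in which plain integers stand in for RLWE ciphertexts (a deliberate oversimplification of the homomorphic sum described above).

```python
# Tallying sketch: summing "ciphertexts" that should each encrypt 0 or 1.
honest_votes = [1, 0, 1, 1, 0]
print(sum(honest_votes))                 # 3: a meaningful tally
malicious = honest_votes + [145127835]   # the abstract's invalid "vote"
print(sum(malicious))                    # 145127838: the election is ruined
# Greco's proof of well-formed encryption (composed with a 0/1 plaintext
# check) is what rules out the second case before aggregation.
```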
Last updated:  2024-04-21
Updatable, Aggregatable, Succinct Mercurial Vector Commitment from Lattice
Hongxiao Wang, Siu-Ming Yiu, Yanmin Zhao, and Zoe L. Jiang
Vector commitments (VC) and their variants attract a lot of attention due to their wide range of usage in applications such as blockchains and accumulators. Mercurial vector commitment (MVC), as one of the important variants of VC, is the core technique for building more complicated cryptographic applications, such as the zero-knowledge set (ZKS) and zero-knowledge elementary database (ZK-EDB). However, to the best of our knowledge, the only post-quantum MVC construction is trivially implied by a generic framework proposed by Catalano and Fiore (PKC '13) with lattice-based components, which causes $\textit{large}$ auxiliary information and $\textit{cannot satisfy}$ any additional advanced properties, that is, updatability and aggregatability. A major difficulty in constructing a $\textit{non-black-box}$ lattice-based MVC is that it is not trivial to construct a lattice-based VC that satisfies a critical property called ``mercurial hiding''. In this paper, we identify some specific features of a new falsifiable family of basis-augmented SIS assumptions ($\mathsf{BASIS}$) proposed by Wee and Wu (EUROCRYPT '23) that can be utilized to construct a mercurial vector commitment from lattices $\textit{satisfying}$ updatability and aggregatability with $\textit{smaller}$ auxiliary information. We $\textit{first}$ extend stateless update and differential update to the mercurial vector commitment and define a $\textit{new}$ property, named updatable mercurial hiding. Then, we show how to modify our constructions to obtain the updatable mercurial vector commitment that satisfies these properties. To aggregate the openings, our constructions perfectly inherit the ability to aggregate in the $\mathsf{BASIS}$ assumption, which can break the limitation of $\textit{weak}$ binding in the current aggregatable MVCs. In the end, we show that our constructions can be used to build the various kinds of lattice-based ZKS and ZK-EDB directly within the existing framework.
Last updated:  2024-04-20
Revisiting Differential-Linear Attacks via a Boomerang Perspective with Application to AES, Ascon, CLEFIA, SKINNY, PRESENT, KNOT, TWINE, WARP, LBlock, Simeck, and SERPENT
Hosein Hadipour, Patrick Derbez, and Maria Eichlseder
In 1994, Langford and Hellman introduced differential-linear (DL) cryptanalysis, with the idea of decomposing the block cipher $E$ into two parts, $E_U$ and $E_L$, such that $E_U$ exhibits a high-probability differential trail, while $E_L$ has a high-correlation linear trail. Combining these trails forms a distinguisher for $E$, assuming independence between $E_U$ and $E_L$. The dependency between the two parts of DL distinguishers remained unaddressed until EUROCRYPT 2019, where Bar-On et al. introduced the DLCT framework, resolving the issue up to one S-box layer. However, extending the DLCT framework to formalize the dependency between the two parts for multiple rounds remained an open problem. In this paper, we first tackle this problem from the perspective of boomerang analysis. By examining the relationships between the DLCT, DDT, and LAT, we introduce a set of new tables facilitating the formulation of dependencies between the two parts of the DL distinguisher across multiple rounds. Then, as the main contribution, we introduce a highly versatile and easy-to-use automatic tool for exploring DL distinguishers, inspired by automatic tools for boomerang distinguishers. This tool considers the dependency between differential and linear trails across multiple rounds. We apply our tool to various symmetric-key primitives, and in all applications, we either present the first DL distinguishers or enhance the best-known ones. We achieve successful results against Ascon, AES, SERPENT, PRESENT, SKINNY, TWINE, CLEFIA, WARP, LBlock, Simeck, and KNOT. Furthermore, we demonstrate that, in some cases, DL distinguishers outperform boomerang distinguishers significantly.
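For context on the independence assumption mentioned above (standard DL folklore, not a result of this paper): if $E_U$ has a differential with probability $p$ and $E_L$ a linear approximation with correlation $q$, the independence heuristic estimates the correlation of the combined DL distinguisher as roughly $p\,q^2$, so about $p^{-2}q^{-4}$ samples are needed; the DLCT-style tables are precisely what refines this estimate when independence fails.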
Last updated:  2024-04-20
New Security Proofs and Techniques for Hash-and-Sign with Retry Signature Schemes
Benoît Cogliati, Pierre-Alain Fouque, Louis Goubin, and Brice Minaud
Hash-and-Sign with Retry is a popular technique to design efficient signature schemes from code-based or multivariate assumptions. Contrary to Hash-and-Sign signatures based on preimage-sampleable functions as defined by Gentry, Peikert and Vaikuntanathan (STOC 2008), trapdoor functions in code-based and multivariate schemes are not surjective. Therefore, the standard approach uses random trials. Kosuge and Xagawa (PKC 2024) coined it the Hash-and-Sign with Retry paradigm. As many attacks have appeared on code-based and multivariate schemes, we think it is important for the ongoing NIST competition to look at the security proofs of these schemes. The original proof of Sakumoto, Shirai, and Hiwatari (PQCrypto 2011) was flawed, then corrected by Chatterjee, Das and Pandit (INDOCRYPT 2022). The fix is still not sufficient, as it only works for very large finite fields. A new proof in the Quantum ROM model was proposed by Kosuge and Xagawa (PKC 2024), but it is rather loose, even when restricted to the classical setting. In this paper, we introduce several tools that yield tighter security bounds for Hash-and-Sign with Retry signatures in the classical setting. These include the Hellinger distance, stochastic dominance arguments, and a new combinatorial tool to transform a proof in the non-adaptive setting into a proof in the adaptive setting. Ultimately, we obtain a sharp bound for the security of Hash-and-Sign with Retry signatures, applicable to various code-based and multivariate schemes. Focusing on NIST candidates, we apply these results to the MAYO, PROV, and modified UOV signature schemes. In most cases, our bounds are tight enough to apply with the real parameters of those schemes; in some cases, smaller parameters would suffice.
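The overall shape of the paradigm being analyzed can be sketched as follows; this is a minimal sketch in which H and invert are hypothetical stand-ins, with real schemes instantiating them with code-based or multivariate trapdoors whose inverse exists only for some digests.

```python
import secrets

def sign(message, trapdoor, H, invert):
    """Hash-and-Sign with Retry: resample the salt until the digest
    lands in the image of the (non-surjective) trapdoor function."""
    while True:
        salt = secrets.token_bytes(16)
        digest = H(message, salt)
        sig = invert(trapdoor, digest)   # returns None on inversion failure
        if sig is not None:
            return salt, sig
```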
Last updated:  2024-04-20
The Practical Advantage of RSA over ECC and Pairings
Zhengjun Cao and Lihua Liu
The coexistence of RSA and the elliptic curve cryptosystem (ECC) has continued for over forty years. It is well known that ECC has the advantage of shorter keys than RSA, which often leads newcomers to assume that ECC runs faster. In this report, we present Mathematica code for RSA-2048 and ECC-256, which visually shows that RSA-2048 runs three times faster than ECC-256. We also estimate that RSA-2048 runs 48,000 times faster than the Weil pairing with embedding degree 2 and a fixed point.
Last updated:  2024-04-19
Low-latency Secure Integrated Sensing and Communication with Transmitter Actions
Truman Welling, Onur Gunlu, and Aylin Yener
This paper considers an information-theoretic model of secure integrated sensing and communication, represented as a wiretap channel with action-dependent states. This model allows one to secure part of the transmitted message against a sensed target that eavesdrops on the communication, while allowing transmitter actions to change the channel statistics. An exact secrecy-distortion region is given for a physically-degraded channel. Moreover, a finite-length achievability region is established for the model using an output-statistics-of-random-binning method, giving an achievable bound for low-latency applications.
Last updated:  2024-04-19
Classical Commitments to Quantum States
Sam Gunn, Yael Tauman Kalai, Anand Natarajan, and Agi Villanyi
We define the notion of a classical commitment scheme to quantum states, which allows a quantum prover to compute a classical commitment to a quantum state, and later open each qubit of the state in either the standard or the Hadamard basis. Our notion is a strengthening of the measurement protocol from Mahadev (STOC 2018). We construct such a commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and more generally from any noisy trapdoor claw-free function family that has the distributional strong adaptive hardcore bit property (a property that we define in this work). Our scheme is succinct in the sense that the running time of the verifier in the commitment phase depends only on the security parameter (independent of the size of the committed state), and its running time in the opening phase grows only with the number of qubits that are being opened (and the security parameter). As a corollary we obtain a classical succinct argument system for QMA under the post-quantum LWE assumption. Previously, this was only known assuming post-quantum secure indistinguishability obfuscation. As an additional corollary we obtain a generic way of converting any X/Z quantum PCP into a succinct argument system under the quantum hardness of LWE.
Last updated:  2024-04-19
Security Analysis of XHASH8/12
Léo Perrin
We have investigated both the padding scheme and the applicability of algebraic attacks to both XHash8 and XHash12. The only vulnerability of the padding scheme we can find is plausibly applicable only in the multi-rate setting---for which the authors make no claim---and the scheme is safe otherwise. For algebraic attacks relying on the computation and exploitation of a Gröbner basis, our survey of the literature suggests basing a security argument on the complexity of the variable-elimination step rather than on that of the computation of the Gröbner basis itself. Indeed, it turns out that the latter complexity is hard to estimate---and is sometimes literally non-existent. Focusing on the elimination step, we propose a generalization of the "FreeLunch" approach which, under a reasonable conjecture about the behaviour of the degree of polynomial ideals of dimension 0, is sufficient for us to argue that both XHash8 and XHash12 are safe against such attacks. We implemented a simplified version of the generation (and resolution) of the corresponding set of equations in SAGE, which allowed us to validate our conjecture experimentally, and in fact to show that the lower bound it provides on the ideal degree is not tight---meaning we are a priori underestimating the security of these permutations against the algebraic attacks we consider. At this stage, if used as specified, these hash functions seem safe from Gröbner-basis-based algebraic attacks.
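To illustrate the elimination step in miniature (a toy SymPy sketch of ours, not the SAGE experiments from the paper): for a zero-dimensional ideal, a lexicographic Gröbner basis carries out variable elimination, ending in a univariate polynomial whose degree reflects the degree of the ideal, the quantity the security argument above hinges on.

```python
# Toy elimination step: a lex Groebner basis of a zero-dimensional
# ideal ends in a univariate polynomial in the last variable, whose
# degree reflects the ideal degree. System chosen arbitrarily.
from sympy import symbols, groebner, degree

x, y, z = symbols('x y z')
polys = [x**2 + y*z - 2, y**2 + x*z - 3, z**2 + x*y - 5]
G = groebner(polys, x, y, z, order='lex')
last = G.exprs[-1]              # univariate in z after eliminating x, y
print(last, degree(last, z))
```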
Last updated:  2024-04-19
Analyzing the complexity of reference post-quantum software: the case of lattice-based KEMs
Daniel J. Bernstein
Software for various post-quantum KEMs has been submitted by the KEM design teams to the SUPERCOP testing framework. The ref/*.c and ref/*.h files together occupy, e.g., 848 lines for ntruhps4096821, 928 lines for ntruhrss701, 1316 lines for sntrup1277, and 2633 lines for kyber1024. It is easy to see that these numbers overestimate the inherent complexity of software for these KEMs. It is more difficult to systematically measure this complexity. This paper takes these KEMs as case studies and applies consistent rules to streamline the ref software for the KEMs, while still passing SUPERCOP's tests and preserving the decomposition of specified KEM operations into functions. The resulting software occupies 381 lines for ntruhps4096821, 385 lines for ntruhrss701, 472 lines for kyber1024, and 478 lines for sntrup1277. This paper also identifies the external subroutines used in each case, identifies the extent to which code is shared across different parameter sets, quantifies various software complications specific to each KEM, and traces how differences in KEM design goals produced different software complications. As a spinoff, this paper presents a kyber512 key-recovery demo exploiting variations in timings of the Kyber reference code.
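For readers who want to reproduce raw counts of the kind quoted above, here is a one-screen Python sketch in the spirit of the measurement (the path layout and the naive "count every line" convention are our assumptions; the paper applies more careful, consistent rules):

```python
# Rough line counts of the ref implementation of a SUPERCOP KEM.
# Naive counting is an assumption; the paper's rules are stricter.
import glob, sys

def ref_lines(kem_dir):
    files = glob.glob(f"{kem_dir}/ref/*.c") + glob.glob(f"{kem_dir}/ref/*.h")
    return sum(sum(1 for _ in open(f)) for f in files)

if __name__ == "__main__":
    for kem in sys.argv[1:]:   # e.g. crypto_kem/kyber1024
        print(kem, ref_lines(kem))
```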
Last updated:  2024-04-19
PriDe CT: Towards Public Consensus, Private Transactions, and Forward Secrecy in Decentralized Payments
Yue Guo, Harish Karthikeyan, Antigoni Polychroniadou, and Chaddy Huussin
Anonymous Zether, proposed by Bunz et al. (FC, 2020) and subsequently improved by Diamond (IEEE S&P, 2021), is an account-based confidential payment mechanism that works by using a smart contract to achieve privacy (i.e., the identities of receivers and the payloads of transactions are hidden). In this work, we simplify the existing protocol while also achieving batching of transactions for multiple receivers, and while ensuring consensus and forward secrecy. To the best of our knowledge, this work is the first to formally study the notion of forward secrecy in the blockchain setting, borrowing a very popular and useful idea from the world of secure messaging. Specifically, we introduce: - FUL-Zether, a forward-secure version of Zether (Bunz et al., FC, 2020). - PRIvate DEcentralized Confidential Transactions (PriDe CT), a much-simplified version of Anonymous Zether that achieves competitive performance and enables batching of transactions for multiple receivers. - PRIvate DEcentralized Forward-secure Until Last update Confidential Transactions (PriDeFUL CT), a forward-secure version of PriDe CT. We also present an open-source, Ethereum-based implementation of our system. PriDe CT uses linear homomorphic encryption, like Anonymous Zether, but with simpler zero-knowledge proofs. PriDeFUL CT uses an updatable public-key encryption scheme to achieve forward secrecy, via a new DDH-based construction in the standard model. In terms of transaction sizes, Quisquis (Asiacrypt, 2019), which is the only cryptocurrency that supports batchability (albeit in the UTXO model), has 15 times more group elements than PriDe CT. Meanwhile, for a ring of $N$ receivers, Anonymous Zether requires $6\log N$ more terms, even without accounting for PriDe CT's ability to batch. Further, our implementation indicates that, for $N=32$, even with 7 intended receivers, PriDe CT outperforms Anonymous Zether in proving time and gas consumption.
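As background for the "linear homomorphic encryption" mentioned above, here is a toy exponential-ElGamal sketch in Python (tiny, insecure parameters of our choosing, purely illustrative of the additive homomorphism behind confidential balances; the actual scheme and proofs are more involved):

```python
import random

# Toy exponential ElGamal: multiplying two ciphertexts yields an
# encryption of the sum of the amounts. Parameters are insecure toys.
p, g = 1000003, 2                      # small prime, arbitrary base
sk = random.randrange(2, p - 1)
h = pow(g, sk, p)                      # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def add(c, d):                          # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    return (c[0] * d[0] % p, c[1] * d[1] % p)

def dec(c, bound=10**4):                # recover small m by brute force
    gm = c[1] * pow(c[0], -sk, p) % p   # negative exponent: Python 3.8+
    return next(m for m in range(bound) if pow(g, m, p) == gm)

assert dec(add(enc(30), enc(12))) == 42
```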
Last updated:  2024-04-19
SoK: Polynomial Multiplications for Lattice-Based Cryptosystems
Vincent Hwang
We survey various mathematical tools used in software implementations of polynomial multiplication in $\mathbb{Z}_q[x]/\langle x^n - \alpha x - \beta \rangle$. In particular, we survey implementation works targeting polynomial multiplications in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2. There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as Cooley–Tukey FFT, Bruun’s FFT, Rader’s FFT, Karatsuba, and Toom–Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including injections, Schönhage’s FFT, Nussbaumer’s FFT, and localization; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, Good–Thomas FFT, truncation, incomplete transformation, and Toeplitz matrix-vector product. For vectorization, we survey the relations between homomorphisms and the support of vector arithmetic. We then go through several case studies: We compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.
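As a taste of the modular-arithmetic layer, here is a short Python sketch of textbook Montgomery multiplication for the Kyber prime $q = 3329$ with $R = 2^{16}$ (a transcription of the standard algorithm, not code from any surveyed implementation):

```python
# Textbook Montgomery multiplication for q = 3329 (Kyber's prime),
# R = 2^16; a sketch of the technique, not any surveyed codebase.
Q, R = 3329, 1 << 16
QINV = (-pow(Q, -1, R)) % R            # -q^{-1} mod R

def montgomery_reduce(a):
    """Given 0 <= a < q*R, return a * R^{-1} mod q."""
    m = (a * QINV) % R                 # m = a * (-q^{-1}) mod R
    t = (a + m * Q) >> 16              # a + m*q is divisible by R
    return t - Q if t >= Q else t

def to_mont(x):                        # x -> x*R mod q
    return (x * R) % Q

def mont_mul(x, y):                    # inputs/outputs in Montgomery form
    return montgomery_reduce(x * y)

# Sanity check: a*b mod q computed through the Montgomery domain.
a, b = 1234, 2500
assert montgomery_reduce(mont_mul(to_mont(a), to_mont(b))) == (a * b) % Q
```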
Last updated:  2024-04-19
Formal Verification of Emulated Floating-Point Arithmetic in Falcon
Vincent Hwang
We show that there is a discrepancy between the emulated floating-point multiplications in the submission package of Falcon and the claimed behavior. In particular, we show that floating-point products with absolute values smaller than the smallest positive normal floating-point number are incorrectly zeroized. However, we show that the discrepancy doesn't affect the complex fast Fourier transform, by modeling floating-point addition, subtraction, and multiplication in CryptoLine. We later implement our own floating-point multiplications in Armv7-M assembly and Jasmin and prove their equivalence with our model, demonstrating the possibility of transferring the challenging verification task (verifying highly optimized assembly) to a presumably more readable code base (Jasmin).
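To see the class of discrepancy in question, the following Python sketch (our toy model, not Falcon's actual emulation code) contrasts IEEE-754 behavior with an emulated multiply that flushes subnormal results to zero:

```python
import sys

# Toy model of the discrepancy class: an emulated binary64 multiply
# that zeroizes any result below the smallest positive normal number,
# contrasted with IEEE-754, which keeps subnormal results.
MIN_NORMAL = sys.float_info.min        # 2**-1022 for binary64

def flush_to_zero_mul(x, y):
    z = x * y                          # hardware IEEE-754 product
    return 0.0 if 0 < abs(z) < MIN_NORMAL else z

a = 2.0 ** -520
print(a * a)                    # 2**-1040: a nonzero subnormal under IEEE
print(flush_to_zero_mul(a, a))  # 0.0: the kind of zeroization described above
```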
Last updated:  2024-04-19
Combined Threshold Implementation
Jakob Feldtkeller, Jan Richter-Brockmann, Pascal Sasdrich, and Tim Güneysu
Physical security is an important aspect of devices for which an adversary can manipulate the physical execution environment. Recently, more and more attention has been directed towards a security model that combines the capabilities of passive and active physical attacks, i.e., an adversary that performs fault injection and side-channel analysis at the same time. Implementing countermeasures against such a powerful adversary is not only costly but also requires the skillful combination of masking and redundancy to counteract all reciprocal effects. In this work, we propose a new methodology to generate combined-secure circuits. We show how to transform TI-like constructions to resist any adversary with the capability to tamper with internal gates and probe internal wires. For the resulting protection scheme, we can prove combined security in a well-established theoretical security model. Since the transformation preserves the advantages of TI-like structures, the resulting circuits prove to be more efficient than the state of the art in the number of required bits of randomness (up to 100%), latency in clock cycles (up to 40%), and even area for pipelined designs (up to 40%), for an adversary restricted to manipulating a single gate and probing a single wire.
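For readers unfamiliar with the TI-like structures the transformation starts from, here is the classic first-order, three-share threshold implementation of an AND gate in Python (a standard textbook construction, shown only as background, not the paper's combined-secure circuits):

```python
import random

# Classic 3-share threshold implementation (TI) of z = x AND y:
# each component function is non-complete (z_i never touches share i
# of any input) and the shares XOR to x AND y (correctness).
def share(v):
    r1, r2 = random.randint(0, 1), random.randint(0, 1)
    return (r1, r2, v ^ r1 ^ r2)

def ti_and(xs, ys):
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of x1, y1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # independent of x2, y2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of x3, y3
    return (z1, z2, z3)

for x in (0, 1):
    for y in (0, 1):
        z = ti_and(share(x), share(y))
        assert z[0] ^ z[1] ^ z[2] == x & y   # correctness check
```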
Last updated:  2024-04-19
Lattice-based, more general anti-leakage model and its application in decentralization
Xiaokang Dai, Jingwei Chen, Wenyuan Wu, and Yong Feng
In the case of standard LWE samples $(\mathbf{A}, \mathbf{b} = \mathbf{s}\mathbf{A} + \mathbf{e})$, $\mathbf{A}$ is typically uniform over $\mathbb{Z}_q^{n \times m}$. Under the decisional LWE assumption, the conditional distribution of $\mathbf{s}$ given $(\mathbf{A}, \mathbf{b})$ is expected to be consistent with the distribution of $\mathbf{s}$. However, when an adversary chooses $\mathbf{A}$ adaptively, the disparity between the two may be large. In this work, our primary focus is the quantification of the average conditional min-entropy $\tilde{H}_\infty(\mathbf{s}|\mathbf{s}\mathbf{A} + \mathbf{e})$ of $\mathbf{s}$, where $\mathbf{A}$ is chosen by the adversary. Brakerski and Döttling answered the question in one case: they proved that when $\mathbf{s}$ is uniformly chosen from $\mathbb{Z}_q^n$, it holds that $\tilde{H}_\infty(\mathbf{s}|\mathbf{s}\mathbf{A} + \mathbf{e}) \varpropto \rho_\sigma(\Lambda_q(\mathbf{A}))$. We prove that for any $d \leq q$, when $\mathbf{s}$ is uniformly chosen from $\mathbb{Z}_d^n$ or sampled from a discrete Gaussian distribution, similar results hold. As an independent result, we also prove the regularity of the hash function mapped to the prime-order group and its Cartesian product. As an application of the above results, we improve the multi-key fully homomorphic encryption of Brakerski, Halevi, and Polychroniadou (TCC 2017) and answer the question raised at the end of their work in the affirmative: we obtain GSW-type ciphertexts rather than Dual-GSW, and the improved scheme has shorter keys and ciphertexts.
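For concreteness, a minimal numpy sketch of the sampling setup above (the dimensions and noise width are our placeholder toy values, not parameters from the paper):

```python
import numpy as np

# Toy LWE sample (A, b = sA + e mod q); illustrative parameters only.
# Here s is uniform over Z_d^n, one of the secret distributions the
# result covers; the adversarial, adaptively chosen A is not modeled.
rng = np.random.default_rng()
n, m, q, d, sigma = 16, 32, 3329, 4, 2.0

A = rng.integers(0, q, size=(n, m))                    # honest: uniform A
s = rng.integers(0, d, size=n)                         # secret from Z_d^n
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)  # rounded Gaussian
b = (s @ A + e) % q
```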
Last updated:  2024-04-19
The Direction of Updatable Encryption Does Matter
Ryo Nishimaki
We introduce a new definition for key updates, called backward-leak uni-directional key updates, in updatable encryption (UE). This notion is a variant of uni-directional key updates for UE. We show that existing secure UE schemes in the bi-directional key updates setting are not secure in the backward-leak uni-directional key updates setting. Thus, security in the backward-leak uni-directional key updates setting is strictly stronger than security in the bi-directional key updates setting. This result is in sharp contrast to the equivalence theorem by Jiang (Asiacrypt 2020), which says security in the bi-directional key updates setting is equivalent to security in the existing uni-directional key updates setting. We call the existing uni-directional key updates ``forward-leak uni-directional'' key updates to distinguish two types of uni-directional key updates in this paper. We also present a UE scheme that is secure in the backward-leak uni-directional key updates setting under the learning with errors assumption.
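The two directions of key updates are easiest to see on a toy multiplicative key schedule (our illustration of the definitions only, not the paper's LWE-based scheme):

```python
import random

# Toy illustration of update-token directionality: epoch keys are
# units mod a prime, and the token for the new epoch is
# delta = k_new * k_old^{-1}. Not the paper's LWE-based scheme.
p = 1000003
k_old = random.randrange(1, p)
k_new = random.randrange(1, p)
delta = k_new * pow(k_old, -1, p) % p

# Forward-leak direction: token + old key reveal the new key.
assert k_old * delta % p == k_new
# Backward-leak direction: token + new key reveal the old key.
assert k_new * pow(delta, -1, p) % p == k_old
# This toy token leaks in both directions (bi-directional); a
# backward-leak uni-directional scheme must rule out the first
# derivation while the second may remain possible.
```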
Last updated:  2024-04-19
The Insecurity of SHA2 under the Differential Fault Characteristic of Boolean Functions
Weiqiong Cao, Hua Chen, Hongsong Shi, Haoyuan Li, Jian Wang, and Jingyi Feng
SHA2 has been widely adopted across traditional public-key cryptosystems, post-quantum cryptography, personal identification, and network communication protocols. Hence, ensuring the robust security of SHA2 is of critical importance. There have been several differential fault attacks based on random word faults targeting SHA1 and SHACAL-2. However, extending such random-word-based fault attacks to SHA2 proves significantly more difficult due to the heightened complexity of the Boolean functions in SHA2. In this paper, assuming random word faults, we find some distinctive differential properties of the Boolean functions in SHA2. Leveraging these findings, we propose a new differential fault attack methodology that can be effectively utilized to recover the final message block and its corresponding initial vector in SHA2, forge HMAC-SHA2 messages, extract the key of SHACAL-2, and extend our analysis to similar algorithms such as SM3. We validate the effectiveness of these attacks through rigorous simulations and theoretical deductions, revealing that they indeed pose substantial threats to the security of SHA2. In our simulation-based experiments, our approach requires guessing $T$ bits within a register, with $T$ at most $5$, and with an approximate $95\%$ probability (for SHA512) only $1$ bit needs to be guessed. Moreover, upon implementing a consecutive series of 15 fault injections, the success probability of recovering one register (excluding the guessed bits) approaches $100\%$. Ultimately, approximately 928 faulty outputs based on random word faults are required to carry out the attack successfully.
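The flavor of differential property in question can be seen directly on SHA2's choice function Ch (a minimal illustration of ours, not the paper's full attack): a fault-induced flip in one input leaks bits of another input through the output difference.

```python
import random

# SHA2's choice function Ch(x, y, z) = (x AND y) XOR (NOT x AND z).
# Flipping bit i of y flips the output at bit i exactly when x_i = 1,
# so a random word fault on y leaks bits of x via the output
# difference. Minimal illustration, not the paper's full attack.
MASK = (1 << 32) - 1

def Ch(x, y, z):
    return (x & y) ^ ((x ^ MASK) & z)

x, y, z = (random.getrandbits(32) for _ in range(3))
fault = random.getrandbits(32)                 # random word fault on y
diff = Ch(x, y, z) ^ Ch(x, y ^ fault, z)       # observable output difference
assert diff == fault & x                       # reveals x on the faulted bits
```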
Last updated:  2024-04-19
Quantum Algorithms for Lattice Problems
Yilei Chen
We show a polynomial time quantum algorithm for solving the learning with errors problem (LWE) with certain polynomial modulus-noise ratios. Combining with the reductions from lattice problems to LWE shown by Regev [J.ACM 2009], we obtain polynomial time quantum algorithms for solving the decisional shortest vector problem (GapSVP) and the shortest independent vector problem (SIVP) for all $n$-dimensional lattices within approximation factors of $\tilde{\Omega}(n^{4.5})$. Previously, no polynomial or even subexponential time quantum algorithms were known for solving GapSVP or SIVP for all lattices within any polynomial approximation factors. To develop a quantum algorithm for solving LWE, we mainly introduce two new techniques. First, we introduce Gaussian functions with complex variances in the design of quantum algorithms. In particular, we exploit the feature of the Karst wave in the discrete Fourier transform of complex Gaussian functions. Second, we use windowed quantum Fourier transform with complex Gaussian windows, which allows us to combine the information from both time and frequency domains. Using those techniques, we first convert the LWE instance into quantum states with purely imaginary Gaussian amplitudes, then convert purely imaginary Gaussian states into classical linear equations over the LWE secret and error terms, and finally solve the linear system of equations using Gaussian elimination. This gives a polynomial time quantum algorithm for solving LWE.