Papers updated in last 7 days (75 results)

Last updated:  2024-04-26
Earn While You Reveal: Private Set Intersection that Rewards Participants
Aydin Abadi
In Private Set Intersection protocols (PSIs), a non-empty result always reveals something about the private input sets of the parties. Moreover, in various variants of PSI, not all parties necessarily receive or are interested in the result. Nevertheless, to date, the literature has assumed that those parties who do not receive or are not interested in the result still contribute their private input sets to the PSI for free, even though doing so costs them their privacy. In this work, for the first time, we propose a multi-party PSI, called “Anesidora”, that rewards parties who contribute their private input sets to the protocol. Anesidora is efficient; it mainly relies on symmetric-key primitives, and its computation and communication complexities are linear in the number of parties and the set cardinality. It remains secure even if the majority of parties are corrupted by active colluding adversaries.
Last updated:  2024-04-26
A note on ``Tweakable HCTR: A BBB Secure Tweakable Enciphering Scheme''
Mustafa Khairallah
Tweakable HCTR is a tweakable enciphering scheme proposed by Dutta and Nandi at Indocrypt 2018. It provides beyond-birthday-bound security when each tweak value is not used too frequently. More importantly for this note, its security bound is claimed to degrade only linearly with the maximum input length. We show in this note that this claim is not true, by exhibiting a single-query distinguisher with advantage $O(l^2/2^n)$, where $l$ is the length of that query. The distinguisher does not break the beyond-birthday-bound claim, but it achieves a higher advantage than the claimed bound.
Last updated:  2024-04-26
Properties of Lattice Isomorphism as a Cryptographic Group Action
Benjamin Benčina, Alessandro Budroni, Jesús-Javier Chi-Domínguez, and Mukul Kulkarni
In recent years, the Lattice Isomorphism Problem (LIP) has served as an underlying assumption to construct quantum-resistant cryptographic primitives, e.g. the zero-knowledge proof and digital signature scheme by Ducas and van Woerden (Eurocrypt 2022), and the HAWK digital signature scheme (Asiacrypt 2022). While prior lines of work in group action cryptography, e.g. the works of Brassard and Yung (Crypto 1990), and more recently Alamati, De Feo, Montgomery and Patranabis (Asiacrypt 2020), focused on studying the discrete logarithm problem and isogeny-based problems in the group action framework, in recent years this framing has been used for studying the cryptographic properties of computational problems based on the difficulty of determining equivalence between algebraic objects. Examples include Permutation and Linear Code Equivalence Problems used in LESS (Africacrypt 2020), and the Tensor Isomorphism Problem (TCC 2019). This study delves into the quadratic form version of LIP, examining it through the lens of group actions. In this work we (1) give formal definitions and study the cryptographic properties of this group action (LIGA), (2) demonstrate that LIGA lacks both weak unpredictability and weak pseudorandomness, and (3) under certain assumptions, establish a theoretical trade-off between time complexity and the required number of samples for breaking weak unpredictability, for large dimensions. We also conduct experiments supporting our analysis. Additionally, we employ our findings to formulate new hard problems on quadratic forms.
Last updated:  2024-04-26
Computational Attestations of Polynomial Integrity Towards Verifiable Machine Learning
Dustin Ray and Caroline El Jazmi
Machine-learning systems continue to advance at a rapid pace, demonstrating remarkable utility in various fields and disciplines. As these systems grow in size and complexity, a nascent industry is emerging which aims to bring machine-learning-as-a-service (MLaaS) to market. Outsourcing the operation and training of these systems to powerful hardware carries numerous advantages, but challenges arise when privacy and the correctness of work carried out by a potentially untrusted party need to be ensured. Recent advancements in applied zero-knowledge cryptography, and probabilistic proof systems in general, have led to a means of generating proofs of integrity for any computation, which in turn can be efficiently verified by any party, in any place, at any time. In this work we apply a non-interactive, plausibly post-quantum-secure, probabilistically checkable argument system as an efficiently verifiable guarantee that a privacy mechanism was irrefutably applied to a machine-learning model during the training process. Concretely, we prove the correct training of a differentially-private (DP) linear regression over a dataset of 60,000 samples on a single machine in 55 minutes, verifying the entire computation in 47 seconds. To our knowledge, this result represents the fastest known instance in the literature of provable DP over a dataset of this size. Finally, we show how this task can be run in parallel, leading to further dramatic reductions in prover and verifier runtime complexity. We believe this result constitutes a key stepping stone towards end-to-end private MLaaS.
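The privacy mechanism being attested above is differentially-private training. As a rough, purely illustrative sketch of that kind of computation (not the authors' proof circuit; the clipping bound, sensitivity estimate, and noise calibration below are our own loose assumptions), DP linear regression can be done by perturbing the sufficient statistics with Gaussian noise:

```python
import numpy as np

def dp_linear_regression(X, y, eps=1.0, delta=1e-5, bound=1.0, rng=None):
    """Toy DP linear regression via noisy sufficient statistics.

    Assumes every entry of X and y has magnitude <= `bound` (enforced by
    clipping), so the Gaussian mechanism applies with the loose
    per-example sensitivity estimate below.
    """
    rng = rng or np.random.default_rng(0)
    X = np.clip(X, -bound, bound)
    y = np.clip(y, -bound, bound)
    d = X.shape[1]
    sens = 2 * (d * bound**2 + d**0.5 * bound**2)  # loose L2 sensitivity
    sigma = sens * np.sqrt(2 * np.log(1.25 / delta)) / eps
    xtx = X.T @ X + rng.normal(0, sigma, (d, d))   # noisy X^T X
    xty = X.T @ y + rng.normal(0, sigma, d)        # noisy X^T y
    # Regularize so the noisy Gram matrix stays invertible.
    return np.linalg.solve(xtx + np.eye(d), xty)

X = np.random.default_rng(1).normal(size=(60000, 3)) / 3
y = X @ np.array([0.5, -0.2, 0.1])
print(dp_linear_regression(X, y))  # close to [0.5, -0.2, 0.1]
```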
Last updated:  2024-04-26
A note on ``a lightweight mutual and transitive authentication mechanism for IoT network''
Zhengjun Cao and Lihua Liu
We show that the authentication mechanism [Ad Hoc Networks, 2023, 103003] fails to preserve user anonymity, contrary to what is claimed.
Last updated:  2024-04-25
Towards Permissionless Consensus in the Standard Model via Fine-Grained Complexity
Marshall Ball, Juan Garay, Peter Hall, Aggelos Kiayias, and Giorgos Panagiotakos
We investigate the feasibility of permissionless consensus (aka Byzantine agreement) under standard assumptions. A number of protocols have been proposed to achieve permissionless consensus, most notably based on the Bitcoin protocol; however, to date no protocol is known that can be provably instantiated outside of the random oracle model. In this work, we take the first steps towards achieving permissionless consensus in the standard model. In particular, we demonstrate that worst-case conjectures in fine-grained complexity, most notably the orthogonal vectors conjecture (implied by the Strong Exponential Time Hypothesis), imply permissionless consensus in the random beacon model, a setting where a fresh random value is delivered to all parties at regular intervals. This gives a remarkable win-win result: either permissionless consensus exists relative to a random beacon, or there are non-trivial worst-case algorithmic speed-ups for a host of natural algorithmic problems (including SAT). Our protocol achieves resilience against adversaries that control an inverse-polynomial fraction of the honest computational power, i.e., adversarial power $A = T^{1-\epsilon}$ for some constant $\epsilon > 0$, where $T$ denotes the honest computational power. This relatively low threshold is a byproduct of the slack in the fine-grained complexity conjectures. One technical highlight is the construction of a Seeded Proof of Work: a Proof of Work where many (correlated) challenges can be derived from a single short public seed, and yet still no non-trivial amortization is possible.
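For concreteness, the orthogonal vectors problem underlying the conjecture asks: given two sets of $n$ Boolean vectors in $d$ dimensions, is there a pair with inner product zero? The conjecture asserts, roughly, that for $d = \omega(\log n)$ no algorithm beats the naive quadratic scan by a polynomial factor; the baseline itself is a few lines (illustrative only):

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Naive O(n^2 * d) scan: is there u in A, v in B with <u, v> = 0?"""
    return any(
        all(ui & vi == 0 for ui, vi in zip(u, v))
        for u, v in product(A, B)
    )

A = [(1, 0, 1), (0, 1, 1)]
B = [(0, 1, 0), (1, 1, 0)]
print(has_orthogonal_pair(A, B))  # True: (1,0,1) . (0,1,0) = 0
```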
Last updated:  2024-04-25
Regev Factoring Beyond Fibonacci: Optimizing Prefactors
Seyoon Ragavan
In this note, we improve the space-efficient variant of Regev's quantum factoring algorithm [Reg23] proposed by Ragavan and Vaikuntanathan [RV24] by constant factors in space and/or size. This allows us to bridge the significant gaps in concrete efficiency between the circuits of [Reg23] and [RV24]; [Reg23] uses far fewer gates, while [RV24] uses far fewer qubits. The main observation is that the space-efficient quantum modular exponentiation technique of [RV24] can be modified to work with more general sequences of integers than the Fibonacci numbers. We parametrize this in terms of a linear recurrence relation, and through this formulation construct three different circuits for quantum factoring:
- A circuit that uses $\approx 12.4n$ qubits and $\approx 54.9n^{1/2}$ multiplications of $n$-bit integers.
- A circuit that uses $(9+\epsilon)n$ qubits and $O_\epsilon(n^{1/2})$ multiplications of $n$-bit integers, for any $\epsilon > 0$.
- A circuit that uses $(24+\epsilon)n^{1/2}$ multiplications of $n$-bit integers, and $O_\epsilon(n)$ qubits, for any $\epsilon > 0$.
In comparison, the original circuit by [Reg23] uses at least $\approx 3n^{3/2}$ qubits and $\approx 6n^{1/2}$ multiplications of $n$-bit integers, while the space-efficient variant by [RV24] uses $\approx 10.32n$ qubits and $\approx 138.3n^{1/2}$ multiplications of $n$-bit integers (although a very simple modification of their Fibonacci-based circuit uses $\approx 11.32n$ qubits and only $\approx 103.7n^{1/2}$ multiplications of $n$-bit integers). The improvements proposed in this note take effect for sufficiently large values of $n$; it remains to be seen whether they can also provide benefits for practical problem sizes.
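To make the sequence generalization concrete, here is a classical (non-quantum) sketch of the exponent-growth idea, under our own simplifying assumptions: keeping a pair of residues and combining them according to a linear recurrence $d_k = c_1 d_{k-1} + c_2 d_{k-2}$ produces the power $a^{d_k}$ with one combination step per index, and $c_1 = c_2 = 1$ recovers the Fibonacci case of [RV24]. The quantum circuit itself is, of course, not shown here.

```python
def recurrence_power(a, N, c1, c2, steps):
    """Compute a^(d_k) mod N for d_k = c1*d_{k-1} + c2*d_{k-2},
    d_0 = d_1 = 1, storing only two residues at a time.
    c1 = c2 = 1 gives the Fibonacci exponents used by [RV24]."""
    u, v = a % N, a % N            # a^(d_0), a^(d_1)
    d_prev, d_cur = 1, 1
    for _ in range(steps):
        u, v = v, (pow(v, c1, N) * pow(u, c2, N)) % N
        d_prev, d_cur = d_cur, c1 * d_cur + c2 * d_prev
    return v, d_cur

val, exp = recurrence_power(3, 1000003, 2, 1, 10)
assert val == pow(3, exp, 1000003)  # exponents grow like the recurrence
print(exp, val)
```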
Last updated:  2024-04-25
Memory adds no cost to lattice sieving for computers in 3 or more spatial dimensions
Samuel Jaques
The security of lattice-based cryptography (LWE, NTRU, and FHE) depends on the hardness of the shortest-vector problem (SVP). Sieving algorithms give the lowest asymptotic runtime to solve SVP, but require exponential memory. Memory access costs much more in reality than in the RAM model, so we consider a computational model where processors, memory, and meters of wire are in constant proportions to each other. While this adds substantial costs to routing data during lattice sieving, we modify existing algorithms to amortize these costs and find that, asymptotically, a classical computer can achieve the previous RAM-model cost of $2^{0.2925d+o(d)}$ to sieve a $d$-dimensional lattice for a computer existing in 3 or more spatial dimensions, and can reach $2^{0.3113d+o(d)}$ in 2 spatial dimensions, where "spatial dimensions" are the dimensions of the physical geometry in which the computer exists. Under some assumptions about the constant terms of memory access, we estimate increases in bit security between $7$ and $23$ bits for different Kyber parameter sets and between $8$ and $22$ bits for Dilithium.
Last updated:  2024-04-25
Tight Security of TNT and Beyond: Attacks, Proofs and Possibilities for the Cascaded LRW Paradigm
Ashwin Jha, Mustafa Khairallah, Mridul Nandi, and Abishanka Saha
Liskov, Rivest, and Wagner laid the theoretical foundations for tweakable block ciphers (TBCs). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several follow-up works consider cascading LRW-type TBCs to construct beyond-the-birthday-bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 already gives BBB security. Bao et al. undertook a similar exercise in the context of LRW1 with TNT --- a three-round cascading of LRW1 --- which has been shown to achieve BBB security as well. In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with $ O(2^{n/2}) $ queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation coupled with experimental verification that further supports our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack. Moving on to a more positive note, we show that adding just one more block cipher call, referred to as 4-LRW1, does not just re-establish BBB security, but amplifies it up to $ 2^{3n/4} $ queries. As a side-effect of this endeavour, we propose a new abstraction of the cascaded LRW design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us provide a modular proof covering all cascaded LRW constructions with at least $ 2 $ rounds, including 4-LRW1 and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
Last updated:  2024-04-25
Organizing Records for Retrieval in Multi-Dimensional Range Searchable Encryption
Mahdieh Heidaripour, Ladan Kian, Maryam Rezapour, Mark Holcomb, Benjamin Fuller, Gagan Agrawal, and Hoda Maleki
Storage of sensitive multi-dimensional arrays must be secure and efficient in storage and processing time. Searchable encryption allows one to trade between security and efficiency. Searchable encryption design has focused on building indexes, overlooking the crucial aspect of record retrieval. Gui et al. (PoPETS 2023) showed that understanding the security and efficiency of record retrieval is critical to understanding the overall system. A common technique for improving security is partitioning data tuples into parts. When a tuple is requested, the entire relevant part is retrieved, hiding the tuple of interest. This work assesses tuple partitioning strategies in the dense data setting, considering parts that are random, $1$-dimensional, and multi-dimensional. We consider synthetic datasets of $2$, $3$, and $4$ dimensions, with sizes extending up to $2$M tuples. We compare security and efficiency across a variety of record retrieval methods. Our findings are:
1. For most configurations, multi-dimensional partitioning yields better efficiency and less leakage.
2. 1-dimensional partitioning outperforms multi-dimensional partitioning only when the query is large in all dimensions except the first (indexed) dimension, which can be of any size.
3. The leakage of 1-dimensional partitioning is reduced the most when using a bucketed ORAM (Demertzis et al., USENIX Security 2020).
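As a toy illustration of the two non-random strategies compared above (illustrative grid sizes, not the paper's datasets or retrieval machinery): multi-dimensional partitioning groups tuples by a grid over all dimensions, while 1-dimensional partitioning buckets on the first (indexed) dimension only; retrieving the whole part containing a requested tuple hides which tuple inside it was wanted.

```python
from collections import defaultdict

def partition(tuples, widths):
    """Group d-dimensional integer tuples into grid-cell parts.

    widths[i] is the cell width in dimension i; setting the width of
    every dimension except the first to the full domain size degenerates
    to 1-dimensional partitioning on the first (indexed) dimension."""
    parts = defaultdict(list)
    for t in tuples:
        cell = tuple(x // w for x, w in zip(t, widths))
        parts[cell].append(t)
    return parts

data = [(x, y) for x in range(8) for y in range(8)]
multi = partition(data, (4, 4))   # 2x2 grid: parts of 16 tuples
one_d = partition(data, (4, 8))   # buckets on the first dimension only
# Fetching (5, 2) under multi-dimensional partitioning retrieves all of
# part (1, 0), hiding which of its 16 tuples was of interest.
print(len(multi[(1, 0)]), len(one_d[(1, 0)]))  # 16 32
```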
Last updated:  2024-04-25
Sing a song of Simplex
Victor Shoup
We flesh out some details of the recently proposed Simplex atomic broadcast protocol, and modify it so that leaders disperse blocks in a more communication-efficient fashion. The resulting protocol, called DispersedSimplex, maintains the simplicity and excellent -- indeed, optimal -- latency characteristics of the original Simplex protocol. We also present several variations, including a variant that supports "stable leaders", variants that incorporate very recently developed data dissemination techniques that allow us to disperse blocks even more efficiently, and variants that are "signature free". We also suggest a number of practical optimizations and provide concrete performance estimates that take into account not just network latency but also network bandwidth limitations and computational costs. Based on these estimates, we argue that despite its simplicity, DispersedSimplex should, in principle, perform in practice as well as or better than any other state-of-the-art atomic broadcast protocol, at least in terms of common-case throughput and latency.
Last updated:  2024-04-25
NTRU-based FHE for Larger Key and Message Space
Robin Jadoul, Axel Mertens, Jeongeun Park, and Hilder V. L. Pereira
The NTRU problem has proven a useful building block for efficient bootstrapping in Fully Homomorphic Encryption (FHE) schemes, and different such schemes have been proposed. FINAL (ASIACRYPT 2022) first constructed FHE using homomorphic multiplexer (CMux) gates for the blind rotation operation. Later, XZD+23 (CRYPTO 2023) gave an asymptotic optimization by changing the ciphertext format to enable ring automorphism evaluations. In this work, we examine an adaptation to FINAL to evaluate CMux gates of higher arity and the resulting tradeoff to running times and bootstrapping key sizes. In this setting, we can compare the time and space efficiency of both bootstrapping protocols with larger key space against each other and the state of the art.
Last updated:  2024-04-25
Vision Mark-32: ZK-Friendly Hash Function Over Binary Tower Fields
Tomer Ashur, Mohammad Mahzoun, Jim Posen, and Danilo Šijačić
Zero-knowledge proof systems are widely used in different applications on the Internet. Among zero-knowledge proof systems, SNARKs are a popular choice because of their fast verification time and small proof size. The efficiency of zero-knowledge systems is crucial for usability, resulting in the development of so-called arithmetization-oriented ciphers. In this work, we introduce Vision Mark-32, a modified instance of Vision defined over binary tower fields, with an optimized number of rounds and an efficient MDS matrix. We implement a fully-pipelined Vision Mark-32 permutation on the Alveo U55C FPGA accelerator card and argue an order-of-magnitude better hardware efficiency compared to the popular Poseidon hash. Our fully-pipelined Vision Mark-32 implementation runs at 250 MHz and uses 398 kLUT and 104 kFF. Lastly, we delineate how to implement each step efficiently in hardware.
Last updated:  2024-04-25
Further Investigations on Nonlinear Complexity of Periodic Binary Sequences
Qin Yuan, Chunlei Li, Xiangyong Zeng, Tor Helleseth, and Debiao He
Nonlinear complexity is an important measure for assessing the randomness of sequences. In this paper we investigate how circular shifts affect the nonlinear complexities of finite-length binary sequences and then reveal a more explicit relation between nonlinear complexities of finite-length binary sequences and their corresponding periodic sequences. Based on the relation, we propose two algorithms that can generate all periodic binary sequences with any prescribed nonlinear complexity.
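Here, the nonlinear complexity (also called maximum-order complexity) of a finite sequence is the length of the shortest feedback shift register, with an arbitrary rather than linear feedback function, that generates it; equivalently, the smallest $k$ such that every length-$k$ window determines the next symbol. A naive quadratic-time computation (an illustrative baseline, not the paper's algorithms) makes the effect of circular shifts easy to observe:

```python
def nonlinear_complexity(s):
    """Smallest k such that every length-k window of s uniquely
    determines the symbol that follows it (maximum-order complexity)."""
    n = len(s)
    for k in range(n):
        succ = {}
        if all(
            succ.setdefault(s[i:i + k], s[i + k]) == s[i + k]
            for i in range(n - k)
        ):
            return k
    return n

def circular_shifts(s):
    return [s[i:] + s[:i] for i in range(len(s))]

s = "00010111"  # a binary de Bruijn sequence of order 3
print({t: nonlinear_complexity(t) for t in circular_shifts(s)})
```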
Last updated:  2024-04-25
Kronos: A Secure and Generic Sharding Blockchain Consensus with Optimized Overhead
Yizhong Liu, Andi Liu, Yuan Lu, Zhuocheng Pan, Yinuo Li, Jianwei Liu, Song Bian, and Mauro Conti
Sharding enhances blockchain scalability by dividing the network into shards, each managing specific unspent transaction outputs or accounts. Cross-shard transactions, a newly introduced transaction type, pose a critical challenge to the security and efficiency of sharding blockchains. Currently, there is no generic sharding consensus pattern that achieves both security and low overhead. In this paper, we present Kronos, a secure sharding blockchain consensus achieving optimized overhead. In particular, we propose a new secure sharding consensus pattern, based on a buffer managed jointly by shard members. Valid transactions are transferred to the payee via the buffer, while invalid ones are rejected through happy or unhappy paths. Kronos is proved to achieve security with atomicity under malicious clients with optimal intra-shard overhead $k\mathcal{B}$ (where $k$ is the number of shards involved and $\mathcal{B}$ is the cost of a Byzantine fault tolerance (BFT) instance). Rejection requires no BFT execution in the happy path, and the cost in the unhappy path is still lower than that of a two-phase commit. Besides, we propose secure cross-shard certification methods based on batch certification and reliable cross-shard transfer. The former combines hybrid trees or vector commitments, while the latter integrates erasure coding. Handling $b$ transactions, Kronos is proved to achieve reliability with low cross-shard overhead $\mathcal{O}(n b \lambda)$ (where $n$ is the shard size and $\lambda$ the security parameter). Notably, Kronos imposes no restrictions on the BFT protocol used and does not rely on time assumptions, offering optional constructions for its various modules. Kronos could serve as a universal framework for enhancing the performance and scalability of existing BFT protocols, supporting generic models, including asynchronous networks, and increasing throughput by several orders of magnitude. We implement Kronos using two prominent BFT protocols: asynchronous Speeding Dumbo (NDSS'22) and partially synchronous HotStuff (PODC'19). Extensive experiments (over up to 1000 AWS EC2 nodes across 4 AWS regions) demonstrate that Kronos scales the consensus nodes to thousands, achieving a substantial throughput of 320 ktx/sec with 2.0 sec latency. Compared with past solutions, Kronos outperforms them, achieving up to a 12$\times$ improvement in throughput and a 50% reduction in latency when cross-shard transactions dominate the workload.
Last updated:  2024-04-25
Attribute-based Keyed (Fully) Homomorphic Encryption
Keita Emura, Shingo Sato, and Atsushi Takayasu
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption where only users who have a homomorphic evaluation key can perform a homomorphic evaluation. KHPKE satisfies CCA2 security against users who do not have a homomorphic evaluation key, while it satisfies CCA1 security against users who have the key. Thus far, several KHPKE schemes have been proposed under standard Diffie-Hellman-type assumptions, and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices, although there are no KFHE schemes secure solely under the LWE assumption in the standard model. As a natural extension, there is an identity-based variant of KHPKE; however, its security is based on a $q$-type assumption, and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes, due to the complex design of the known KFHE schemes. In this paper, we provide two constructions of attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since its design is conceptually much simpler than that of known KFHE schemes, we replace its building blocks with attribute-based ones and obtain the proposed ABKFHE scheme. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefit of PES, we obtain various ABKHE schemes, including the first identity-based KHPKE scheme secure under the standard $k$-linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Last updated:  2024-04-25
BackMon: IC Backside Tamper Detection using On-Chip Impedance Monitoring
Tahoura Mosavirik and Shahin Tajik
The expansion of flip-chip technologies and a lack of backside protection make the integrated circuit (IC) vulnerable to certain classes of physical attacks mounted from the IC’s backside. Laser-assisted probing, electromagnetic, and body-biasing injection attacks are examples of such attacks. Unfortunately, there are few countermeasures proposed in the literature, and none are available commercially. Those that do exist are not only expensive but are incompatible with current IC manufacturing processes. They also cannot be integrated into legacy systems, such as field programmable gate arrays (FPGAs), which are integral parts of many industrial and defense systems. In this paper, we demonstrate how impedance monitoring of the printed circuit board (PCB) and IC package’s power distribution network (PDN) using on-chip circuit-based network analyzers can detect IC backside tampering. Our method is based on the fact that any tampering attempt to expose the backside silicon substrate, such as the removal of the fan and heat sinks, leads to changes in the equivalent impedance of the package’s PDN, and hence, scanning the package impedance will reveal whether the package integrity has been violated. To validate our claims, we deploy an on-FPGA network analyzer on an AMD Zynq UltraScale+ MPSoC manufactured with 16 nm technology, which is part of a multi-PCB system. We conduct a series of experiments at different temperatures, leveraging the difference of means as the statistical metric, to demonstrate the effectiveness of our method in detecting tamper events required to expose the IC backside silicon.
Last updated:  2024-04-24
Conditional disclosure of secrets with quantum resources
Vahid R. Asadi, Kohdai Kuroiwa, Debbie Leung, Alex May, Sabrina Pasterski, and Chris Waddell
The conditional disclosure of secrets (CDS) primitive is among the simplest cryptographic settings in which to study the relationship between communication, randomness, and security. CDS involves two parties, Alice and Bob, who do not communicate but who wish to reveal a secret $z$ to a referee if and only if a Boolean function $f$ has $f(x,y)=1$. Alice knows $x,z$, Bob knows $y$, and the referee knows $x,y$. Recently, a quantum analogue of this primitive called CDQS was defined and related to $f$-routing, a task studied in the context of quantum position-verification. CDQS has the same inputs, outputs, and communication pattern as CDS but allows the use of shared entanglement and quantum messages. We initiate the systematic study of CDQS, with the aim of better understanding the relationship between privacy and quantum resources in the information theoretic setting. We begin by looking for quantum analogues of results already established in the classical CDS literature. Doing so we establish a number of basic properties of CDQS, including lower bounds on entanglement and communication stated in terms of measures of communication complexity. Because of the close relationship to the $f$-routing position-verification scheme, our results have relevance to the security of these schemes.
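For intuition about the classical primitive, here is the textbook linear CDS for the equality predicate $f(x,y)=1 \iff x=y$ over a prime field (a standard classical example, not this paper's quantum construction): with shared randomness $(r,w)$, Alice sends $a = rx + w + z$, Bob sends $b = ry + w$, and the referee computes $a - b = z + r(x-y)$, which equals $z$ when $x=y$ and is a uniformly random mask otherwise.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime modulus

def cds_equality(x, y, z):
    """One-shot linear CDS for f(x, y) = [x == y] over GF(P).
    Alice knows (x, z), Bob knows y; they share randomness (r, w) but
    send a single message each to the referee, never to each other."""
    r = secrets.randbelow(P)
    w = secrets.randbelow(P)
    a = (r * x + w + z) % P   # Alice -> referee
    b = (r * y + w) % P       # Bob   -> referee
    return (a - b) % P        # z if x == y, else uniform in GF(P)

print(cds_equality(7, 7, 42))  # 42: predicate holds, secret revealed
print(cds_equality(7, 8, 42))  # uniformly random: secret stays hidden
```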
Last updated:  2024-04-24
A provably masked implementation of BIKE Key Encapsulation Mechanism
Loïc Demange and Mélissa Rossi
BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of NIST's standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and it provides the most competitive performance among the round-4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community, and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied, in a 2016 paper by Cong Chen et al. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance. In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues, such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm, and keeping the benefit of bitslicing. In this paper, we present all the gadgets necessary for a fully masked implementation of BIKE, we discuss our different implementation choices, and we propose a full proof of masking in the Ishai, Sahai, and Wagner (Crypto 2003) model. More practically, we also provide an open-source C masked implementation of the key-generation, encapsulation, and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than that of existing masked lattice-based algorithms, the scaling in the masking order is still encouraging, and no Boolean-to-arithmetic conversion has been used. We hope that this work can be a starting point for future analysis and optimization.
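As background on what masking computes (a generic textbook first-order gadget, not one of the BIKE-specific gadgets of this paper): each secret bit is split into XOR shares, linear operations act share-wise, and nonlinear operations such as AND consume fresh randomness, in the style of the Ishai-Sahai-Wagner construction the proof model refers to.

```python
import secrets

def share(bit):
    """Split a secret bit into two XOR shares (first-order masking)."""
    m = secrets.randbits(1)
    return m, bit ^ m

def masked_and(a, b):
    """First-order ISW-style AND gadget on shared bits: a fresh random
    bit r refreshes the cross terms so that no single intermediate
    value depends on both unshared secrets."""
    a0, a1 = a
    b0, b1 = b
    r = secrets.randbits(1)
    c0 = (a0 & b0) ^ r
    c1 = (a1 & b1) ^ (r ^ (a0 & b1)) ^ (a1 & b0)
    return c0, c1

x, y = 1, 1
c0, c1 = masked_and(share(x), share(y))
assert c0 ^ c1 == x & y  # shares recombine to the true AND
print(c0, c1)
```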
Last updated:  2024-04-24
Encryption Based Covert Channel for Large Language Models
Yongge Wang
Transformer neural networks have gained significant traction since their introduction, becoming pivotal across diverse domains. Particularly in large language models like Claude and ChatGPT, the transformer architecture has demonstrated remarkable efficacy. This paper provides a concise overview of transformer neural networks and delves into their security considerations, focusing on covert channel attacks and their implications for the safety of large language models. We present a covert channel utilizing encryption and demonstrate its efficacy in circumventing Claude.ai's security measures. Our experiment reveals that Claude.ai appears to log our queries; it blocked our attack within two days of our initial successful breach. This raises two concerns within the community: (1) the extensive logging of user inputs by large language models could pose privacy risks for users, and (2) it may deter academic research on the security of such models due to the lack of experiment repeatability.
Last updated:  2024-04-24
Simple constructions of linear-depth t-designs and pseudorandom unitaries
Tony Metger, Alexander Poremba, Makrand Sinha, and Henry Yuen
Uniformly random unitaries, i.e., unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that ``look'' sufficiently Haar random while also being efficient to implement. Two different notions of derandomisation have emerged: $t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random. In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the ``$PFC$ ensemble'', the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the $PFC$ ensemble to show the following:
1. Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their $2t$-wise independent counterparts.
2. Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e., we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts.
3. Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from $n$ to $n + \omega(\log n)$ qubits, a small modification of our PRU construction achieves adaptive security, i.e., even a distinguisher that can query the isometry adaptively in sequence cannot distinguish it from Haar random isometries. This gives the first construction of adaptive pseudorandom isometries. Under an additional conjecture, this proof also extends to adaptive PRUs.
Last updated:  2024-04-24
Unconditional correctness of recent quantum algorithms for factoring and computing discrete logarithms
Cédric Pilatte
In 1994, Shor introduced his famous quantum algorithm to factor integers and compute discrete logarithms in polynomial time. In 2023, Regev proposed a multi-dimensional version of Shor's algorithm that requires far fewer quantum gates. His algorithm relies on a number-theoretic conjecture on the elements in $(\mathbb{Z}/N\mathbb{Z})^{\times}$ that can be written as short products of very small prime numbers. We prove a version of this conjecture using tools from analytic number theory such as zero-density estimates. As a result, we obtain an unconditional proof of correctness of this improved quantum algorithm and of subsequent variants.
Last updated:  2024-04-24
Rate-1 Fully Local Somewhere Extractable Hashing from DDH
Pedro Branco, Nico Döttling, Akshayaram Srinivasan, and Riccardo Zanotto
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at $m$ secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the $m$ locations from the digest. Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is $m + \mathsf{poly}(\lambda)$ (where $\lambda$ is the security parameter). The fully local property requires that for any index $i$, there is a "very short" opening showing that $i$-th bit of the hashed input is equal to $b$ for some $b \in \{0,1\}$. The size of this opening is required to be independent of $m$ and in particular, this means that its size is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE). In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and BARGs. Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Last updated:  2024-04-24
MUSEN: Aggregatable Key-Evolving Verifiable Random Functions and Applications
Bernardo David, Rafael Dowsley, Anders Konring, and Mario Larangeira
A Verifiable Random Function (VRF) can be evaluated on an input by a prover who holds a secret key, generating a pseudorandom output and a proof of output validity that can be verified using the corresponding public key. VRFs are a central building block of committee election mechanisms that sample parties to execute tasks in cryptographic protocols, e.g. generating blocks in a Proof-of-Stake (PoS) blockchain or executing a round of MPC protocols. We propose the notion, and a matching construction, of an Aggregatable Key-Evolving VRF (A-KE-VRF) with the following extra properties: 1. Aggregation: combining proofs for several VRF evaluations of different inputs under different secret keys into a single constant size proof; 2. Key-Evolving: preventing adversaries who corrupt a party (learning their secret key) from ``forging'' proofs of past VRF evaluations. As an immediate application, we improve on the block size of PoS blockchains and on the efficiency of Proofs of Proof-of-Stake (PoPoS). Furthermore, the A-KE-VRF notion allows us to construct Encryption to the Future (EtF) and Authentication from the Past (AfP) schemes with a Key-Evolving property, which provides forward security. An EtF scheme allows for sending a message to a party who is randomly selected to execute a role in the future, while an AfP scheme allows for this party to authenticate their messages as coming from a past execution of this role. These primitives are essential for realizing the YOSO MPC Framework (CRYPTO'21).
Last updated:  2024-04-24
Distributed & Scalable Oblivious Sorting and Shuffling
Nicholas Ngai, Ioannis Demertzis, Javad Ghareh Chamani, and Dimitrios Papadopoulos
Existing oblivious systems offer robust security by concealing memory access patterns, but they encounter significant scalability and performance challenges. Recent efforts to enhance the practicality of these systems involve embedding oblivious computation, e.g., oblivious sorting and shuffling, within Trusted Execution Environments (TEEs). For instance, oblivious sort has been heavily utilized: in Oblix (S&P'18), when oblivious indexes are created and accessed; in Snoopy's high-throughput oblivious key-value store (SOSP'21), during initialization and when the input requests are deduplicated and prepared for delivery; in Opaque (NSDI'17), for all the proposed oblivious SQL operators; and in the state-of-the-art non-foreign-key oblivious join approach (PVLDB'20). Additionally, oblivious sort/shuffle find applications in Signal's commercial solution for contact discovery, Google's Key Transparency, searchable encryption, software monitoring, and differentially private federated learning with user privacy. In this work, we address the scalability bottleneck of oblivious sort and shuffle by re-designing these approaches to achieve high efficiency in distributed multi-enclave environments. First, we propose a multi-threaded bitonic sort optimized for the distributed setting, making it the most performant oblivious sort for a small number of enclaves (up to 4). For larger numbers of enclaves, we propose a novel oblivious bucket sort, which improves data locality and network consumption and outperforms our optimized distributed bitonic sort by up to 5-6x. To the best of our knowledge, these are the first distributed oblivious TEE-based sorting solutions. For reference, we are able to sort 2 GiB of data in 1 second and 128 GiB in 53.4 seconds in a multi-enclave test. A fundamental building block of our oblivious bucket sort is an oblivious shuffle that improves the prior state-of-the-art result (CCS'22) by up to 9.5x in the distributed multi-enclave setting; interestingly, it is better by 10% even in the single-enclave/multi-thread setting.
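For reference, the reason bitonic sort is the natural oblivious baseline: its sequence of compare-exchange operations depends only on the input length, never on the data. A plain single-threaded Python rendering of the network (the distributed, multi-enclave engineering is the paper's contribution and is not reflected here):

```python
def bitonic_sort(a):
    """In-place bitonic sorting network; len(a) must be a power of two.
    The pairs of indices compared depend only on n, never on the
    values, which is what makes the access pattern oblivious."""
    n = len(a)
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 1, 8, 6, 2, 5, 4]))  # [1, 2, ..., 8]
```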
Last updated:  2024-04-24
Hadamard Product Argument from Lagrange-Based Univariate Polynomials
Jie Xie, Yuncong Hu, and Yu Yu
The Hadamard product is the point-wise product of two vectors. This paper presents a new scheme to prove the Hadamard-product relation as a sub-protocol for SNARKs based on univariate polynomials. The prover uses linear cryptographic operations to generate a proof containing a logarithmic number of field elements. Verification takes a logarithmic number of cryptographic operations, with a constant number of pairings in a bilinear group. The construction of the scheme is based on Lagrange-based KZG commitments (Kate, Zaverucha, and Goldberg, Asiacrypt 2010) and the folding technique. We construct an inner-product protocol from the folding technique on univariate polynomials in Lagrange form, and by carefully choosing random polynomials suitable for the folding technique, we construct a Hadamard-product protocol from the inner-product protocol, giving an alternative way to prove linear-algebra relations in linear time; the protocol also has a better concrete proof size than previous works.
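The folding step such protocols rely on halves the vector length each round: for a challenge $x$, set $a' = a_L + x\,a_R$ and $b' = b_L + x^{-1}b_R$, so that $\langle a',b'\rangle = \langle a,b\rangle + x\langle a_R,b_L\rangle + x^{-1}\langle a_L,b_R\rangle$, with the two cross terms committed before $x$ is drawn. A minimal arithmetic check of this identity over a prime field (commitments and pairings omitted; purely illustrative):

```python
P = 2**61 - 1

def ip(u, v):
    return sum(ui * vi for ui, vi in zip(u, v)) % P

def fold(vec, x):
    half = len(vec) // 2
    return [(l + x * r) % P for l, r in zip(vec[:half], vec[half:])]

a, b = [3, 1, 4, 1], [2, 7, 1, 8]
x = 12345
x_inv = pow(x, P - 2, P)   # x^{-1} mod P

aL, aR, bL, bR = a[:2], a[2:], b[:2], b[2:]
lhs = ip(fold(a, x), fold(b, x_inv))                       # folded claim
rhs = (ip(a, b) + x * ip(aR, bL) + x_inv * ip(aL, bR)) % P
assert lhs == rhs  # one folding round preserves the inner-product claim
print(lhs)
```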
Last updated:  2024-04-23
Exponential Quantum Speedup for the Traveling Salesman Problem
Anant Sharma, Nupur Deshpande, Sanchita Ghosh, Sreetama Das, and Shibdas Roy
The traveling salesman problem is the problem of finding the shortest route in a network of cities that a salesman needs to travel to cover all the cities, without visiting the same city more than once. This problem is known to be $NP$-hard, with a brute-force complexity of $O(N^N)$ or $O(N^{2N})$ for $N$ cities. It is equivalent to finding the shortest Hamiltonian cycle in a given graph, if at least one Hamiltonian cycle exists in it. Quantum algorithms for this problem typically provide only a quadratic speedup, using Grover's search, thereby having a complexity of $O(N^{N/2})$ or $O(N^N)$. We present a bounded-error quantum polynomial-time (BQP) algorithm for solving the problem, providing an exponential speedup. The overall complexity of our algorithm is $O(N^3\log(N)\kappa/\epsilon + 1/\epsilon^3)$, where the errors $\epsilon$ are $O(1/{\rm poly}(N))$, and $\kappa$ is the not-too-large condition number of the matrix encoding all Hamiltonian cycles.
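The brute-force baseline the quoted complexities bound is plain enumeration of all tours, which costs $O(N!)$ and is upper-bounded by $N^N$; a tiny illustration:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Enumerate every tour starting and ending at city 0: O(N!) time."""
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(brute_force_tsp(dist))  # ((0, 2, 3, 1, 0), 21)
```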
Last updated:  2024-04-23
Guarding the First Order: The Rise of AES Maskings
Amund Askeland, Siemen Dhooghe, Svetla Nikova, Vincent Rijmen, and Zhenda Zhang
We provide three first-order hardware maskings of the AES, each allowing for a different trade-off between the number of shares and the number of register stages. All maskings use a generalization of the changing of the guards method enabling the re-use of randomness between masked S-boxes. As a result, the maskings do not require fresh randomness while still allowing for a minimal number of shares and providing provable security in the glitch-extended probing model. The low-area variant has five cycles of latency and a serialized area cost of $8.13~kGE$. The low-latency variant reduces the latency to three cycles while increasing the serialized area by $67.89\%$ compared to the low-area variant. The maskings of the AES encryption are implemented on FPGA and evaluated with Test Vector Leakage Assessment (TVLA).
Last updated:  2024-04-23
A Characterization of AE Robustness as Decryption Leakage Indistinguishability
Ganyuan Cao
Robustness has emerged as an important criterion for authenticated encryption, alongside the requirements of confidentiality and integrity. We introduce a novel notion, denoted IND-rCCA, to formalize the robustness of authenticated encryption from the perspective of decryption leakage. This notion augments common notions defined for AEAD schemes by considering the indistinguishability of potential leakage due to decryption failure in the presence of multiple checks for errors. With this notion, we analyze the disparity between a single-error decryption function and the actual leakage incurred during decryption. We introduce the notion of error unity, requiring that only one error is disclosed, whether implicitly or explicitly, even when there are multiple checks for errors. We further extend this notion to IND-sf-rCCA to formalize stateful security involving out-of-order ciphertexts. Additionally, we present a modification to the Encode-then-Encrypt-then-MAC (EEM) paradigm to boost its robustness, and we provide a concrete security proof for the modification.
Last updated:  2024-04-23
Interactive Threshold Mercurial Signatures and Applications
Masaya Nanri, Octavio Perez Kempner, Mehdi Tibouchi, and Masayuki Abe
Equivalence class signatures allow a controlled form of malleability based on equivalence classes defined over the message space. As a result, signatures can be publicly randomized and adapted to a new message representative in the same equivalence class. Notably, security requires that an adapted signature-message pair looks indistinguishable from a random signature-message pair in the space of valid signatures for the new message representative. Together with the decisional Diffie-Hellman assumption, this yields an unlinkability notion (class-hiding), making them a very attractive building block for privacy-preserving primitives. Mercurial signatures are an extension of equivalence class signatures that allow malleability for the key space. Unfortunately, the most efficient construction to date suffers from a severe limitation that restricts its applications: only a weak form of public key class-hiding is supported. In other words, given knowledge of the original signing key and a randomization of the corresponding public key, it is possible to identify whether the two are related. In this work, we put forth the notion of interactive threshold mercurial signatures and show how they help to overcome the above-mentioned limitation. Moreover, we present constructions in the two-party and multi-party settings, assuming at least one honest signer. We also discuss related applications, including blind signatures, multi-signatures, and threshold ring signatures. To showcase the practicality of our approach, we implement the proposed constructions, comparing them against related alternatives.
Last updated:  2024-04-23
Quantum Implementation and Analysis of SHA-2 and SHA-3
Kyungbae Jang, Sejin Lim, Yujin Oh, Hyunjun Kim, Anubhab Baksi, Sumanta Chakraborty, and Hwajeong Seo
Quantum computers have the potential to solve hard problems that are nearly impossible to solve by classical computers. This has sparked a surge of research applying quantum technology and algorithms to cryptographic systems to evaluate their quantum resistance. In the process of selecting post-quantum standards, NIST categorizes security levels based on the complexity that quantum computers would require to crack AES encryption (levels 1, 3, and 5) and SHA-2 or SHA-3 (levels 2 and 4). In assessing the security strength of cryptographic algorithms against quantum threats, accurate predictions of quantum resources are crucial. Following the work of Jaques et al. at Eurocrypt 2020, NIST estimated security levels 1, 3, and 5, corresponding to the quantum circuit size for finding the key for AES-128, AES-192, and AES-256, respectively. This work has recently been followed up by Huang et al. (Asiacrypt'22) and Liu et al. (Asiacrypt'23), among others, though the most up-to-date results are available in the work by Jang et al. (ePrint'22). However, for levels 2 and 4, which relate to collision finding for the SHA-2 and SHA-3 hash functions, the quantum attack complexities have arguably not been well studied. In this paper, we present novel techniques for optimizing the quantum circuit implementations of the SHA-2 and SHA-3 algorithms in all the categories specified by NIST. We then evaluate the quantum circuits of the target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. We present new concepts to optimize the quantum circuits at both the component level and the architecture level.
Last updated:  2024-04-23
SQIAsignHD: SQIsignHD Adaptor Signature
Farzin Renan and Péter Kutas
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes in which a secret randomness is hidden within a signature. They are a recent cryptographic primitive and are becoming an important tool for blockchain applications such as cryptocurrencies, where they reduce on-chain costs, improve fungibility, and enable off-chain forms of payment in payment-channel networks, payment-channel hubs, and atomic swaps. However, currently used adaptor signature constructions are vulnerable to quantum adversaries due to Shor's algorithm. In this work, we introduce $\mathsf{SQIAsignHD}$, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol, SIDH, as the underlying hard relation. We furthermore show that our scheme is secure in the Quantum Random Oracle Model (QROM).
Last updated:  2024-04-22
Constrained Pseudorandom Functions for Inner-Product Predicates from Weaker Assumptions
Sacha Servan-Schreiber
In this paper, we build a framework for constructing Constrained Pseudorandom Functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security. Our framework can be instantiated using a random oracle or any suitable Related-Key-Attack (RKA) secure pseudorandom function. We provide three instantiations of our framework: 1. an adaptively-secure construction in the random oracle model; 2. a selectively-secure construction under the DDH assumption; and 3. a selectively-secure construction with a polynomial domain under the assumption that one-way functions exist. All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.
Last updated:  2024-04-22
Worst-Case to Average-Case Hardness of LWE: A Simple and Practical Perspective
Divesh Aggarwal, Leong Jin Ming, and Alexandra Veliche
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems, specifically the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem, to LWE. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best achievable success probability by any probabilistic polynomial-time algorithm for decision-LWE and that of search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing practical security implications.
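For readers less familiar with the objects being related: an LWE instance publishes $(A, b = As + e \bmod q)$ for a secret $s$ and small error $e$, and $b$ is precisely a bounded-distance-decoding instance, a point unusually close to the $q$-ary lattice generated by $A$. A toy instance with illustrative (insecure) parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 32, 3329                # toy parameters, not secure

s = rng.integers(0, q, size=n)       # secret
A = rng.integers(0, q, size=(m, n))  # public matrix
e = rng.integers(-2, 3, size=m)      # small error in [-2, 2]
b = (A @ s + e) % q                  # LWE samples

# b is a BDD instance: it lies within ||e|| of the lattice point A @ s.
centered = ((b - A @ s + q // 2) % q) - q // 2   # recover signed e
print(np.linalg.norm(centered))                  # small: exactly ||e||
```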
Last updated:  2024-04-22
POKE: A Framework for Efficient PKEs, Split KEMs, and OPRFs from Higher-dimensional Isogenies
Andrea Basso
We introduce a new framework, POKE, to build cryptographic protocols from irrational isogenies using higher-dimensional representations. The framework enables two parties to manipulate higher-dimensional representations of isogenies to efficiently compute their pushforwards, and ultimately to obtain a shared secret. We provide three constructions based on POKE: the first is a PKE protocol, which is one of the most compact post-quantum PKEs and possibly the most efficient isogeny-based PKE to date. We then introduce a validation technique to ensure the correctness of uniSIDH public keys: by combining the validation method with a POKE-based construction, we obtain a split KEM, a primitive that generalizes NIKEs and can be used to instantiate a post-quantum version of Signal's X3DH protocol. The third construction builds upon the split KEM and its validation method to obtain a round-optimal verifiable OPRF. It is the first such construction that does not require more than $\lambda$ isogeny computations, and it is significantly more compact and more efficient than all other isogeny-based OPRFs.
Last updated:  2024-04-22
flookup: Fractional decomposition-based lookups in quasi-linear time independent of table size
Ariel Gabizon and Dmitry Khovratovich
We present a protocol for checking that the values of a committed polynomial $\phi(X)$ over a multiplicative subgroup $H\subset \mathbb{F}$ of size $m$ are contained in a table $T\in \mathbb{F}^N$. After an $O(N \log^2 N)$ preprocessing step, the prover algorithm runs in *quasilinear* time $O(m\log^2 m)$. We improve upon the recent breakthrough results Caulk [ZBK+22] and Caulk+ [PK22], which were the first to achieve complexity sublinear in the full table size $N$, with prover time being $O(m^2+m\log N)$ and $O(m^2)$, respectively. We pose further improving this complexity to $O(m\log m)$ as the next important milestone for efficient zk-SNARK lookups.
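flookup's own decomposition is more involved, but the flavor of fractional-decomposition lookups can be seen in the related logarithmic-derivative identity (popularized by Haböck's logUp, shown here purely as an illustration, not as flookup's protocol): the values $f_1,\dots,f_m$ all lie in the table $t_1,\dots,t_N$ iff there are multiplicities $m_j$ with $\sum_i \frac{1}{X+f_i} = \sum_j \frac{m_j}{X+t_j}$, an identity of rational functions that can be tested at a random point.

```python
from collections import Counter
import secrets

P = 2**61 - 1

def lookup_identity(values, table):
    """Test the logUp-style fractional identity at a random beta.
    The honest multiplicities m_j only count values present in the
    table, so any out-of-table value unbalances the identity whp."""
    mult = Counter(v for v in values if v in table)
    beta = secrets.randbelow(P)
    inv = lambda x: pow(x % P, P - 2, P)
    lhs = sum(inv(beta + f) for f in values) % P
    rhs = sum(m * inv(beta + t) for t, m in mult.items()) % P
    return lhs == rhs

print(lookup_identity([2, 2, 5], [1, 2, 5, 7]))  # True: a valid lookup
print(lookup_identity([2, 3, 5], [1, 2, 5, 7]))  # False whp: 3 missing
```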
Last updated:  2024-04-22
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
Benedikt Bünz and Jessica Chen
We construct two new accumulation schemes. The first one is for checking that $\ell$ read and write operations were performed correctly from a memory of size $T$. The prover time is entirely independent of $T$ and only requires committing to $6\ell$ field elements, which is an over $100\times$ improvement over prior work. The second one is for deterministic computations. It does not require committing to the intermediate wires of the computation but only to the input and output. This is achieved by building an accumulation scheme for a modified version of the famous GKR protocol. We show that these schemes are highly compatible and that the accumulation for GKR can further reduce the cost of the memory-checking scheme. Using the BCLMS (CRYPTO 2021) compiler, these protocols yield an efficient, incrementally verifiable computation (IVC) scheme that is particularly useful for machine computations with large memories and deterministic steps.
Last updated:  2024-04-22
Complete group law for genus 2 Jacobians on Jacobian coordinates
Elif Ozbay Gurler and Huseyin Hisil
This manuscript provides complete, inversion-free, and explicit group law formulas in Jacobian coordinates for the genus 2 hyperelliptic curves of the form $y^2 = x^5 + a_3 x^3 + a_2 x^2 + a_1 x + a_0$ over a field $K$ with $char(K) \ne 2$. The formulas do not require the use of polynomial arithmetic operations such as resultant, mod, or gcd computations but only operations in $K$.
Last updated:  2024-04-22
Deep Selfish Proposing in Longest-Chain Proof-of-Stake Protocols
Roozbeh Sarenche, Svetla Nikova, and Bart Preneel
It has been shown that the selfish mining attack enables a miner to achieve an unfair relative revenue, posing a threat to the progress of longest-chain blockchains. Although selfish mining is a well-studied attack in the context of Proof-of-Work blockchains, its impact on longest-chain Proof-of-Stake (LC-PoS) protocols has yet to be addressed. This paper takes both theoretical and implementation-based approaches to analyzing the selfish proposing attack in LC-PoS protocols. We discuss how factors such as the nothing-at-stake phenomenon and proposer predictability in PoS protocols can make the selfish proposing attack in LC-PoS protocols more destructive compared to selfish mining in PoW. In the first part of the paper, we use combinatorial tools to theoretically assess the selfish proposer’s block ratio in simplistic LC-PoS environments and under simplified network conditions. However, these theoretical tools and classical MDP-based approaches cannot be applied to analyze the selfish proposing attack in real-world, more complicated LC-PoS environments. To overcome this issue, in the second part of the paper, we employ deep reinforcement learning techniques to find the near-optimal strategy of selfish proposing in more sophisticated protocols. The tool implemented in the paper can help us analyze the selfish proposing attack across diverse blockchain protocols with different reward mechanisms, predictability levels, and network conditions.
Last updated:  2024-04-22
How to Lose Some Weight - A Practical Template Syndrome Decoding Attack
Sebastian Bitzer, Jeroen Delvaux, Elena Kirshanova, Sebastian Maaßen, Alexander May, and Antonia Wachter-Zeh
We study the hardness of the Syndrome Decoding problem, the base of most code-based cryptographic schemes, such as Classic McEliece, in the presence of side-channel information. We use ChipWhisperer equipment to perform a template attack on Classic McEliece running on an ARM Cortex-M4, and accurately classify the Hamming weights of consecutive 32-bit blocks of the secret error vector. With these weights at hand, we optimize Information Set Decoding algorithms. Technically, we show how to speed up information set decoding via a dimension reduction, additional parity-check equations, and an improved information set search, all derived from the Hamming weight information. Consequently, using our template attack, we can practically recover an error vector in dimension $n=2197$ in a matter of seconds. Without side-channel information, such an instance has a complexity of around 88 bits. We also estimate how our template attack affects the security of the proposed McEliece parameter sets. Roughly speaking, even an error-prone leak of our Hamming weight information leads, for $n=3488$, to a security drop of 89 bits.
Last updated:  2024-04-22
New SAT-based Model for Quantum Circuit Decision Problem: Searching for Low-Cost Quantum Implementation
Jingwen Chen, Qun Liu, Yanhong Fan, Lixuan Wu, Boyun Li, and Meiqin Wang
In recent years, quantum technology has developed rapidly. As security analyses for symmetric ciphers continue to emerge, many require an evaluation of the resources needed for the quantum circuit implementation of the encryption algorithm. In this regard, we propose the quantum circuit decision problem, which asks whether there exists a quantum circuit for a given permutation $f$ using $M$ ancilla qubits and no more than $K$ quantum gates within circuit depth $D$. Firstly, we investigate heuristic algorithms and classical SAT-based models from previous works, revealing their limitations in solving the problem. Hence, we propose an improved SAT-based model incorporating three metrics of quantum circuits. The model enables us to find the optimal quantum circuit of an arbitrary 3- or 4-bit S-box under a given optimization goal using SAT solvers, which proves the optimality of circuits constructed by the tool LIGHTER-R. Then, by combining different criteria in the model, we find more compact quantum circuit implementations of S-boxes such as RECTANGLE and GIFT. For the GIFT S-box, our model provides the optimal quantum circuit, requiring only 8 gates with a depth of 31. Furthermore, our model can be generalized to linear layers, and it improves the previous SAT-based model proposed by Huang et al. at ASIACRYPT 2022 by adding criteria on the number of qubits and the circuit depth.
Last updated:  2024-04-22
WESP: An encryption method that, as the key size increases, requires an exponentially growing time to break
Sam Widlund
WESP is a new encryption algorithm based on equation systems, in which the equations are generated using the values of tables that act as the encryption key, and the equations have features making them suitable for cryptographic use. The algorithm is defined and its properties are discussed; besides describing the algorithm, we also explain why it works the way it does. The key size in WESP can be altered and has no upper limit, and typically the key size is bigger than currently common key sizes. We calculate how many bytes can be securely encrypted before the algorithm might start to repeat its sequence of encrypting bytes, and show that this period can be adjusted to be arbitrarily large. It is also shown that within the period the resulting stream of encrypting bytes is statistically uniformly distributed. It is further shown that if the encryption tables are not known, the equations in the system of equations cannot be known, and it is demonstrated that the system of equations cannot be solved if the equations are not known, and thus the encryption cannot be broken in closed form. Then, for all symbols used in the algorithm, we calculate the minimum number of trials needed to be able to verify the trials. Since the algorithm constantly updates key values, verification becomes impossible if equations are not evaluated in order. The calculation shows that the minimum number of trials, i.e., the time required to break the encryption, increases exponentially as the key size grows. Since there is no upper limit on the key size, there is no upper limit on the time required to break the encryption either.
Last updated:  2024-04-22
BPDTE: Batch Private Decision Tree Evaluation via Amortized Efficient Private Comparison
Huiqiang Liang, Haining Lu, and Geng Wang
Machine learning as a service requires the client to trust the server and provide its own private information to use the service. Clients may worry that their private data is collected by the server without effective supervision, while the server aims to ensure proper management of the user data so as to foster the advancement of its services. In this work, we focus on private decision tree evaluation (PDTE), which can alleviate the privacy concerns associated with classification tasks using decision trees: after the evaluation, except for some hyperparameters, the client only receives the classification results of its private data from the server, while the server learns nothing. Firstly, we propose three amortized efficient private comparison algorithms, TECMP, RDCMP, and CDCMP, based on leveled homomorphic encryption. They are non-interactive, high-precision (up to 26624 bits), many-to-many, and output-expressive, achieving an amortized cost of less than 1 ms at 32-bit precision, which is an order of magnitude faster than the state of the art. Secondly, we propose three batch PDTE schemes using our private comparisons: TECMP-PDTE, RDCMP-PDTE, and CDCMP-PDTE. For the batch operations, we utilize a clear rows relation (CRR) algorithm, which obfuscates the positions and classification results of the different rows of data. Finally, on decision trees exceeding 1000 nodes with 16-bit precision, the amortized runtimes of TECMP-PDTE and RDCMP-PDTE are both more than 56$\times$ faster than the state of the art, while TECMP-PDTE with CRR still achieves a 14$\times$ speedup. Even on a single row and a tree of fewer than 100 nodes at 64-bit precision, TECMP-PDTE maintains performance comparable to current work.
Last updated:  2024-04-22
Secure Integrated Sensing and Communication Under Correlated Rayleigh Fading
Martin Mittelbach, Rafael F. Schaefer, Matthieu Bloch, Aylin Yener, and Onur Gunlu
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
Last updated:  2024-04-22
Efficient KZG-based Univariate Sum-check and Lookup Argument
Yuncong Zhang, Shi-Feng Sun, and Dawu Gu
We propose a novel KZG-based sum-check scheme, dubbed $\mathsf{Losum}$, with optimal efficiency. Particularly, its proving cost is one multi-scalar-multiplication of size $k$---the number of non-zero entries in the vector---its verification cost is one pairing plus one group scalar multiplication, and the proof consists of only one group element. Using $\mathsf{Losum}$ as a component, we then construct a new lookup argument, named $\mathsf{Locq}$, which enjoys a smaller proof size and a lower verification cost compared to the state-of-the-art schemes $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++. Specifically, the proving cost of $\mathsf{Locq}$ is comparable to $\mathsf{cq}$, keeping the advantage that the proving cost is independent of the table size after preprocessing. For verification, $\mathsf{Locq}$ costs four pairings, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ require five, five and six pairings, respectively. For proof size, a $\mathsf{Locq}$ proof consists of four $\mathbb{G}_1$ elements and one $\mathbb{G}_2$ element; when instantiated with the BLS12-381 curve, the proof size of $\mathsf{Locq}$ is $2304$ bits, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ have $3840$, $3328$ and $2944$ bits, respectively. Moreover, $\mathsf{Locq}$ is zero-knowledge, like $\mathsf{cq}$+ and $\mathsf{cq}$++, whereas $\mathsf{cq}$ is not; $\mathsf{Locq}$ is more efficient even compared to the non-zero-knowledge (and more efficient) versions of $\mathsf{cq}$+ and $\mathsf{cq}$++.
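The $2304$-bit figure is easy to check under the standard compressed point encodings for BLS12-381, where a $\mathbb{G}_1$ element takes $384$ bits and a $\mathbb{G}_2$ element takes $768$ bits: $4 \cdot 384 + 1 \cdot 768 = 2304$ bits.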
Last updated:  2024-04-22
Lattice-Based Succinct Mercurial Functional Commitment for Circuits: Definitions and Constructions
Hongxiao Wang, Siu-Ming Yiu, Yanmin Zhao, Zoe L. Jiang, and Min Xie
Vector commitments (VC) gain a lot of attention because of their wide usage in applications such as blockchains and accumulators. Mercurial vector commitments and mercurial functional commitments (MFC), as significant variants of VC, are the central techniques used to construct more advanced cryptographic primitives, such as zero-knowledge sets and zero-knowledge functional elementary databases (ZK-FEDB). However, the current MFC only supports linear functions, limiting its applications: it can only build ZK-FEDBs that support linear function queries. Besides, to the best of our knowledge, the existing MFCs and ZK-FEDBs, including the one proposed by Zhang and Deng (ASIACRYPT '23) using RSA accumulators, are all in the group model and cannot resist attacks by quantum computers. To break these limitations, we formalize the first system model and security model of MFC for circuits. Then, we exploit some specific properties of a new falsifiable assumption, the $\mathsf{BASIS}$ assumption proposed by Wee and Wu (EUROCRYPT '23), to construct the first lattice-based succinct mercurial functional commitment for circuits. As an application, we show that our construction can be used to build the first lattice-based ZK-FEDB directly within the existing generic framework.
Last updated:  2024-04-22
On the {\sf P/poly} Validity of the Agr17 FE Scheme
Yupu Hu, Siyue Dong, Baocang Wang, Jun Liu, Yanbin Pan, and Xingting Dong
Functional encryption (FE) is a cutting-edge research topic in cryptography, and the Agr17 FE scheme is a major scheme in the area. Its novelty is that it applies to the class of general functions (that is, {\sf P/poly} functions) without IO. It takes the BGG+14 ABE scheme as a base structure, upgrades it into a “partially hiding attribute” scheme, and combines it with a fully homomorphic encryption (FHE) scheme. However, the Agr17 FE scheme has a strange operation: for noise cancellation at the FHE decryption stage, it uses bulky “searching noise” rather than elegant “filtering”. It searches the whole modulus interval, so the FHE modulus must be polynomially large. In this paper we discuss the {\sf P/poly} validity of the Agr17 FE scheme. First, we obtain the result that the Agr17 FE scheme is {\sf P/poly} invalid. More precisely, when the Agr17 FE scheme is applied to randomly chosen {\sf P/poly} Boolean functions, the FHE modulus at the “searching” stage cannot be polynomially large. Our analysis is based on three restrictions of the BGG+14 ABE scheme: (1) the modulus of the BGG+14 ABE must be super-polynomially large if it is applied to randomly chosen {\sf P/poly} functions; (2) the modulus of the BGG+14 ABE cannot be switched; (3) if the BGG+14 ABE is upgraded into a “partially hiding attribute” scheme, the permitted operations on the hidden part of the attribute can only be affine. Then, to check whether {\sf P/poly} validity can be obtained by modifying the scheme, we consider two modified versions. The first controls the FHE noise by repeatedly applying bootstrapping, and replaces a modular inner product with an arithmetic inner product. The second replaces the search over the modulus interval with a search over a public noise interval, hoping that this noise interval is polynomially large while tolerating a possibly super-polynomially large modulus. The first modified version may be {\sf P/poly} valid, but it is weaker; there is no evidence to support the {\sf P/poly} validity of the second modified version. Finally, we present an additional conclusion: there is no evidence to support the {\sf P/poly} validity of the GVW15 PE scheme either.
Last updated:  2024-04-22
$\mathsf{Cougar}$: Cubic Root Verifier Inner Product Argument under Discrete Logarithm Assumption
Hyeonbum Lee, Seunghun Paik, Hyunjung Son, and Jae Hong Seo
An inner product argument (IPA) is a cryptographic primitive used to construct zero-knowledge proof (ZKP) systems, a notable privacy-enhancing technology. We propose a novel efficient IPA called $\mathsf{Cougar}$. $\mathsf{Cougar}$ features a cubic root verifier and logarithmic communication under the discrete logarithm (DL) assumption. At Asiacrypt 2022, Kim et al. proposed two square root verifier IPAs under the DL assumption. Our main objective is to overcome the square root complexity barrier in the DL setting. To achieve this, we combine two distinct square root IPAs from Kim et al.: one with pairing ($\mathsf{Protocol3}$) and one without pairing ($\mathsf{Protocol4}$). To construct $\mathsf{Cougar}$, we first revisit $\mathsf{Protocol4}$ and reconstruct it to make it compatible with the proof system for the homomorphic commitment scheme. Next, we utilize $\mathsf{Protocol3}$ as the proof system for the reconstructed $\mathsf{Protocol4}$. Furthermore, we provide a soundness proof for $\mathsf{Cougar}$ under the DL assumption.
Last updated:  2024-04-22
Subverting Cryptographic Protocols from A Fine-Grained Perspective - A Case Study on 2-Party ECDSA
Jialiu Cheng, Yi Wang, Rongmao Chen, and Xinyi Huang
The revelations of Edward Snowden in 2013 rekindled concerns within the cryptographic community regarding the potential subversion of cryptographic systems. Bellare et al. (CRYPTO'14) introduced the notion of Algorithm Substitution Attacks (ASAs), which aim to covertly leak sensitive information by undermining individual cryptographic primitives. In this work, we delve deeply into the realm of ASAs against protocols built upon cryptographic primitives. In particular, we revisit the existing ASA model proposed by Berndt et al. (AsiaCCS'22), providing a more fine-grained perspective. We introduce a novel ASA model tailored for protocols, capable of capturing a wide spectrum of subversion attacks. Our model features a modular representation of subverted parties within protocols, along with fine-grained definitions of undetectability. To illustrate the practicality of our model, we apply it to Lindell's two-party ECDSA protocol (CRYPTO'17), unveiling a range of ASAs targeting the protocol's parties with the objective of extracting secret key shares. Our work offers a comprehensive ASA model suited to cryptographic protocols, providing a useful framework for understanding ASAs against protocols.
Last updated:  2024-04-22
Non-interactive Blind Signatures from Lattices
Foteini Baldimtsi, Jiaqi Cheng, Rishab Goyal, and Aayush Yadav
Blind signatures enable a receiver to obtain signatures on messages of its choice without revealing any message to the signer. Round-optimal blind signatures are designed as a two-round interactive protocol between a signer and receiver. Incidentally, the choice of message is not important in many applications, and is routinely set as a random (unstructured) message by a receiver. With the goal of designing more efficient blind signatures for such applications, Hanzlik (Eurocrypt '23) introduced a new variant called non-interactive blind signatures (NIBS). These allow a signer to asynchronously generate partial signatures for any recipient such that only the intended recipient can extract a blinded signature for a random message. This bypasses the two-round barrier for traditional blind signatures, yet enables many known applications. Hanzlik provided new practical designs for NIBS from bilinear pairings. In this work, we investigate efficient NIBS with post-quantum security. We design the first practical NIBS, as well as non-interactive partially blind signatures called tagged NIBS, from lattice-based assumptions. We also propose a new generic paradigm for NIBS from circuit-private leveled homomorphic encryption achieving optimal-sized signatures (i.e., same as any non-blind signature). Finally, we propose new enhanced security properties for NIBS, which could be of practical and theoretical interest.
Last updated:  2024-04-22
Practical Delegatable Attribute-Based Anonymous Credentials with Chainable Revocation
Min Xie, Peichen Ju, Yanqi Zhao, Zoe L. Jiang, Junbin Fang, Yong Yu, and Xuan Wang
Delegatable Anonymous Credentials (DAC) are an enhanced Anonymous Credentials (AC) system that allows credential owners to use credentials anonymously, as well as anonymously delegate them to other users. In this work, we introduce a new concept called Delegatable Attribute-based Anonymous Credentials with Chainable Revocation (DAAC-CR), which extends the functionality of DAC by allowing 1) fine-grained attribute delegation, 2) issuers to restrict the delegation capabilities of the delegated users at a fine-grained level, including the depth of delegation and the sets of delegable attributes, and 3) chainable revocation, meaning that if a credential within the delegation chain is revoked, all subsequent credentials derived from it are also invalidated. We provide a practical DAAC-CR instance based on a novel primitive that we identify as structure-preserving signatures on equivalence classes on vector commitments (SPSEQ-VC). This primitive may be of independent interest, and we detail an efficient construction. Compared to traditional DAC systems that rely on non-interactive zero-knowledge (NIZK) proofs, the credential size in our DAAC-CR instance is constant, independent of the length of the delegation chain and the number of attributes. We formally prove the security of our scheme in the generic group model and demonstrate its practicality through performance benchmarks.
Last updated:  2024-04-22
Zero-Knowledge Proof Vulnerability Analysis and Security Auditing
Xueyan Tang, Lingzhi Shi, Xun Wang, Kyle Charbonnet, Shixiang Tang, and Shixiao Sun
Zero-Knowledge Proof (ZKP) technology marks a revolutionary advancement in the field of cryptography, enabling the verification of certain information ownership without revealing any specific details. This technology, with its paradoxical yet powerful characteristics, provides a solid foundation for a wide range of applications, especially in enhancing the privacy and security of blockchain technology and other cryptographic systems. As ZKP technology increasingly becomes a part of the blockchain infrastructure, its importance for security and completeness becomes more pronounced. However, the complexity of ZKP implementation and the rapid iteration of the technology introduce various vulnerabilities, challenging the privacy and security it aims to offer. This study focuses on the completeness, soundness, and zero-knowledge properties of ZKP, meticulously classifying existing vulnerabilities and exploring multiple categories of vulnerabilities in depth, including completeness issues, soundness problems, information leakage, and non-standardized cryptographic implementations. Furthermore, we propose a set of defense strategies that include a rigorous security audit process and a robust distributed network security ecosystem. This audit strategy employs a divide-and-conquer approach, segmenting the project into different levels, from the application layer to the underlying platform and infrastructure layer, using threat modelling, line-by-line audit, and internal cross-review, among other means, aimed at comprehensively identifying vulnerabilities in ZKP circuits, revealing design flaws in ZKP applications, and accurately identifying inaccuracies in the integration process of ZKP primitives.
Last updated:  2024-04-22
Quantum copy-protection of compute-and-compare programs in the quantum random oracle model
Andrea Coladangelo, Christian Majenz, and Alexander Poremba
Copy-protection allows a software distributor to encode a program in such a way that it can be evaluated on any input, yet it cannot be "pirated" - a notion that is impossible to achieve in a classical setting. Aaronson (CCC 2009) initiated the formal study of quantum copy-protection schemes, and speculated that quantum cryptography could offer a solution to the problem thanks to the quantum no-cloning theorem. In this work, we introduce a quantum copy-protection scheme for a large class of evasive functions known as "compute-and-compare programs" - a more expressive generalization of point functions. A compute-and-compare program $\mathsf{CC}[f,y]$ is specified by a function $f$ and a string $y$ within its range: on input $x$, $\mathsf{CC}[f,y]$ outputs $1$, if $f(x) = y$, and $0$ otherwise. We prove that our scheme achieves non-trivial security against fully malicious adversaries in the quantum random oracle model (QROM), which makes it the first copy-protection scheme to enjoy any level of provable security in a standard cryptographic model. As a complementary result, we show that the same scheme fulfils a weaker notion of software protection, called "secure software leasing", introduced very recently by Ananth and La Placa (eprint 2020), with a standard security bound in the QROM, i.e. guaranteeing negligible adversarial advantage. Finally, as a third contribution, we elucidate the relationship between unclonable encryption and copy-protection for multi-bit output point functions.
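To fix notation, a compute-and-compare program is trivial to write down in code. The following minimal sketch (with an arbitrary illustrative $f$, not anything from the paper's scheme) simply instantiates the definition above:

    import hashlib

    def make_cc(f, y):
        """Return the program CC[f, y]: on input x, output 1 iff f(x) == y."""
        return lambda x: 1 if f(x) == y else 0

    # Illustrative f: SHA-256 over an 8-byte encoding of x; y = f(42).
    # The resulting CC[f, y] behaves like a (generalized) point function on x.
    f = lambda x: hashlib.sha256(x.to_bytes(8, "little")).digest()
    cc = make_cc(f, f(42))
    print(cc(42), cc(7))   # -> 1 0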
Last updated:  2024-04-22
On the Two-sided Permutation Inversion Problem
Gorjan Alagic, Chen Bai, Alexander Poremba, and Kaiyan Shi
In the permutation inversion problem, the task is to find the preimage of some challenge value, given oracle access to the permutation. This is a fundamental problem in query complexity, and appears in many contexts, particularly cryptography. In this work, we examine the setting in which the oracle allows for quantum queries to both the forward and the inverse direction of the permutation---except that the challenge value cannot be submitted to the latter. Within that setting, we consider two options for the inversion algorithm: whether it can get quantum advice about the permutation, and whether it must produce the entire preimage (search) or only the first bit (decision). We prove several theorems connecting the hardness of the resulting variations of the inversion problem, and establish a number of lower bounds. Our results indicate that, perhaps surprisingly, the inversion problem does not become significantly easier when the adversary is granted oracle access to the inverse, provided it cannot query the challenge itself.
Last updated:  2024-04-22
Generic MitM Attack Frameworks on Sponge Constructions
Xiaoyang Dong, Boxin Zhao, Lingyue Qin, Qingliang Hou, Shun Zhang, and Xiaoyun Wang
This paper proposes general meet-in-the-middle (MitM) attack frameworks for preimage and collision attacks on hash functions based on (generalized) sponge construction. As the first contribution, our MitM preimage attack framework covers a wide range of sponge-based hash functions, especially those with a lower claimed security level for preimage resistance compared to their output size. Such hash functions have been very widely standardized (e.g., Ascon-Hash, PHOTON, etc.), but are rarely studied against preimage attacks. Even the recent MitM attack framework on sponge construction by Qin et al. (EUROCRYPT 2023) cannot attack those hash functions. As the second contribution, our MitM collision attack framework provides a different tool for collision cryptanalysis on the sponge construction, whereas previous collision attacks on sponge constructions are mainly based on differential attacks. Most of the results in this paper are the first third-party cryptanalysis results. Where cryptanalysis previously existed, our new results significantly improve it, for example improving the previous 2-round collision attack on Ascon-Hash to 4 rounds, and the previous 3.5-round quantum preimage attack on SPHINCS$^+$-Haraka to a 4-round classical preimage attack.
Last updated:  2024-04-22
LLRing: Logarithmic Linkable Ring Signatures with Transparent Setup
Xiangyu Hui and Sid Chi-Kin Chau
Linkable ring signatures are an important cryptographic primitive for anonymized applications, such as e-voting, e-cash and confidential transactions. To eliminate backdoor and overhead in a trusted setup, transparent setup in the discrete logarithm or pairing settings has received considerable attention in practice. Recent advances have improved the proof sizes and verification efficiency of linkable ring signatures with a transparent setup to achieve logarithmic bounds. Omniring (CCS '19) and RingCT 3.0 (FC '20) proposed linkable ring signatures in the discrete logarithm setting with logarithmic proof sizes with respect to the ring size, whereas DualDory (ESORICS '22) achieves logarithmic verifiability in the pairing setting. We make three novel contributions in this paper to improve the efficiency and soundness of logarithmic linkable ring signatures: (1) We identify an attack on DualDory that breaks its linkability. (2) To eliminate such attacks, we present a new linkable ring signature scheme in the pairing setting with logarithmic verifiability. (3) We also improve the verification efficiency of linkable ring signatures in the discrete logarithm setting, by a technique that reduces the number of group exponentiations for verification in Omniring by 50%. Furthermore, our technique is applicable to general inner-product relation proofs, which might be of independent interest. Finally, we empirically evaluate our schemes and compare them with extant linkable ring signatures in concrete implementations.
Last updated:  2024-04-21
FHERMA: Building the Open-Source FHE Components Library for Practical Use
Gurgen Arakelov, Nikita Kaskov, Daria Pianykh, and Yuriy Polyakov
Fully Homomorphic Encryption (FHE) is a powerful Privacy-Enhancing Technology (PET) that enables computations on encrypted data without having access to the secret key. While FHE holds immense potential for enhancing data privacy and security, creating its practical applications is associated with many difficulties. A significant barrier is the absence of easy-to-use, standardized components that developers can utilize as foundational building blocks. Addressing this gap requires constructing a comprehensive library of FHE components, a complex endeavor due to multiple inherent problems. We propose a competition-based approach for building such a library. More concretely, we present FHERMA, a new challenge platform that introduces black-box and white-box challenges, and fully automated evaluation of submitted FHE solutions. The initial challenges on the FHERMA platform are motivated by practical problems in machine learning and blockchain. The winning solutions get integrated into an open-source library of FHE components, which is available to all members of the PETs community under the Apache 2.0 license.
Last updated:  2024-04-21
Ponyta: Foundations of Side-Contract-Resilient Fair Exchange
Hao Chung, Elisaweta Masserova, Elaine Shi, and Sri AravindaKrishnan Thyagarajan
This paper is subsumed by Rapidash (https://eprint.iacr.org/2022/1063). Please use Rapidash for the citation. Fair exchange is a fundamental primitive for blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the users as strategic players, and assume honest miners. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be completely broken in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat — a phenomenon also referred to as Miner Extractable Value (MEV). We provide the first formal treatment of side-contract-resilient fair exchange. We propose a new fair exchange protocol called Ponyta, and we prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Ponyta satisfies a coalition-resistant Nash equilibrium. Further, we show how to use Ponyta to realize a cross-chain coin swap application, and prove that our coin swap protocol also satisfies coalition-resistant Nash equilibrium. Our work helps to lay the theoretical groundwork for studying side-contract-resilient fair exchange. Finally, we present practical instantiations of Ponyta in Bitcoin and Ethereum with minimal overhead in terms of costs for the users involved in the fair exchange, thus showcasing instantiability of Ponyta with a wide range of cryptocurrencies.
Last updated:  2024-04-21
Maximizing Miner Revenue in Transaction Fee Mechanism Design
Ke Wu, Elaine Shi, and Hao Chung
Transaction fee mechanism design is a new decentralized mechanism design problem where users bid for space on the blockchain. Several recent works showed that transaction fee mechanism design fundamentally departs from classical mechanism design. They then systematically explored the mathematical landscape of this new decentralized mechanism design problem in two settings: in the plain setting where no cryptography is employed, and in a cryptography-assisted setting where the rules of the mechanism are enforced by a multi-party computation protocol. Unfortunately, in both settings, prior works showed that if we want the mechanism to incentivize honest behavior for both users as well as miners (possibly colluding with users), then the miner revenue has to be zero. Although adopting a relaxed, approximate notion of incentive compatibility gets around this zero miner-revenue limitation, the scaling of the miner revenue is nonetheless poor. In this paper, we show that if we make a mildly stronger reasonable-world assumption than prior works, we can circumvent the known limitations on miner revenue, and design auctions that generate optimal miner revenue. We also systematically explore the mathematical landscape of transaction fee mechanism design under the new reasonable-world assumption and demonstrate how such assumptions can alter the feasibility and infeasibility landscape.
Last updated:  2024-04-21
A Security Analysis of Restricted Syndrome Decoding Problems
Ward Beullens, Pierre Briaud, and Morten Øygarden
Restricted syndrome decoding problems (R-SDP and R-SDP($G$)) provide an interesting basis for post-quantum cryptography. Indeed, they feature in CROSS, a submission in the ongoing process for standardizing post-quantum signatures. This work improves our understanding of the security of both problems. Firstly, we propose and implement a novel collision attack on R-SDP($G$) that provides the best attack under realistic restrictions on memory. Secondly, we derive precise complexity estimates for algebraic attacks on R-SDP that are shown to be accurate by our experiments. We note that neither of these improvements threatens the updated parameters of CROSS.
Last updated:  2024-04-21
Large Language Models for Blockchain Security: A Systematic Literature Review
Zheyuan He, Zihao Li, and Sen Yang
Large Language Models (LLMs) have emerged as powerful tools in various domains involving blockchain security (BS). Several recent studies are exploring LLMs applied to BS. However, there remains a gap in our understanding regarding the full scope of applications, impacts, and potential constraints of LLMs on blockchain security. To fill this gap, we conduct a literature review on LLM4BS. As the first review of LLMs' applications to blockchain security, our study aims to comprehensively analyze existing research and elucidate how LLMs contribute to enhancing the security of blockchain systems. Through a thorough examination of scholarly works, we delve into the integration of LLMs into various aspects of blockchain security. We explore the mechanisms through which LLMs can bolster blockchain security, including their applications in smart contract auditing, identity verification, anomaly detection, vulnerability repair, and so on. Furthermore, we critically assess the challenges and limitations associated with leveraging LLMs for blockchain security, considering factors such as scalability, privacy concerns, and adversarial attacks. Our review sheds light on the opportunities and potential risks inherent in this convergence, providing valuable insights for researchers, practitioners, and policymakers alike.
Last updated:  2024-04-21
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Enrico Bottazzi
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which can mess up the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring-Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 1 or 0). Greco uses zero-knowledge proofs to let a user prove that their RLWE ciphertext is well-formed, or, in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subjected to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and the benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco
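The tally problem is easy to reproduce with any additively homomorphic scheme. The sketch below uses textbook Paillier over toy primes purely as a stand-in for the RLWE-based encryption Greco targets (the scheme choice, parameters, and variable names are ours, not the paper's); it shows why a single malformed vote silently corrupts the homomorphic sum, which is exactly what a well-formedness proof rules out:

    import math, random
    from sympy import nextprime

    p = int(nextprime(10**6))            # demo-sized primes, NOT secure
    q = int(nextprime(p))
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)

    def enc(m):
        r = random.randrange(1, n)       # coprime to n w.h.p. at these sizes
        return pow(n + 1, m, n2) * pow(r, n, n2) % n2

    def dec(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    def add(c1, c2):                     # homomorphic addition of plaintexts
        return c1 * c2 % n2

    votes = [enc(v) for v in [1, 0, 1, 1]]   # valid votes encrypt 0 or 1
    tally = votes[0]
    for c in votes[1:]:
        tally = add(tally, c)
    print(dec(tally))                    # -> 3

    # One malformed "vote" silently corrupts the sum; nothing in the
    # ciphertext itself betrays it, hence the need for a validity proof.
    print(dec(add(tally, enc(145127835))))   # -> 145127838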
Last updated:  2024-04-21
Updatable, Aggregatable, Succinct Mercurial Vector Commitment from Lattice
Hongxiao Wang, Siu-Ming Yiu, Yanmin Zhao, and Zoe L. Jiang
Vector commitments (VC) and their variants attract a lot of attention due to their wide range of usage in applications such as blockchains and accumulators. Mercurial vector commitment (MVC), as one of the important variants of VC, is the core technique for building more complicated cryptographic applications, such as zero-knowledge sets (ZKS) and zero-knowledge elementary databases (ZK-EDB). However, to the best of our knowledge, the only post-quantum MVC construction is trivially implied by a generic framework proposed by Catalano and Fiore (PKC '13) with lattice-based components, which causes $\textit{large}$ auxiliary information and $\textit{cannot satisfy}$ any additional advanced properties, namely updatability and aggregatability. A major difficulty in constructing a $\textit{non-black-box}$ lattice-based MVC is that it is not trivial to construct a lattice-based VC that satisfies a critical property called “mercurial hiding”. In this paper, we identify some specific features of a new falsifiable family of basis-augmented SIS assumptions ($\mathsf{BASIS}$) proposed by Wee and Wu (EUROCRYPT '23) that can be utilized to construct a mercurial vector commitment from lattices $\textit{satisfying}$ updatability and aggregatability with $\textit{smaller}$ auxiliary information. We $\textit{first}$ extend stateless updates and differential updates to mercurial vector commitments and define a $\textit{new}$ property, named updatable mercurial hiding. Then, we show how to modify our constructions to obtain an updatable mercurial vector commitment that satisfies these properties. To aggregate the openings, our constructions perfectly inherit the ability to aggregate in the $\mathsf{BASIS}$ assumption, which breaks the limitation of $\textit{weak}$ binding in current aggregatable MVCs. In the end, we show that our constructions can be used to build various kinds of lattice-based ZKS and ZK-EDB directly within the existing framework.
Last updated:  2024-04-20
Revisiting Differential-Linear Attacks via a Boomerang Perspective with Application to AES, Ascon, CLEFIA, SKINNY, PRESENT, KNOT, TWINE, WARP, LBlock, Simeck, and SERPENT
Hosein Hadipour, Patrick Derbez, and Maria Eichlseder
In 1994, Langford and Hellman introduced differential-linear (DL) cryptanalysis, with the idea of decomposing the block cipher $E$ into two parts, $E_U$ and $E_L$, such that $E_U$ exhibits a high-probability differential trail, while $E_L$ has a high-correlation linear trail. Combining these trails forms a distinguisher for $E$, assuming independence between $E_U$ and $E_L$. The dependency between the two parts of DL distinguishers remained unaddressed until EUROCRYPT 2019, where Bar-On et al. introduced the DLCT framework, resolving the issue up to one S-box layer. However, extending the DLCT framework to formalize the dependency between the two parts for multiple rounds remained an open problem. In this paper, we first tackle this problem from the perspective of boomerang analysis. By examining the relationships between DLCT, DDT, and LAT, we introduce a set of new tables facilitating the formulation of dependencies between the two parts of the DL distinguisher across multiple rounds. Then, as the main contribution, we introduce a highly versatile and easy-to-use automatic tool for exploring DL distinguishers, inspired by automatic tools for boomerang distinguishers. This tool considers the dependency between differential and linear trails across multiple rounds. We apply our tool to various symmetric-key primitives, and in all applications, we either present the first DL distinguishers or enhance the best-known ones. We achieve successful results against Ascon, AES, SERPENT, PRESENT, SKINNY, TWINE, CLEFIA, WARP, LBlock, Simeck, and KNOT. Furthermore, we demonstrate that, in some cases, DL distinguishers outperform boomerang distinguishers significantly.
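For readers unfamiliar with the tables involved, the DDT and DLCT of a small S-box can be computed directly from their definitions. The sketch below does so for the 4-bit PRESENT S-box (our illustrative choice; DLCT normalization conventions vary across the literature):

    # DDT[a][b] = #{x : S(x) ^ S(x ^ a) = b}
    # DLCT[a][b] = #{x : <b, S(x)> = <b, S(x ^ a)>} - 8
    S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
         0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    def parity(v):
        return bin(v).count("1") & 1

    DDT = [[0] * 16 for _ in range(16)]
    for a in range(16):
        for x in range(16):
            DDT[a][S[x] ^ S[x ^ a]] += 1

    # Equivalently, DLCT[a][b] = (1/2) * sum_x (-1)^{<b, S(x) ^ S(x ^ a)>}
    DLCT = [[sum(-1 if parity(b & (S[x] ^ S[x ^ a])) else 1
                 for x in range(16)) // 2
             for b in range(16)]
            for a in range(16)]

    print(max(DDT[a][b] for a in range(1, 16) for b in range(16)))  # 4: differential uniformity
    print(DLCT[1][:8])   # part of the DLCT row for input difference a = 1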
Last updated:  2024-04-20
New Security Proofs and Techniques for Hash-and-Sign with Retry Signature Schemes
Benoît Cogliati, Pierre-Alain Fouque, Louis Goubin, and Brice Minaud
Hash-and-Sign with Retry is a popular technique to design efficient signature schemes from code-based or multivariate assumptions. Contrary to Hash-and-Sign signatures based on preimage-sampleable functions as defined by Gentry, Peikert and Vaikuntanathan (STOC 2008), trapdoor functions in code-based and multivariate schemes are not surjective. Therefore, the standard approach uses random trials. Kosuge and Xagawa (PKC 2024) coined it the Hash-and-Sign with Retry paradigm. As many attacks have appeared on code-based and multivariate schemes, we think it is important for the ongoing NIST competition to look at the security proofs of these schemes. The original proof of Sakumoto, Shirai, and Hiwatari (PQCrypto 2011) was flawed, then corrected by Chatterjee, Das and Pandit (INDOCRYPT 2022). The fix is still not sufficient, as it only works for very large finite fields. A new proof in the Quantum ROM model was proposed by Kosuge and Xagawa (PKC 2024), but it is rather loose, even when restricted to the classical setting. In this paper, we introduce several tools that yield tighter security bounds for Hash-and-Sign with Retry signatures in the classical setting. These include the Hellinger distance, stochastic dominance arguments, and a new combinatorial tool to transform a proof in the non-adaptive setting to the adaptive setting. Ultimately, we obtain a sharp bound for the security of Hash-and-Sign with Retry signatures, applicable to various code-based and multivariate schemes. Focusing on NIST candidates, we apply these results to the MAYO, PROV, and modified UOV signature schemes. In most cases, our bounds are tight enough to apply with the real parameters of those schemes; in some cases, smaller parameters would suffice.
Last updated:  2024-04-20
The Practical Advantage of RSA over ECC and Pairings
Zhengjun Cao and Lihua Liu
The coexistence of RSA and elliptic curve cryptosystems (ECC) has continued for over forty years. It is well known that ECC has the advantage of shorter keys than RSA, which often leads a newcomer to assume that ECC runs faster. In this report, we present Mathematica code for RSA-2048 and ECC-256, which visually shows that RSA-2048 runs three times faster than ECC-256. It is also estimated that RSA-2048 runs 48,000 times faster than the Weil pairing with embedding degree 2 and a fixed point.
Last updated:  2024-04-19
Low-latency Secure Integrated Sensing and Communication with Transmitter Actions
Truman Welling, Onur Gunlu, and Aylin Yener
This paper considers an information theoretic model of secure integrated sensing and communication, represented as a wiretap channel with action-dependent states. This model allows one to secure a part of the transmitted message against a sensed target that eavesdrops on the communication, while allowing transmitter actions to change the channel statistics. An exact secrecy-distortion region is given for a physically-degraded channel. Moreover, a finite-length achievability region is established for the model using an output statistics of random binning method, giving an achievable bound for low-latency applications.
Last updated:  2024-04-19
Classical Commitments to Quantum States
Sam Gunn, Yael Tauman Kalai, Anand Natarajan, and Agi Villanyi
We define the notion of a classical commitment scheme to quantum states, which allows a quantum prover to compute a classical commitment to a quantum state, and later open each qubit of the state in either the standard or the Hadamard basis. Our notion is a strengthening of the measurement protocol from Mahadev (STOC 2018). We construct such a commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and more generally from any noisy trapdoor claw-free function family that has the distributional strong adaptive hardcore bit property (a property that we define in this work). Our scheme is succinct in the sense that the running time of the verifier in the commitment phase depends only on the security parameter (independent of the size of the committed state), and its running time in the opening phase grows only with the number of qubits that are being opened (and the security parameter). As a corollary we obtain a classical succinct argument system for QMA under the post-quantum LWE assumption. Previously, this was only known assuming post-quantum secure indistinguishability obfuscation. As an additional corollary we obtain a generic way of converting any X/Z quantum PCP into a succinct argument system under the quantum hardness of LWE.
Last updated:  2024-04-19
Security Analysis of XHASH8/12
Léo Perrin
We have investigated both the padding scheme and the applicability of algebraic attacks to both XHash8 and XHash12. The only vulnerability of the padding scheme we can find is plausibly applicable only in the multi-rate setting---for which the authors make no claim---and it is safe otherwise. For algebraic attacks relying on the computation and exploitation of a Gröbner basis, our survey of the literature suggests basing a security argument on the complexity of the variable elimination step rather than on that of the computation of the Gröbner basis itself. Indeed, it turns out that the latter complexity is hard to estimate---and is sometimes literally non-existent. Focusing on the elimination step, we propose a generalization of the "FreeLunch" approach which, under a reasonable conjecture about the behaviour of the degree of polynomial ideals of dimension 0, is sufficient for us to argue that both XHash8 and XHash12 are safe against such attacks. We implemented a simplified version of the generation (and resolution) of the corresponding set of equations in SAGE, which allowed us to validate our conjecture experimentally, and in fact to show that the lower bound it provides on the ideal degree is not tight---meaning we are a priori underestimating the security of these permutations against the algebraic attacks we consider. At this stage, if used as specified, these hash functions seem safe from Gröbner basis-based algebraic attacks.
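To illustrate what the "variable elimination step" refers to, here is a generic toy system (our example, unrelated to the actual XHash equations): with a lexicographic monomial order, a Gröbner basis computation pushes one variable into a univariate polynomial that can then be solved on its own, and the cost of reaching that univariate polynomial is the elimination cost.

    from sympy import groebner, symbols

    x, y = symbols("x y")
    # Toy system; lex order x > y eliminates x from the last basis element,
    # leaving a univariate polynomial in y (here y**4 - 4*y**2 + 1).
    G = groebner([x * y - 1, x**2 + y**2 - 4], x, y, order="lex")
    print(list(G))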
Last updated:  2024-04-19
Analyzing the complexity of reference post-quantum software: the case of lattice-based KEMs
Daniel J. Bernstein
Software for various post-quantum KEMs has been submitted by the KEM design teams to the SUPERCOP testing framework. The ref/*.c and ref/*.h files together occupy, e.g., 848 lines for ntruhps4096821, 928 lines for ntruhrss701, 1316 lines for sntrup1277, and 2633 lines for kyber1024. It is easy to see that these numbers overestimate the inherent complexity of software for these KEMs. It is more difficult to systematically measure this complexity. This paper takes these KEMs as case studies and applies consistent rules to streamline the ref software for the KEMs, while still passing SUPERCOP's tests and preserving the decomposition of specified KEM operations into functions. The resulting software occupies 381 lines for ntruhps4096821, 385 lines for ntruhrss701, 472 lines for kyber1024, and 478 lines for sntrup1277. This paper also identifies the external subroutines used in each case, identifies the extent to which code is shared across different parameter sets, quantifies various software complications specific to each KEM, and traces how differences in KEM design goals produced different software complications. As a spinoff, this paper presents a kyber512 key-recovery demo exploiting variations in timings of the Kyber reference code.
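The counting convention described above is straightforward to re-create; the following sketch is our hedged reconstruction (the directory argument assumes a local SUPERCOP-style checkout and is not part of the paper):

    import glob, sys

    def ref_lines(impl_dir):
        """Total line count of the ref/*.c and ref/*.h files of one KEM."""
        files = sorted(glob.glob(f"{impl_dir}/ref/*.c")
                       + glob.glob(f"{impl_dir}/ref/*.h"))
        return sum(sum(1 for _ in open(f)) for f in files)

    if __name__ == "__main__":
        print(ref_lines(sys.argv[1]))    # e.g. crypto_kem/ntruhrss701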
Last updated:  2024-04-19
PriDe CT: Towards Public Consensus, Private Transactions, and Forward Secrecy in Decentralized Payments
Yue Guo, Harish Karthikeyan, Antigoni Polychroniadou, and Chaddy Huussin
Anonymous Zether, proposed by Bunz et al. (FC, 2020) and subsequently improved by Diamond (IEEE S&P, 2021), is an account-based confidential payment mechanism that works by using a smart contract to achieve privacy (i.e. the identity of receivers to transactions and payloads are hidden). In this work, we look at simplifying the existing protocol while also achieving batching of transactions for multiple receivers, while ensuring consensus and forward secrecy. To the best of our knowledge, this work is the first to formally study the notion of forward secrecy in the setting of blockchain, borrowing a very popular and useful idea from the world of secure messaging. Specifically, we introduce: - FUL-Zether, a forward-secure version of Zether (Bunz et al., FC, 2020). - PRIvate DEcentralized Confidential Transactions (PriDe CT), a much-simplified version of Anonymous Zether that achieves competitive performance and enables batching of transactions for multiple receivers. - PRIvate DEcentralized Forward-secure Until Last update Confidential Transactions (PriDeFUL CT), a forward-secure version of PriDe CT. We also present an open-source, Ethereum-based implementation of our system. PriDe CT uses linear homomorphic encryption as Anonymous Zether does, but with simpler zero-knowledge proofs. PriDeFUL CT uses an updatable public key encryption scheme to achieve forward secrecy by introducing a new DDH-based construction in the standard model. In terms of transaction sizes, Quisquis (Asiacrypt, 2019), which is the only cryptocurrency that supports batchability (albeit in the UTXO model), has 15 times more group elements than PriDe CT. Meanwhile, for a ring of $N$ receivers, Anonymous Zether requires $6\log N$ more terms even without accounting for the ability to batch in PriDe CT. Further, our implementation indicates that, for $N=32$, even if there were 7 intended receivers, PriDe CT outperforms Anonymous Zether in proving time and gas consumption.
Last updated:  2024-04-19
SoK: Polynomial Multiplications for Lattice-Based Cryptosystems
Vincent Hwang
We survey various mathematical tools used in software implementations of polynomial multiplication in $\mathbb{Z}_q[x]/\langle x^n - \alpha x - \beta \rangle$. In particular, we survey implementation works targeting polynomial multiplications in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with the instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2. There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as the Cooley–Tukey FFT, Bruun's FFT, Rader's FFT, Karatsuba, and Toom–Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including injections, Schönhage's FFT, Nussbaumer's FFT, and localization; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, the Good–Thomas FFT, truncation, incomplete transformations, and Toeplitz matrix-vector products. For vectorization, we survey the relations between homomorphisms and the support of vector arithmetic. We then go through several case studies: we compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.
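As a small taste of the first emphasis, here is word-level Montgomery multiplication in a minimal sketch (the modulus and word size match Kyber's $q = 3329$ and $R = 2^{16}$, but the code is our own illustration, not taken from any surveyed implementation):

    Q = 3329
    R = 1 << 16
    QINV = -pow(Q, -1, R) % R            # -q^{-1} mod R

    def redc(t):
        """Montgomery reduction: for 0 <= t < R*Q, return t * R^{-1} mod Q."""
        m = (t * QINV) % R               # m chosen so that t + m*Q = 0 (mod R)
        u = (t + m * Q) >> 16            # exact division by R
        return u - Q if u >= Q else u

    def to_mont(a):
        return a * R % Q                 # enter the Montgomery domain: a -> aR

    def mont_mul(a_m, b_m):
        return redc(a_m * b_m)           # (aR)(bR)R^{-1} = (ab)R mod Q

    a, b = 1234, 2999
    prod = redc(mont_mul(to_mont(a), to_mont(b)))   # leave the domain
    assert prod == a * b % Q
    print(prod)

The point of the trick is that reduction uses only multiplications and shifts by the word size, which is why it vectorizes well on the architectures listed above.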
Last updated:  2024-04-19
Formal Verification of Emulated Floating-Point Arithmetic in Falcon
Vincent Hwang
We show that there is a discrepancy between the emulated floating-point multiplications in the submission package of Falcon and the claimed behavior. In particular, we show that floating-point products with absolute values below the smallest normal positive floating-point number are incorrectly zeroized. However, we show that the discrepancy doesn't affect the complex fast Fourier transform, by modeling the floating-point addition, subtraction, and multiplication in CryptoLine. We later implement our own floating-point multiplications in Armv7-M assembly and Jasmin and prove their equivalence with our model, demonstrating the possibility of transferring the challenging verification task (verifying highly optimized assembly) to a presumably more readable code base (Jasmin).
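The class of discrepancy is easy to visualize in a few lines. The sketch below uses a strawman flush-to-zero multiplier of our own (not Falcon's actual emulation code): IEEE 754 doubles keep subnormal products nonzero, whereas a multiplier that zeroizes everything below the normal range does not.

    import struct

    SMALLEST_NORMAL = 2.0 ** -1022       # smallest normal positive binary64

    def emulated_mul_flush(a, b):
        """Strawman emulation that zeroizes products below the normal range."""
        prod = a * b
        return 0.0 if 0.0 < abs(prod) < SMALLEST_NORMAL else prod

    x, y = 2.0 ** -1000, 2.0 ** -30
    print(x * y)                           # IEEE 754: subnormal 2^-1030, nonzero
    print(emulated_mul_flush(x, y))        # flush-to-zero emulation: 0.0
    print(struct.pack(">d", x * y).hex())  # raw bits of the subnormal product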
Last updated:  2024-04-19
Combined Threshold Implementation
Jakob Feldtkeller, Jan Richter-Brockmann, Pascal Sasdrich, and Tim Güneysu
Physical security is an important aspect of devices for which an adversary can manipulate the physical execution environment. Recently, more and more attention has been directed towards a security model that combines the capabilities of passive and active physical attacks, i.e., an adversary that performs fault-injection and side-channel analysis at the same time. Implementing countermeasures against such a powerful adversary is not only costly but also requires the skillful combination of masking and redundancy to counteract all reciprocal effects. In this work, we propose a new methodology to generate combined-secure circuits. We show how to transform TI-like constructions to resist any adversary with the capability to tamper with internal gates and probe internal wires. For the resulting protection scheme, we can prove the combined security in a well-established theoretical security model. Since the transformation preserves the advantages of TI-like structures, the resulting circuits prove to be more efficient in the number of required bits of randomness (up to 100%), the latency in clock cycles (up to 40%), and even the area for pipelined designs (up to 40%) than the state of the art for an adversary restricted to manipulating a single gate and probing a single wire.
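For readers unfamiliar with the TI-like constructions the transformation starts from, the sketch below exhaustively checks the classic first-order 3-share threshold implementation of an AND gate, whose correctness and non-completeness (each output share ignores one input share index) are the properties such transformations build on. It illustrates plain TI only, not the paper's combined fault-and-probing countermeasure:

    from itertools import product

    def ti_and(x1, x2, x3, y1, y2, y3):
        z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # non-complete: no share index 1
        z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # non-complete: no share index 2
        z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # non-complete: no share index 3
        return z1, z2, z3

    # Correctness: output shares recombine to the AND of the recombined inputs.
    for shares in product((0, 1), repeat=6):
        x1, x2, x3, y1, y2, y3 = shares
        z1, z2, z3 = ti_and(*shares)
        assert z1 ^ z2 ^ z3 == (x1 ^ x2 ^ x3) & (y1 ^ y2 ^ y3)
    print("3-share TI AND correct on all 64 share assignments")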
Last updated:  2024-04-19
Lattice-based, more general anti-leakage model and its application in decentralization
Xiaokang Dai, Jingwei Chen, Wenyuan Wu, and Yong Feng
For standard LWE samples $(\mathbf{A}, \mathbf{b} = \mathbf{s}\mathbf{A} + \mathbf{e})$, $\mathbf{A}$ is typically uniform over $\mathbb{Z}_q^{n \times m}$. Under the decisional LWE assumption, the conditional distribution of $\mathbf{s}$ given $(\mathbf{A}, \mathbf{b})$ is expected to be consistent with the distribution of $\mathbf{s}$. However, when an adversary chooses $\mathbf{A}$ adaptively, the disparity between the two may be larger. In this work, our primary focus is the quantification of the average conditional min-entropy $\tilde{H}_\infty(\mathbf{s} \mid \mathbf{s}\mathbf{A} + \mathbf{e})$ of $\mathbf{s}$, where $\mathbf{A}$ is chosen by the adversary. Brakerski and Döttling answered the question in one case: they proved that when $\mathbf{s}$ is uniformly chosen from $\mathbb{Z}_q^n$, it holds that $\tilde{H}_\infty(\mathbf{s} \mid \mathbf{s}\mathbf{A} + \mathbf{e}) \varpropto \rho_\sigma(\Lambda_q(\mathbf{A}))$. We prove that for any $d \leq q$, when $\mathbf{s}$ is uniformly chosen from $\mathbb{Z}_d^n$ or sampled from a discrete Gaussian distribution, similar results hold. As an independent result, we also prove the regularity of the hash function mapped to a prime-order group and its Cartesian product. As an application of the above results, we improve the multi-key fully homomorphic encryption of Brakerski, Halevi, and Polychroniadou (TCC 2017) and answer the question raised at the end of their work positively: we obtain GSW-type ciphertexts rather than Dual-GSW, and the improved scheme has shorter keys and ciphertexts.