Introduction
Imagine storing millions of dollars in Bitcoin with only a single laptop protecting the private key that controls it all. If an attacker compromises that laptop, your funds can be stolen in minutes with no recourse. This terrifying scenario is precisely why threshold cryptography has become increasingly important in cryptocurrency security.
In this article, we’ll dive into the world of threshold ECDSA (Elliptic Curve Digital Signature Algorithm) with a focus on a novel three-round protocol. This protocol represents a significant advancement in how we can secure private keys by distributing them across multiple devices while maintaining efficiency.
The core idea of threshold signatures is elegant: rather than storing a private key on a single device (a single point of failure), we distribute it across multiple devices or parties. A quorum of these parties must cooperate to produce a valid signature, meaning an attacker would need to compromise multiple devices to steal your funds. This approach leverages Multi-Party Computation (MPC) to manage and mitigate the risk associated with private key security.
Throughout this article, we’ll explore:
- The background of digital signatures, focusing on Schnorr and ECDSA
- Why ECDSA is challenging to distribute compared to other signature schemes
- The evolution of threshold ECDSA techniques
- Our novel three-round threshold ECDSA protocol that achieves both security and efficiency
- How this protocol uses a statistical consistency check to verify correct behavior
Whether you’re a cryptographer, security engineer, or blockchain enthusiast, understanding these advances in threshold signatures can help you build more secure systems in an increasingly decentralized world.
Background on Digital Signatures
Schnorr Signatures: A Review
Before diving into the complexities of ECDSA, let’s review Schnorr signatures, which are known for their efficiency, compactness, and natural suitability for MPC-based distribution. Their linear algebraic structure makes them relatively easy to work with in threshold settings.
Schnorr signatures operate within a cyclic group \(\mathbb{G}\), typically derived from elliptic curves. Here are the essential components:
- A cryptographic hash function \(\mathcal{H}\)
- A cyclic group \(\mathbb{G}\) generated by \(G\)
- The order of the group is a large prime \(q\) (typically around \(2^{256}\))
- Group elements are represented by capital letters (e.g., \(X, R \in \mathbb{G}\))
- Their corresponding scalars (discrete logarithms) are represented by lowercase letters (e.g., \(x, k \in \mathbb{Z}_q\)), such that \(X = xG\)
- The group operation is denoted additively: \((x+y)G = X + Y = xG + yG\)
- Security relies on the computational hardness of the discrete logarithm problem in \(\mathbb{G}\)
Schnorr Key Generation
The key generation process is straightforward:
- Secret Key Sampling: Randomly select a secret key \(\mathrm{sk}\) from the scalar field \(\mathbb{Z}_q\)
- Public Key Derivation: Compute the public key \(\mathrm{pk}\) by multiplying the secret key with the group generator: \(\mathrm{pk} = \mathrm{sk} \cdot G\)
Schnorr Signing
To generate a Schnorr signature for a message \(\mathrm{msg}\) under the secret key \(\mathrm{sk}\):
- Nonce Generation: Sample a random nonce (ephemeral key) \(k \leftarrow \mathbb{Z}_q\). This nonce is crucial for each signature and must be kept secret.
- Nonce Point Computation: Calculate the nonce point \(R = k \cdot G\), a point on the elliptic curve.
- Challenge Computation: Compute the cryptographic hash of the nonce point \(R\), the message \(\mathrm{msg}\), and the public key \(\mathrm{pk}\): \(e = \mathcal{H}(R || \mathrm{msg} || \mathrm{pk})\). Including the public key in the hash is a recommended security practice.
- Signature Component Calculation: Compute the signature component \(s\) as a linear combination: \(s = k + e \cdot \mathrm{sk} \pmod{q}\). This is performed in the scalar field \(\mathbb{Z}_q\).
- Signature Output: The Schnorr signature is the pair \(\mathrm{sig} = (s, e)\).
Schnorr Verification
To verify a Schnorr signature \(\mathrm{sig} = (s, e)\) for a message \(\mathrm{msg}\) under the public key \(\mathrm{pk}\):
- Recompute Nonce Point: Recompute a point \(R' = s \cdot G - e \cdot \mathrm{pk}\). If the signature is valid, \(R'\) should equal the original nonce point \(R\).
- Verification Check: Verify by checking if \(e \stackrel{?}{=} \mathcal{H}(R' || \mathrm{msg} || \mathrm{pk})\). If this equality holds, the signature is valid.
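The full Schnorr sign/verify cycle can be sketched in a few lines of Python. This is a toy illustration over a small multiplicative group (\(p = 23\), \(q = 11\), generator \(g = 4\)) rather than an elliptic curve, with the challenge hash built from SHA-256; the parameters and helper names are illustrative only, not a production implementation.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
# Real deployments use an elliptic curve group with q around 2^256.
p, q, g = 23, 11, 4

def H(*parts):
    """Hash the inputs and reduce into the scalar field Z_q."""
    h = hashlib.sha256("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    sk = secrets.randbelow(q - 1) + 1    # sk <- Z_q (nonzero)
    pk = pow(g, sk, p)                   # pk = g^sk (written sk*G additively)
    return sk, pk

def sign(sk, pk, msg):
    k = secrets.randbelow(q - 1) + 1     # fresh secret nonce
    R = pow(g, k, p)                     # nonce point
    e = H(R, msg, pk)                    # challenge includes pk
    s = (k + e * sk) % q                 # linear signing equation
    return s, e

def verify(pk, msg, sig):
    s, e = sig
    # R' = s*G - e*pk, i.e. g^s * pk^(-e) in multiplicative notation
    R_prime = (pow(g, s, p) * pow(pk, (-e) % q, p)) % p
    return e == H(R_prime, msg, pk)

sk, pk = keygen()
sig = sign(sk, pk, "pay Alice 1 BTC")
assert verify(pk, "pay Alice 1 BTC", sig)
```

Note how the only secret-dependent arithmetic in `sign` is the linear expression `k + e * sk`, which is exactly what makes this scheme easy to compute on additive shares.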
The linear structure of the signing equation \(s = k + e \cdot \mathrm{sk}\) makes Schnorr signatures particularly advantageous for MPC. Since addition and scalar multiplication are linear operations, they can be efficiently computed in a distributed manner using techniques like additive secret sharing, making Schnorr signatures “MPC-friendly.”
Elliptic Curve Digital Signature Algorithm (ECDSA)
ECDSA is the most widely deployed digital signature scheme in practice, used across a vast range of internet infrastructure and applications, including cryptocurrencies like Bitcoin and Ethereum. ECDSA gained prominence partly because it appeared to circumvent patents associated with Schnorr signatures when it was first standardized.
However, unlike Schnorr, ECDSA’s algebraic structure creates significant challenges when attempting to create efficient and secure MPC protocols for threshold signatures. Let’s examine why.
ECDSA Signing Process
ECDSA Key Generation
ECDSA key generation mirrors that of Schnorr signatures:
- Secret Key Sampling: Sample a secret key \(\mathrm{sk} \leftarrow \mathbb{Z}_q\)
- Public Key Derivation: Compute the public key \(\mathrm{pk} = \mathrm{sk} \cdot G\)
ECDSA Signing
To sign a message \(\mathrm{msg}\) using ECDSA with secret key \(\mathrm{sk}\):
- Nonce Generation: Sample a random nonce \(k \leftarrow \mathbb{Z}_q\)
- Nonce Point Computation: Compute the nonce point \(R = k \cdot G = (x_R, y_R)\), where \((x_R, y_R)\) are the coordinates of point \(R\)
- Calculate \(r\): Compute \(r = x_R \pmod{q}\). If \(r = 0\), restart with a new nonce
- Message Hash: Compute the cryptographic hash of the message: \(e = \mathcal{H}(\mathrm{msg})\)
- Calculate \(s\): Compute \(s = k^{-1} (e + r \cdot \mathrm{sk}) \pmod{q}\). If \(s = 0\), restart with a new nonce
- Signature Output: The ECDSA signature is the pair \(\mathrm{sig} = (r, s)\)
In the "Calculate \(s\)" step, \(k^{-1}\) denotes the modular inverse of the nonce \(k\) modulo \(q\). This inversion operation is a significant challenge in MPC settings due to its non-linearity.
To make this concrete, let's walk through the ECDSA signing process with a toy example, working modulo the small prime \(q = 97\) and the following parameters:
- Secret key \(\mathrm{sk} = 7\)
- Nonce \(k = 3\)
- Message hash \(e = 5\)
- Group generator \(G = (2, 3)\) (for simplicity, assume these are coordinates on the elliptic curve)
The steps are as follows:
- Nonce Point Computation: Calculate the nonce point \(R = k \cdot G\). For our example, let’s assume \(R = (6, 1)\).
- Calculate \(r\): Compute \(r = x_R \pmod{q}\). Here, \(x_R = 6\), so \(r = 6\).
- Calculate \(s\): Compute \(s = k^{-1} (e + r \cdot \mathrm{sk}) \pmod{q}\). With \(q = 97\), the modular inverse of \(k = 3\) is \(k^{-1} = 65\) (since \(3 \cdot 65 = 195 \equiv 1 \pmod{97}\)), giving \(s = 65 \cdot (5 + 6 \cdot 7) \pmod{97} = 65 \cdot 47 \pmod{97} = 48\).
The ECDSA signature is \((r, s) = (6, 48)\).
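The arithmetic of this toy example is easy to check directly in Python (Python 3.8+ computes modular inverses with `pow(k, -1, q)`):

```python
q = 97          # toy group order from the example (real ECDSA uses q ~ 2^256)
sk, k, e = 7, 3, 5
x_R = 6         # x-coordinate of the toy nonce point R = k*G

r = x_R % q                        # r = 6
k_inv = pow(k, -1, q)              # 65, since 3 * 65 = 195 ≡ 1 (mod 97)
s = (k_inv * (e + r * sk)) % q     # 65 * 47 mod 97 = 48
print((r, s))                      # → (6, 48)
```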
ECDSA Verification
To verify an ECDSA signature \(\mathrm{sig} = (r, s)\) for a message \(\mathrm{msg}\) under the public key \(\mathrm{pk}\):
- Range Verification: Verify that both \(r\) and \(s\) are within the valid range \([1, q-1]\)
- Message Hash: Compute the hash of the message: \(e = \mathcal{H}(\mathrm{msg})\)
- Compute Inverses and Intermediate Values: Compute \(s^{-1} \pmod{q}\), then calculate \(u_1 = e \cdot s^{-1} \pmod{q}\) and \(u_2 = r \cdot s^{-1} \pmod{q}\)
- Recompute Point \(R'\): Compute \(R' = u_1 \cdot G + u_2 \cdot \mathrm{pk} = (x_{R'}, y_{R'})\)
- Calculate \(v\): Compute \(v = x_{R'} \pmod{q}\)
- Verification Check: Check if \(v = r\). If so, the signature is valid
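Without implementing elliptic curve arithmetic, we can still check the scalar identity that makes verification work: since \(R' = u_1 \cdot G + u_2 \cdot \mathrm{pk} = (u_1 + u_2 \cdot \mathrm{sk}) \cdot G\), verification succeeds exactly when \(u_1 + u_2 \cdot \mathrm{sk} \equiv k \pmod{q}\). A sketch using the toy values from the signing example:

```python
q = 97
sk, k, e, r = 7, 3, 5, 6                  # toy values from the signing example

s = (pow(k, -1, q) * (e + r * sk)) % q    # signer's computation

# Verifier computes u1, u2; correctness hinges on the identity
# R' = u1*G + u2*pk = (u1 + u2*sk)*G, which equals R iff u1 + u2*sk ≡ k (mod q)
s_inv = pow(s, -1, q)
u1 = (e * s_inv) % q
u2 = (r * s_inv) % q
assert (u1 + u2 * sk) % q == k            # verifier recovers the nonce scalar
```

Algebraically, \(u_1 + u_2 \cdot \mathrm{sk} = s^{-1}(e + r \cdot \mathrm{sk}) = s^{-1} \cdot s \cdot k = k\), which is why \(R'\) lands back on \(R\).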
Challenges for MPC
While ECDSA is widely adopted, its signing process creates significant challenges for MPC protocols, particularly in dishonest majority settings. These challenges stem from the non-linear operations within the signing algorithm:
Secure Multiplication of Secrets: The term \(r \cdot \mathrm{sk}\) in calculating \(s = k^{-1} (e + r \cdot \mathrm{sk}) \pmod{q}\) requires multiplying two values that must remain secret in a threshold setting. Securely performing this multiplication in an MPC context is non-trivial.
Secure Modular Inversion of Secrets: Computing \(s\) involves the modular inversion of the secret nonce \(k\) (i.e., \(k^{-1} \pmod{q}\)). Secure inversion is generally a computationally expensive operation in MPC. Naïve approaches using arithmetic circuits for inversion are prohibitively inefficient.
These non-linearities contrast sharply with the linear structure of Schnorr signatures. Directly translating the standard ECDSA signing algorithm into a secure MPC protocol is inefficient and complex due to these operations. Consequently, significant research has been dedicated to developing techniques to circumvent or efficiently implement these non-linear operations.
Threshold Signatures and Secure Multi-Party Computation (MPC)
Motivation: Securing Bitcoin Bob’s Keys
Consider “Bitcoin Bob,” who stores significant Bitcoin wealth secured by a private key on his personal laptop. When Bob sends Bitcoin to his friend Alice, he must sign a transaction message using this private key, creating a critical vulnerability. If an attacker compromises Bob’s laptop, they could extract the private key.
The consequences of such a key compromise are severe:
- Stolen Funds: An attacker could forge transactions to transfer all of Bob’s Bitcoin
- Identity Theft: They could sign any message as if it came from Bob
- Long-Term Risk: If the compromised key is used for other purposes (secure communication, access control), the attacker gains persistent unauthorized access
This scenario illustrates the inherent risk of relying on a single device to safeguard a private key. Threshold signatures emerge as a powerful countermeasure to mitigate this single point of failure.
Threshold Signing Concept
Threshold signing enhances the security of digital signatures by distributing the private key across multiple participants or devices. Instead of a single entity holding the entire private key, in a threshold signature scheme, the private key is split into \(n\) shares, with each of \(n\) parties receiving one share.
A crucial parameter is the threshold value, denoted as \(t\). This defines the minimum number of parties required to cooperate to generate a valid signature. In a \((t+1, n)\)-threshold signature scheme, any group of \(t+1\) or more parties can jointly produce a signature. However, any coalition of \(t\) or fewer parties cannot generate a valid signature or reconstruct the complete private key.
The security benefits are significant:
- Distributed Risk: An attacker must compromise more than \(t\) parties to forge signatures
- Fault Tolerance: The system can tolerate the failure of up to \(t\) parties without losing signing capability
- Enhanced Security: By eliminating the single point of failure, threshold signatures significantly improve overall security
To generate a signature in a threshold scheme, the participating parties engage in a distributed protocol using MPC techniques. This allows them to compute a signature jointly without revealing their individual private key shares.
Adversary Model: Dishonest Majority
In threshold signature protocols, the adversary model defines the capabilities and limitations of attackers. A particularly strong model is the dishonest majority setting.
In this setting, we assume the adversary can corrupt and control up to \(n-1\) out of \(n\) total parties. This means the number of corrupted parties can be greater than or equal to half the total. In the most extreme scenario, only a single party might remain honest.
Furthermore, the adversary is considered malicious. A malicious adversary can actively and arbitrarily deviate from the protocol by:
- Sending malformed messages to disrupt the protocol
- Refusing to participate or halting execution
- Colluding with other adversarial parties
- Attempting to learn private information
Designing secure threshold ECDSA protocols in the dishonest majority setting is significantly more challenging than in weaker models like honest majority (where more than half the parties are honest). Protocols for dishonest majority must incorporate robust mechanisms to ensure correctness and security even against powerful, actively malicious adversaries.
Linear Secret Sharing and Operations
To enable secure distributed computation, secret sharing is a fundamental cryptographic technique. Among various schemes, linear secret sharing is particularly well-suited for MPC protocols due to algebraic properties that facilitate efficient computation of linear operations on secret values.
Additive Secret Sharing
Additive secret sharing is a simple yet powerful linear secret sharing scheme that’s conceptually straightforward and efficient for many MPC applications.
In additive secret sharing, to share a secret value \(x\), it’s split into multiple shares that sum to the original secret. For instance, to share \(x\) among two parties, we generate two random values \(x_A\) and \(x_B\) such that \(x = x_A + x_B\). Party A receives \(x_A\) as their share, and Party B receives \(x_B\).
Generalizing to \(n\) parties, we generate \(n-1\) random values \(x_1, x_2, \ldots, x_{n-1}\) and compute the \(n^{th}\) share as \(x_n = x - (x_1 + x_2 + \ldots + x_{n-1})\). The secret \(x\) is shared as \([x] = (x_1, x_2, \ldots, x_n)\), where party \(i\) holds share \(x_i\). By construction, \(x = \sum_{i=1}^{n} x_i\).
The crucial security property is privacy: individual shares reveal no information about the secret value itself. In additive secret sharing, any collection of up to \(n-1\) shares is uniformly random and provides no information about \(x\); only by combining all \(n\) shares can the secret be recovered.
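A minimal Python sketch of additive sharing and reconstruction (the modulus and function names are illustrative; a real scheme shares values modulo the group order \(q\)):

```python
import secrets

q = 2**255 - 19   # a large prime modulus standing in for the group order

def share(x, n):
    """Additively share secret x among n parties: shares sum to x mod q."""
    shares = [secrets.randbelow(q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % q)   # last share fixes the sum
    return shares

def reconstruct(shares):
    return sum(shares) % q

shares = share(42, 5)
assert reconstruct(shares) == 42
# Any n-1 of these shares are uniformly random and reveal nothing about 42.
```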
Linear Combinations of Secrets
A significant benefit of linear secret sharing schemes is the ability to perform linear operations on secret-shared values locally, without requiring interaction between parties.
Suppose we have two secret-shared values \(X = [x]\) and \(Y = [y]\), meaning each party \(i\) holds shares \(x_i\) and \(y_i\) respectively, such that \(x = \sum_i x_i\) and \(y = \sum_i y_i\). If we want to compute a linear combination \(Z = cX + dY = [cx + dy]\), where \(c\) and \(d\) are public constants, each party can compute their share of \(Z\) independently.
Party \(i\) computes their share as \(z_i = c \cdot x_i + d \cdot y_i\). Then, if we sum up all the shares \(z_i\), we get:
\[ \sum_{i} z_i = \sum_{i} (c \cdot x_i + d \cdot y_i) = c \sum_{i} x_i + d \sum_{i} y_i = c \cdot x + d \cdot y = z \]
Thus, \(Z = [z] = [cx + dy]\) is indeed the secret sharing of the linear combination. This computation is entirely local; parties don’t need to communicate.
Example:
Let’s say we have two secrets, \(x = 10\) and \(y = 5\), shared additively among three parties.
- Shares of \(x\): \(x_1 = 3, x_2 = 5, x_3 = 2\) (since \(3+5+2 = 10\))
- Shares of \(y\): \(y_1 = 1, y_2 = 2, y_3 = 2\) (since \(1+2+2 = 5\))
Suppose we want to compute \(Z = 2X + 3Y = [2x + 3y] = [2 \cdot 10 + 3 \cdot 5] = [35]\). Each party computes their share of \(Z\) locally:
- \(z_1 = 2x_1 + 3y_1 = 2 \cdot 3 + 3 \cdot 1 = 9\)
- \(z_2 = 2x_2 + 3y_2 = 2 \cdot 5 + 3 \cdot 2 = 16\)
- \(z_3 = 2x_3 + 3y_3 = 2 \cdot 2 + 3 \cdot 2 = 10\)
Checking the sum: \(z_1 + z_2 + z_3 = 9 + 16 + 10 = 35 = 2x + 3y\). So, \(Z = [35]\) is secret shared as \((9, 16, 10)\).
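The worked example above translates directly into code; each party’s computation is a purely local arithmetic step (here over the integers for readability, where a real scheme works mod the group order):

```python
# Additive shares of x = 10 and y = 5 among three parties
x_shares = [3, 5, 2]
y_shares = [1, 2, 2]
c, d = 2, 3              # public constants

# Each party i locally computes z_i = c*x_i + d*y_i -- no communication needed
z_shares = [c * xi + d * yi for xi, yi in zip(x_shares, y_shares)]

assert z_shares == [9, 16, 10]
assert sum(z_shares) == c * 10 + d * 5 == 35   # valid sharing of 2x + 3y
```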
This ability to perform linear operations locally is crucial for efficient MPC protocols, serving as building blocks for more complex computations while maintaining privacy.
Secure Two-Party Multiplication as a Building Block
While linear operations on secret shares are efficient and non-interactive, non-linear operations such as multiplication of secret values are more complex and typically require interaction. Secure two-party multiplication (\(\mathrm{2PMul}\)) is a core MPC primitive that allows two parties to compute the product of their secret inputs while revealing only shares of the product, and nothing else about the inputs themselves. It is a cornerstone for constructing many dishonest majority MPC protocols, including threshold ECDSA.
A \(\mathrm{2PMul}\) protocol involves two parties, Party A and Party B. Party A holds a secret value \(\alpha\), and Party B holds a secret value \(\beta\). The goal is for them to interact and compute shares of the product \(\gamma = \alpha \cdot \beta\). After executing the protocol, Party A receives a share \(C\), and Party B receives a share \(D\), such that \(C + D = \alpha \cdot \beta\). Crucially, the individual shares \(C\) and \(D\) should not reveal information about \(\alpha\) or \(\beta\).
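To make the interface concrete, here is a sketch of the \(\mathrm{2PMul}\) functionality as an idealized "trusted dealer" computation. This shows only *what* the primitive computes, not how a secure protocol achieves it; real instantiations (OT- or Paillier-based, as discussed below) produce the same output shares interactively, without any party ever seeing both inputs. The modulus and names are illustrative.

```python
import secrets

q = 2**61 - 1   # toy prime modulus standing in for the group order

def two_p_mul(alpha, beta):
    """Ideal 2PMul functionality: returns additive shares C, D with
    C + D = alpha * beta (mod q). A secure protocol computes this
    interactively so that neither party learns the other's input."""
    C = secrets.randbelow(q)           # Party A's share, uniformly random
    D = (alpha * beta - C) % q         # Party B's share completes the product
    return C, D

alpha, beta = 12345, 67890             # Party A's and Party B's secret inputs
C, D = two_p_mul(alpha, beta)
assert (C + D) % q == (alpha * beta) % q   # shares sum to the product
```

Because `C` is uniformly random, each share on its own is statistically independent of the inputs, which is exactly the privacy property the real protocols must preserve.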
There are several cryptographic techniques to instantiate \(\mathrm{2PMul}\) protocols:
Oblivious Transfer (OT) based \(\mathrm{2PMul}\): OT is a cryptographic primitive where a sender transmits one of several possible messages to a receiver but remains oblivious as to which message was chosen. OT-based \(\mathrm{2PMul}\) protocols are widely used and can be quite efficient, especially with OT extension techniques.
Additively Homomorphic Encryption (AHE) based \(\mathrm{2PMul}\): AHE schemes, such as Paillier encryption, allow for addition of encrypted values without decryption. Party A can encrypt its input \(\alpha\) and send the ciphertext to Party B, who homomorphically multiplies it with \(\beta\) and partially decrypts to generate shares of the product.
The choice of \(\mathrm{2PMul}\) instantiation significantly impacts the overall performance of a threshold ECDSA protocol, including efficiency, round complexity, and security assumptions. The optimal choice depends on application requirements, desired security level, and computational environment.
Evolution of Threshold ECDSA Techniques
The development of efficient and secure threshold ECDSA protocols has been a central focus in cryptographic research for over two decades. The inherent non-linearities of ECDSA make it significantly more challenging to distribute compared to Schnorr signatures. Let’s trace the evolution of techniques aimed at overcoming these challenges.
Rewriting ECDSA for MPC Friendliness: Taming Non-Linearities
A foundational strategy in designing threshold ECDSA protocols is to reformulate or “rewrite” the standard signing algorithm to make it more amenable to MPC. The goal is to minimize or eliminate expensive operations like secure modular inversion and simplify the requirements for secure multiplication.
Avoiding Secure Inversion: A Primary Goal
The modular inversion of the nonce \(k^{-1}\) in standard ECDSA (\(s = k^{-1}(e + r \cdot \mathrm{sk}) \pmod{q}\)) is a major bottleneck in MPC. Directly implementing modular inversion using generic MPC techniques results in prohibitively high computational costs. Early research focused on finding ways to reformulate ECDSA signing to circumvent the need for secure inversion.
The Inverted Nonce Method: Shifting the Inversion
The inverted nonce method, pioneered by Langford et al. and further developed by Gennaro et al., represents an early approach to address the secure inversion challenge.
Instead of computing \(s = k^{-1}(e + r \cdot \mathrm{sk})\), this method modifies the nonce generation step. In standard ECDSA, the nonce point is \(R = k \cdot G\). In the inverted nonce method, this becomes \(R = k^{-1} \cdot G\).
This seemingly small change has significant implications for the signing equation. The goal was to move the inversion from the final signature component calculation into the nonce point generation, hoping to compute \(k^{-1} \cdot G\) more efficiently than performing inversion within the MPC protocol.
However, this approach doesn’t eliminate the inversion challenge; it merely relocates it. Computing \(R = k^{-1} \cdot G\) securely still requires addressing the inversion of the secret nonce \(k\). Early works utilized techniques such as the Bar-Ilan-Beaver inversion trick, combined with redundant computation and masking, to handle this inverted nonce generation.
The ECDSA Tuple Method: Masking and Linearization
The ECDSA tuple method represents a more recent and conceptually cleaner approach to rewriting ECDSA for MPC. While implicitly present in earlier works like Lindell-Nof-Ranellucci, it was explicitly formalized by Abram et al.
This method introduces a random masking value \(\phi\) to linearize the signing equation. The core idea is to rewrite the intended signing value by separating it into a “numerator” and “denominator,” both masked by \(\phi\).
Let’s consider the standard ECDSA signature component \(s = k^{-1}(e + r \cdot \mathrm{sk}) \pmod{q}\). The ECDSA tuple method rewrites this as \(S = \frac{\alpha}{\beta}\), where both \(\alpha\) and \(\beta\) are masked by \(\phi\). If the original signing value was conceptually \(v = k^{-1}(e + r \cdot \mathrm{sk})\), then:
- Numerator: \(\alpha = \phi \cdot v = \phi \cdot k^{-1}(e + r \cdot \mathrm{sk})\)
- Denominator: \(\beta = \phi \cdot 1 = \phi\)
The signature \(S\) is obtained as the ratio \(\alpha / \beta\). By working with these masked values, the protocol primarily involves secure multiplications and linear operations, avoiding explicit secure inversion within the core MPC protocol. The random mask \(\phi\) ensures security and enables efficient verification.
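The algebraic cancellation behind this masking can be checked concretely. The sketch below uses one common reading of the tuple method, consistent with the \(\phi \cdot k\) and \(\phi \cdot \mathrm{sk}\) components: the masked denominator \(w = \phi \cdot k\) can be revealed and inverted publicly, so no secret value is ever inverted inside the MPC. All values and names here are toy illustrations; in the actual protocol, \(\phi\), \(k\), and \(\mathrm{sk}\) exist only as shares.

```python
import secrets

q = 2**61 - 1                      # toy modulus; real ECDSA uses the curve order
sk, k = 31415, 27182               # secret key and secret nonce (toy values)
e, r = 11235, 81321                # message hash and nonce x-coordinate (public)

s_standard = (pow(k, -1, q) * (e + r * sk)) % q   # requires inverting secret k

# Tuple-method view: mask with a random phi, then the only inversion is of a
# masked value that is safe to make public, never of a bare secret.
phi = secrets.randbelow(q - 1) + 1
w = (phi * k) % q                  # masked nonce: phi hides k, so w can be opened
u = (phi * sk) % q                 # masked secret key
s_masked = ((phi * e + r * u) * pow(w, -1, q)) % q

assert s_masked == s_standard      # phi cancels: phi*(e + r*sk) / (phi*k)
```

The assertion holds because \(\frac{\phi(e + r \cdot \mathrm{sk})}{\phi \cdot k} = \frac{e + r \cdot \mathrm{sk}}{k} = s\) for any nonzero \(\phi\), which is why the mask can be sampled freshly and randomly each time without affecting the output.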
Early Approaches and the Honest Majority Setting
Initial explorations into threshold ECDSA often operated under the assumption of an honest majority.
In the honest majority setting, it’s assumed that a majority of participating parties are honest and will faithfully follow the protocol. This is a weaker adversary model compared to dishonest majority.
This assumption significantly simplifies the design and analysis of MPC protocols. In honest majority settings, simpler MPC techniques and weaker security assumptions can often be tolerated.
Early threshold ECDSA protocols, developed by Langford et al., Gennaro et al., and others, often leveraged the inverted nonce method and employed MPC techniques efficient in honest majority settings. These protocols demonstrated the feasibility of threshold ECDSA but were limited by their reliance on the honest majority assumption.
Dishonest Majority Approaches and the Landscape of Tradeoffs
To achieve security in more realistic scenarios, particularly in the dishonest majority setting, research shifted towards more sophisticated techniques. Protocols for dishonest majority must withstand malicious adversaries who can arbitrarily deviate from the protocol and potentially control a majority of participants.
OT-based Secure Multiplication: Efficiency Focus
In contrast to Paillier-based methods, using Oblivious Transfer (OT) for secure multiplication offers different trade-offs, often prioritizing efficiency, as in a line of work beginning in 2018-2019.
This approach combines multiplicative rewriting of ECDSA with OT-based \(\mathrm{2PMul}\) protocols. OT-based \(\mathrm{2PMul}\) can be significantly faster than Paillier-based approaches, especially using OT extension techniques.
By employing OT for secure multiplication, researchers achieved a better balance between security and efficiency, with lower computational overhead and potentially lower latency. OT-based approaches represent a significant branch in threshold ECDSA evolution, focusing on practical efficiency while maintaining dishonest majority security.
Revisiting Inverted Nonce and Verification Challenges
The inverted nonce method continued to be explored for dishonest majority security. Gennaro and Goldfeder revisited this approach, while Canetti-Gennaro-Goldfeder-Makriyannis-Peled incorporated zero-knowledge proofs for verification at each protocol step.
Zero-knowledge proofs allow a party to prove computation correctness without revealing sensitive information. In threshold ECDSA, these proofs ensure honest behavior and correct execution. Parties can prove in zero-knowledge that they have correctly computed shares, performed multiplications accurately, and followed protocol specifications.
While zero-knowledge proofs provide strong security guarantees, they introduce significant computational overhead. Generating and verifying these proofs can be expensive, and complexity scales with the computation being proven. Protocols heavily relying on zero-knowledge proofs for verification tend to be slower and less efficient than those using lighter-weight verification techniques.
Dimensions of Variation: A Comparative Landscape
When comparing threshold ECDSA protocols, several key dimensions highlight different design choices and trade-offs.
ECDSA Rewriting Methods: Shaping the Computation
The choice of ECDSA rewriting method significantly impacts the structure and efficiency of a protocol:
Inverted Nonce Method: Modifies nonce generation to use \(R = k^{-1} \cdot G\), shifting the inversion challenge but requiring techniques to handle inverted nonce generation securely
Multiplicative Rewriting: Breaks down the ECDSA signing equation into components expressible through multiplicative secret sharing, often combined with Paillier encryption or OT-based multiplication
ECDSA Tuple Method: Uses a random mask \(\phi\) to rewrite the signing value as a ratio of masked numerator and denominator, creating a process relying primarily on secure multiplications and avoiding explicit inversions
Secure Multiplication Techniques: The MPC Engine
The choice of secure multiplication technique heavily influences efficiency and security:
Paillier Encryption-based \(\mathrm{2PMul}\): Uses Paillier encryption’s homomorphic properties for secure multiplication. While offering dishonest majority security, these methods can be computationally intensive
Oblivious Transfer (OT)-based \(\mathrm{2PMul}\): Employs OT protocols and extension techniques for efficient secure multiplication, often offering a better balance of efficiency and security
Generic MPC (Arithmetic Circuits): Expresses multiplication as an arithmetic circuit evaluated with generic MPC protocols, generally inefficient for ECDSA due to the complexity of modular operations
Verification Strategies: Ensuring Correctness in MPC
In dishonest majority settings, verification strategies ensure honest behavior and correct computation:
Checking Relations in the Exponent: Parties check publicly verifiable relationships between values, often in the exponent of elliptic curve groups
Interactive Signature Verification: Some protocols use interactive verification where parties collaboratively verify signature shares before revealing the final signature
Zero-Knowledge Proofs (ZKPs): Proving correctness of each step provides strong security and verifiability but introduces significant computational overhead
Committed Computation Replay: Performing computation twice—once in private MPC and again in committed form—then proving relationships between committed values
Message Authentication Codes (MACs): Generic MPC compilers often use MACs to ensure computation integrity
Statistical Consistency Checks: Lightweight checks leveraging probabilistic arguments to detect cheating with high probability, offering efficient verification with minimal overhead
These dimensions provide a framework for analyzing the evolution of threshold ECDSA protocols towards more efficient, secure, and practical solutions.
Our Contribution: Three-Round Threshold ECDSA Protocol
Our work introduces a novel three-round threshold ECDSA protocol specifically designed for the dishonest majority setting. The protocol prioritizes simplicity and efficiency, minimizing communication rounds and computational overhead through two key innovations.
Leveraging the ECDSA Tuple Method for Efficient MPC
Our protocol strategically employs the ECDSA tuple method as its foundation. This method transforms the inherently non-linear ECDSA signing equation into a form more amenable to efficient MPC, shifting the computational burden from expensive secure inversions towards secure multiplications that can be handled more efficiently.
By adopting this approach, we construct a protocol that’s inherently streamlined for MPC. The signing process primarily involves linear operations and secure two-party multiplications (\(\mathrm{2PMul}\)), making it well-suited for implementation using efficient MPC primitives. This strategic choice is crucial in achieving low-round complexity and minimizing computational cost.
A Novel Statistical Consistency Check for Lightweight Verification
A key contribution is our novel statistical consistency check for verification. Instead of complex zero-knowledge proofs or heavyweight MAC-based approaches, we introduce a simple and efficient pairwise check. This check provides probabilistic guarantees of correctness with overwhelmingly high probability while being significantly cheaper than deterministic methods.
Detailed Explanation of the Pairwise Verification Mechanism
Our pairwise verification mechanism ensures two critical properties:
Consistency of the Random Mask \(\phi\): The security of the ECDSA tuple method relies on the random mask \(\phi\) being consistently used across all related computations. The same \(\phi\) must be employed in generating both \(\phi \cdot K\) and \(\phi \cdot \mathrm{sk}\) components.
Correctness of Discrete Logarithm Inputs in Multiplication: The protocol involves secure multiplications where one input is expected to be the discrete logarithm of a publicly known elliptic curve point. For instance, when computing \(\phi \cdot K\), we must verify that \(K\) is indeed the discrete logarithm of the nonce point \(R = K \cdot G\). Similarly, for \(\phi \cdot \mathrm{sk}\), we need to ensure \(\mathrm{sk}\) is the discrete logarithm of the public key \(\mathrm{pk} = \mathrm{sk} \cdot G\).
Our pairwise verification mechanism efficiently addresses both requirements using a combination of protocol design and a lightweight statistical check.
Ensuring Consistency of the Random Mask \(\phi\) using VOLE
To guarantee the consistency of the random mask \(\phi\) across multiple secure multiplications, our protocol leverages a property inherent in certain \(\mathrm{2PMul}\) protocols, particularly those based on two-message Vector Oblivious Linear Evaluation (VOLE).
Vector Oblivious Linear Evaluation (VOLE) is a cryptographic primitive that allows for efficient generation of correlated random values. In our protocol, VOLE is used to ensure that the random mask \(\phi\) is consistently used across multiple secure multiplications.
Many efficient \(\mathrm{2PMul}\) protocols are structured as two-message protocols. In such protocols, the first message sent by one party often depends solely on its input to the multiplication. This characteristic allows for a clever optimization to ensure \(\phi\) consistency.
Imagine two parties, Party A and Party B, engaging in \(\mathrm{2PMul}\). If Party A’s first message in the protocol is solely determined by its input, and Party A needs to multiply the same input (related to \(\phi\)) with two different inputs from Party B (inputs related to \(K\) and \(\mathrm{sk}\)), Party A can simply reuse the same first message for both \(\mathrm{2PMul}\) instances.
By reusing the first message, Party A effectively commits to the same underlying input (related to \(\phi\)) for both multiplications. Party B, upon receiving the same first message in both instances, is assured that Party A is using the same input. This inherent property provides a natural and efficient way to enforce the consistency of \(\phi\) without requiring additional communication or complex mechanisms.
Statistical Verification of Discrete Logarithm Inputs
The statistical consistency check is designed to verify that when a party claims to provide an input to a \(\mathrm{2PMul}\) protocol that is supposed to be the discrete logarithm of a public elliptic curve point, they are indeed using the correct value. This is crucial for ensuring the integrity of the cryptographic operations.
Let’s consider a simplified two-party scenario involving a “Phone” and a “Laptop” for illustration. Suppose the Phone party inputs its secret value \(\phi\) and the Laptop party inputs its secret value \(K\) into a \(\mathrm{2PMul}\) protocol. After execution, they obtain additive output shares, \(T_{\phi}\) held by the Phone and \(T_{K}\) held by the Laptop, such that \(T_{\phi} + T_{K} = \phi \cdot K\). The Laptop party claims that its input is the value \(K\) that is the discrete logarithm of a public point \(R = K \cdot G\). We need to verify this claim.
The verification process proceeds as follows:
1. Exponentiation of Phone’s Share: The Phone party computes its share of the product in the exponent by performing elliptic curve scalar multiplication: \(\mathcal{T}_{\phi} = T_{\phi} \cdot G\), resulting in an elliptic curve point.
2. Exponentiation of Laptop’s Share: Similarly, the Laptop party computes its share in the exponent: \(\mathcal{T}_{K} = T_{K} \cdot G\).
3. Phone Computes Expected Value: The Phone party, knowing its share \(T_{\phi}\) and the public point \(R = K \cdot G\), can compute the expected value in the exponent. Since \(T_{\phi} + T_{K} = \phi \cdot K\), ideally, \(\mathcal{T}_{\phi} + \mathcal{T}_{K} = (\phi \cdot K) \cdot G = \phi \cdot (K \cdot G) = \phi \cdot R\). The Phone can calculate the expected value of \(\mathcal{T}_{K}\) as: \(\mathcal{T}_{K}^{\text{expected}} = (\phi \cdot R) - \mathcal{T}_{\phi}\).
4. Laptop Sends Actual Exponent Share: The Laptop sends its computed elliptic curve point \(\mathcal{T}_{K}^{\text{actual}} = \mathcal{T}_{K}\) to the Phone.
5. Verification Check: The Phone performs the statistical consistency check by comparing the expected value \(\mathcal{T}_{K}^{\text{expected}}\) with the actual value \(\mathcal{T}_{K}^{\text{actual}}\). If they are equal, the check passes, indicating a high probability of correctness. If not, the check fails, indicating a likely cheating attempt.
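The steps above can be simulated end to end. The sketch below uses the multiplicative group modulo a small prime as a stand-in for the elliptic curve, writing \(x \cdot G\) as \(g^x \bmod p\) and point addition as multiplication mod \(p\); the parameters and the directly sampled \(\mathrm{2PMul}\) shares are illustrative only, not a secure curve or a real two-party multiplication.

```python
# Toy simulation of the statistical consistency check.  The multiplicative
# group mod a prime stands in for the elliptic curve: "x * G" becomes
# g^x mod p, point addition becomes multiplication mod p.  Parameters are
# illustrative only -- this is NOT a secure group.
import secrets

P = 2**61 - 1        # toy prime modulus
ORDER = P - 1        # exponent modulus; plays the role of q
G = 3                # toy generator; plays the role of the base point

def point(x):
    """Stand-in for x * G: g^x mod p."""
    return pow(G, x, P)

# Setup: the Laptop's secret nonce K and the public point R = K * G.
K = secrets.randbelow(ORDER)
R = point(K)

# 2PMul output: additive shares of phi * K (sampled directly here,
# standing in for an actual two-party multiplication).
phi = secrets.randbelow(ORDER)
T_phi = secrets.randbelow(ORDER)           # Phone's output share
T_K = (phi * K - T_phi) % ORDER            # Laptop's output share

# Steps 1-2: each party lifts its share into the group.
cal_T_phi = point(T_phi)
cal_T_K_actual = point(T_K)                # the Laptop sends this (step 4)

# Step 3: the Phone computes the expected value  phi * R - cal_T_phi.
cal_T_K_expected = (pow(R, phi, P) * pow(cal_T_phi, -1, P)) % P

# Step 5: the consistency check passes for honest inputs.
assert cal_T_K_actual == cal_T_K_expected
print("check passed")
```

Note that only group elements cross the wire: the Laptop reveals \(\mathcal{T}_{K}\), never its scalar share \(T_{K}\).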
Security Analysis of the Statistical Check
The security of this statistical check hinges on the fact that if the Laptop attempts to cheat by providing an incorrect input \(K' = K + \delta\) to the \(\mathrm{2PMul}\) protocol (where \(\delta \neq 0\)), it is overwhelmingly unlikely that they will pass the verification check.
Let’s analyze a cheating scenario where the Laptop uses an incorrect input \(K' = K + \delta\). When the Phone computes the expected value \(\mathcal{T}_{K}^{\text{expected}} = (\phi \cdot R) - \mathcal{T}_{\phi}\), it assumes the Laptop used the correct input \(K\). However, if the Laptop used \(K' = K + \delta\), the actual value computed in the exponent will be \(\mathcal{T}_{K}^{\text{actual}} = (T_{K}') \cdot G\), where \(T_{\phi} + T_{K}' = \phi \cdot K'\).
Therefore: \(T_{K}' = \phi \cdot K' - T_{\phi} = \phi \cdot (K + \delta) - T_{\phi} = (\phi \cdot K - T_{\phi}) + \phi \cdot \delta = T_{K} + \phi \cdot \delta\)
Thus, \(\mathcal{T}_{K}^{\text{actual}} = (T_{K} + \phi \cdot \delta) \cdot G = \mathcal{T}_{K} + (\phi \cdot \delta) \cdot G = \mathcal{T}_{K} + (\phi \cdot G) \cdot \delta\).
The expected value computed by the Phone is \(\mathcal{T}_{K}^{\text{expected}} = (\phi \cdot R) - \mathcal{T}_{\phi} = (\phi \cdot K) \cdot G - \mathcal{T}_{\phi} = \mathcal{T}_{K}\).
The difference between actual and expected values is \(\mathcal{T}_{K}^{\text{actual}} - \mathcal{T}_{K}^{\text{expected}} = (\phi \cdot \delta) \cdot G\). For the check to pass when cheating, the Laptop must make \(\mathcal{T}_{K}^{\text{actual}} = \mathcal{T}_{K}^{\text{expected}}\), meaning \((\phi \cdot \delta) \cdot G = 0\). Since \(G\) is the generator of a group of order \(q\), and \(\delta \neq 0\), this equality holds only if \(\phi \cdot \delta \equiv 0 \pmod{q}\), or equivalently, \(\phi \equiv 0 \pmod{q}\) (since \(\delta \not\equiv 0 \pmod{q}\)).
However, \(\phi\) is chosen randomly from \(\mathbb{Z}_q\). The probability that a randomly chosen \(\phi \in \mathbb{Z}_q\) equals \(0 \pmod{q}\) is negligible, specifically \(1/q\). Since \(q\) is a large prime (around \(2^{256}\)), this probability is astronomically small (approximately \(2^{-256}\)).
Therefore, if the Laptop uses an incorrect input, the probability of passing the check is negligibly small. If it uses the correct input, the check always passes. This binary outcome makes the statistical check highly effective in practice.
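To see the analysis concretely, the self-contained sketch below has the Laptop run the multiplication on an incorrect input \(K' = K + \delta\). The multiplicative group modulo a toy prime again stands in for the elliptic curve (\(g^x \bmod p\) playing the role of \(x \cdot G\)); the offset \(\delta\) and all parameters are illustrative.

```python
# Toy demonstration that a cheating Laptop fails the check.  The
# multiplicative group mod a prime stands in for the elliptic curve
# (g^x mod p plays the role of x * G); parameters are illustrative only.
import secrets

P = 2**61 - 1
ORDER = P - 1
G = 3

K = secrets.randbelow(ORDER)
R = pow(G, K, P)                 # public point R = K * G
phi = secrets.randbelow(ORDER)   # random mask

delta = 12345                    # arbitrary nonzero cheating offset
K_bad = (K + delta) % ORDER      # Laptop's incorrect input K' = K + delta

# 2PMul run on the incorrect input: T_phi + T_K_bad = phi * K'
T_phi = secrets.randbelow(ORDER)
T_K_bad = (phi * K_bad - T_phi) % ORDER

cal_T_phi = pow(G, T_phi, P)
actual = pow(G, T_K_bad, P)      # what the cheating Laptop sends
expected = (pow(R, phi, P) * pow(cal_T_phi, -1, P)) % P

# actual differs from expected by (phi * delta) * G, which is nonzero
# except with negligible probability over the random choice of phi.
assert actual != expected
print("cheat detected")
```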
The random mask \(\phi\) acts as an information-theoretic MAC on the secret values \(K\) and \(\mathrm{sk}\). The check leverages this randomness to detect inconsistencies in the inputs to the secure multiplication, providing a lightweight yet robust verification mechanism.