feat: kzg batch verification #24

Merged · 40 commits · Jan 11, 2025
Changes from 22 commits
55a5bf0
adding bare changes for batch verification
Jan 4, 2025
259f6b0
adding some comments
anupsv Jan 4, 2025
4c4bdd0
adding more comments
anupsv Jan 4, 2025
4ecd566
moving back to sha2
Jan 6, 2025
cb75b3b
removing a test which is no longer needed. Removing methods no longer…
Jan 6, 2025
42bc913
updates to method visibility, updating tests
Jan 7, 2025
f3dd7f4
merging main
Jan 7, 2025
02fa7ca
fmt fixes
anupsv Jan 7, 2025
0807ce0
clean up
anupsv Jan 7, 2025
da5e5ad
cleanup, optimization, inline docs
anupsv Jan 7, 2025
b58174b
removing unwanted const
anupsv Jan 7, 2025
c1c2f70
more docs and cleanup
anupsv Jan 7, 2025
fa02398
formatting
anupsv Jan 7, 2025
914e059
removing unwanted comments
anupsv Jan 7, 2025
c74accf
merging main
anupsv Jan 8, 2025
f054c19
cargo fmt and clippy
anupsv Jan 8, 2025
90b3d13
adding test for point at infinity
anupsv Jan 8, 2025
486f7da
cleaner errors, cleanup
anupsv Jan 8, 2025
32716fe
adding another test case
anupsv Jan 8, 2025
73bc809
removing unwanted errors
anupsv Jan 8, 2025
618e098
adding fixes per comments
anupsv Jan 8, 2025
f7eb705
adding 4844 spec references
anupsv Jan 8, 2025
61e1744
comment fixes
anupsv Jan 9, 2025
1c2a79d
formatting, adding index out of bound check, removing print statement
anupsv Jan 10, 2025
baf5a44
removing unwanted test, adding test for evaluate_polynomial_in_evalua…
anupsv Jan 10, 2025
aa1ded9
moving test to bottom section
anupsv Jan 10, 2025
86ceab3
Update src/polynomial.rs
anupsv Jan 10, 2025
d67adec
Update src/kzg.rs
anupsv Jan 10, 2025
02194ba
Update src/kzg.rs
anupsv Jan 10, 2025
bfd2fac
Update src/kzg.rs
anupsv Jan 10, 2025
1c9bcac
Update src/helpers.rs
anupsv Jan 10, 2025
f6c07eb
updating deps, and toolchain to 1.84
anupsv Jan 10, 2025
7531098
removing errors test, no longer useful
anupsv Jan 10, 2025
f9bb219
adding to_byte_array arg explanation
anupsv Jan 10, 2025
5f9fa77
fmt fixes
anupsv Jan 10, 2025
cdeae7c
fmt and clippy fixes
anupsv Jan 10, 2025
0624dc6
fixing function names and fmt
anupsv Jan 10, 2025
c6f0bdd
clippy fixes
anupsv Jan 10, 2025
34b3223
Update src/helpers.rs
bxue-l2 Jan 10, 2025
808d3bf
changes based on comments and discussion
anupsv Jan 10, 2025
2 changes: 2 additions & 0 deletions Cargo.toml
@@ -31,6 +31,8 @@ ark-poly = { version = "0.5.0", features = ["parallel"] }
crossbeam-channel = "0.5"
num_cpus = "1.13.0"
sys-info = "0.9"
itertools = "0.13.0"
thiserror = "1.0"

[dev-dependencies]
criterion = "0.5"
6 changes: 3 additions & 3 deletions benches/bench_kzg_proof.rs
@@ -23,7 +23,7 @@ fn bench_kzg_proof(c: &mut Criterion) {
let index =
rand::thread_rng().gen_range(0..input_poly.len_underlying_blob_field_elements());
b.iter(|| {
kzg.compute_proof_with_roots_of_unity(&input_poly, index.try_into().unwrap())
kzg.compute_kzg_proof_with_known_z_fr_index(&input_poly, index.try_into().unwrap())
.unwrap()
});
});
@@ -37,7 +37,7 @@ fn bench_kzg_proof(c: &mut Criterion) {
let index =
rand::thread_rng().gen_range(0..input_poly.len_underlying_blob_field_elements());
b.iter(|| {
kzg.compute_proof_with_roots_of_unity(&input_poly, index.try_into().unwrap())
kzg.compute_kzg_proof_with_known_z_fr_index(&input_poly, index.try_into().unwrap())
.unwrap()
});
});
@@ -51,7 +51,7 @@ fn bench_kzg_proof(c: &mut Criterion) {
let index =
rand::thread_rng().gen_range(0..input_poly.len_underlying_blob_field_elements());
b.iter(|| {
kzg.compute_proof_with_roots_of_unity(&input_poly, index.try_into().unwrap())
kzg.compute_kzg_proof_with_known_z_fr_index(&input_poly, index.try_into().unwrap())
.unwrap()
});
});
6 changes: 3 additions & 3 deletions benches/bench_kzg_verify.rs
@@ -24,7 +24,7 @@ fn bench_kzg_verify(c: &mut Criterion) {
rand::thread_rng().gen_range(0..input_poly.len_underlying_blob_field_elements());
let commitment = kzg.commit_eval_form(&input_poly).unwrap();
let proof = kzg
.compute_proof_with_roots_of_unity(&input_poly, index.try_into().unwrap())
.compute_kzg_proof_with_known_z_fr_index(&input_poly, index.try_into().unwrap())
.unwrap();
let value_fr = input_poly.get_at_index(index).unwrap();
let z_fr = kzg.get_nth_root_of_unity(index).unwrap();
@@ -41,7 +41,7 @@ fn bench_kzg_verify(c: &mut Criterion) {
rand::thread_rng().gen_range(0..input_poly.len_underlying_blob_field_elements());
let commitment = kzg.commit_eval_form(&input_poly).unwrap();
let proof = kzg
.compute_proof_with_roots_of_unity(&input_poly, index.try_into().unwrap())
.compute_kzg_proof_with_known_z_fr_index(&input_poly, index.try_into().unwrap())
.unwrap();
let value_fr = input_poly.get_at_index(index).unwrap();
let z_fr = kzg.get_nth_root_of_unity(index).unwrap();
@@ -58,7 +58,7 @@ fn bench_kzg_verify(c: &mut Criterion) {
rand::thread_rng().gen_range(0..input_poly.len_underlying_blob_field_elements());
let commitment = kzg.commit_eval_form(&input_poly).unwrap();
let proof = kzg
.compute_proof_with_roots_of_unity(&input_poly, index.try_into().unwrap())
.compute_kzg_proof_with_known_z_fr_index(&input_poly, index.try_into().unwrap())
.unwrap();
let value_fr = input_poly.get_at_index(index).unwrap();
let z_fr = kzg.get_nth_root_of_unity(index).unwrap();
17 changes: 14 additions & 3 deletions src/blob.rs
@@ -3,11 +3,12 @@ use crate::{
polynomial::{PolynomialCoeffForm, PolynomialEvalForm},
};

/// A blob which is Eigen DA spec aligned.
/// A blob aligned with the Eigen DA specification.
/// TODO: we should probably move to a transparent repr like
/// <https://docs.rs/alloy-primitives/latest/alloy_primitives/struct.FixedBytes.html>
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Blob {
/// The binary data contained within the blob.
blob_data: Vec<u8>,
}

@@ -48,12 +49,22 @@ impl Blob {
&self.blob_data
}

/// Returns the length of the data in the blob.
/// Returns the length of the blob data.
///
/// This length reflects the size of the data, including any padding if applied.
///
/// # Returns
///
/// The length of the blob data as a `usize`.
pub fn len(&self) -> usize {
self.blob_data.len()
}

/// Checks if the blob data is empty.
/// Checks whether the blob data is empty.
///
/// # Returns
///
/// `true` if the blob data is empty, `false` otherwise.
pub fn is_empty(&self) -> bool {
self.blob_data.is_empty()
}
10 changes: 10 additions & 0 deletions src/consts.rs
@@ -1,3 +1,13 @@
pub const BYTES_PER_FIELD_ELEMENT: usize = 32;
pub const SIZE_OF_G1_AFFINE_COMPRESSED: usize = 32; // in bytes
pub const SIZE_OF_G2_AFFINE_COMPRESSED: usize = 64; // in bytes

pub const FIAT_SHAMIR_PROTOCOL_DOMAIN: &[u8] = b"EIGENDA_FSBLOBVERIFY_V1_"; // Adapted from 4844
pub const KZG_ENDIANNESS: Endianness = Endianness::Big; // Choose between Big or Little.

pub const RANDOM_CHALLENGE_KZG_BATCH_DOMAIN: &[u8] = b"EIGENDA_RCKZGBATCH___V1_"; // Adapted from 4844
#[derive(Debug, Clone, Copy)]
pub enum Endianness {
Big,
Little,
}
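The two domain tags above exist to domain-separate Fiat–Shamir challenges: the same input hashed under different protocol tags must yield unrelated challenges. A minimal std-only illustration of the idea (using `DefaultHasher` purely as a stand-in — the crate itself hashes with SHA-2, and `domain_separated_hash` is an illustrative name, not a function from this PR):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash `data` under a domain tag, so identical data fed to two
/// different protocols produces unrelated challenges.
/// (Toy stand-in: real transcripts use a cryptographic hash.)
fn domain_separated_hash(domain: &[u8], data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    domain.hash(&mut h);
    data.hash(&mut h);
    h.finish()
}
```

Hashing the same blob bytes under `FIAT_SHAMIR_PROTOCOL_DOMAIN` and under `RANDOM_CHALLENGE_KZG_BATCH_DOMAIN` then gives independent-looking challenges, which is what prevents a proof transcript from one protocol being replayed in the other.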
114 changes: 67 additions & 47 deletions src/errors.rs
@@ -1,64 +1,84 @@
use std::{error::Error, fmt};
use thiserror::Error;

#[derive(Clone, Debug, PartialEq)]
/// Errors related to Blob operations.
///
/// The `BlobError` enum encapsulates all possible errors that can occur during
/// operations on the `Blob` struct, such as padding and conversion errors.
#[derive(Clone, Debug, PartialEq, Error)]
pub enum BlobError {
/// A generic error with a descriptive message.
#[error("generic error: {0}")]
GenericError(String),
}

impl fmt::Display for BlobError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
BlobError::GenericError(ref msg) => write!(f, "generic error: {}", msg),
}
}
}

impl Error for BlobError {}

#[derive(Clone, Debug, PartialEq)]
/// Errors related to Polynomial operations.
///
/// The `PolynomialError` enum encapsulates all possible errors that can occur
/// during operations on the `Polynomial` struct, such as FFT transformations
/// and serialization errors.
#[derive(Clone, Debug, PartialEq, Error)]
pub enum PolynomialError {
SerializationFromStringError,

/// Error related to commitment operations with a descriptive message.
#[error("commitment error: {0}")]
CommitError(String),
GenericError(String),

/// Error related to Fast Fourier Transform (FFT) operations with a descriptive message.
#[error("FFT error: {0}")]
FFTError(String),

/// A generic error with a descriptive message.
#[error("generic error: {0}")]
GenericError(String),

/// Error indicating that the polynomial is already in the desired form.
#[error("incorrect form error: {0}")]
IncorrectFormError(String),
}
anupsv marked this conversation as resolved.
Show resolved Hide resolved

impl fmt::Display for PolynomialError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
PolynomialError::SerializationFromStringError => {
write!(f, "couldn't load string to fr vector")
},
PolynomialError::CommitError(ref msg) => write!(f, "Commitment error: {}", msg),
PolynomialError::FFTError(ref msg) => write!(f, "FFT error: {}", msg),
PolynomialError::GenericError(ref msg) => write!(f, "generic error: {}", msg),
PolynomialError::IncorrectFormError(ref msg) => {
write!(f, "Incorrect form error: {}", msg)
},
}
}
}
/// Errors related to KZG operations.
///
/// The `KzgError` enum encapsulates all possible errors that can occur during
/// KZG-related operations, including those from `PolynomialError` and `BlobError`.
/// It also includes additional errors specific to KZG operations.
#[derive(Clone, Debug, PartialEq, Error)]
pub enum KzgError {
/// Wraps errors originating from Polynomial operations.
#[error("polynomial error: {0}")]
PolynomialError(#[from] PolynomialError),

impl Error for PolynomialError {}
#[error("MSM error: {0}")]
MsmError(String),

#[derive(Clone, Debug, PartialEq)]
pub enum KzgError {
CommitError(String),
/// Wraps errors originating from Blob operations.
#[error("blob error: {0}")]
BlobError(#[from] BlobError),

/// Error related to serialization with a descriptive message.
#[error("serialization error: {0}")]
SerializationError(String),
FftError(String),

/// Error related to commitment processes with a descriptive message.
#[error("not on curve error: {0}")]
NotOnCurveError(String),

/// Error indicating an invalid commit operation with a descriptive message.
#[error("commit error: {0}")]
CommitError(String),

/// Error related to Fast Fourier Transform (FFT) operations with a descriptive message.
#[error("FFT error: {0}")]
FFTError(String),

/// A generic error with a descriptive message.
#[error("generic error: {0}")]
GenericError(String),
}

impl fmt::Display for KzgError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
KzgError::CommitError(ref msg) => write!(f, "Commitment error: {}", msg),
KzgError::SerializationError(ref msg) => write!(f, "Serialization error: {}", msg),
KzgError::FftError(ref msg) => write!(f, "FFT error: {}", msg),
KzgError::GenericError(ref msg) => write!(f, "Generic error: {}", msg),
}
}
}
/// Error indicating an invalid denominator scenario, typically in mathematical operations.
#[error("invalid denominator")]
InvalidDenominator,

impl Error for KzgError {}
/// Error indicating an invalid input length scenario, typically in data processing.
#[error("invalid input length")]
InvalidInputLength,
}
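The `#[from]` attributes are what let `?` convert a `PolynomialError` or `BlobError` into a `KzgError` automatically at call sites. Hand-written, `thiserror`'s derive expands to roughly the following `From` and `Display` impls — a std-only sketch with simplified, renamed variants (`PolyErr`/`KzgErr` are illustrative names, not the crate's actual expansion):

```rust
use std::fmt;

#[derive(Debug, PartialEq)]
enum PolyErr {
    FFTError(String),
}

#[derive(Debug, PartialEq)]
enum KzgErr {
    PolynomialError(PolyErr),
}

// What `#[from]` generates: a From impl that `?` applies implicitly.
impl From<PolyErr> for KzgErr {
    fn from(e: PolyErr) -> Self {
        KzgErr::PolynomialError(e)
    }
}

// Approximately what `#[error("polynomial error: {0}")]` generates.
impl fmt::Display for KzgErr {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            KzgErr::PolynomialError(e) => write!(f, "polynomial error: {:?}", e),
        }
    }
}

fn fft_step() -> Result<(), PolyErr> {
    Err(PolyErr::FFTError("length not a power of two".into()))
}

/// `?` converts the inner error via the From impl above.
fn kzg_op() -> Result<(), KzgErr> {
    fft_step()?;
    Ok(())
}
```

This is why the rewrite can delete all the manual `impl fmt::Display` / `impl Error` blocks: the derive produces them.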
93 changes: 85 additions & 8 deletions src/helpers.rs
@@ -1,13 +1,17 @@
use ark_bn254::{Fq, Fq2, Fr, G1Affine, G1Projective, G2Affine, G2Projective};
use ark_ec::AffineRepr;
use ark_ec::{AffineRepr, CurveGroup, VariableBaseMSM};
use ark_ff::{sbb, BigInt, BigInteger, Field, LegendreSymbol, PrimeField};
use ark_std::{str::FromStr, vec::Vec, One, Zero};
use crossbeam_channel::Receiver;
use std::cmp;

use crate::{
arith,
consts::{BYTES_PER_FIELD_ELEMENT, SIZE_OF_G1_AFFINE_COMPRESSED, SIZE_OF_G2_AFFINE_COMPRESSED},
consts::{
Endianness, BYTES_PER_FIELD_ELEMENT, KZG_ENDIANNESS, SIZE_OF_G1_AFFINE_COMPRESSED,
SIZE_OF_G2_AFFINE_COMPRESSED,
},
errors::KzgError,
traits::ReadPointFromBytes,
};
use ark_ec::AdditiveGroup;
@@ -117,25 +121,51 @@ pub fn to_fr_array(data: &[u8]) -> Vec<Fr> {
}

pub fn to_byte_array(data_fr: &[Fr], max_data_size: usize) -> Vec<u8> {
// Calculate the number of field elements in input
let n = data_fr.len();

// Calculate actual data size as minimum of:
// - Total size needed for all elements (n * bytes per element)
// - Maximum allowed size
let data_size = cmp::min(n * BYTES_PER_FIELD_ELEMENT, max_data_size);

// Initialize output buffer with zeros
// Size is determined by data_size calculation above
let mut data = vec![0u8; data_size];

// Iterate through each field element
// Using enumerate().take(n) to process elements up to n
for (i, element) in data_fr.iter().enumerate().take(n) {
let v: Vec<u8> = element.into_bigint().to_bytes_be();
// Convert field element to bytes based on configured endianness
let v: Vec<u8> = match KZG_ENDIANNESS {
Endianness::Big => element.into_bigint().to_bytes_be(), // Big-endian conversion
Endianness::Little => element.into_bigint().to_bytes_le(), // Little-endian conversion
};

// Calculate start and end indices for this element in output buffer
let start = i * BYTES_PER_FIELD_ELEMENT;
let end = (i + 1) * BYTES_PER_FIELD_ELEMENT;

if end > max_data_size {
// Handle case where this element would exceed max_data_size
// Calculate how many bytes we can actually copy
let slice_end = cmp::min(v.len(), max_data_size - start);

// Copy partial element and break the loop
// We can't fit any more complete elements
data[start..start + slice_end].copy_from_slice(&v[..slice_end]);
break;
} else {
// Normal case: element fits within max_data_size
// Calculate actual end index considering data_size limit
let actual_end = cmp::min(end, data_size);

// Copy element bytes to output buffer
// Only copy up to actual_end in case this is the last partial element
data[start..actual_end].copy_from_slice(&v[..actual_end - start]);
}
}

data
}
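The truncation logic in `to_byte_array` — copy whole elements until the next one would overflow `max_data_size`, then copy the partial element that fits and stop — can be exercised with a toy 4-byte element width. This mirrors only the control flow, not the `Fr` encoding; `toy_to_byte_array` is an illustrative name, not a function from the crate:

```rust
const TOY_BYTES_PER_ELEMENT: usize = 4;

/// Pack fixed-width big-endian elements into at most `max_data_size`
/// bytes, truncating the final element if it only partially fits.
/// Mirrors the control flow of `to_byte_array`, with u32 "elements".
fn toy_to_byte_array(elems: &[u32], max_data_size: usize) -> Vec<u8> {
    let data_size = std::cmp::min(elems.len() * TOY_BYTES_PER_ELEMENT, max_data_size);
    let mut data = vec![0u8; data_size];
    for (i, e) in elems.iter().enumerate() {
        let v = e.to_be_bytes();
        let start = i * TOY_BYTES_PER_ELEMENT;
        let end = start + TOY_BYTES_PER_ELEMENT;
        if end > max_data_size {
            // Partial copy of the last element that fits, then stop:
            // no further complete element can fit.
            let slice_end = std::cmp::min(v.len(), max_data_size - start);
            data[start..start + slice_end].copy_from_slice(&v[..slice_end]);
            break;
        }
        // Normal case: the whole element fits.
        data[start..end].copy_from_slice(&v);
    }
    data
}
```

With `max_data_size = 6` and two elements, the second element is cut to its first two bytes — exactly the partial-element branch the comments above describe.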

@@ -179,11 +209,6 @@ pub fn lexicographically_largest(z: &Fq) -> bool {
let tmp = arith::montgomery_reduce(&z.0 .0[0], &z.0 .0[1], &z.0 .0[2], &z.0 .0[3]);
let mut borrow: u64 = 0;

// (_, borrow) = sbb(tmp.0, 0x9E10460B6C3E7EA4, 0);
// (_, borrow) = sbb(tmp.1, 0xCBC0B548B438E546, borrow);
// (_, borrow) = sbb(tmp.2, 0xDC2822DB40C0AC2E, borrow);
// (_, borrow) = sbb(tmp.3, 0x183227397098D014, borrow);

sbb!(tmp.0, 0x9E10460B6C3E7EA4, &mut borrow);
sbb!(tmp.1, 0xCBC0B548B438E546, &mut borrow);
sbb!(tmp.2, 0xDC2822DB40C0AC2E, &mut borrow);
@@ -374,3 +399,55 @@ pub fn is_on_curve_g2(g2: &G2Projective) -> bool {
right += &tmp;
left == right
}

/// Computes powers of a field element up to a given exponent.
/// Ref: https://github.com/ethereum/consensus-specs/blob/master/specs/deneb/polynomial-commitments.md#compute_powers
///
/// For a given field element x, computes [1, x, x², x³, ..., x^(count-1)]
///
/// # Arguments
/// * `base` - The field element to compute powers of
/// * `count` - The number of powers to compute (0 to count-1)
///
/// # Returns
/// * Vector of field elements containing powers: [x⁰, x¹, x², ..., x^(count-1)]
pub fn compute_powers(base: &Fr, count: usize) -> Vec<Fr> {
// Pre-allocate vector to avoid reallocations
let mut powers = Vec::with_capacity(count);

// Start with x⁰ = 1
let mut current = Fr::one();

// Compute successive powers by multiplying by base
for _ in 0..count {
// Add current power to vector
powers.push(current);
// Compute next power: x^(i+1) = x^i * x
current *= base;
}

powers
}
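The same running-product loop, shown over a toy prime field — arithmetic mod 97 stands in for `Fr`, and `toy_compute_powers` is an illustrative name, not part of the crate:

```rust
const TOY_MODULUS: u64 = 97;

/// [1, x, x^2, ..., x^(count-1)] mod TOY_MODULUS, using the same
/// running-product loop as `compute_powers`: each iteration pushes
/// the current power, then multiplies by the base once.
fn toy_compute_powers(base: u64, count: usize) -> Vec<u64> {
    let mut powers = Vec::with_capacity(count);
    let mut current = 1u64; // x^0
    for _ in 0..count {
        powers.push(current);
        current = current * base % TOY_MODULUS;
    }
    powers
}
```

One multiplication per power, rather than recomputing `x^i` from scratch each time, is the point of the loop; batch verification uses such powers of a single random challenge to weight the individual proofs.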

/// Computes a linear combination of G1 points weighted by scalar coefficients.
///
/// Given points P₁, P₂, ..., Pₙ and scalars s₁, s₂, ..., sₙ
/// Computes: s₁P₁ + s₂P₂ + ... + sₙPₙ
/// Uses Multi-Scalar Multiplication (MSM) for efficient computation.
///
/// # Arguments
/// * `points` - Array of G1 points in affine form
/// * `scalars` - Array of field elements as scalar weights
///
/// # Returns
/// * Single G1 point in affine form representing the linear combination
pub fn g1_lincomb(points: &[G1Affine], scalars: &[Fr]) -> Result<G1Affine, KzgError> {
// Use MSM (Multi-Scalar Multiplication) for efficient linear combination
// MSM is much faster than naive point addition and scalar multiplication
let lincomb =
G1Projective::msm(points, scalars).map_err(|e| KzgError::MsmError(e.to_string()))?;

// Convert result back to affine coordinates
// This is typically needed as most protocols expect points in affine form
Ok(lincomb.into_affine())
}
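The linear combination itself is simple; MSM only computes it faster than n separate scalar multiplications followed by additions. With the additive group Z/97 standing in for G1 (so "scalar multiplication" is ordinary multiplication mod 97), the definition `s₁P₁ + … + sₙPₙ` reads as below — `toy_lincomb` is an illustrative name, and the length check mirrors the error path that `G1Projective::msm` reports for mismatched inputs:

```rust
const TOY_P: u64 = 97;

/// s1*P1 + s2*P2 + ... + sn*Pn in the additive group Z/TOY_P,
/// a stand-in for the G1 linear combination that MSM computes.
fn toy_lincomb(points: &[u64], scalars: &[u64]) -> Result<u64, String> {
    if points.len() != scalars.len() {
        // Mirrors msm's error on mismatched input lengths.
        return Err("points and scalars must have equal length".into());
    }
    Ok(points
        .iter()
        .zip(scalars)
        .fold(0u64, |acc, (p, s)| (acc + p * s) % TOY_P))
}
```

In batch verification this is the step that folds many commitments (or proofs), weighted by powers of the random challenge, into a single group element checked with one pairing equation.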