In the `compute_new_claim()` method, we allocate new polynomials with zeroed memory in order to batch the instance and the accumulator:
```cpp
auto new_non_shifted_polynomial = bb::Polynomial<FF>(key.circuit_size);
new_non_shifted_polynomial += key.polynomials.batched_unshifted_instance;
new_non_shifted_polynomial.add_scaled(key.polynomials.batched_unshifted_accumulator, claim_batching_challenge);
auto new_shifted_polynomial = bb::Polynomial<FF>::shiftable(key.circuit_size);
new_shifted_polynomial += key.preshifted_instance;
new_shifted_polynomial.add_scaled(key.preshifted_accumulator, claim_batching_challenge);
```
This can be done more efficiently by overwriting either the instance or the accumulator polynomial with the result, depending on their sizes. If the instance is >= the accumulator, this is ~7x faster than the baseline; otherwise, it is ~3x faster. Note that the baseline is ~3500 microseconds for polynomials of sizes $2^{16}$ and $2^{17}$. This optimization would be worthwhile if we have many transactions with larger circuits.