Jeffrey Scholz


A Quadratic Arithmetic Program (QAP) is a system of equations where the coefficients are monovariate polynomials and a valid solution results in a single polynomial equality. They are quadratic because they have exactly one polynomial multiplication.

QAPs are the primary reason ZK-Snarks are able to be succinct.

A program can be represented as an R1CS, but checking it is not succinct because of the many operations required by the matrix multiplications.

Polynomials allow us to make statements succinctly due to the Schwartz-Zippel Lemma – in a sense, we can compress a polynomial down to a single point.

The key idea behind creating a Quadratic Arithmetic Program is as follows:

1. Operations in R1CS (addition and Hadamard product) form an algebraic ring when viewed as a set of column vectors (why this is the case will be explained later)

2. Polynomials under addition and multiplication are rings

3. There exists an easily computable homomorphism from R1CS to polynomials

Once we have a monovariate polynomial statement with only one polynomial equality, we can evaluate it succinctly.

## R1CS Notation

In the context of an R1CS, we have three matrices L, R, and O with n rows and m columns, and a solution vector s of dimension m such that Ls ⊙ Rs = Os, where ⊙ is the Hadamard product (element-wise multiplication).

## Schwartz-Zippel Lemma - review

The Schwartz-Zippel Lemma states that if two polynomials are randomly evaluated at the same x value, and their y value is the same, then one can be nearly certain that they are the same polynomial.

In other words, let τ be a randomly chosen point. If P1(τ) = P2(τ), then P1 = P2 with overwhelming probability.

Hey wait, can’t we just compare the coefficients of P1 and P2? Well, that would not be succinct; we’d be back where we started, doing element-wise comparison on matrices. We want to evaluate the polynomials, and then compare the evaluations.
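Here is a quick sketch of the lemma in action, using made-up polynomials. Equal polynomials agree at every point; polynomials that differ in even one coefficient almost never agree at a randomly chosen point:

```
import numpy as np
from numpy import poly1d

p1 = poly1d([5, 0, -2, 1])   # 5x^3 - 2x + 1
p2 = poly1d([5, 0, -2, 1])   # the same polynomial, written separately
p3 = poly1d([5, 1, -2, 1])   # differs in a single coefficient

tau = np.random.uniform(1, 10**6)  # random evaluation point

print(p1(tau) == p2(tau))  # True: equal polynomials agree everywhere
print(p1(tau) == p3(tau))  # False with overwhelming probability
```

One evaluation replaces a coefficient-by-coefficient comparison, which is the whole point.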

Remember, when I’m trying to prove I’ve carried out a calculation, I’m claiming I know a vector s such that Ls⊙Rs = Os. We already know the machinery to make this zero knowledge (use elliptic curves), but we don’t know how to make it succinct. The verifier still has to do a lot of math multiplying out the matrices (made worse by the fact that the verifier is doing it in elliptic curve space).

Let’s say I have a transformation poly and a random value τ. I do the following:

u = poly(Ls)(τ)
v = poly(Rs)(τ)
w = poly(Os)(τ)

If uv = w, then by Schwartz-Zippel, the polynomials are the same, and indeed Ls⊙Rs = Os

Better yet, we turn u, v, and w into elliptic curve points [A], [B], [C] to hide their values, and the verifier pairs [A] with [B] and compares the result to [C], so we can achieve zero knowledge.

Let’s restate that. The goal is to prove we know an s such that Ls ⊙ Rs = Os using only 3 values.

That of course sounds crazy, but that’s where we are headed! There are some obvious problems here, namely that we could easily invent some [A], [B], and [C] that satisfy pairing([A], [B]) = [C] with no regard for the R1CS. That issue will be solved in due time.

(An implementation detail for the above is that [A] needs to be a G1 point, [B] needs to be a G2 point, and [C] needs to be a G12 point, per our bilinear pairing tutorial, but let’s not get bogged down in the details too early).

For now however, we need to figure out how to transform an R1CS into a single polynomial expression.

Going from vector multiplication to polynomials is straightforward when the problem is stated as a homomorphism between algebraic rings.

## Ring definition

A ring is a set with two binary operators □ and ◇ where

• the set is an abelian group under □

• ◇ is closed

• ◇ is associative

• ◇ has an identity element

• (a □ b) ◇ c = (a ◇ c) □ (b ◇ c) and c ◇ (a □ b) = (c ◇ a) □ (c ◇ b) (distributive property)

It is not required for ◇ to have an inverse, and in general, rings do not have an inverse for the second binary operator.

So another way we can think about it is that a ring is a set with two binary operators □ and ◇ where

• the set is an abelian group under □

• the set is a monoid under ◇ (no inverse)

• (a □ b) ◇ c = (a ◇ c) □ (b ◇ c) and c ◇ (a □ b) = (c ◇ a) □ (c ◇ b) (distributive property)

## Ring operators

Vectors, the way the R1CS uses them, form a ring. A ring must:

• support closed, associative, and commutative addition

• have an identity element for addition

• each element must have an inverse for addition

• support closed and associative multiplication

• have an identity element for multiplication

Vector math in an R1CS forms a ring. Let’s explain this. Our ring is the set of n-dimensional column vectors. The addition operation is element-wise addition, which obviously produces another n-dimensional vector. Addition is obviously associative and abelian, so let’s not bother demonstrating that.

The identity element for addition is a vector of all zeros, and the inverse of a vector is the same vector with every sign flipped. Multiplication is element-wise multiplication, or the Hadamard product, which we denote with the operator ⊙ to avoid confusing it with the dot product. Hey wait a second, weren’t we using matrix multiplication with the R1CS? Why are we suddenly introducing the Hadamard product?

We will also see later how we can re-write the matrix multiplication in terms of vector addition and the Hadamard product, so don’t worry about this for now.

What you should care about is that we do not have multiplicative inverses, which is why our algebraic structure here is a ring and not a field. Specifically, if one of the elements is zero, then inverting that element means we would have to divide by zero, which we cannot do.

Hence, n-dimensional vectors of real numbers under the binary operators vector addition and Hadamard product are a ring.
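A minimal numpy sketch of these ring axioms, using two arbitrary 3-dimensional vectors:

```
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# addition: element-wise, closed, with identity [0, 0, 0] and inverse -a
print(a + b)                    # [5 7 9]
print(a + np.array([0, 0, 0]))  # [1 2 3], additive identity
print(a + (-a))                 # [0 0 0], additive inverse

# "multiplication": the Hadamard product, closed and associative
print(a * b)                    # [ 4 10 18]
print(a * np.array([1, 1, 1]))  # [1 2 3], multiplicative identity

# no multiplicative inverse in general: a vector like [1, 0, 3] cannot be
# inverted element-wise because of the zero entry
```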

### Polynomials under polynomial addition and polynomial multiplication are also a ring

To be specific, we are talking about polynomials with real number coefficients. Polynomial exponents are by definition nonnegative integers; raising x to a negative power is not valid.

Polynomial addition is what you would expect, and it’s obviously associative and abelian. Zero is the identity element for addition because adding it doesn’t change the original polynomial, and the inverse of a polynomial is the same polynomial with the signs of its coefficients flipped. And we can indeed multiply polynomials. This of course means the degree will get larger (usually), but we didn’t specify that our polynomials have to be of a certain degree. Polynomials are a bit annoying to multiply by hand, so we will use python to accomplish that:

```
from numpy import poly1d

p1 = poly1d([9, 4, -6])
p2 = poly1d([4, 2])
print(p1 * p2)
# 36 x^3 + 34 x^2 - 16 x - 12
```

The identity element for polynomials under multiplication is 1 (1 is a polynomial of degree zero).

### What about scalar multiplication of vectors and polynomials?

Well, this could really throw a wrench in our engine couldn’t it? What kind of an algebraic structure supports three binary operators?

Scalar multiplication is not a third operator. It is shorthand for hadamard product with a vector where each entry is the scalar.

Let’s think back to our elliptic curves. Scalar multiplication there is not real multiplication – it’s just repeated addition. In this case, scalar multiplication of a vector is just the Hadamard product of the vector with another vector where every entry is the value of the scalar. For polynomials, multiplying by a scalar is the same as multiplying by a degree zero polynomial whose coefficient has the same value as the scalar. The vector ring and the polynomial ring look pretty similar, don’t they? Wouldn’t it be nice if there was a homomorphism from the vectors to the polynomials?

There is.

Theorem: there exists a Ring Homomorphism from column vectors of dimension n with real number elements to polynomials with real coefficients.

Proof: I’m going to trigger the mathematicians by not putting one here. Let’s move on.

### A set theoretic definition of polynomial arithmetic

In primary school, we are conditioned to think about polynomial addition as adding coefficients of the same power together, and polynomial multiplication as doing a distributive operation.

Of course this is how we compute polynomial math on a computer, but to appreciate what is going on here, we want to revisit the graphical, or set theoretic definition of polynomial arithmetic.

Instead of thinking of a polynomial as y = ax^2 + bx + c, let’s think of it as an infinite set consisting of pairs (x, y), where the (x, y) point satisfies the polynomial equation. In our infinite set perspective, we don’t have a notion of polynomial degree or polynomial coefficient, it’s literally just a massive bag of pairs that happen to fall along a pretty curve on an xy plane.

If we want to add two polynomials together from a set theoretic perspective, we of course need to define a relation that combines the two different sets. Hopefully, you are thinking of a cartesian product here!

Let’s define our two polynomials as P1 and P2, then take their cartesian product P1 × P2. An element of the cartesian product has the form ((x_p1, y_p1), (x_p2, y_p2)). We take the subset of the cartesian product where x_p1 = x_p2 and discard the rest. Then, for each remaining element, we produce a new pair (x, y_p1 + y_p2).

In other words, we’re combining matching pairs of values from two polynomials (cartesian product), adding their “y” values together, and making a new pair with the shared x-value and summed “y” value.

Thinking about this graphically, it means drawing a vertical line, then adding the y values of the two polynomials at that particular x value, then drawing a new curve with the new y value. It just so happens that this new polynomial will have the same y = ax^2 + bx + c form if you had added the coefficients together instead.

From a set theoretic perspective, it is a total coincidence that the degree of the new polynomial produced by summing is at most max(deg(P1), deg(P2)). Similarly, it is a total coincidence that the degree of multiplying P1 and P2 is at most deg(P1) + deg(P2). We are just combining a set of points according to a set of rules and getting a new set.

We want to have this mental model because when we add (or do a Hadamard product) of two vectors, we are doing the same thing: combining a set of points according to a rule defined in our set relation (the subset of the cartesian product). The only difference is that the polynomial has infinite points, and the vector has finite points.
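We can check numerically that the set-theoretic (pointwise) view and the coefficient view of polynomial addition agree. This sketch uses two arbitrary quadratics:

```
import numpy as np
from numpy import poly1d

p1 = poly1d([1, -4, 4])     # x^2 - 4x + 4
p2 = poly1d([-4, 18, -15])  # -4x^2 + 18x - 15

# set-theoretic view: match up points with the same x and add their y values
xs = np.linspace(-5, 5, 50)
pointwise_sum = p1(xs) + p2(xs)

# coefficient view: add coefficients of matching powers
coeff_sum = p1 + p2         # -3x^2 + 14x - 11

# the two views agree at every x value we sample
print(np.allclose(pointwise_sum, coeff_sum(xs)))  # True
```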

The next natural question is, how do we define the function that gets us from vectors to polynomials?

## Computing the transformation function for the homomorphism

A polynomial contains an infinite amount of points, but an n-dimensional vector only encodes a finite number of points (n, to be specific). To make the polynomial convey the same information as a vector, we decide on n predetermined values of x whose y values will represent the elements of the original vector.

Theorem: Given n points on a cartesian (x, y) plane with distinct x values, they can be uniquely interpolated by a polynomial of degree at most n - 1. If the degree is not constrained, an infinite number of polynomials of degree n or higher can interpolate those points.

Example: A single point can be uniquely interpolated by a horizontal line (a polynomial of degree zero). Two points can be uniquely interpolated by a straight line (a polynomial of degree one). This pattern holds for any number of points.

To make this a little more concrete, if we are encoding n-dimensional vectors as polynomials, we need n predetermined points. Let’s say n = 3. We will then pick the points x = 1, x = 2, and x = 3 (this is arbitrary, but those values are convenient). If we are trying to encode [4, 12, 6], then we want a polynomial that travels through the following cartesian points:

(1, 4), (2, 12), (3, 6)

There are a lot of algorithms for interpolating polynomial points, but lagrange interpolation is probably the most well known, so we will use that.

```
import numpy as np
from scipy.interpolate import lagrange

x = np.array([1,2,3])
y = np.array([4,12,6])

polyC = lagrange(x, y)

print(polyC)
# -7 x^2 + 29 x - 18

# check that it passes through x = {1,2,3} as expected
print(polyC(1))
# 4.0
print(polyC(2))
# 12.0
print(polyC(3))
# 6.0
```

So ultimately, we can transform the vector [4, 12, 6] into the polynomial -7x^2 + 29x - 18.

### But what about the infinite number of solutions?

Our original vector has 3 points, but our transformation to a polynomial has infinite points. Doesn’t it seem like we are adding information that should not be there?

As long as we stick to only looking at the y values of the polynomial at x = 1,2,3, nothing changes.

There is only one three-dimensional vector that represents [4, 12, 6], but there are an infinite number of polynomials that interpolate (1,4), (2,12), and (3,6). This is not a problem, however, as we are only going to look at those three specific x points.

Another point: it might seem very arbitrary that we mapped the first dimension of the vector to the y value at x = 1, the second dimension to the y value at x = 2, and so forth. However, this is totally valid as long as we remain consistent in our choice of x values.

Going back to our set theoretic definition, a vector can be conceptualized as a set of pairs where the first entry is the dimension and the second entry is the value. This is extremely similar (ahem, homomorphic) to the polynomial encoding. The “dimension” is the x value and the y value is the value of the “vector” at that dimension.

## Showing binary operator equivalence between vectors and interpolating polynomials

In this section, we will be using examples, not proofs as these will demonstrate the concepts more clearly. The interested reader can work out the proof on their own or ask Google or their favorite chatbot for the proof.

When we say “polynomials” in these subheadings, we mean “polynomials with positive degree and real coefficients whose y-values interpolate the vector at x = 1, x = 2, … x = n where n is the dimension of the vector.” But to avoid being excessively verbose, we will simply say “polynomials” at the risk of offending some mathematicians.

### Adding two vectors is homomorphic to adding two polynomials

Let v1 = [1, 0, 1] and v2 = [-1, 5, 3]. Using x = 1, x = 2, x = 3, we transform v1 and v2 into their polynomial representations:

```
import numpy as np
from scipy.interpolate import lagrange

x = np.array([1,2,3])
y_v1 = np.array([1, 0, 1])
y_v2 = np.array([-1, 5, 3])
poly_v1 = lagrange(x, y_v1)
poly_v2 = lagrange(x, y_v2)

print(poly_v1)
# 1 x^2 - 4 x + 4
print(poly_v2)
#-4 x^2 + 18 x - 15

poly_v3 = poly_v1 + poly_v2

print(poly_v3)
# -3 x^2 + 14 x - 11
print([poly_v3(1), poly_v3(2), poly_v3(3)])
# [0.0, 5.0, 4.0]
```

In the plot below, you can see that poly_v1 (red) + poly_v2 (blue) = poly_v3 (green) for all points, especially the x values we care about. To make sense of the plot, add the y value of the red and blue dots (poly_v1 and poly_v2) to get the green dot (poly_v3).

Just read the plot as “red plus blue equals green” (the y values) and you’ll see the consistency between the vectors and the polynomials. Our green vector (the result of the addition) is [0, 5, 4], just as we expected.

This operation also happens to be isomorphic if we force the degree to be the vector dimension - 1, but that will get in the way as we will see next. Homomorphic is good enough for us.

### The Hadamard product of two vectors is homomorphic to multiplying two polynomials

There is a very important catch here: when we multiply two polynomials, the product will usually (but not always) have a degree greater than either of the original two polynomials. However, that product polynomial will still interpolate the values at x = 1, x = 2, …, x = n as we expect. There are an infinite number of polynomials that can interpolate the points we care about, but this is not a problem. As stated earlier, we care about the values at x = 1,2,3.

Now that we have that out of the way, time for a real example:

```
import numpy as np
from scipy.interpolate import lagrange

x = np.array([1,2,3])
y_v1 = np.array([1, -1, 2])
y_v2 = np.array([2, 2, -2])
poly_v1 = lagrange(x, y_v1)
poly_v2 = lagrange(x, y_v2)

print(poly_v1)
# 2.5 x^2 - 9.5 x + 8
print(poly_v2)
# -2 x^2 + 6 x - 2

poly_v3 = poly_v1 * poly_v2
print(poly_v3)
# -5 x^4 + 34 x^3 - 78 x^2 + 67 x - 16
print([poly_v3(1), poly_v3(2), poly_v3(3)])
# [2.0, -2.0, -4.0]
```

And here it is. Just look at the points and note that for the y values, “red times blue is green.”

### Multiplying a vector by a scalar is homomorphic to multiplying the polynomial by the same scalar

Remember, “scalar multiplication” is not a third binary operator. It’s just shorthand, in vector space, for multiplying a vector by [s, s, …, s] where s is a scalar. Similarly for polynomials, “multiplying by a scalar” is shorthand for “multiplying by a polynomial of degree zero.” These are really special cases of the multiplication binary operator.

This should be obvious, but we’ll work out the steps quickly. We call s′ the scalar transformed to its “true” vector representation, so that it can properly be part of the set that makes up the ring. Pay attention to how the code below multiplies by the scalar.

```
import numpy as np
from scipy.interpolate import lagrange

x = np.array([1,2,3])
y_v1 = np.array([1, 2, -1])
poly_v1 = lagrange(x, y_v1)

print(poly_v1)
# -2 x^2 + 7 x - 4
# IMPORTANT: We multiply by a constant here
poly_final = poly_v1 * 3
print(poly_final)
# -6 x^2 + 21 x - 12
print([poly_final(1), poly_final(2), poly_final(3)])
# [3.0, 6.0, -3.0]
```

Just follow the usual “red times blue is green” pattern, and the consistency will be clear.

### Important detail: it is not required for the polynomial representing a constant to be of zero degree

Our homomorphism is perfectly valid if, instead of multiplying the polynomial by s, we multiply it by a polynomial that interpolates s at the x values we care about. We don’t care about what the rest of the polynomial is doing, we only care about the values at x = 1,2,3, etc.

The blue plot does not intrinsically have to be a flat line: it could be a degree 3 polynomial that interpolates the same blue points, and the math would still work.

## Turning matrix multiplication into the Hadamard product for Ls⊙Rs = Os

When we compute Ls with matrix multiplication, we are just taking the dot product of each row of L with s. Matrix multiplication, of course, is not one of our binary operators. But upon closer inspection, it is really shorthand for a Hadamard product on each column followed by vector addition (both of which are the binary operators of our vector ring).

For example, multiplying a matrix by a vector s can be carried out by scaling each column of the matrix by the corresponding entry of s (scalar multiplication, i.e. a Hadamard product with a repeated scalar) and then adding the scaled columns together. This applies to L, R, and O.
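A minimal numpy sketch of this rewriting, using an arbitrary example matrix:

```
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
s = np.array([2, 3, 4])

# matrix-vector multiplication the usual way
As = A @ s

# the same computation as column-wise scalar multiplication (a Hadamard
# product with a repeated scalar) followed by vector addition
columns = s[0] * A[:, 0] + s[1] * A[:, 1] + s[2] * A[:, 2]

print(As)       # [20 47 74]
print(columns)  # [20 47 74]
```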

Now we are doing the operation in terms of our binary operators and ring elements defined earlier.

Since we have a well defined transformation from our vector ring to our polynomial ring, we are ready to make the jump.

## If Ls⊙Rs = Os can be computed with vector arithmetic, then it can be computed with polynomial arithmetic

At this point, computing Ls⊙Rs = Os in polynomial space is straightforward.

We convert each column vector of L, R, and O to their polynomial representation by using our special points x = 1, x = 2, … x = n and the lagrange interpolation.

To compute Ls, Rs, and Os, we do a scalar multiplication (which is just shorthand for the Hadamard product with a repeating vector) on each column of L, R, and O with the appropriate scalar value from the vector s (as seen in the section immediately above).

We then use Lagrange interpolation to convert each column of the matrix into a polynomial, and add all the column polynomials together to get a single polynomial (the polynomial ring is closed under addition).

If we define U(x) = transform(Ls), V(x) = transform(Rs), and W(x) = transform(Os), then we have three polynomials U(x), V(x), and W(x).

Because of the homomorphism, we can (with some implementation details we will get to shortly) then do

U(x)V(x) = W(x)

and now our R1CS is written as a single expression!

It’s very important to understand that the polynomials are encoding the exact same information as the R1CS. And although polynomial multiplication and addition may “feel different” from the operations in matrices, this should not bother us as long as we understand this is a ring homomorphism. We are allowed to do any of the ring operators and the information transformations will be consistent.

## Notation for QAP

We’ve used L, R, and O to refer to the matrices in the R1CS; they have n rows and m columns, with entries in a finite field of order p. Let’s call our transformation φ (in honor of homomorphism notation). Applying φ to the columns of the R1CS produces a list of polynomials.

After interpolating the columns, we get m polynomials (one per column) of degree at most n - 1 in the same finite field. They have degree at most n - 1 because there are n rows, so each interpolation passes through n points, and the smallest degree polynomial that interpolates n points has degree n - 1.

This is a bit pedantic, but note that the transformation we apply to the matrices is not the same one we apply to the witness. The witness entries are field elements, and the homomorphism from field elements to polynomials (a scalar maps to a degree zero polynomial) is trivial and follows from the rings we defined in this article (exercise for the reader 😊).

## Quadratic arithmetic program, initial transform

When we have all our matrices L, R, and O re-written as polynomials, we have what is called a Quadratic Arithmetic Program.

Because U, V, W are vectors of polynomials of length m, and the witness a is of length m, the correct way to combine them is via the dot product, e.g. U·a = a_1u_1(x) + a_2u_2(x) + … + a_mu_m(x).

## (U·a) Notation

Note that U·a is a polynomial, because the dot product of a vector of polynomials with a vector of scalars is a polynomial. Only the prover can compute this dot product, because only they know the witness a. The expression evaluates to a single monovariate polynomial: the only indeterminate is x, because all the values of a are known to the prover.

## The above QAP is imbalanced

It should be clear that the two sides of the equation (U·a)(V·a) = W·a will not in general be equal! The polynomial on the left will generally have twice the degree of the polynomial on the right due to the multiplication.

We need to add a balancing term.

Luckily, we have a lot of flexibility in choosing our polynomials!

Remember: There are an infinite number of polynomials that interpolate W·a, but only one “correct” polynomial when multiplying polynomials U·a, V·a.

For example, let’s say the left-hand side multiplies out to

(U·a)(V·a) = 3x⁴ + 1x³ + 2x² - 1x + 1

But if we transform W·a, it’s going to be a polynomial of degree 2.

However, we can preserve the equality in polynomial land by adding to the right-hand side a degree four polynomial that interpolates zero at x = 1, 2, 3. In R1CS land, we are just adding a zero vector to Os. Although the polynomial is non-zero, it represents a zero vector! That is, it is valid for φ(0) to be a polynomial of degree four, as long as it evaluates to zero at the x values we care about.

If we add a zero vector, we aren’t changing the equality. But if that zero vector is transformed to a polynomial of degree four (one which interpolates y = 0 at x = {1, 2, 3}), then the right-hand side becomes a degree four polynomial, making it possible for the two sides to be equal everywhere.

The next question becomes, how do we compute this extra polynomial?

It turns out, it’s more convenient to represent this zero polynomial as the product of two polynomials: h(x) and t(x).

## h(x)t(x)

Recall how we “agreed” that our transformation from vectors to polynomials would be done over the points x = {1,2,3}? We are actually able to enforce this by introducing what is called the target polynomial.

### t(x)

When we say the “zero vector,” that does not mean the polynomial y = 0; rather, it means a polynomial that is zero at x = {1,2,3}. This polynomial is easy to create: just take t(x) = (x - 1)(x - 2)(x - 3). Since we know in advance how big our circuit will be, we can add as many roots as we need.
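Constructing t(x) in python is one line, and we can check that it vanishes at each interpolation point:

```
from numpy import poly1d

# t(x) = (x - 1)(x - 2)(x - 3) is zero at every x value we interpolate over
t = poly1d([1, -1]) * poly1d([1, -2]) * poly1d([1, -3])

print(t)
# prints the expanded polynomial x^3 - 6x^2 + 11x - 6
print([int(t(1)), int(t(2)), int(t(3))])  # [0, 0, 0]
```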

The polynomial t(x) is not secret, it is constructed during the trusted setup phase of the zero knowledge circuit.

### h(x)

It should be obvious that although t(x) represents the zero vector (it has roots at x = 1,2,3…), it won’t by itself balance the equation (U·a)(V·a) = (W·a) + t(x). We need to multiply it by another polynomial so that the product balances out the equation.

Here’s a handy theorem: when two non-zero polynomials are multiplied, the roots of the product are the union of the roots of the individual polynomials.

Therefore, we can multiply t(x) by anything except zero, and it will still correspond to a zero vector in vector land.

Let’s do a little bit of algebra. Starting from (U·a)(V·a) = (W·a) + h(x)t(x) and rearranging, we get

h(x) = ((U·a)(V·a) - (W·a)) / t(x)

So given U, V, W, a, and t(x), we can derive h(x) by doing polynomial division.

Therefore, our final claim is

(U·a)(V·a) = (W·a) + h(x)t(x)

Remember, in vector land h(x)t(x) is the zero vector. There are plenty of polynomials that represent the zero vector in polynomial land; we just pick the one that balances out the equation.

By the way, if you take a sneak peek at page 14 of the groth16 paper, some of the surrounding text may be mysterious, but you should now recognize this formula!

### Why not just have h(x)t(x) be one polynomial?

The fact that t(x) is a public polynomial matters. It forces the balancing term h(x)t(x) to interpolate zero at x = 1,2,3. Otherwise, the prover might pick a polynomial that satisfies the equation but doesn’t actually correspond to anything from the R1CS. We want to know that the polynomials that correspond to Ls, Rs, and Os agree at x = 1,2,3. If they do, then the equation is a faithful image of the vector math under the homomorphism.

## Back to Schwartz-Zippel

Remember how the lemma from the beginning allowed us to check if two polynomials are equal by evaluating both of them at a random point? Well, we have two polynomials here! One is (U·a)(V·a) and the other is (W·a) + h(x)t(x).

For this to work, neither the prover nor the verifier should know the random point the polynomials are being evaluated at, otherwise the prover can cheat.

Therefore, we will need to do encrypted polynomial evaluation.

The trusted setup computes encrypted powers of a random point τ: [G], [τG], [τ²G], and so on. The prover then evaluates the polynomials (U·a), (V·a), (W·a), and h(x)t(x) at τ by multiplying those elliptic curve points by their polynomial coefficients and summing the results.

For U, evaluating (U·a) this way produces an elliptic curve point [A]. For V we similarly obtain [B], and for W we obtain [C′]. For the term h(x)t(x), we let ht(x) = h(x)t(x) and evaluate it the same way to obtain [HT]. We can then set [C] = [C′] + [HT].

The verifier then computes

pairing([A], [B]) = [C]

If it checks out, then the prover has a valid witness.

This works because [A], [B], and [C] represent the polynomials evaluated at a random point, and polynomial equality can be checked at a single point per Schwartz-Zippel.

### Couldn’t the prover just invent values?

In the current construction, yes. Instead of computing [A], [B], and [C] as described above, the prover could randomly pick a and b, compute c = ab, and then compute [A] = aG, [B] = bG, [C] = cG.

This is why groth16 has extra parameters. Looking at Tornado Cash’s verifier.sol, the extra greek letters α, β, γ, and δ are encrypted values from the trusted setup that force the prover to be honest. By the way, the A, B, and C you see in that contract are very close to the [A], [B], and [C] we just computed. We are not far away from understanding an end-to-end zk proof!

That however, is the subject of explaining groth16.

## Real example for QAP

Let’s do a QAP transformation for the following equation:

out = 4x_1x_2² + x_1² - 2

After we break up the polynomial into quadratic constraints, we get:

x_3 = x_1 · x_1
x_4 = x_2 · x_2
out + 2 - x_3 = 4x_4 · x_1

Please note that everything is done in a finite field in practice, but we will use normal numbers for now to keep things simpler.

Our witness vector will have the form [1, out, x_1, x_2, x_3, x_4]. So our matrices L, R, and O will be as follows:

L = [[0, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0, 4]]

R = [[0, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 0]]

O = [[0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 1],
     [2, 1, 0, 0, -1, 0]]

Next, we need to do our ring homomorphism on the column vectors. Let’s set our x values to be x = {1,2,3} as usual.

We’ll show how to do the transformation per column for L. The zero columns get turned into the polynomial 0 (this obviously interpolates all the zero values in the column). The column corresponding to x_1 is a bit more interesting. We are looking for a polynomial that interpolates (1,1), (2,0), and (3,0). We can use python to compute that:

```
from scipy.interpolate import lagrange
xs = [1,2,3]
ys = [1,0,0]
print(lagrange(xs, ys))
# 0.5 x^2 - 2.5 x + 3
```

Repeating the process for the column representing x_2 we get:

```
from scipy.interpolate import lagrange
xs = [1,2,3]
ys = [0,1,0]
print(lagrange(xs, ys))
# -x^2 + 4 x - 3
```

And finally for x_4:

```
from scipy.interpolate import lagrange
xs = [1,2,3]
ys = [0,0,4]
print(lagrange(xs, ys))
# 2 x^2 - 6 x + 4
```

Now that we have our coefficients, let’s plug them into a new matrix representing the polynomial version of L. We can repeat this process (exercise for the reader) to compute the other two matrices of interest. You can think of these matrices as holding the coefficients of x^2, x, and the constant term, reading each column as a polynomial.

Remember, vector multiplication by a constant is homomorphic to multiplying a polynomial by a constant, so at this point, we take our witness vector and multiply it by the polynomials.

Note that everything we’ve done up to this point is fully public. Both the verifier and the prover can calculate the matrices we just produced. What remains hidden is the witness vector the prover picks.

Our original equation was out = 4x_1x_2² + x_1² - 2. Let’s pick x_1 = 3 and x_2 = 4. This gives the intermediate values x_3 = x_1² = 9 and x_4 = x_2² = 16, and out = 4·3·16 + 9 - 2 = 199. Thus, our witness vector is [1, 199, 3, 4, 9, 16].
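The intermediate values can be computed step by step. The constraint layout below is inferred to match the witness [1, 199, 3, 4, 9, 16] used in this example:

```
# constraints: x3 = x1 * x1, x4 = x2 * x2, out = 4*x1*x4 + x3 - 2
x1, x2 = 3, 4
x3 = x1 * x1                 # 9
x4 = x2 * x2                 # 16
out = 4 * x1 * x4 + x3 - 2   # 199

witness = [1, out, x1, x2, x3, x4]
print(witness)  # [1, 199, 3, 4, 9, 16]
```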

If we do the dot product of U and the witness, we’ll get a polynomial

```
import numpy as np
Ua = np.array([[0, 0, 0.5, -1, 0, 2],
[0, 0, -2.5, 4, 0, -6],
[0, 0, 3, -3, 0, 4]])

witness = [1, 199, 3, 4, 9, 16]

np.matmul(Ua, witness)
# array([ 29.5, -87.5,  61. ])
```

Even though we are presenting this in matrix form, don’t forget this is really just shorthand for stacking polynomials.

To expand the operation above: during the matrix multiplication, we effectively multiplied each column polynomial by a constant (the matching witness entry), then added all the polynomials together. Therefore, we have produced the polynomial (U·a) = 29.5x² - 87.5x + 61. We can similarly produce the other two polynomials:

```
Va = np.array([[0, 0, 1, -1, 0, 0],
[0, 0, -4, 4, 0, 0],
[0, 0, 4, -3, 0, 0]])

np.matmul(Va, witness)
# array([-1,  4,  0])

Wa = np.array([[1, 0.5, 0, 0, 0, -1],
[-3, -1.5, 0, 0, -1, 4],
[2, 1, 0, 0, 2, -3]])

np.matmul(Wa, witness)
# array([  84.5, -246.5,  171. ])
```

At this point, it becomes clear why we have the h(x)t(x) term. When (U·a) and (V·a) are multiplied, we’ll get a degree 4 polynomial, but (W·a) is of degree 2.

As long as we add a polynomial that interpolates zero and has degree 4, we can balance it out.

Using the formula from earlier, we can compute h as

h(x) = ((U·a)(V·a) - (W·a)) / t(x)

This can be done in python as:

```
from numpy import poly1d

a = poly1d([29.5, -87.5, 61])
b = poly1d([-1, 4, 0])
c = poly1d([84.5, -246.5, 171])
t = poly1d([1, -1])*poly1d([1, -2])*poly1d([1, -3])

(a * b - c) / t

# quotient, remainder
# (poly1d([-29.5,  28.5]), poly1d([0.]))
```

So our polynomial is h(x) = -29.5x + 28.5, and it’s easy to verify that (U·a)(V·a) = (W·a) + h(x)t(x). Using encrypted polynomial evaluation at a random point τ, we obtain the elliptic curve points [A], [B], and [C]. Then the verifier computes

pairing([A], [B]) = [C]

We just have to figure out how to prevent the prover from inventing values, which is the topic of the next tutorial.
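Before moving on, we can sanity-check the final identity in plain (unencrypted) arithmetic using the polynomials computed above, evaluated at a random point standing in for the secret τ:

```
import numpy as np
from numpy import poly1d

Ua = poly1d([29.5, -87.5, 61])
Va = poly1d([-1, 4, 0])
Wa = poly1d([84.5, -246.5, 171])
t = poly1d([1, -1]) * poly1d([1, -2]) * poly1d([1, -3])
h = poly1d([-29.5, 28.5])

tau = np.random.uniform(0, 1000)  # stand-in for the secret evaluation point

# (U·a)(V·a) == (W·a) + h(x)t(x) holds at a random point, so by
# Schwartz-Zippel the two sides are the same polynomial
print(np.isclose(Ua(tau) * Va(tau), Wa(tau) + h(tau) * t(tau)))  # True
```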

Also, remember, we do everything over finite fields in the real implementation so that we don’t have to deal with ugly fractions.