
Converting Algebraic Circuits to R1CS (Rank One Constraint System)

Updated: Nov 17, 2023

The goal of this article is to explain how to turn a set of polynomial constraints into a rank one constraint system (R1CS).


The focus of this resource is implementation: we cover more corner cases of this transformation than other materials do, discuss optimizations, and explain how the Circom library accomplishes it.


Zero knowledge algorithms have many steps involved, which is one reason they are hard to learn. The r1cs transformation is one of the steps and is the sole focus of this article.


Prerequisites

  • We assume the reader understands the significance of using polynomials to represent a computation

  • The reader is familiar with modular arithmetic

  • The reader has a basic understanding of Circom (if not, our zero knowledge puzzles are a good introduction)


Witness: The star of the show

The witness vector is a 1 x n vector that contains the values of all the input variables, the output variable, and the intermediate values. It shows you have executed the circuit from start to finish, knowing the input, the output, and all the intermediate values.


By convention, the first element is always 1 to make some calculations easier, which we will demonstrate later.


For example, if we have the polynomial

out = x²y

that we claim to know the solution for, then that must mean we know x, y, and out. Because rank one constraint systems require exactly one multiplication per constraint, the polynomial above must be rewritten as

v₁ = x * x
out = v₁ * y

Having a witness means we don’t just know x, y, and out; we also know every intermediate variable in this expanded form. Specifically, our witness is a vector:

[1, out, x, y, v₁]

where each term satisfies the constraints above.


For example,

[1, 18, 3, 2, 9]

is a valid witness because when we plug the values in,

[constant = 1, out = 18, x = 3, y = 2, v₁ = 9]

it satisfies the constraints.

The extra 1 term is not used in this example and is a convenience we will explain later.
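Before introducing any matrices, we can sanity-check this witness directly in Python (a minimal sketch; the variable layout is the one given above):

```python
# witness layout: [constant, out, x, y, v1]
w = [1, 18, 3, 2, 9]
_, out, x, y, v1 = w

# the two constraints from the expanded form
assert v1 == x * x      # v1 = x * x
assert out == v1 * y    # out = v1 * y
```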


Example 1: Transforming out = x * y

In the circuit out = x * y, there are no intermediate variables. For our example, we will say we are proving 41 x 103 = 4223.


Therefore, our witness vector is [1, 4223, 41, 103], or [1, out, x, y].


Before we can create an r1cs, our polynomials and constraints need to be of the form

result = left_hand_side × right_hand_side

Luckily for us, it already is:

out = x * y
(result = left_hand_side × right_hand_side)

This is obviously a trivial example, but trivial examples are usually a great place to start.


To create a valid r1cs, you need a list of formulas that contain exactly one multiplication.


We will discuss later how to handle cases that do not have exactly one multiplication like out = x³ or out = x³ + y.


Our goal is to create a system of equations of the form

Cw = Aw ⋅ Bw

where ⋅ denotes element-wise multiplication. Matrix A encodes the left_hand_side variables, B encodes the right_hand_side variables, and C encodes the result variables. The variable w is the witness vector.


Specifically, A, B, and C are matrices with the same number of columns as the witness, and each column corresponds to the same witness variable as that index.


So for our example, the witness has 4 elements (1, out, x, y) so each of our matrices will have 4 columns.


The number of rows will correspond to the number of constraints in the circuit. In our case, we have only one constraint: out = x * y, so we will only have one row.


Let’s jump to the answer and explain how we obtained it.

A = [0, 0, 1, 0]
B = [0, 0, 0, 1]
C = [0, 1, 0, 0]

In this example, each item in the matrix serves as an indicator variable for whether or not the variable the column corresponds to is present. (Technically, it’s the coefficient of the variable, but we’ll get to that later).


For the left hand terms, x is present, so if the columns represent [1, out, x, y], then…


A is [0, 0, 1, 0], because x is present, and none of the other variables are. B is [0, 0, 0, 1] because the variables in the right hand side are just y, and C is [0, 1, 0, 0] because we only have the out variable.


We don’t have any constants anywhere, so the 1 column is zero everywhere (we’ll discuss when it is non-zero later).


This equation is correct, which we can verify in Python

import numpy as np

# define the matrices
C = np.array([[0,1,0,0]])
A = np.array([[0,0,1,0]])
B = np.array([[0,0,0,1]])

# witness vector
witness = [1, 4223, 41, 103]

# Multiplication is element-wise, not matrix multiplication.
# result is an array of booleans, one per constraint,
# indicating whether the equality holds for that row.
result = C.dot(witness) == A.dot(witness) * B.dot(witness)

# check that every element-wise equality is true
assert result.all(), "result contains an inequality"

You can execute the code above online here: https://replit.com/@JeffreyScholz/Compute-Witness-Example-1?v=1


You may be wondering what the point of this is. Aren’t we just saying that 41 x 103 = 4223 in a much less compact way?


You would be correct.


An R1CS is far more verbose than a set of polynomials, but it maps nicely to Quadratic Arithmetic Programs (QAPs), which can be made succinct. We will not concern ourselves with QAPs here.


But this is an important point about R1CS: it communicates exactly the same information as a set of polynomial equations, just with a lot of extra zeros. In this example, we have only one polynomial constraint, but we’ll add more in the next example.


Example 2: Transforming out = x * y * z * u

In this slightly more complicated example, we need to deal with intermediate variables now. Each row of our computation can only have one multiplication, so we must break up our equation as follows:

v₁ = x * y
v₂ = z * u
out = v₁ * v₂

There is no rule to say we had to break it up like that, the following is also valid:

v₁ = x * y
v₂ = v₁ * z
out = v₂ * u

We will use the first transformation for this example.


Size of A, B, and C

Because we are dealing with 7 variables (out, x, y, z, u, v1, v2), our witness vector will have 8 elements (the first being the constant 1) and our matrices will have 8 columns.

Because we have three constraints, the matrices will have three rows.


Left hand terms and right hand terms

This example will reinforce the idea of a “left hand term” and a “right hand term.” Specifically, x, z, and v₁ are left hand terms, and y, u, and v₂ are right hand terms.

The outputs (v₁, v₂, out) belong to C, the left hand terms (x, z, v₁) to A, and the right hand terms (y, u, v₂) to B.

Constructing Matrix A from left hand terms

Let’s construct the matrix A. We know it will have three rows and eight columns.

Our witness vector will be multiplied by this, so let’s define our witness vector to have the following layout:

w = [1, out, x, y, z, u, v₁, v₂]

This tells us what each of A’s columns represents.

In the first row, we have the constraint v₁ = x * y.

This means with respect to the left hand side, the variable x is present, and no other variables are present. Therefore, we transform the first row as follows:

row 1 of A: [0, 0, 1, 0, 0, 0, 0, 0]

Working our way down, we see that only z is present for the left-hand side of our system of equations.

second row of constraints highlighted

Therefore, we set everything in that row to be zero, except the column that represents z.


row 2 of A: [0, 0, 0, 0, 1, 0, 0, 0]

Finally, v₁ is the only variable present on the left hand side of the third row.

This completes matrix A

A = [0, 0, 1, 0, 0, 0, 0, 0]
    [0, 0, 0, 0, 1, 0, 0, 0]
    [0, 0, 0, 0, 0, 0, 1, 0]


Alternative transformation

We could accomplish this same exercise by expanding the left hand values of


v₁ = x * y
v₂ = z * u
out = v₁ * v₂

as

v₁ = (0·1 + 0·out + 1·x + 0·y + 0·z + 0·u + 0·v₁ + 0·v₂) * y
v₂ = (0·1 + 0·out + 0·x + 0·y + 1·z + 0·u + 0·v₁ + 0·v₂) * u
out = (0·1 + 0·out + 0·x + 0·y + 0·z + 0·u + 1·v₁ + 0·v₂) * v₂

We can do that because adding zero terms doesn’t change the values. We just have to be careful to expand the zero variables to have the same “columns” as how we’ve defined the witness vector.


And then, if we take the coefficients out of the above expansion, we get the same matrix for A that we generated a moment ago:

A = [0, 0, 1, 0, 0, 0, 0, 0]
    [0, 0, 0, 0, 1, 0, 0, 0]
    [0, 0, 0, 0, 0, 0, 1, 0]

Constructing matrix B


Matrix B represents the right hand terms of our equations: y, u, and v₂.

Matrix B must have 1s representing y, u, and v₂. The row in the matrix corresponds to the row of the polynomial constraint, i.e. the first constraint is v₁ = x * y, the second is v₂ = z * u, and the third is out = v₁ * v₂.

So the first row has 1 in the y column, the second row has 1 in the u column, and the third row has 1 in the v2 column. Everything else is zero.


This results in the following matrix for B:

B = [0, 0, 0, 1, 0, 0, 0, 0]
    [0, 0, 0, 0, 0, 1, 0, 0]
    [0, 0, 0, 0, 0, 0, 0, 1]



Constructing matrix C

It is an exercise for the reader to verify that matrix C is

C = [0, 0, 0, 0, 0, 0, 1, 0]
    [0, 0, 0, 0, 0, 0, 0, 1]
    [0, 1, 0, 0, 0, 0, 0, 0]

using the column labels consistent with the earlier matrices.


By way of reminder, C is derived from the results of each multiplication: v₁, v₂, and out. Its columns follow the same witness layout, [1, out, x, y, z, u, v₁, v₂].


Checking our work for out = x * y * z * u

import numpy as np

# enter the A B and C from above
A = np.array([[0,0,1,0,0,0,0,0],
              [0,0,0,0,1,0,0,0],
              [0,0,0,0,0,0,1,0]])
              
B = np.array([[0,0,0,1,0,0,0,0],
              [0,0,0,0,0,1,0,0],
              [0,0,0,0,0,0,0,1]])
              
C = np.array([[0,0,0,0,0,0,1,0],
              [0,0,0,0,0,0,0,1],
              [0,1,0,0,0,0,0,0]])

# random values for x, y, z, and u
import random
x = random.randint(1,1000)
y = random.randint(1,1000)
z = random.randint(1,1000)
u = random.randint(1,1000)

# compute the algebraic circuit
out = x * y * z * u
v1 = x*y
v2 = z*u

# create the witness vector
w = np.array([1, out, x, y, z, u, v1, v2])

# element-wise multiplication, not matrix multiplication
result = C.dot(w) == np.multiply(A.dot(w), B.dot(w))

assert result.all(), "system contains an inequality"

Handling addition with a constant

What if we want to build a rank one constraint system for the following?

out = x * y + 2

This is where that 1 column comes in handy.

Addition is free

You’ve probably heard the statement “addition is free” in the context of ZK-SNARKs. It means we don’t have to create an additional constraint when we have an addition operation.


We could write the above formula as

v₁ = x * y
out = v₁ + 2

But that would make our r1cs larger than it needs to be.


Specifically, if we rewrite the original polynomial as

out - 2 = x * y

Then the terms out and -2 are automatically “combined” when we take the dot product of C with our witness vector. Remember, the dot product sums over all the columns.


Hence, our witness vector has the form [1, out, x, y], so our matrix A and B will be

A = [0 0 1 0] B = [0 0 0 1]

And our matrix C will be

C = [-2 1 0 0]

The rule for addition is simple: move the terms to the output C.


Again, let’s do some unit tests on our math:

import numpy as np
import random

# Define the matrices
A = np.array([[0,0,1,0]])
B = np.array([[0,0,0,1]])
C = np.array([[-2,1,0,0]])

# pick random values to test the equation
x = random.randint(1,1000)
y = random.randint(1,1000)
out = x * y + 2

# witness vector
w = np.array([1, out, x, y])

# check the equality
result = C.dot(w) == np.multiply(A.dot(w),B.dot(w))
assert result.all(), "result contains an inequality"

Multiplication with a constant

In all of the examples above, we never multiplied variables by constants, which is why the entries in the R1CS were always 0 or 1. As you may have guessed, when a variable is multiplied by a constant, that constant becomes the matrix entry, as the following example shows.

Let’s work out the solution for

out = 2x^2 + y

Note that when we say “one multiplication per constraint” we mean the multiplication of two variables. The following solution is valid, but creates an unnecessary row:

v₁ = x * x
out = 2v₁ + y

The more optimal solution is as follows

-y + out = 2x^2

Using the more optimal solution, our witness vector will have the form [1, out, x, y].

The matrices will be defined as follows:

A = [0, 0, 2, 0]
B = [0, 0, 1, 0]
C = [0, 1, 0, -1]

Symbolically multiplying the above by w = [1, out, x, y] in the R1CS form gives us our original equation back:

(2x)(x) = out - y
out = 2x² + y

so we know we set up A, B, and C correctly.
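We can unit test this the same way as the earlier examples (a numpy sketch, using the witness layout [1, out, x, y] and the matrices above):

```python
import numpy as np
import random

# matrices for -y + out = 2x * x, witness layout [1, out, x, y]
# the coefficient 2 sits in A's x column; the -1 for y sits in C
A = np.array([[0, 0, 2, 0]])
B = np.array([[0, 0, 1, 0]])
C = np.array([[0, 1, 0, -1]])

x = random.randint(1, 1000)
y = random.randint(1, 1000)
out = 2 * x * x + y

w = np.array([1, out, x, y])
result = C.dot(w) == np.multiply(A.dot(w), B.dot(w))
assert result.all(), "result contains an inequality"
```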


Here we have one row (constraint) and one “true” multiplication. As a general rule:

The number of constraints in a rank one constraint system should equal the number of non-constant multiplications.


Larger example

Let’s do something less trivial that incorporates everything learned above

out = 3x²y + 5xy - x - 2y + 3

We will break it up as follows:


v₁ = 3x * x
v₂ = v₁ * y
-v₂ + x + 2y - 3 + out = 5x * y

Note how all the addition terms have been moved to the left (this is what we did in the addition example, but it is more apparent here).


Leaving the right hand side as 5xy in the third row is arbitrary. We could divide both sides by 5 and have the final constraint be

(-v₂ + x + 2y - 3 + out)/5 = x * y

This doesn’t change the witness however, so both are valid.


Our witness vector will be of the form

w = [1 out x y v1 v2]

And our matrices will have three rows, since we have three constraints:

  1. v₁ = 3x * x

  2. v₂ = v₁ * y

  3. -v₂ + x + 2y - 3 + out = 5x * y

The outputs (C) are v₁, v₂, and out; the left hand sides (A) are 3x, v₁, and x; and the right hand sides (B) are x, y, and 5y. This produces the following matrices:

A = [0, 0, 3, 0, 0, 0]
    [0, 0, 0, 0, 1, 0]
    [0, 0, 1, 0, 0, 0]

B = [0, 0, 1, 0, 0, 0]
    [0, 0, 0, 1, 0, 0]
    [0, 0, 0, 5, 0, 0]

C = [0, 0, 0, 0, 1, 0]
    [0, 0, 0, 0, 0, 1]
    [-3, 1, 1, 2, 0, -1]

If there seem to be surprisingly few constraints, note that our original set of polynomial constraints only has x^2, (x^2) * y, and x * y as non-constant multiplications.


And let’s check our work as usual.

import numpy as np
import random

# Define the matrices
A = np.array([[0,0,3,0,0,0],
               [0,0,0,0,1,0],
               [0,0,1,0,0,0]])

B = np.array([[0,0,1,0,0,0],
               [0,0,0,1,0,0],
               [0,0,0,5,0,0]])

C = np.array([[0,0,0,0,1,0],
               [0,0,0,0,0,1],
               [-3,1,1,2,0,-1]])

# pick random values for x and y
x = random.randint(1,1000)
y = random.randint(1,1000)

# this is our original formula
out = 3*x*x*y + 5*x*y - x - 2*y + 3

# the intermediate variables
v1 = 3*x*x
v2 = v1 * y

# the witness vector
w = np.array([1, out, x, y, v1, v2])

result = C.dot(w) == np.multiply(A.dot(w),B.dot(w))
assert result.all(), "result contains an inequality"


Rank 1 Constraint Systems do not require starting with a single polynomial

To keep things simple, we've been using examples of the form out = xy + ..., but most sets of polynomial constraints are going to start with a lot more than one variable.


For example, suppose we are proving that an array [x₁, x₂, x₃, x₄] is binary. The set of polynomial constraints will be


x₁² = x₁

x₂² = x₂

x₃² = x₃

x₄² = x₄


Luckily, that is already in the form required for an R1CS, but in most cases, we would have to "flatten" it a little bit -- introducing extra variables to ensure there is only one quadratic multiplication per constraint.


Turning the above set of constraints into an R1CS is an exercise for the reader. It will be quite straightforward due to how "symmetrical" the equations are.
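If you want to check your answer afterwards, here is a sketch of one possible solution, assuming the witness layout [1, x₁, x₂, x₃, x₄]:

```python
import numpy as np

# each constraint xi * xi = xi puts a single 1 in the xi column,
# and the same row appears in A, B, and C
A = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1]])
B = A.copy()
C = A.copy()

# witness for the binary array [1, 0, 1, 1]
w = np.array([1, 1, 0, 1, 1])

result = C.dot(w) == np.multiply(A.dot(w), B.dot(w))
assert result.all(), "array is not binary"
```

A witness containing any value other than 0 or 1 fails the check, since x² = x only holds for those two values.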


Everything is done modulo a prime in an R1CS

In the above examples, we used traditional arithmetic for the sake of simplicity, but real world implementations use modular arithmetic instead.


The reason is simple: encoding numbers like 2/3 leads to ill-behaved floats which are computationally intensive and error prone, which leads to frustration, which leads to the dark side.


If we do all our math modulo a prime number, say 23, then encoding 2/3 is straightforward. It’s the same as 2 * 3⁻¹, and both multiplying by 2 and computing the modular inverse of 3 are straightforward in modular arithmetic.

p = 23

# 2 * 3^-1 mod p
two_thirds = 2 * pow(3, -1, p)

assert (two_thirds * 3) % p == 2

# 1/3 = 3^-1 mod p
one_third = pow(3, -1, p)

# check that ⅓ + ⅔ == 1 mod p
assert (two_thirds + one_third) % p == 1
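The same idea applies to the R1CS check itself. Here is example 1's verification redone with every witness entry reduced mod 23 (a small prime chosen just for illustration):

```python
import numpy as np

p = 23

# example 1's matrices, unchanged
A = np.array([[0, 0, 1, 0]])
B = np.array([[0, 0, 0, 1]])
C = np.array([[0, 1, 0, 0]])

# the same witness, with every entry reduced mod p
x, y = 41 % p, 103 % p
out = (41 * 103) % p
w = np.array([1, out, x, y])

# Cw = Aw * Bw still holds, but now as a congruence mod p
lhs = C.dot(w) % p
rhs = (A.dot(w) * B.dot(w)) % p
assert (lhs == rhs).all(), "congruence does not hold"
```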


Circom implementation

In Circom (and many other frameworks), math is done modulo 21888242871839275222246405745257275088548364400416034343698204186575808495617.


This means -1 in that representation is

p = 21888242871839275222246405745257275088548364400416034343698204186575808495617

# 1 - 2 = -1
(1 - 2) % p

# 21888242871839275222246405745257275088548364400416034343698204186575808495616


Circom for out = x * y

If we write out = x * y in Circom, it would look like the following:

pragma circom 2.0.0;

template Multiply2() {
    signal input x;
    signal input y;
    signal output out;

    out <== x * y;
 }

component main = Multiply2();

Let’s turn this into an r1cs file and print it:

circom multiply2.circom --r1cs --sym
snarkjs r1cs print multiply2.r1cs

We get the following output:

This looks quite a bit different from our r1cs solution, but it is actually encoding the same information.

Here are the differences in Circom’s implementation:

  • columns with zero value are not printed

  • Circom writes Cw = AwBw as AwBw - Cw = 0

What about the 21888242871839275222246405745257275088548364400416034343698204186575808495616 which is really -1?


Circom's solution is

A = [0, 0, -1, 0]
B = [0, 0, 0, 1]
C = [0, -1, 0, 0]

Even though the negative ones might be unexpected, with the witness vector [1 out x y], this is actually consistent with the form Aw * Bw - Cw = 0. (We will see in a second that circom did indeed use this column assignment).


You can plug in values for x, y, and out and see that the equation Aw * Bw - Cw = 0 holds.


Let’s see Circom’s variable to column assignment. Let’s recompile our circuit with a wasm solver:

circom multiply2.circom --r1cs --wasm --sym

cd multiply2_js

We create the input.json file

echo '{"x": "11", "y": "9"}' > input.json

And compute the witness

node generate_witness.js multiply2.wasm input.json witness.wtns

snarkjs wtns export json witness.wtns witness.json

cat witness.json

We get the following result:

["1","99","11","9"]

It is clear that circom is using the same column layout we have been using: [1, out, x, y], as x was set to 11 and y to 9 in our input.json


If we use Circom’s generated witness (replacing the massive number with -1 for readability), then we see Circom's r1cs is correct:

Aw * Bw - Cw = (-1·11)(1·9) - (-1·99) = -99 + 99 = 0

A has a coefficient of -1 for x, B has a coefficient of +1 for y, and C has -1 for out. In modular form, this is identical to what the terminal printed above.
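We can replay Circom's version of the constraint in Python (a sketch; the -1 entries stand in for p - 1, and the witness is the [1, out, x, y] vector computed above):

```python
import numpy as np

# circom's encoding of out = x * y, as Aw * Bw - Cw = 0
A = np.array([[0, 0, -1, 0]])
B = np.array([[0, 0, 0, 1]])
C = np.array([[0, -1, 0, 0]])

# circom's witness: [1, out, x, y] = [1, 99, 11, 9]
w = np.array([1, 99, 11, 9])

# (-x)(y) - (-out) = out - xy, which is zero for a valid witness
residual = int((A.dot(w) * B.dot(w) - C.dot(w))[0])
assert residual == 0, "constraint not satisfied"
```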

Checking the rest of our work

By way of review, the formulas we explored were

  1. out = x * y
  2. out = x * y * z * u
  3. out = x * y + 2
  4. out = 2x² + y
  5. out = 3x²y + 5xy - x - 2y + 3

We just did (1) in the section above. For this section, we will illustrate the principle that the number of non-constant multiplications equals the number of constraints.


The circuit for (2) is

pragma circom 2.0.8;

template Multiply4() {
    signal input x;
    signal input y;
    signal input z;
    signal input u;
    
    signal v1;
    signal v2;
    
    signal output out;
    
    v1 <== x * y;
    v2 <== z * u;
    
    out <== v1 * v2;
}

component main = Multiply4();

With everything we’ve discussed so far, the circom output and the annotations should be self-explanatory.


With that in mind, our other formulas should have constraints as follows:


  • out = x * y + 2 → 1 constraint
  • out = 2x² + y → 1 constraint
  • out = 3x²y + 5xy - x - 2y + 3 → 3 constraints

It is an exercise for the reader to write the circom circuits and verify the above.


You do not need a witness to calculate the R1CS

Note that in the Circom code we never supplied the witness before calculating the r1cs. We supplied the witness earlier to make the example less abstract and to make it easy to check our work, but it isn’t necessary. This is important, because if a verifier needed a witness to construct an r1cs, then the prover would have to give the hidden solution away!


When we say “witness” we mean a vector with populated values. The verifier knows the “structure” of the witness, i.e. the variable to column assignments, but doesn’t know the values.


The prover doesn’t give out the witness, or even a homomorphically encrypted version, as that could be quite large. We need extra machinery to make the witness succinct, which is outside the scope of this article.


What if we wanted to compute r1cs inefficiently?

A valid transformation from a polynomial to an R1CS is not unique. You can encode the same problem with more constraints, which is less efficient. Here is an example.


In some R1CS tutorials, the constraints for a formula like

out = x² + y

is transformed to

v₁ = x * x
out = v₁ + y

As we’ve noted, this is not efficient. However, you can create a valid R1CS for this using the methodology in this article. We simply add a dummy multiplication like so:

v₁ = x * x
out = (v₁ + y) * 1

Our witness vector is of the form [1, out, x, y, v1] and A, B, and C are defined as follows:

A = [0, 0, 1, 0, 0]
    [0, 0, 0, 1, 1]

B = [0, 0, 1, 0, 0]
    [1, 0, 0, 0, 0]

C = [0, 0, 0, 0, 1]
    [0, 1, 0, 0, 0]

The second row of A accomplishes the addition, and the multiply by one is accomplished by using the first element of the second row of B.


This is perfectly valid, but the solution has one more row and one more column than it needs.
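Checking the inefficient version works the same way as before (a numpy sketch, using the witness layout [1, out, x, y, v₁] and the row assignments described above):

```python
import numpy as np
import random

# row 1: v1 = x * x
# row 2: out = (v1 + y) * 1, the dummy multiplication by the constant 1
A = np.array([[0, 0, 1, 0, 0],
              [0, 0, 0, 1, 1]])
B = np.array([[0, 0, 1, 0, 0],
              [1, 0, 0, 0, 0]])
C = np.array([[0, 0, 0, 0, 1],
              [0, 1, 0, 0, 0]])

x = random.randint(1, 1000)
y = random.randint(1, 1000)
v1 = x * x
out = v1 + y

w = np.array([1, out, x, y, v1])
result = C.dot(w) == np.multiply(A.dot(w), B.dot(w))
assert result.all(), "result contains an inequality"
```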


What if there are no multiplications?

What if we want to encode the following circuit?

out = x + y

This is pretty useless in practice, but for the sake of completeness, it can be solved with a dummy multiplication by one.

out = (x + y)*1

With our typical witness vector layout of [1, out, x, y], we have the following matrices:

A = [0 0 1 1]
B = [1 0 0 0]
C = [0 1 0 0]
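The usual numpy check confirms these matrices encode out = x + y:

```python
import numpy as np
import random

A = np.array([[0, 0, 1, 1]])  # x + y
B = np.array([[1, 0, 0, 0]])  # the constant 1
C = np.array([[0, 1, 0, 0]])  # out

x = random.randint(1, 1000)
y = random.randint(1, 1000)
out = x + y

w = np.array([1, out, x, y])
result = C.dot(w) == np.multiply(A.dot(w), B.dot(w))
assert result.all(), "result contains an inequality"
```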

Circom will not allow you to write circuits with no multiplications in them, but other zero knowledge programming languages will silently add a constraint like the one above to enable a transformation.


Rank One Constraint Systems are for convenience

The original papers for Pinocchio and Groth16 don’t contain the term rank one constraint system. R1CS is handy from an implementation perspective, but from a pure math perspective, it simply labels and groups the coefficients of different variables explicitly. So when you read academic papers on the subject, the term is usually missing because it is an implementation detail of a more abstract concept.


Handy Resources

This web tool calculates R1CS for a set of constraints (but it only works with one input and output variable).




Learn more with RareSkills

This blog post is taken from learning materials in our zero knowledge course.
