Zk circuits form a set of constraints that, if satisfied, prove a computation was carried out correctly. Zk circuits are sometimes called arithmetic circuits because the “gates” in the circuit are addition and multiplication over a finite field. We are only allowed to use these two operators because a __finite field__ only has these two operations. This makes translating the circuit to polynomials and __elliptic curve points__ straightforward. Elliptic curve points support addition and a limited version of "multiplication" via __bilinear pairings__, so we must restrict ourselves to those operations.

The primary takeaway from this article is that zero knowledge proof circuits are not a programming paradigm for modeling a computation, but rather one for validating a computation. Writing zero knowledge circuits is more like writing unit tests than writing the computation we are proving.

All mathematics in this article is done over a finite field; we omit writing (mod p) everywhere to keep the math cleaner.

Rather than trying to motivate zero knowledge circuits in the abstract, we will provide several examples so the reader can pattern match and develop an intuition for them.

## Example: Sudoku

When verifying that a Sudoku solution is valid, the circuit doesn’t solve the Sudoku puzzle all over again; instead, it verifies that each row, column, and 3×3 subgrid contains every digit from 1 to 9 exactly once.

Solving for a solution is usually a very different algorithm from verifying that the solution is correct. To solve a Sudoku, we usually employ some kind of depth-first search. When *verifying* a Sudoku solution, we just check that the proposed solution follows the rules.
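As a plain-Python sketch (ordinary code, not an arithmetized circuit), the verifier's rule check might look like the following. The function name and the 9×9 list-of-lists representation are assumptions for illustration:

```python
# Hypothetical sketch: verify a claimed Sudoku solution by checking the
# rules, never by searching for a solution ourselves.
def is_valid_sudoku(grid):
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]        # each row must contain 1..9
    cols = [set(col) for col in zip(*grid)]  # each column must contain 1..9
    boxes = [                                # each 3x3 subgrid must contain 1..9
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
        for br in range(0, 9, 3)
        for bc in range(0, 9, 3)
    ]
    return all(group == digits for group in rows + cols + boxes)
```

Note that nothing here resembles a depth-first search; the check is a single pass over the grid.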

For some operations, validating that the solution is correct is the same as computing it – for example, proving that 5 * 7 + 3 = 38. To verify the claim that 5 * 7 + 3 = 38, you need to carry out the computation.

But most of the time, the verification algorithm doesn’t resemble the solving algorithm.

There is no set way to translate a solving algorithm to a verification algorithm; it generally requires some kind of creativity on the programmer’s part.

## Example: Proving b is the inverse of a

In Python, we compute the modular inverse of a finite field element using b = pow(a, -1, n), where n is the size of the field. Here’s the catch: **we do not compute the inverse ourselves and compare the answers.** If b really is the inverse of a, then a * b == 1 must be true. So our circuit is simply a * b == 1. We **do not** implement the Extended Euclidean Algorithm as a circuit.

Claim: b is the modular inverse of a

Proof: a * b == 1

**The point of a ZK Circuit is to prove a computation is correct, not to repeat the computation in circuit form.**

To make this a little more concrete, here are the algorithms the prover and verifier would run:

```
field_size = 29  # some prime number

## Prover
def compute_inverse(a):
    return pow(a, -1, field_size)

a = 22
b = compute_inverse(a)

## Verifier
assert (a * b) % field_size == 1
```

Again, the verifier is not redoing the prover’s work but checking the answer.

## Example: Proving x is the root of a polynomial

A confusing aspect of circuits is that the most common examples actually carry out the same computation we are verifying.

For example, if we want to prove x = 15 is a root of 2x² - 35x + 75, then our circuit looks exactly like the computation:

`assert 0 == 2(15)² - 35(15) + 75`

**This is not the norm**. **Most circuits do not look like the computation they are verifying**, as the first example, and the rest of the examples in this article, show. If the claimed computation only involves straightforward multiplication and addition, then the circuit might be a one-to-one translation of the original source code. Most interesting computations are not merely addition and multiplication, however.

Even so, we should be aware that there is implied work the prover did that the verifier isn’t doing. The prover presumably had to compute the root of the polynomial. The verifier isn’t computing the root, just checking if the claimed root is valid.

It is a bit misleading that zk circuits are called “circuits,” because the name implies they execute a computation the way a computer chip does. This wrongly suggests that the verifier is repeating the prover’s algorithm in a different programming language.

It would be better to call zk circuits “systems of non-linear constraints,” because that is what they are, but “zk circuits” is the established terminology, so we are stuck with it.

### Side-note about solving systems of non-linear constraints

The fact that zk circuits are non-linear in general is quite critical. If we could model an arbitrary computation as a **linear** system of constraints, we would have a straightforward proof that P = NP, since linear systems of equations are easy to solve. Finding a solution would then be no more expensive than verifying one.

We could also easily invert hash functions simply by modeling them as a circuit with a known output, then solving for the inputs.

So no, modeling a circuit in z3 or cvc5 will not give you a shortcut to breaking hard problems.

But this is key to why zk proofs are sound (you cannot come up with proofs for false statements). The only reliable way to find a solution for the system of non-linear equations is to actually execute the algorithm the system of non-linear equations is validating.

## Example: Proving a binary transformation was valid

Let’s get back to realistic examples.

Suppose a is a finite field element that falls in the range [0, 15]. I claim that b1, b2, b3, b4 is the binary representation of it. For example, if a = 11, then (b1, b2, b3, b4) = (1, 0, 1, 1).

Everything is a field element in arithmetic circuits, so first we need to make sure that b1…b4 only have the values zero or one. This is easy to constrain:

```
b1 * (1 - b1) == 0
b2 * (1 - b2) == 0
b3 * (1 - b3) == 0
b4 * (1 - b4) == 0
```

Note that if b1 has a value other than 0 or 1, the system of equations will not be satisfiable.

Next, we have to prove that the binary representation is true. To do this, we simply do the math that converts a binary number to a regular number:

`(8 * b1) + (4 * b2) + (2 * b3) + (1 * b4) == 11`

**We do not re-compute the binary transformation, we merely verify the claimed binary transformation is true.**

Claim: b1, b2, b3, b4 encode the binary representation of **a**

Proof: bi * (1 - bi) == 0 for i in [1,2,3,4] and (8 * b1) + (4 * b2) + (2 * b3) + (1 * b4) == a
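The claim/proof pair above can be sketched as a plain-Python check. The toy modulus p = 29 and the function name are assumptions for illustration:

```python
# Sketch: verify a claimed 4-bit decomposition of a, using only the two
# constraint types from the text: booleanity and the weighted sum.
p = 29  # toy field modulus, large enough to hold values in [0, 15]

def check_binary_claim(a, bits):
    # b * (1 - b) == 0 forces each bit to be 0 or 1
    booleanity = all(b * (1 - b) % p == 0 for b in bits)
    # (8 * b1) + (4 * b2) + (2 * b3) + (1 * b4) must reconstruct a
    recombination = sum(w * b for w, b in zip([8, 4, 2, 1], bits)) % p == a % p
    return booleanity and recombination
```

Note that the verifier never computes the binary decomposition of a; it only checks the prover's claimed bits.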

## Example: Proving a number falls into a certain range

Using the above trick, we can prove a number falls into the interval [0, 2^n - 1], where n is the number of bits. If no assignment of the n bits can sum to the number, then the number must lie outside the range.

Claim: a is in the range [0, 255]

Proof: bi * (1 - bi) == 0 for i in [1…8] and (128 * b1) + (64 * b2) + … + (1 * b8) == a

## Example: Proving a > b

Comparing field elements is rather tricky because the traditional comparison operators are not consistent when we underflow or overflow the field. More generally, it is not possible to compare field elements using only addition and multiplication.

For example, (1 - 2) wraps around to the largest element in the field, which is larger than 1. Accepting that 1 < (1 - 2) would of course lead to inconsistencies, so we cannot just do a direct comparison.

We will get into the details of how to accomplish this in our circom tutorial, but the key idea is that binary numbers, unlike field elements, can easily be compared. By imposing the range check above, we avoid the overflow and underflow issues.
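One common approach can be sketched in plain Python (this is an assumption about the technique, not circom's actual implementation): if a and b are both range-checked to n bits, then a + 2^n - b - 1 fits in n + 1 bits, and its top bit is 1 exactly when a > b.

```python
# Sketch: compare two range-checked values via bit decomposition.
def greater_than(a, b, n=8):
    assert 0 <= a < 2**n and 0 <= b < 2**n  # range checks from the previous section
    shifted = a + 2**n - b - 1  # lies in [0, 2**(n+1) - 2], so it fits in n + 1 bits
    # in a real circuit the prover supplies the bit decomposition of `shifted`
    # and the circuit constrains it; here we just read the top bit directly
    return ((shifted >> n) & 1) == 1
```

Because both inputs are range-checked first, the subtraction can never silently wrap around the field.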

## Example: Proving x is the maximum element in a list of elements

In normal programming, if I claim x is the maximum in a list, you would simply compute the maximum yourself and compare answers.

In circuit programming, you work backwards. You compare each element to the **claimed** maximum and check for violations.

To do this safely, we need to impose a range check on all the elements and use the comparison operator from the section above.
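A sketch of the backwards approach; `greater_or_equal` is a stand-in for the bit-decomposition comparator described above, and the names are assumptions for illustration:

```python
# Sketch: verify a claimed maximum without computing the maximum ourselves.
def verify_claimed_max(claimed_max, elements, n=8):
    def greater_or_equal(a, b):
        assert 0 <= a < 2**n and 0 <= b < 2**n  # range checks on all values
        return ((a + 2**n - b) >> n) & 1 == 1
    # the claimed max must dominate every element...
    dominates = all(greater_or_equal(claimed_max, x) for x in elements)
    # ...and must itself appear in the list
    present = claimed_max in elements
    return dominates and present
```
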

## Example: Proving integer division was done properly

Proving finite field division was done properly is easy; we saw that in the first example. But if we want to prove a / b = c for **integer division**, this is trickier than just checking that c * b == a.

Consider the example of 7 / 2. The quotient is 3 in integer division, but 2 * 3 ≠ 7. We need to verify two claims: 3 is the quotient of 7 / 2, and 1 is the remainder. So the first entry in our proof is quotient * divisor + remainder == numerator. This is not a sufficient constraint, because a malicious prover could provide 2 * 2 + 3 == 7, thereby “proving” that 7 / 2 = 2 with remainder 3. Additional constraints are needed to ensure the division was done properly. The implementation details are left as an exercise for the reader.
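To see the problem concretely, here is a sketch showing that the single constraint admits both the honest witness and the malicious one (without giving away the exercise):

```python
# Sketch: the Euclidean identity alone does not pin down the quotient.
def euclid_constraint(numerator, divisor, quotient, remainder):
    return quotient * divisor + remainder == numerator

assert euclid_constraint(7, 2, 3, 1)  # honest: 7 // 2 == 3, remainder 1
assert euclid_constraint(7, 2, 2, 3)  # malicious witness satisfies it too!
```
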

## Example: Proving a list was sorted

Suppose I give you two arrays, A and B, and I claim that B is the sorted version of A. In this section, we will learn about **auxiliary computations**. That is, we ask the prover to do additional work to back up their claim.

One way you can verify this is to sort A yourself, but carrying out merge sort or quick sort as a set of multiplications is rather challenging.

Let’s think about this in reverse. Suppose B is a sorted version of A. Then the following set of facts must be true:

1. A and B have the same length
2. B is itself sorted
3. A and B have the same elements

We already know how to accomplish the first two checks using the sections above, but what about the last one? Computing the mapping ourselves, especially with only addition and multiplication, would be rather challenging. Thankfully, we don’t need to compute the mapping ourselves. **We ask the prover to give us the mapping from the unsorted list to the sorted one, and then we verify the mapping is a valid one.**

For example, one way to permute a vector is to multiply it by a matrix of zeros and ones.

Such a sparse matrix can have an efficient representation, but let’s not concern ourselves with that. To verify the “transformation matrix” is valid, we need to ensure each element is zero or one, and each row and column contains exactly one “1” with the rest being zeros. If these constraints are not in place, then the prover could prove that the “sorted” list contains elements that are not in the original array.
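A plain-Python sketch of these matrix constraints (the names and the 3×3 size are assumptions for illustration):

```python
# Sketch: check that M is a valid permutation matrix, then apply it.
def is_permutation_matrix(M):
    # each entry must be 0 or 1 (x * (1 - x) == 0 in circuit form)
    binary = all(x * (1 - x) == 0 for row in M for x in row)
    # each row and each column must contain exactly one 1
    rows_ok = all(sum(row) == 1 for row in M)
    cols_ok = all(sum(col) == 1 for col in zip(*M))
    return binary and rows_ok and cols_ok

def apply_matrix(M, v):
    # matrix-vector product: only additions and multiplications
    return [sum(m * x for m, x in zip(row, v)) for row in M]
```

With these constraints in place, requiring apply_matrix(M, A) == B forces B to be a rearrangement of A.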

How a malicious prover could accomplish this, and what the proper constraints are, is left as an exercise for the reader.

## Example: Proving a bit shift

If we constrain the matrix above to have all the “1” elements sit on a diagonal, then we can prove the permutation is a bit shift.

For example, the identity matrix will not shift the elements, but a matrix with the 1s placed one position below the main diagonal will shift the elements by one.

If we want to shift the bits in a circular manner, we additionally place a 1 in the corner of the matrix so that the element which would otherwise be shifted out wraps around.
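Concretely, the shift matrices might look like the following 4×4 examples (the size and orientation here are assumptions for illustration):

```python
# Sketch: a shift-by-one matrix (top row all zeros, 1s just below the main
# diagonal) and its circular variant (a 1 added in the top-right corner).
shift_by_one = [
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
]
circular_shift = [
    [0, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
]

def apply_matrix(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]
```

Applying shift_by_one to [a, b, c, d] yields [0, a, b, c], while the circular variant yields [d, a, b, c].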

## Example: List contains no duplicates

The common way to check that a list contains no duplicates is to convert the list into a set and check whether the size of the set equals the length of the list. However, encoding a data structure like this in a circuit is not efficient. Alternatively, we could sort the list, then loop through it and check whether any neighboring elements are equal, but again, we don’t want to do sorting inside a circuit.

Instead, we can ask the prover to sort the list, verify it is sorted using the methodology from earlier, then check each element of the array to see if the entry next to it is equal.
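A sketch of the check; Python's built-in sorted stands in for the permutation-matrix verification from the sorting section (an assumption for illustration):

```python
# Sketch: verify "no duplicates" using a prover-supplied sorted copy.
def no_duplicates_check(original, sorted_claim):
    # stand-in for the permutation check: same multiset of elements
    same_elements = sorted(original) == sorted_claim
    # the claimed copy must actually be sorted
    is_sorted = all(a <= b for a, b in zip(sorted_claim, sorted_claim[1:]))
    # no two adjacent elements may be equal
    no_adjacent = all(a != b for a, b in zip(sorted_claim, sorted_claim[1:]))
    return same_elements and is_sorted and no_adjacent
```
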

## How do hash functions get proved?

Zk proofs appear to have this magical property of proving you know the preimage of a hash without revealing it. At this point, hopefully you have an intuition for how this is possible.

A traditional cryptographic hash is largely a combination of bitshifts and bit-wise XOR operations.

If each step in the hash function can be proven to have been executed correctly, then the entire hash function can be proven to have been executed correctly, **without the verifier executing the hash function**.

Proving each step of something like sha256 as a circuit is extremely non-trivial: it requires tens of thousands of constraints to prove every stage of this hash function.

## Zk Circuit Design Conclusion

Designing circuits that correctly define a computation is a big topic, so we will not dive deep now.

At this point, we just want to give you an intuition for what circuits are and their motivation. Circuits are constraints to prove a calculation is correct based on assumptions about what the algorithm does. Circuits do not generally carry out the claimed computation except in very simple cases.

Designing a zk circuit requires a significant amount of “thinking backwards.” It generally works by the principle:

“If a claimed output of a computation is true, then the output must satisfy certain requirements. If the requirements are hard to model with only addition and multiplication, we ask the prover to do auxiliary work so we can model the requirements more easily.”

This article should also give you a clearer idea of what a zero knowledge proof actually is. At least in the context of ZK-SNARKs, **a zero knowledge proof is a solution to a system of non-linear equations that is satisfied when a computation is carried out correctly.** Of course, the solution is obscured using elliptic curves, but under the hood, it is a solution to a system of non-linear equations.

## Learn more with RareSkills

If you are looking to comprehensively __learn zero knowledge proofs__, see our zk bootcamp.
