This is a personal attempt to pierce through the hard walls of ZK constructions.
I don't want to skip any detail that would obscure my own understanding.
Hopefully this is enough to have the same effect on the reader.
Disclaimer
Elegance and succinctness are non-goals. If there is a slower argument, mechanical even, I will choose it.
R1CS / QAP equivalence
Let's start with the following R1CS
$$\mathbb{A}Z \circ \mathbb{B}Z = \mathbb{C}Z$$
for fixed matrices $\mathbb{A}$, $\mathbb{B}$ and $\mathbb{C}$, each with $n$ rows and $m$ columns, and the vector $Z$ with $m$ entries.
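To make the shape of the system concrete, here is a minimal sketch (my own toy example, not taken from the text) that checks such a system over a small prime field. The rows encode the illustrative constraints $x \cdot x = v$ and $v \cdot x = w$ with $Z = (1, x, v, w)$:

```python
# A minimal sketch (illustrative values only): checking an R1CS instance over a
# small prime field. The rows encode the toy constraints x*x = v and v*x = w.
P = 97  # a small prime standing in for the real field modulus

A = [[0, 1, 0, 0], [0, 0, 1, 0]]
B = [[0, 1, 0, 0], [0, 1, 0, 0]]
C = [[0, 0, 1, 0], [0, 0, 0, 1]]
Z = [1, 3, 9, 27]  # (1, x, v, w) with x = 3

def dot(row, z):
    return sum(r * zj for r, zj in zip(row, z)) % P

def is_r1cs_solution(A, B, C, Z):
    # One check per row i: (A_i . Z) * (B_i . Z) == C_i . Z   (mod P)
    return all(dot(a, Z) * dot(b, Z) % P == dot(c, Z) for a, b, c in zip(A, B, C))

print(is_r1cs_solution(A, B, C, Z))  # True
```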
For each column $j$ of $\mathbb{A}$, $\mathbb{B}$ and $\mathbb{C}$, we construct the respective polynomial functions
$p_\mathbb{A_\mathcal{j}}$, $p_\mathbb{B_\mathcal{j}}$ and $p_\mathbb{C_\mathcal{j}}$ such that:
$$p_\mathbb{A_\mathcal{j}}(i) = \mathbb{A}_{ij}, \hspace{0.5cm} p_\mathbb{B_\mathcal{j}}(i) = \mathbb{B}_{ij}, \hspace{0.5cm} p_\mathbb{C_\mathcal{j}}(i) = \mathbb{C}_{ij} \hspace{1cm} \text{for all } 0 \le i < n$$
With those, define
$$t(x) = \left(\sum_{j = 0}^{m - 1} Z_j p_\mathbb{A_\mathcal{j}}(x)\right)\left(\sum_{j = 0}^{m - 1} Z_j p_\mathbb{B_\mathcal{j}}(x)\right) - \sum_{j = 0}^{m - 1} Z_j p_\mathbb{C_\mathcal{j}}(x) \hspace{1cm} \text{and} \hspace{1cm} d(x) = \prod_{i = 0}^{n - 1} (x - i)$$
$Z$ is a solution for the original R1CS iff $d$ divides $t$.
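A sketch of this construction, under the assumption that each $p_\mathbb{A_\mathcal{j}}$ is obtained by Lagrange interpolation of column $j$ over the points $0, \dots, n - 1$ (the matrices and $Z$ are the same illustrative toy values as above):

```python
# A sketch (my own illustration): build the column polynomials, t and d, and
# check whether d divides t. Polynomials are coefficient lists over GF(P),
# lowest degree first.
P = 97

def padd(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [(a + b) % P for a, b in zip(f, g)]

def pmul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def pscale(f, c):
    return [a * c % P for a in f]

def lagrange(points):
    # Interpolate {(x_k, y_k)}: sum over k of y_k * prod_{l != k} (x - x_l)/(x_k - x_l)
    acc = [0]
    for xk, yk in points:
        basis, denom = [1], 1
        for xl, _ in points:
            if xl != xk:
                basis = pmul(basis, [(-xl) % P, 1])
                denom = denom * (xk - xl) % P
        acc = padd(acc, pscale(basis, yk * pow(denom, -1, P) % P))
    return acc

def prem(f, g):
    # Remainder of f divided by g over GF(P)
    f = f[:]
    while len(f) >= len(g):
        if f[-1] == 0:
            f.pop()
            continue
        c = f[-1] * pow(g[-1], -1, P) % P
        shift = len(f) - len(g)
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gi) % P
        f.pop()
    return f

def combine(cols, Z):
    # sum_j Z_j * p_j as a single polynomial
    acc = [0]
    for p, z in zip(cols, Z):
        acc = padd(acc, pscale(p, z))
    return acc

A = [[0, 1, 0, 0], [0, 0, 1, 0]]
B = [[0, 1, 0, 0], [0, 1, 0, 0]]
C = [[0, 0, 1, 0], [0, 0, 0, 1]]
Z = [1, 3, 9, 27]
n, m = len(A), len(A[0])

def column_polys(M):
    # p_{M_j}: interpolate column j of M at the points 0, ..., n-1
    return [lagrange([(i, M[i][j]) for i in range(n)]) for j in range(m)]

pA, pB, pC = column_polys(A), column_polys(B), column_polys(C)

# t = (sum_j Z_j p_{A_j})(sum_j Z_j p_{B_j}) - (sum_j Z_j p_{C_j})
t = padd(pmul(combine(pA, Z), combine(pB, Z)), pscale(combine(pC, Z), P - 1))

# d(x) = (x - 0)(x - 1)...(x - (n - 1))
d = [1]
for i in range(n):
    d = pmul(d, [(-i) % P, 1])

print(all(c == 0 for c in prem(t, d)))  # True iff d divides t
```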
Proof
$\Longrightarrow$
Since $Z$ is a solution to the R1CS, for every row $i$ of $\mathbb{A}$, $\mathbb{B}$ and $\mathbb{C}$,
let's call them $L_\mathbb{A_\mathcal{i}}$, $L_\mathbb{B_\mathcal{i}}$ and $L_\mathbb{C_\mathcal{i}}$ respectively,
we have that
$$(L_\mathbb{A_\mathcal{i}} \cdot Z)(L_\mathbb{B_\mathcal{i}} \cdot Z) = L_\mathbb{C_\mathcal{i}} \cdot Z \hspace{1cm}(1)$$
Now let's take a closer look at the dot product $L_\mathbb{A_\mathcal{i}} \cdot Z$.
It's the sum of the pairwise products between the elements of $\mathbb{A}$'s $i$-th row and the elements of $Z$.
And that's precisely $\Sigma_{j = 0}^{m - 1} Z_j p_\mathbb{A_\mathcal{j}}(i)$ by construction of every
$p_\mathbb{A_\mathcal{j}}$ for $0 \le j < m$.
This is true because we've assumed $i$ to be in the range $0 \le i < n$ from the beginning, which is exactly where
each $p_\mathbb{A_\mathcal{j}}$ was constructed to agree with the corresponding entry of $\mathbb{A}$, i.e. $p_\mathbb{A_\mathcal{j}}(i) = \mathbb{A}_{ij}$.
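Written out, the chain of equalities is:
$$L_\mathbb{A_\mathcal{i}} \cdot Z = \sum_{j = 0}^{m - 1} \mathbb{A}_{ij} Z_j = \sum_{j = 0}^{m - 1} Z_j p_\mathbb{A_\mathcal{j}}(i)$$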
Analogously, the same can be said about $L_\mathbb{B_\mathcal{i}} \cdot Z$ and $L_\mathbb{C_\mathcal{i}} \cdot Z$.
So we substitute in $(1)$ and move everything to one side:
$$\left(\sum_{j = 0}^{m - 1} Z_j p_\mathbb{A_\mathcal{j}}(i)\right)\left(\sum_{j = 0}^{m - 1} Z_j p_\mathbb{B_\mathcal{j}}(i)\right) - \sum_{j = 0}^{m - 1} Z_j p_\mathbb{C_\mathcal{j}}(i) = 0$$
The LHS is exactly $t(i)$. So $t(i)$ equals $0$.
Furthermore, $i$ has been kept as a generic row index for $\mathbb{A}$, $\mathbb{B}$ and $\mathbb{C}$.
Thus $t(i) = 0$ for all $0 \le i < n$: $t$ vanishes at every root of $d(x) = \prod_{i = 0}^{n - 1}(x - i)$, and since those roots are distinct, every factor $(x - i)$ of $d$ divides $t$, so $d$ divides $t$.
$\Longleftarrow$
If $d$ divides $t$, then there exists a polynomial function $h$ such that $t(x) = d(x)h(x)$ for every $x$.
Well, $d(i)$ was constructed to evaluate to $0$ at every $0 \le i < n$.
Thus $t(i)$ also evaluates to $0$ at every $0 \le i < n$.
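In symbols, for each such $i$:
$$t(i) = d(i)h(i) = 0 \cdot h(i) = 0$$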
Now we basically do the reverse path of what we've done before: for each $0 \le i < n$, $t(i) = 0$ expands to
$$\left(\sum_{j = 0}^{m - 1} Z_j p_\mathbb{A_\mathcal{j}}(i)\right)\left(\sum_{j = 0}^{m - 1} Z_j p_\mathbb{B_\mathcal{j}}(i)\right) = \sum_{j = 0}^{m - 1} Z_j p_\mathbb{C_\mathcal{j}}(i)$$
and, since each $p_\mathbb{A_\mathcal{j}}(i)$, $p_\mathbb{B_\mathcal{j}}(i)$ and $p_\mathbb{C_\mathcal{j}}(i)$ equals the corresponding matrix entry, this is precisely
$$(L_\mathbb{A_\mathcal{i}} \cdot Z)(L_\mathbb{B_\mathcal{i}} \cdot Z) = L_\mathbb{C_\mathcal{i}} \cdot Z$$
which is row $i$ of the original R1CS. So $Z$ is a solution.
Relaxed R1CS
The relaxed R1CS is the system
$$\mathbb{A}Z \circ \mathbb{B}Z = u\mathbb{C}Z + E$$
for fixed matrices $\mathbb{A}$, $\mathbb{B}$ and $\mathbb{C}$.
An instance for the relaxed R1CS includes the vectors $Z$ and $E$ as well as the scalar $u$.
An instance is a solution iff the system of equations is satisfied by it.
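To make the relaxed form concrete, here is a sketch of the row-by-row check (again my own illustration, reusing the same toy matrices as before):

```python
# A minimal sketch (illustrative): checking a relaxed R1CS instance (Z, E, u)
# against AZ ∘ BZ = u·CZ + E over a small prime field.
P = 97

A = [[0, 1, 0, 0], [0, 0, 1, 0]]
B = [[0, 1, 0, 0], [0, 1, 0, 0]]
C = [[0, 0, 1, 0], [0, 0, 0, 1]]

def dot(row, z):
    return sum(r * zj for r, zj in zip(row, z)) % P

def is_relaxed_solution(A, B, C, Z, E, u):
    # Row-by-row: (A_i . Z) * (B_i . Z) == u * (C_i . Z) + E_i   (mod P)
    return all(
        dot(a, Z) * dot(b, Z) % P == (u * dot(c, Z) + e) % P
        for a, b, c, e in zip(A, B, C, E)
    )

# Any exact R1CS solution is a relaxed solution with u = 1 and E = 0.
print(is_relaxed_solution(A, B, C, Z=[1, 3, 9, 27], E=[0, 0], u=1))  # True
```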
Claim
Suppose we have $(Z_1, E_1, u_1)$, $(Z_2, E_2, u_2)$ and $r$ such that
Since $r_1^2 - r_2^2 \ne 0$, we can conclude that $\mathbb{A}Z_2 \circ \mathbb{B}Z_2 = u_2\mathbb{C}Z_2 + E_2$.
Substitute that back into $(4)$ to conclude that $\mathbb{A}Z_1 \circ \mathbb{B}Z_1 = u_1\mathbb{C}Z_1 + E_1$.
$\Longrightarrow$
Multiply the second relaxed R1CS by $r$ and add it to the first to produce the exact linear combination from $(4)$.
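As an illustration of this direction, here is a sketch of one standard way to fold two relaxed instances: the Nova-style combination $Z = Z_1 + rZ_2$, $u = u_1 + ru_2$, $E = E_1 + rT + r^2E_2$ with an explicit cross term $T$. This is an assumption of mine and may differ in detail from the exact combination $(4)$ above:

```python
# A hedged sketch (my assumption, Nova-style): fold (Z1, E1, u1) and (Z2, E2, u2)
# with challenge r into Z = Z1 + r*Z2, u = u1 + r*u2, E = E1 + r*T + r^2*E2,
# where T = AZ1∘BZ2 + AZ2∘BZ1 - u1*CZ2 - u2*CZ1 is the cross term.
P = 97

def matvec(M, z):
    return [sum(r * zj for r, zj in zip(row, z)) % P for row in M]

def hadamard(x, y):
    return [a * b % P for a, b in zip(x, y)]

def vadd(*vs):
    return [sum(t) % P for t in zip(*vs)]

def vscale(v, c):
    return [a * c % P for a in v]

def fold(A, B, C, inst1, inst2, r):
    (Z1, E1, u1), (Z2, E2, u2) = inst1, inst2
    # Cross term chosen so that the folded instance satisfies AZ∘BZ = u·CZ + E
    # whenever both input instances do.
    T = vadd(hadamard(matvec(A, Z1), matvec(B, Z2)),
             hadamard(matvec(A, Z2), matvec(B, Z1)),
             vscale(matvec(C, Z2), -u1 % P),
             vscale(matvec(C, Z1), -u2 % P))
    Z = vadd(Z1, vscale(Z2, r))
    u = (u1 + r * u2) % P
    E = vadd(E1, vscale(T, r), vscale(E2, r * r % P))
    return Z, E, u

# Reuse the toy matrices above and fold two copies of the exact solution
# (an exact solution is a relaxed solution with u = 1, E = 0).
A = [[0, 1, 0, 0], [0, 0, 1, 0]]
B = [[0, 1, 0, 0], [0, 1, 0, 0]]
C = [[0, 0, 1, 0], [0, 0, 0, 1]]
sol = ([1, 3, 9, 27], [0, 0], 1)

Z, E, u = fold(A, B, C, sol, sol, r=5)
lhs = hadamard(matvec(A, Z), matvec(B, Z))
rhs = vadd(vscale(matvec(C, Z), u), E)
print(lhs == rhs)  # True: the folded instance is again a relaxed solution
```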
Conclusion
We've seen how to construct an instance for a relaxed R1CS that encodes a pair of instances such that
checking whether the latter is a solution is enough to be convinced that the instances in the initial pair
are solutions as well. This construction guarantees that if the pair of instances are solutions, then
the resulting instance is also a solution.