From 6e9c3b9efb400988fccb91a52fc4ab81bce7776a Mon Sep 17 00:00:00 2001 From: kashepavadan Date: Thu, 4 Jul 2024 13:25:03 -0400 Subject: [PATCH 1/6] ref and typo fixes in chapters 3-5 --- chapters/algebra-moonmath.tex | 2 +- chapters/arithmetics-moonmath.tex | 2 +- chapters/elliptic-curves-moonmath.tex | 8 ++++---- moonmath.bib | 5 +++++ 4 files changed, 11 insertions(+), 6 deletions(-) diff --git a/chapters/algebra-moonmath.tex b/chapters/algebra-moonmath.tex index 51cc315e..4926ad22 100644 --- a/chapters/algebra-moonmath.tex +++ b/chapters/algebra-moonmath.tex @@ -130,7 +130,7 @@ \subsection{The exponential map} \Require $g$ group generator of order $n$ \Require $x \in \Z_n$ \Procedure{Exponentiation}{$g,x$} -\State Let $(b_0,\ldots,b_k)$ be a binary representation of $x$ \Comment{see example XXX} +\State Let $(b_0,\ldots,b_k)$ be a binary representation of $x$ \Comment{see \ref{def:binary_representation_integer}} \State $h \gets g$ \State $y \gets e_{\G}$ \For{$0\leq j < k$} diff --git a/chapters/arithmetics-moonmath.tex b/chapters/arithmetics-moonmath.tex index 6f80c2cb..49852d4b 100644 --- a/chapters/arithmetics-moonmath.tex +++ b/chapters/arithmetics-moonmath.tex @@ -1033,7 +1033,7 @@ \subsection{\concept{Euclidean division} with polynomials} % https://math.stackexchange.com/questions/2140378/division-algorithm-for-polynomials-in-rx-where-r-is-a-commutative-ring-with-u This algorithm works only when there is a notion of division by the leading coefficient of $B$. It can be generalized, but we will only need this somewhat simpler method in what follows. -\begin{example}[Polynomial Long Division] To give an example of how the previous algorithm works, let us divide the integer polynomial $A(x)=x^5+2x^3-9\in \Z[x]$ by the integer polynomial $B(x)=x^2+4x-1\in\Z[x]$. Since $B$ is not the zero polynomial, and the leading coefficient of $B$ is $1$, which is invertible as an integer, we can apply algorithm \ref{alg_polynom_euclid_alg}. Our goal is to find solutions to equation XXX\sme{add reference}, that is, we need to find the quotient polynomial $Q\in\Z[x]$ and the remainder polynomial $P \in \Z[x]$ such that $x^5+2x^3-9 = Q(x)\cdot (x^2+4x-1) + P(x)$. Using a the long division notation that is mostly used in anglophone countries, we compute as follows: +\begin{example}[Polynomial Long Division] To give an example of how the previous algorithm works, let us divide the integer polynomial $A(x)=x^5+2x^3-9\in \Z[x]$ by the integer polynomial $B(x)=x^2+4x-1\in\Z[x]$. Since $B$ is not the zero polynomial, and the leading coefficient of $B$ is $1$, which is invertible as an integer, we can apply algorithm \ref{alg_polynom_euclid_alg}. Our goal is to find solutions to equation \ref{eq_polynomial_euclidean_division_notation}\sme{check reference}, that is, we need to find the quotient polynomial $Q\in\Z[x]$ and the remainder polynomial $P \in \Z[x]$ such that $x^5+2x^3-9 = Q(x)\cdot (x^2+4x-1) + P(x)$. 
Using a long division notation that is mostly used in anglophone countries, we compute as follows: \begin{equation} \polylongdiv{X^5+2X^3-9}{X^2+4X-1} \end{equation} diff --git a/chapters/elliptic-curves-moonmath.tex b/chapters/elliptic-curves-moonmath.tex index 05baa6eb..2a4783e8 100644 --- a/chapters/elliptic-curves-moonmath.tex +++ b/chapters/elliptic-curves-moonmath.tex @@ -1392,7 +1392,7 @@ \subsection{Pairing groups} ....: L_TJJ_G1.append(P) sage: TJJ_G1 = Set(L_TJJ_G1) \end{sagecommandline} -We get $\G_1= \{\Oinf, (7,2), (8,8), (8,5), (7,11)\}$ and as expected, $\G_1$ is identical to the $5$-torsion group of the (unextended) curve over the prime field $TJJ_13$ as computed in \examplename{} \ref{eq:TJJ13-logarithmic-order}. +We get $\G_1= \{\Oinf, (7,2), (8,8), (8,5), (7,11)\}$ and as expected, $\G_1$ is identical to the $5$-torsion group of the (unextended) curve over the prime field $TJJ\_13$ as computed in \examplename{} \ref{eq:TJJ13-logarithmic-order}. In order to compute the group $\G_2$ for the tiny jubjub curve, we can use almost the same algorithm as we used for the computation of $\G_1$. Since $p=13$ we get the following: \begin{sagecommandline} @@ -1465,7 +1465,7 @@ \subsection{The Weil pairing} \end{algorithmic} \end{algorithm} -Understanding the details of how and why this algorithm works requires the concept of \term{divisors}, which is outside of the scope this book. The interested reader might look at \chaptname{} 6, \secname{} 6.8.3 in \cite{hoffstein-2008}, or at \href{https://static1.squarespace.com/static/5fdbb09f31d71c1227082339/t/5ff394720493bd28278889c6/1609798774687/PairingsForBeginners.pdf}{Craig Costello’s great tutorial on elliptic curve pairings}.\tbds{add this to references} As we can see, the algorithm is more efficient on prime numbers $r$, that have a low Hamming weight \ref{def:binary_representation_integer}. +Understanding the details of how and why this algorithm works requires the concept of \term{divisors}, which is outside of the scope this book. The interested reader might look at \chaptname{} 6, \secname{} 6.8.3 in \cite{hoffstein-2008}, or at \chaptname {} 3 in \cite{costello-pairings}. As we can see, the algorithm is more efficient on prime numbers $r$, that have a low Hamming weight \ref{def:binary_representation_integer}. We call an elliptic curve $E(\F_p)$ \term{pairing-friendly} if there is a prime factor of the groups order such that the Weil pairing is efficiently computable with respect to that prime factor. In real-world applications of pairing-friendly elliptic curves, the embedding degree is usually a small number like $2$, $4$, $6$ or $12$, and the number $r$ is the largest prime factor of the curve's order. @@ -1555,7 +1555,7 @@ \subsection{Try-and-increment hash functions} Since the curve $TJJ\_13$ is defined over the field $\F_{13}$, and the binary representation of $13$ is $Bits(13)=<1,1,0,1>$, one way to implement a try-and-increment function is to apply SHA256 from Sage's hashlib library on the concatenation $s||c$ for some binary counter string $c$, and use the first $4$ bits of the image to try to hash into $\F_{13}$. In case we are able to hash to a value $x$ such that $x^3 +8\cdot x + 8$ is a quadratic residue in $\F_{13}$, we use the fifth bit to decide which of the two possible roots of $x^3 + 8\cdot x + 8$ we will choose as the $y$ coordinate. The result is a curve point different from the point at infinity. To project it onto the large prime order subgroup $TJJ\_13[5]$, we multiply it with the cofactor $4$. 
If the result is not the point at infinity, it is the result of the hash. -To make this concrete, let $s=<1,1,1,0,0,1,0,0,0,0>$ be our binary string that we want to hash onto $TJJ_13[5]$. We use a binary counter string starting at zero, that is, we choose $c=<0>$. Invoking Sage, we define the try-hash function as follows: +To make this concrete, let $s=<1,1,1,0,0,1,0,0,0,0>$ be our binary string that we want to hash onto $TJJ\_13[5]$. We use a binary counter string starting at zero, that is, we choose $c=<0>$. Invoking Sage, we define the try-hash function as follows: \begin{sagecommandline} sage: import hashlib sage: def try_hash(s,c): @@ -1595,7 +1595,7 @@ \subsection{Try-and-increment hash functions} $$ \end{example} \begin{exercise} -Use our definition of the $try\_hash$ algorithm to implement a hash function $H_{TJJ\_13[5]} : \{0,1\}^*\to TJJ\_13(\F_{13})[5]$ that maps binary strings of arbitrary length onto the $5$-torsion group of $TJJ13(\F_{13})$. +Use our definition of the $try\_hash$ algorithm to implement a hash function $H_{TJJ\_13[5]} : \{0,1\}^*\to TJJ\_13(\F_{13})[5]$ that maps binary strings of arbitrary length onto the $5$-torsion group of $TJJ\_13(\F_{13})$. \end{exercise} \begin{exercise} Implement a cryptographic hash function $H_{secp256k1} : \{0,1\}^*\to secp256k1$ that maps binary strings of arbitrary length onto the elliptic curve \curvename{secp256k1}. diff --git a/moonmath.bib b/moonmath.bib index 72b434c4..3ecaad2b 100644 --- a/moonmath.bib +++ b/moonmath.bib @@ -324,4 +324,9 @@ @misc{bowe-17 url = {https://eprint.iacr.org/2017/1050} } +@misc{costello-pairings, + author = {Craig Costello}, + title = {Pairings for beginners}, + url = {https://static1.squarespace.com/static/5fdbb09f31d71c1227082339/t/5ff394720493bd28278889c6/1609798774687/PairingsForBeginners.pdf} +} From 68a0afa49796e71ff9469db9a1a8ee21d3c3a96b Mon Sep 17 00:00:00 2001 From: kashepavadan Date: Fri, 5 Jul 2024 16:16:23 -0400 Subject: [PATCH 2/6] Chapter 6 typo edits --- chapters/statements-moonmath.tex | 73 ++++++++++++++++---------------- 1 file changed, 36 insertions(+), 37 deletions(-) diff --git a/chapters/statements-moonmath.tex b/chapters/statements-moonmath.tex index 5f083674..427c3a74 100644 --- a/chapters/statements-moonmath.tex +++ b/chapters/statements-moonmath.tex @@ -451,7 +451,7 @@ \subsubsection{R1CS Satisfiability} \label{ex:3-fac-R1CS-constr-proof} Consider the language $L_{3.fac\_zk}$ from \examplename{} \ref{ex:L-3fac-zk} and the R1CS defined in \examplename{} \ref{ex:3-factorization-r1cs}. As we have seen in \ref{ex:3-factorization-r1cs}, solutions to the R1CS are in 1:1 correspondence with solutions to the decision function of $L_{3.fac\_zk}$. Both languages are therefore equivalent in the sense that there is a 1:1 correspondence between words in both languages. -To give an intuition of what constructive R1CS-based proofs in $L_{3.fac\_zk}$ look like, consider the instance $I_1= 11$. To prove the statement ``There exists a witness $W$ such that $(I_1;W)$ is a word in $L_{3.fac\_zk}$'' constructively, a proof has to provide a solution to the R1CS from \examplename{} \ref{ex:3-factorization-r1cs}, that is, an assignments to all witness variables $W_1$, $W_2$, $W_3$ and $W_4$. Since the alphabet is $\F_{13}$, an example assignment is given by +To give an intuition of what constructive R1CS-based proofs in $L_{3.fac\_zk}$ look like, consider the instance $I_1= 11$. 
To prove the statement ``There exists a witness $W$ such that $(I_1;W)$ is a word in $L_{3.fac\_zk}$'' constructively, a proof has to provide a solution to the R1CS from \examplename{} \ref{ex:3-factorization-r1cs}, that is, an assignment to all witness variables $W_1$, $W_2$, $W_3$ and $W_4$. Since the alphabet is $\F_{13}$, an example assignment is given by $W=<2,3,4,6>$ since $(I_1;W)$ satisfies the R1CS: \begin{align*} W_1 \cdot W_2 &= W_4 & \text{\# } 2\cdot 3 = 6\\ @@ -486,14 +486,14 @@ \subsubsection{Modularity} \subsection{Algebraic Circuits} \label{sec:circuits} As we have seen in the previous paragraphs, \concept{rank-1 constraint systems} are quadratic equations such that solutions are knowledge proofs for the existence of words in associated languages. From the perspective of a prover, it is therefore important to solve those equations efficiently. -However, in contrast to systems of linear equations, no general methods are known that solve systems of quadratic equations efficiently. \concept{rank-1 constraint systems} are therefore impractical from a provers perspective and auxiliary information is needed that helps to compute solutions efficiently. +However, in contrast to systems of linear equations, no general methods are known that solve systems of quadratic equations efficiently. \concept{rank-1 constraint systems} are therefore impractical from a prover's perspective and auxiliary information is needed to help compute solutions efficiently. Methods which compute R1CS solutions are sometimes called \term{witness generator functions}. To provide a common example, we introduce another class of decision functions called \term{algebraic circuits}. As we will see, every algebraic circuit defines an associated R1CS and also provides an efficient way to compute solutions for that R1CS. This method is introduced, for example, in \cite{sasson-2013}. It can be shown that every space- and time-bounded computation is expressible as an algebraic circuit. Transforming high-level computer programs into those circuits is a process often called \term{flattening}. We will look at those transformations in \chaptname{} \ref{chap:circuit-compilers}. In this section we will introduce our model for algebraic circuits and look at the concept of circuit execution and valid assignments. After that, we will show how to derive \concept{rank-1 constraint systems} from circuits and how circuits are useful to compute solutions to associated R1CS efficiently. -\subsubsection{Algebraic circuit representation} To see what algebraic circuits are, let $\F$ be a field. An algebraic circuit is then a directed acyclic (multi)graph that computes a polynomial function over $\F$. Nodes with only outgoing edges (source nodes) represent the variables and constants of the function and nodes with only incoming edges (sink nodes) represent the outcome of the function. All other nodes have exactly two incoming edges and represent the field operations \term{addition} as well as \term{multiplication}. Graph edges are directed and represent the flow of the computation along the nodes. +\subsubsection{Algebraic circuit representation} To see what algebraic circuits are, let $\F$ be a field. An algebraic circuit is then a directed acyclic multi-graph that computes a polynomial function over $\F$. Nodes with only outgoing edges (source nodes) represent the variables and constants of the function and nodes with only incoming edges (sink nodes) represent the outcome of the function. 
All other nodes have exactly two incoming edges and represent the field operations \term{addition} as well as \term{multiplication}. Graph edges are directed and represent the flow of the computation along the nodes. To be more precise, in this book, we call a directed acyclic multi-graph $C(\F)$ an \term{algebraic circuit} over $\F$ if the following conditions hold: @@ -505,7 +505,7 @@ \subsubsection{Algebraic circuit representation} To see what algebraic circuits \item Every node that is neither a source nor a sink has exactly two incoming edges and a label from the set $\{+,*\}$ that represents either addition or multiplication in $\F$. \item All outgoing edges from a node have the same label. \item Outgoing edges from a node with a label that represents a variable have a label. -\item Outgoing edges from a node with a label that represents multiplication have a label, if there is at least one labeled edge in both input path. +\item Outgoing edges from a node with a label that represents multiplication have a label, if there is at least one labeled edge in both input paths. \item All incoming edges to sink nodes have a label. \item If an edge has two labels $S_i$ and $S_j$ it gets a new label $S_i = S_j$. \item No other edge has a label. @@ -641,7 +641,7 @@ \subsubsection{Algebraic circuit representation} To see what algebraic circuits n16 [shape=box, label="f_tiny-jj" /*, color=lightgray */]; } \end{center} -This circuit is not a graph, but a multigraph, since there is more than one edge between some of the nodes. +This circuit is not a graph, but a multi-graph, since there is more than one edge between some of the nodes. In the process of designing of circuits from functions, it should be noted that circuit representations are not unique in general. In case of the function $f_{tiny-jj}$, the circuit shape is dependent on our choice of bracketing above.%in \ref{ex:tiny_circuit_brackets}. An alternative design is for example, given by the following circuit, which occurs when the bracketed expression $8\cdot ( (x\cdot x) \cdot (y\cdot y) )$ is replaced by the expression $(x\cdot x) \cdot ( 8 \cdot (y\cdot y) )$. @@ -783,7 +783,7 @@ \subsubsection{Algebraic circuit representation} To see what algebraic circuits \subsubsection{Circuit Execution} Algebraic circuits are directed, acyclic multi-graphs, where nodes represent variables, constants, or addition and multiplication gates. In particular, every algebraic circuit with $n$ input nodes decorated with variable symbols and $m$ output nodes decorated with variables can be seen as a function that transforms an input string $(x_1,\ldots, x_n)$ from $\F^n$ into an output string $(f_1,\ldots,f_m)$ from $\F^m$. The transformation is done by sending values associated to nodes along their outgoing edges to other nodes. If those nodes are gates, then the values are transformed according to the gate label and the process is repeated along all edges until a sink node is reached. We call this computation \term{circuit execution}. -When executing a circuit, it is possible to not only compute the output values of the circuit but to derive field elements for all edges, and, in particular, for all edge labels in the circuit. The result is a string $$ of field elements associated to all labeled edges, which we call a \term{valid assignment} to the circuit. In contrast, any assignment $$ of field elements to edge labels that can not arise from circuit execution is called an \term{invalid assignment}. 
+When executing a circuit, it is possible to not only compute the output values of the circuit but to derive field elements for all edges, and, in particular, for all edge labels in the circuit. The result is a string $$ of field elements associated to all labeled edges, which we call a \term{valid assignment} to the circuit. In contrast, any assignment $$ of field elements to edge labels that cannot arise from circuit execution is called an \term{invalid assignment}. Valid assignments can be interpreted as \term{proofs for proper circuit execution} because they keep a record of the computational result as well as intermediate computational steps. \begin{example}[3-factorization] @@ -902,7 +902,7 @@ \subsubsection{Circuit Satisfiability} R_{C(\F)} : \F^* \times \F^* \to \{true, false\}\;;\; (I;W) \mapsto \begin{cases} -true & (I;W) \text{is valid assignment to } C(\F)\\ +true & (I;W) \text{ is a valid assignment to } C(\F)\\ false & else \end{cases} \end{equation} @@ -910,16 +910,16 @@ \subsubsection{Circuit Satisfiability} In the context of zero-knowledge proof systems, executing circuits is also often called \term{witness generation}, since in applications the instance part is usually public, while its the task of a prover to compute the witness part. -\begin{remark}[Circuit satisfiability] Similar to \ref{r1cs-satisfiability}, it should be noted that, in our definition, every circuit defines its own language. However, in more theoretical approaches another language usually called \term{circuit satisfiability} is often considered, which is useful when it comes to more abstract problems like expressiveness, or computational complexity of the class of \term{all} algebraic circuits over a given field. From our perspective, the circuit satisfiability language is obtained by union of all circuit languages that are in our definition. To be more precise, let the alphabet $\Sigma=\F$ be a field. Then +\begin{remark}[Circuit satisfiability] Similar to \ref{r1cs-satisfiability}, it should be noted that, in our definition, every circuit defines its own language. However, in more theoretical approaches another language usually called \term{circuit satisfiability} is often considered, which is useful when it comes to more abstract problems like expressiveness, or computational complexity of the class of \term{all} algebraic circuits over a given field. From our perspective, the circuit satisfiability language is obtained by the union of all circuit languages that are in our definition. To be more precise, let the alphabet $\Sigma=\F$ be a field. Then $$ -L_{CIRCUIT\_SAT(\F)} = \{(i;w)\in \Sigma^*\times \Sigma^*\;|\; \text{there is a circuit } C(\F) \text{ such that } (i;w) \text{ is valid assignment}\} +L_{CIRCUIT\_SAT(\F)} = \{(i;w)\in \Sigma^*\times \Sigma^*\;|\; \text{there is a circuit } C(\F) \text{ such that } (i;w) \text{ is a valid assignment}\} $$ \end{remark} \begin{example}[3-Factorization]Consider the circuit $C_{3.fac}$ from \examplename{} \ref{ex:3-fac-zk-circuit} again. We call the associated language $L_{3.fac\_circ}$. To understand how a constructive proof of a statement in $L_{3.fac\_circ}$ looks like, consider the instance $I_1= 11$. To provide a proof for the statement ``There exists a witness $W$ such that $(I_1;W)$ is a word in $L_{3.fac\_circ}$'' a proof therefore has to consists of proper values for the variables $W_1$, $W_2$, $W_3$ and $W_4$. 
Any prover therefore has to find input values for $W_1$, $W_2$ and $W_3$ and then execute the circuit to compute $W_4$ under the assumption $I_1=11$. -Example \ref{ex:3-fac-zk-circuit_2}implies that $<2,3,4,6>$ is a proper constructive proof and in order to verify the proof a verifier needs to execute the circuit with instance $I_1=11$ and inputs $W_1=2$, $W_2=3$ and $W_3=4$ to decide whether the proof is a valid assignment or not. +Example \ref{ex:3-fac-zk-circuit_2} implies that $<2,3,4,6>$ is a proper constructive proof. In order to verify the proof, a verifier needs to execute the circuit with instance $I_1=11$ and inputs $W_1=2$, $W_2=3$ and $W_3=4$ to decide whether the proof is a valid assignment or not. \end{example} \begin{exercise} Consider the circuit $C_{tiny-jj}(\F_{13})$ from \examplename{} \ref{ex:TJJ-circuit_1}, with its associated language $L_{tiny-jj}$. Construct a proof $\pi$ for the instance $<11,6>$ and verify the proof. @@ -937,23 +937,23 @@ \subsubsection{Associated Constraint Systems} \begin{equation} (\text{left input})\cdot (\text{right input}) = S_j \end{equation} -In this expression $(\text{left input})$ is the output from the symbolic execution of the subgraph that consists of the left input edge of this gate and all edges and nodes that have this edge in their path, starting with constant inputs or labeled outgoing edges of other nodes. +In this expression, $(\text{left input})$ is the output from the symbolic execution of the subgraph that consists of the left input edge of this gate and all edges and nodes that have this edge in their path, starting with constant inputs or labeled outgoing edges of other nodes. -In the same way $(\text{right input})$ is the output from the symbolic execution of the subgraph that consists of the right input edge of this gate and all edges and nodes that have this edge in their path, starting with constant inputs or labeled outgoing edges of other nodes. +In the same way, $(\text{right input})$ is the output from the symbolic execution of the subgraph that consists of the right input edge of this gate and all edges and nodes that have this edge in their path, starting with constant inputs or labeled outgoing edges of other nodes. \item If the edge label $S_j$ is an outgoing edge of an addition gate, the R1CS gets a new quadratic constraint \begin{equation} (\text{left input} + \text{right input})\cdot 1 = S_j \end{equation} -In this expression $(\text{left input})$ is the output from the symbolic execution of the subgraph that consists of the left input edge of this gate and all edges and nodes that have this edge in their path, starting with constant inputs or labeled outgoing edges of other nodes. +In this expression, $(\text{left input})$ is the output from the symbolic execution of the subgraph that consists of the left input edge of this gate and all edges and nodes that have this edge in their path, starting with constant inputs or labeled outgoing edges of other nodes. -In the same way $(\text{right input})$ is the output from the symbolic execution of the subgraph that consists of the right input edge of this gate and all edges and nodes that have this edge in their path, starting with constant inputs or labeled outgoing edges of other nodes. +In the same way, $(\text{right input})$ is the output from the symbolic execution of the subgraph that consists of the right input edge of this gate and all edges and nodes that have this edge in their path, starting with constant inputs or labeled outgoing edges of other nodes. 
\item No other edge label adds a constraint to the system. \end{itemize} If an algebraic circuit $C(\F)$ is constructed according to the rules from \ref{def:algebraic-circuit}, the result of this method is a \concept{rank-1 constraint system}, and, in this sense, every algebraic circuit $C(\F)$ generates a R1CS $R$, which we call the \term{associated R1CS} of the circuit. It can be shown that a string of field elements $$ is a valid assignment to a circuit if and only if the same string is a solution to the associated R1CS. Circuit executions therefore compute solutions to \concept{rank-1 constraint systems} efficiently. To understand the contribution of algebraic gates to the number of constraints, note that, -according to construction \ref{def:algebraic-circuit}, multiplication gates have labels on their outgoing edges if and only if there is at least one labeled edge in both input paths, or if the outgoing edge is an input to a sink node. This implies that multiplication with a constant is essentially free in the sense that it doesn't add a new constraint to the system, as long as that multiplication gate is not am input to an output node. +according to construction \ref{def:algebraic-circuit}, multiplication gates have labels on their outgoing edges if and only if there is at least one labeled edge in both input paths, or if the outgoing edge is an input to a sink node. This implies that multiplication with a constant is essentially free in the sense that it doesn't add a new constraint to the system, as long as that multiplication gate is not an input to an output node. Moreover, addition gates have labels on their outgoing edges if and only if they are inputs to sink nodes. This implies that addition is essentially free in the sense that it doesn't add a new constraint to the system, as long as that addition gate is not an input to an output node. @@ -1235,7 +1235,7 @@ \subsubsection{Associated Constraint Systems} n16 [shape=box, label="0" /*, color=lightgray */]; } \end{center} -Both the left and the right input are unlabeled, but have a labeled edges in their path. Since the gate is an addition gate, the right factor in the quadratic constraint is always $1$ and the left factor is computed by symbolically executing all inputs to all gates in the sub-circuit. We get +Both the left and the right input are unlabeled, but have labeled edges in their path. Since the gate is an addition gate, the right factor in the quadratic constraint is always $1$ and the left factor is computed by symbolically executing all inputs to all gates in the sub-circuit. We get $$ (12\cdot S_4 + S_5 + 10\cdot S_3 + 1)\cdot 1 = 0 $$ @@ -1247,17 +1247,17 @@ \subsubsection{Associated Constraint Systems} (S_4\cdot 8)\cdot S_3 &= S_5\\ (12\cdot S_4 + S_5 + 10\cdot S_3 + 1)\cdot 1 &= 0 \end{align*} -which is equivalent to the R1CS we derived in \examplename{} \ref{ex:TJJ-r1cs} both the circuit as well as the R1CS are just two different ways to express the same language. +which is equivalent to the R1CS we derived in \examplename{} \ref{ex:TJJ-r1cs}. Thus, both the circuit as well as the R1CS are just two different ways to express the same language. \end{example} \subsection{Quadratic Arithmetic Programs} \label{sec:QAP} - We have introduced algebraic circuits and their associated \concept{rank-1 constraint systems} as two particular models able to represent bounded computation. 
Both models define formal languages, and associated membership as well as knowledge claims can be proofed in a constructive way by executing the circuit in order to compute solutions to its associated R1CS. + We have introduced algebraic circuits and their associated \concept{rank-1 constraint systems} as two particular models able to represent bounded computation. Both models define formal languages, and associated membership as well as knowledge claims can be proven in a constructive way by executing the circuit in order to compute solutions to its associated R1CS. One reason why those systems are useful in the context of succinct zero-knowledge proof systems is because any R1CS can be transformed into another computational model called a \term{Quadratic Arithmetic Program} [QAP], which serves as the basis for some of the most efficient succinct non-interactive zero-knowledge proof generators that currently exist. As we will see, proving statements for languages that have decision functions defined by Quadratic Arithmetic Programs can be achieved by providing certain polynomials, and those proofs can be verified by checking a particular divisibility property of those polynomials. -\subsubsection{QAP representation} To understand what Quadratic Arithmetic Programs are in detail, let $\F$ be a field and $R$ a \concept{rank-1 constraint system} over $\F$ such that the number of non-zero elements in $\F$ is strictly larger than the number $k$ of constraints in $R$. Moreover, let $a_j^i$, $b_j^i$ and $c_j^i\in\F$ for every index $0\leq j \leq n+m$ and $1\leq i \leq k$, be the defining constants of the R1CS and $m_1$, $\ldots$, $m_k$ be arbitrary, invertible and distinct elements from $\F$. +\subsubsection{QAP representation} To understand what Quadratic Arithmetic Programs are in detail, let $\F$ be a field and $R$ a \concept{rank-1 constraint system} over $\F$ such that the number of non-zero elements in $\F$ is strictly greater than the number $k$ of constraints in $R$. Moreover, let $a_j^i$, $b_j^i$ and $c_j^i\in\F$ for every index $0\leq j \leq n+m$ and $1\leq i \leq k$, be the defining constants of the R1CS and $m_1$, $\ldots$, $m_k$ be arbitrary, invertible and distinct elements from $\F$. Then a \term{Quadratic Arithmetic Program} associated to the R1CS $R$ is the following set of polynomials over $\F$: \begin{equation} @@ -1271,13 +1271,13 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr A_j(m_i)=a_j^i, & B_j(m_i)=b_j^i, & C_j(m_i)=c_j^i & \text{ for all } j= 1, \ldots , n+m+1, i=1,\ldots,k \end{array} \end{equation} -Given some \concept{rank-1 constraint system}, an associated Quadratic Arithmetic Program is therefore a set of polynomials, computed from the constants in the R1CS. To see that the polynomials $A_j$, $B_j$ and $C_j$ are uniquely defined by the equations \ref{def:QAP-polynomials}, recall that a polynomial of degree $k-1$ is completely determined by $k$ evaluation points and it can be computed for example by Lagrange interpolation \ref{alg_lagrange_interplation}. +Given some \concept{rank-1 constraint system}, an associated Quadratic Arithmetic Program is therefore a set of polynomials, computed from the constants in the R1CS. 
To see that the polynomials $A_j$, $B_j$ and $C_j$ are uniquely defined by the equations \ref{def:QAP-polynomials}, recall that a polynomial of degree $k-1$ is completely determined by $k$ evaluation points and it can be computed for example by Lagrange interpolation, described above in Algorithm \ref{alg_lagrange_interplation}. -Computing a QAP from any given R1CS can be achieved in the following three steps. If the R1CS consists of $k$ constraints, first choose $k$ different, invertible element from the field $\F$. Every choice defines a different QAP for the same R1CS. Then compute the target polynomial $T$ according to its definition \ref{def:QAP-target-poly}. After that use Lagrange's method \ref{alg_lagrange_interplation} to compute the polynomials $A_j$ for every $1\leq j \leq k$ from the set +Computing a QAP from any given R1CS can be achieved in the following three steps. If the R1CS consists of $k$ constraints, first choose $k$ different, invertible elements from the field $\F$. Every choice defines a different QAP for the same R1CS. Then compute the target polynomial $T$ according to its definition in \ref{def:QAP-target-poly}. After that, use Lagrange interpolation via Algorithm \ref{alg_lagrange_interplation} to compute the polynomials $A_j$ for every $1\leq j \leq k$ from the set \begin{equation} S_{A_j} = \{(m_1,a^1_j),\ldots,(m_k,a^k_j)\} \end{equation} -After that is done, execute the analog computation for the polynomials $B_j$ and $C_j$ for every $1\leq j \leq k$. +After that, use the same method to compute the polynomials $B_j$ and $C_j$ for every $1\leq j \leq k$. \begin{example}[3-factorization] \label{ex:3-fac-QAP} To provide a better intuition of Quadratic Arithmetic Programs and how they are computed from their associated \concept{rank-1 constraint systems}, consider the language $L_{3.fac\_zk}$ from \examplename{} \ref{ex:L-3fac-zk} and its associated R1CS from \examplename{} \ref{ex:3-factorization-r1cs}: \begin{align*} @@ -1297,14 +1297,14 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr c_0^2 = 0 & c_1^2= 1 & c_2^2= 0 & c_3^2 = 0 & c_4^2= 0 & c_5^2= 0 \end{array} $$ -Since the R1CS is defined over the field $\F_{13}$ and since it has two constraints, we need to choose two arbitrary but invertible and distinct elements $m_1$ and $m_2$ from $\F_{13}$. We choose $m_{1}=5$, and $m_{2}=7$ and with this choice we get the target polynomial +Since the R1CS is defined over the field $\F_{13}$ and has two constraints, we need to choose two arbitrary but invertible and distinct elements $m_1$ and $m_2$ from $\F_{13}$. We choose $m_{1}=5$, and $m_{2}=7$ and with this choice we get the target polynomial \begin{align*} T(x) & = (x-m_1)(x-m_2) & \text{\# Definition of T}\\ - & = (x-5)(x-7) & \text{\# Insert our choice}\\ - & = (x+8)(x+6) & \text{\# Negatives in } \F_{13}\\ - & = x^2 + x +9 & \text{\# expand} + & = (x-5)(x-7) & \text{\# Insert our chosen values}\\ + & = (x+8)(x+6) & \text{\# Additive inverses in } \F_{13}\\ + & = x^2 + x +9 & \text{\# Expand} \end{align*} -Then we have to compute the polynomials $A_j$, $B_j$ and $C_j$ by their defining equation from the R1CS coefficients. Since the R1CS has two constraining equations, those polynomials are of degree $1$ and they are defined by their evaluation at the point $m_1=5$ and the point $m_2=7$. +Then, we have to compute the polynomials $A_j$, $B_j$ and $C_j$ by their defining equation from the R1CS coefficients. 
Since the R1CS has two constraining equations, those polynomials are of degree $1$ and they are defined by their evaluations at $m_1=5$ and $m_2=7$. At point $m_1$, each polynomial $A_j$ is defined to be $a_j^1$ and at point $m_2$, each polynomial $A_j$ is defined to be $a_j^2$. The same holds true for the polynomials $B_j$ as well as $C_j$. Writing all these equations down, we get: $$ @@ -1319,9 +1319,9 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr C_0(7)=0, & C_1(7)=1, & C_2(7)=0, & C_3(7)=0, & C_4(7)=0, & C_5(7)=0 \end{array} $$ -Lagrange's interpolation implies that a polynomial of degree $k$, that is zero on $k+1$ points has to be the zero polynomial. Since our polynomials are of degree $1$ and determined on $2$ points, we therefore know that the only non-zero polynomials in our QAP are $A_2$, $A_5$, $B_3$, $B_4$, $C_1$ and $C_5$, and that we can use Lagrange's interpolation to compute them. +The Fundamental Theorem of Algebra implies that a polynomial of degree $k$ that is zero on $k+1$ points must be the zero polynomial. Since our polynomials are of degree $1$ and are determined on $2$ points, we therefore know that the only non-zero polynomials in our QAP are $A_2$, $A_5$, $B_3$, $B_4$, $C_1$ and $C_5$, and that we can use Lagrange interpolation to compute them. -To compute $A_2$ we note that the set $S_{A_2}$ in our version of Lagrange's interpolation is given by $S_{A_2}=\{(m_1,a^1_2), (m_2,a_2^2)\} = \{(5,1), (7,0)\}$. Using this set we get: +To compute $A_2$, we note that the set $S_{A_2}$ in our example is given by $S_{A_2}=\{(m_1,a^1_2), (m_2,a_2^2)\} = \{(5,1), (7,0)\}$. Using this set, we get: \begin{align*} A_2(x) & = a^1_2\cdot(\frac{x-m_2}{m_1-m_2}) + a^2_2\cdot(\frac{x-m_1}{m_2-m_1}) = 1\cdot(\frac{x-7}{5-7}) + 0\cdot(\frac{x-5}{7-5}) \\ @@ -1330,7 +1330,7 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr & = 6(x-7) = 6x + 10 & \text{\# } -7 = 6 \text{ and } 6\cdot 6 = 10 \end{align*} -To compute $A_5$, we note that the set $S_{A_5}$ in our version of Lagrange's method is given by $S_{A_5}=\{(m_1,a^1_5), (m_2,a^2_5)\} = \{(5,0), (7,1)\}$. Using this set we get: +To compute $A_5$, we note that the set $S_{A_5}$ in our example is given by $S_{A_5}=\{(m_1,a^1_5), (m_2,a^2_5)\} = \{(5,0), (7,1)\}$. Using this set, we get: \begin{align*} A_5(x) & = a^1_5\cdot(\frac{x-m_2}{m_1-m_2}) + a^2_5\cdot(\frac{x-m_1}{m_2-m_1}) = 0\cdot(\frac{x-7}{5-7}) + 1\cdot(\frac{x-5}{7-5}) \\ @@ -1338,7 +1338,7 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr & = 7(x-5) = 7x + 4 & \text{\# } -5 = 8 \text{ and } 7\cdot 8 = 4 \end{align*} -Using Lagrange's interpolation, we can deduce that $A_2=B_3=C_5$ as well as $A_5=B_4=C_1$, since they are polynomials of degree $1$ that evaluate to the same values on $2$ points. Using this, we get the following set of polynomials +Using Lagrange interpolation, we can deduce that $A_2=B_3=C_5$ as well as $A_5=B_4=C_1$, since they are polynomials of degree $1$ that evaluate to the same values on $2$ points. Using this, we get the following set of polynomials: \begin{center} \begin{tabular}{|l|l|l|}\hline $A_{0}(x)=0 $ &$ B_{0}(x)=0 $ & $C_{0}(x)=0$ \tabularnewline\hline @@ -1349,7 +1349,7 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr $A_5(x)=7x+4$ &$ B_5(x)=0 $ & $C_5(x)=6x+10$ \tabularnewline\hline \end{tabular} \end{center} -We can use Sage to verify our computation. 
In Sage, every polynomial ring has a function \code{lagrange\_polynomial} that takes the defining points as inputs and the associated Lagrange polynomial as output. +We can use Sage to verify our computation. In Sage, every polynomial ring has a function \code{lagrange\_polynomial} that takes the defining points as inputs and returns the associated Lagrange polynomial. \begin{sagecommandline} sage: F13 = GF(13) sage: F13t. = F13[] @@ -1362,7 +1362,6 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr \end{sagecommandline} Combining this computation with the target polynomial we derived earlier, a Quadratic Arithmetic Program associated to the \concept{rank-1 constraint system} $R_{3.fac\_zk}$ is given as follows: - \begin{multline} \label{QAP-R3-fac-zk} QAP(R_{3.fac\_zk}) =\{x^{2}+x+9,\notag\\ @@ -1372,7 +1371,7 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr \begin{exercise} Consider the \concept{rank-1 constraint system} for points on the \curvename{Tiny-jubjub} curve from \examplename{} \ref{ex:TJJ-r1cs}. Compute an associated QAP for this R1CS and double check your computation using sage. \end{exercise} -\subsubsection{QAP Satisfiability} One of the major points of Quadratic Arithmetic Programs in proving systems is that solutions of their associated \concept{rank-1 constraint systems} are in 1:1 correspondence with certain polynomials $P$ divisible by the target polynomial $T$ of the QAP. Verifying solutions to the R1CS and hence, checking proper circuit execution is then achievable by polynomial division of $P$ by $T$. +\subsubsection{QAP Satisfiability} One of the major advantages of Quadratic Arithmetic Programs in proving systems is that solutions of their associated \concept{rank-1 constraint systems} are in 1:1 correspondence with certain polynomials $P$ divisible by the target polynomial $T$ of the QAP. Verifying solutions to the R1CS and hence, checking proper circuit execution is then achievable by polynomial division of $P$ by $T$. To be more specific, let $R$ be some \concept{rank-1 constraint system} with associated variables $(; )$ and let $QAP(R)$ be a Quadratic Arithmetic Program of $R$. Then the string $(; )$ is a solution to the R1CS if and only if the following polynomial is divisible by the target polynomial $T$: \begin{equation}\label{polynomial-P-IW} @@ -1397,10 +1396,10 @@ \subsubsection{QAP Satisfiability} One of the major points of Quadratic Arithmet Verifying a constructive proof in the case of a circuit is achieved by executing the circuit and then by comparing the result against the given proof. Verifying the same proof in the R1CS picture means checking if the elements of the proof satisfy the R1CS equations. In contrast, verifying a proof in the QAP picture is done by polynomial division of the proof $P$ by the target polynomial $T$. The proof is verified if and only if $P$ is divisible by $T$. -\begin{example} Consider the Quadratic Arithmetic Program $QAP(R_{3.fac\_zk})$ from \examplename{} \ref{ex:3-fac-QAP} and its associated R1CS from equation \ref{ex:3-factorization-r1cs}. To give an intuition of how proofs in the language $L_{QAP(R_{3.fac\_zk})}$ look like, lets consider the instance $I_1=11$. 
As we know from \examplename{} \ref{ex:3-fac-zk-circuit_2}, $(W_1,W_2,W_3,W_5)=(2,3,4,6)$ is a proper witness, since +\begin{example} Consider the Quadratic Arithmetic Program $QAP(R_{3.fac\_zk})$ from \examplename{} \ref{ex:3-fac-QAP} and its associated R1CS from equation \ref{ex:3-factorization-r1cs}. To give an intuition of how proofs in the language $L_{QAP(R_{3.fac\_zk})}$ look like, let's consider the instance $I_1=11$. As we know from \examplename{} \ref{ex:3-fac-zk-circuit_2}, $(W_1,W_2,W_3,W_5)=(2,3,4,6)$ is a proper witness, since $(;)=(<11>;<2,3,4,6>)$ is a valid circuit assignment and hence, a solution to $R_{3.fac\_zk}$ and a constructive proof for language $L_{R_{3.fac\_zk}}$. -In order to transform this constructive proof into a knowledge proof in language $L_{QAP(R_{3.fac\_zk})}$, a prover has to use the elements of the constructive proof, to compute the polynomial $P_{(I;W)}$. +In order to transform this constructive proof into a knowledge proof in language $L_{QAP(R_{3.fac\_zk})}$, a prover has to use the elements of the constructive proof to compute the polynomial $P_{(I;W)}$. In the case of $(;)=(<11>;<2,3,4,6>)$, the associated proof is computed as follows: \begin{align*} @@ -1413,7 +1412,7 @@ \subsubsection{QAP Satisfiability} One of the major points of Quadratic Arithmet = & (x^{2}+x+9x+9)-(9x) \\ = & x^{2}+x+9 \end{align*} -Given instance $I_1=11$ a prover therefore provides the polynomial $x^2+x+9$ as proof. To verify this proof, any verifier can then look up the target polynomial $T$ from the QAP and divide $P_{(I;W)}$ by $T$. In this particular example, $P_{(I;W)}$ is equal to the target polynomial $T$, and hence, it is divisible by $T$ with $P/T=1$. The verifier therefore verifies the proof. +Given instance $I_1=11$, a prover provides the polynomial $x^2+x+9$ as proof. To verify this proof, any verifier can look up the target polynomial $T$ from the QAP and divide $P_{(I;W)}$ by $T$. In this particular example, $P_{(I;W)}$ is equal to the target polynomial $T$, and hence, it is divisible by $T$ with $P/T=1$. The verifier thus verifies the proof. \begin{sagecommandline} sage: F13 = GF(13) sage: F13t. = F13[] From a594eefaceeb2943c7393fe8896b8091d9dce4b3 Mon Sep 17 00:00:00 2001 From: kashepavadan Date: Fri, 5 Jul 2024 17:20:12 -0400 Subject: [PATCH 3/6] Added details in QAP section chapter 6 --- chapters/statements-moonmath.tex | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/chapters/statements-moonmath.tex b/chapters/statements-moonmath.tex index 427c3a74..17843f16 100644 --- a/chapters/statements-moonmath.tex +++ b/chapters/statements-moonmath.tex @@ -1271,13 +1271,13 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr A_j(m_i)=a_j^i, & B_j(m_i)=b_j^i, & C_j(m_i)=c_j^i & \text{ for all } j= 1, \ldots , n+m+1, i=1,\ldots,k \end{array} \end{equation} -Given some \concept{rank-1 constraint system}, an associated Quadratic Arithmetic Program is therefore a set of polynomials, computed from the constants in the R1CS. To see that the polynomials $A_j$, $B_j$ and $C_j$ are uniquely defined by the equations \ref{def:QAP-polynomials}, recall that a polynomial of degree $k-1$ is completely determined by $k$ evaluation points and it can be computed for example by Lagrange interpolation, described above in Algorithm \ref{alg_lagrange_interplation}. 
+Given some \concept{rank-1 constraint system}, an associated Quadratic Arithmetic Program is therefore a set of polynomials computed from the constants in the R1CS, together with the target polynomial. Note that the target polynomial evaluates to $0$ for every $m_1$, $\ldots$, $m_k$, which is a useful fact for verifying proofs. To see that the polynomials $A_j$, $B_j$ and $C_j$ are uniquely defined by the equations \ref{def:QAP-polynomials}, recall that a polynomial of degree $k-1$ is completely determined by $k$ evaluation points and it can be computed for example by Lagrange interpolation, described above in Algorithm \ref{alg_lagrange_interplation}. Computing a QAP from any given R1CS can be achieved in the following three steps. If the R1CS consists of $k$ constraints, first choose $k$ different, invertible elements from the field $\F$. Every choice defines a different QAP for the same R1CS. Then compute the target polynomial $T$ according to its definition in \ref{def:QAP-target-poly}. After that, use Lagrange interpolation via Algorithm \ref{alg_lagrange_interplation} to compute the polynomials $A_j$ for every $1\leq j \leq k$ from the set \begin{equation} S_{A_j} = \{(m_1,a^1_j),\ldots,(m_k,a^k_j)\} \end{equation} -After that, use the same method to compute the polynomials $B_j$ and $C_j$ for every $1\leq j \leq k$. +Then, use the same method to compute the polynomials $B_j$ and $C_j$ for every $1\leq j \leq k$. \begin{example}[3-factorization] \label{ex:3-fac-QAP} To provide a better intuition of Quadratic Arithmetic Programs and how they are computed from their associated \concept{rank-1 constraint systems}, consider the language $L_{3.fac\_zk}$ from \examplename{} \ref{ex:L-3fac-zk} and its associated R1CS from \examplename{} \ref{ex:3-factorization-r1cs}: \begin{align*} @@ -1378,6 +1378,7 @@ \subsubsection{QAP Satisfiability} One of the major advantages of Quadratic Arit P_{(I;W)} = \scriptstyle \left(A_0 + \sum_{j}^n I_j\cdot A_j + \sum_{j}^m W_j\cdot A_{n+j} \right) \cdot \left(B_0 + \sum_{j}^n I_j\cdot B_j + \sum_{j}^m W_j\cdot B_{n+j} \right) -\left(C_0 + \sum_{j}^n I_j\cdot C_j + \sum_{j}^m W_j\cdot C_{n+j} \right) \end{equation} +This works because $P_{(I;W)}$ will equal $0$ for all values $m_1$, $\ldots$, $m_k$ if and only if the string $(; )$ is a solution to the R1CS. Since the target polynomial $T$ equals $0$ for all values $m_1$, $\ldots$, $m_k$, $P_{(I;W)}$ must be a multiple of $T$ in order to also equal $0$. Thus, our proposed solution will be valid if and only if $P_{(I;W)}$ is divisible by $T$. To understand how Quadratic Arithmetic Programs define formal languages, observe that every QAP over a field $\F$ defines a decision function over the alphabet $\Sigma_I \times \Sigma_W = \F \times \F$ in the following way: \begin{equation} @@ -1394,7 +1395,7 @@ \subsubsection{QAP Satisfiability} One of the major advantages of Quadratic Arit To compute a constructive proof for a statement in $L_{QAP}$ given some instance $I$, a prover first needs to compute a constructive proof $W$ of the associated R1CS, e.g. by executing the circuit of the R1CS. With $(I;W)$ at hand, the prover can then compute the polynomial $P_{(I;W)}$ and publish the polynomial as proof. -Verifying a constructive proof in the case of a circuit is achieved by executing the circuit and then by comparing the result against the given proof. Verifying the same proof in the R1CS picture means checking if the elements of the proof satisfy the R1CS equations. 
In contrast, verifying a proof in the QAP picture is done by polynomial division of the proof $P$ by the target polynomial $T$. The proof is verified if and only if $P$ is divisible by $T$. +Verifying a constructive proof in the case of a circuit is achieved by executing the circuit and then by comparing the result against the given proof. Verifying the same proof in the R1CS picture means checking if the elements of the proof satisfy the R1CS equations. By contrast, verifying a proof in the QAP picture is done by polynomial division of the proof $P$ by the target polynomial $T$. The proof is verified if and only if $P$ is divisible by $T$. \begin{example} Consider the Quadratic Arithmetic Program $QAP(R_{3.fac\_zk})$ from \examplename{} \ref{ex:3-fac-QAP} and its associated R1CS from equation \ref{ex:3-factorization-r1cs}. To give an intuition of how proofs in the language $L_{QAP(R_{3.fac\_zk})}$ look like, let's consider the instance $I_1=11$. As we know from \examplename{} \ref{ex:3-fac-zk-circuit_2}, $(W_1,W_2,W_3,W_5)=(2,3,4,6)$ is a proper witness, since $(;)=(<11>;<2,3,4,6>)$ is a valid circuit assignment and hence, a solution to $R_{3.fac\_zk}$ and a constructive proof for language $L_{R_{3.fac\_zk}}$. From 4b69d4da3ff28b7349ba3868af17cae5952d1f30 Mon Sep 17 00:00:00 2001 From: kashepavadan Date: Tue, 9 Jul 2024 14:02:00 -0400 Subject: [PATCH 4/6] typo in chapter 5 --- chapters/elliptic-curves-moonmath.tex | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/chapters/elliptic-curves-moonmath.tex b/chapters/elliptic-curves-moonmath.tex index 2a4783e8..8b8c53eb 100644 --- a/chapters/elliptic-curves-moonmath.tex +++ b/chapters/elliptic-curves-moonmath.tex @@ -1214,7 +1214,7 @@ \subsubsection{Elliptic Curves over extension fields} \end{exercise} \begin{exercise} \label{exercise:BN128-extension} -Consider the \curvename{alt\_bn128} curve and its associated base field $\F_p$ from \examplename{} \ref{BN128}. As we know from example \ref{ex:embedding_degre_BN128} this curve has an embedding degree of $12$. Use Sage to find an irreducible polynomial $P\in \F_p[t]$ and write a sage program to implement the finite field extension $\F_{p^{12}}$ and to implement the curve extension $alt\_bn128(\F_{p^12})$ and compute the number of curve points. +Consider the \curvename{alt\_bn128} curve and its associated base field $\F_p$ from \examplename{} \ref{BN128}. As we know from example \ref{ex:embedding_degre_BN128} this curve has an embedding degree of $12$. Use Sage to find an irreducible polynomial $P\in \F_p[t]$ and write a sage program to implement the finite field extension $\F_{p^{12}}$ and to implement the curve extension $alt\_bn128(\F_{p^{12}})$ and compute the number of curve points. \end{exercise} \subsection{Full torsion groups} \label{sec:full-torsion} As we will see in what follows, cryptographically interesting pairings are defined on so-called torsion subgroups of elliptic curves. To define \term{torsion groups} of an elliptic curve, let $\F$ be a finite field, $E(\F)$ an elliptic curve of order $n$ and $r$ a factor of $n$. 
Then the \term{$r$-torsion group} of the elliptic curve $E(\F)$ is defined as the set From 0a5cb18d185367eb3e5d308cc147b5b0eb585e7f Mon Sep 17 00:00:00 2001 From: kashepavadan Date: Tue, 9 Jul 2024 18:21:56 -0400 Subject: [PATCH 5/6] typos in chapter 7 --- chapters/circuit-compilers-moonmath.tex | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/chapters/circuit-compilers-moonmath.tex b/chapters/circuit-compilers-moonmath.tex index b98ae158..f6f400ca 100644 --- a/chapters/circuit-compilers-moonmath.tex +++ b/chapters/circuit-compilers-moonmath.tex @@ -91,7 +91,7 @@ \subsection{The Execution Phases} In contrast to normal executable programs, pro After this is done, we have to do a consistency and type check for every occurrence of the assignment operator \texttt{<==}. We have to ensure that the expression on the right side of the operator is well defined and that the types of both side match. -Then we compile the right side of every occurrence of the assignment operator \texttt{<==}. If the right side is a constant or variable defined in this function, we draw a dotted line from the box-node that represents the left side of \texttt{<==} to the box node that represents the right side of the same operator. If the right side represents an argument of that function we draw a line from the box-node that represents the left side of \texttt{<==} to the box node that represents the right side of the same operator. +Then we compile the right side of every occurrence of the assignment operator \texttt{<==}. If the right side is a constant or variable defined in this function, we draw a dotted line from the box-node that represents the right side of \texttt{<==} to the box node that represents the left side of the same operator. If the right side represents an argument of that function we draw a line from the box-node that represents the left side of \texttt{<==} to the box node that represents the right side of the same operator. If the right side of the \texttt{<==} operator is a function, we look into our database, find its associated circuit and draw it. If no circuit is associated to that function yet, we repeat the compilation process for that function, drawing edges from the function's argument to its input nodes and from the functions output nodes to the nodes on the right side of \texttt{<==}. @@ -162,9 +162,9 @@ \section{Real World Circuit Languages} \subsection{Circom} Circom is a domain-specific programming language for designing arithmetic circuits. It is used to build circuits that can be compiled to rank-1 constraint systems and outputted as WebAssembly and C++ programs for efficient evaluation. -In this section, we will gives examples of how to write basic circuits in Circom. We will use those examples then later to compute associated proof in snarkjs. +In this section, we will gives examples of how to write basic circuits in Circom. We will use those examples later to compute associated proof in snarkjs. -To understand circom, we first have to provide definitions for the terms \hilight{signals},\hilight{templates}, and \hilight{components} to facilitate a better understanding of the examples discussed. +To understand Circom, we first have to provide definitions for the terms \hilight{signals},\hilight{templates}, and \hilight{components} to facilitate a better understanding of the examples discussed. A \term{signal} refers to an element in the underlying finite field $\F$ of a circuit. 
The arithmetic circuits created using Circom operate on signals, which are immutable and can be defined as inputs or outputs. Input signals are private, unless specified as public, and all output signals are publicly accessible. The remaining signals are private and cannot be made public. Public signals are part of the instance and private signals are part of the witness in any valid assignment of a circuit. @@ -205,7 +205,7 @@ \subsection{Circom} \begin{example}[The 3-factorization problem in Circom] \label{ex:3-fac-circom} -In this example we implement the 3-factorization problem \ref{ex:3-factorization} in Circom's language and compile into an R1CS and statement generator. In order to show, how Circom handles modularity \ref{modularity}, we write the code as follows: +In this example we implement the 3-factorization problem \ref{ex:3-factorization} in Circom's language and compile into an R1CS and statement generator. In order to show how Circom handles modularity \ref{modularity}, we write the code as follows: \begin{lstlisting} template Multiplier() { signal input a ; @@ -255,7 +255,7 @@ \subsection{Primitive Types} % https://zeroknowledge.fm/172-2/ reference for all the languages Primitive data types like booleans, (unsigned) integers, or strings are the most basic building blocks one can expect to find in every general high-level programing language. In order to write statements as computer programs that compile into circuits, it is therefore necessary to implement primitive types as constraint systems, and define their associated operations as circuits. -In this section, we look at some common ways to achieve this. After a recapitulation of the atomic type for the base field where the circuit is defined on, we start with an implementation of the boolean type and its associated boolean algebra as circuits. After that, we define unsigned integers based on the boolean type, and leave the implementation of signed integers as an exercise to the reader. +In this section, we look at some common ways to achieve this. After a review of the atomic type for the base field where the circuit is defined on, we start with an implementation of the boolean type and its associated boolean algebra as circuits. After that, we define unsigned integers based on the boolean type, and leave the implementation of signed integers as an exercise to the reader. \subsubsection{The base-field type} \label{def:base_field_type} @@ -399,7 +399,7 @@ \subsubsection{The base-field type} } \end{center} \end{example} -\begin{example}[$3$-factorization] Consider our $3$-factorization problem from \examplename{} \ref{ex:3-factorization} and the associated circuit $C_{3.fac\_zk}(\F_{13})$ we provided in \examplename{} \ref{ex:TJJ-circuit_1}. To understand the process of replacing high-level functions by their associated circuits inductively, we want define a \lgname{PAPER} statement that we brain-compile into an algebraic circuit equivalent to $C_{3.fac\_zk}(\F_{13})$: +\begin{example}[$3$-factorization] Consider our $3$-factorization problem from \examplename{} \ref{ex:3-factorization} and the associated circuit $C_{3.fac\_zk}(\F_{13})$ we provided in \examplename{} \ref{ex:TJJ-circuit_1}. 
To understand the process of replacing high-level functions by their associated circuits inductively, we want to define a \lgname{PAPER} statement that we brain-compile into an algebraic circuit equivalent to $C_{3.fac\_zk}(\F_{13})$: \begin{lstlisting} statement 3_fac_zk {F:F_13} { fn main(x_1 : F, x_2 : F, x_3 : F) -> F{ @@ -1283,7 +1283,7 @@ \subsubsection{The boolean Type} W_5:\;\; & W_1 \cdot (1- W_1) = 0 & \text{boolean constraints}\\ W_6:\;\; & W_2 \cdot (1- W_2) = 0 \\ W_7:\;\; & W_3 \cdot (1- W_3) = 0 \\ -W_8:\;\; & W_4 \cdot (1- w_4) = 0 \\ +W_8:\;\; & W_4 \cdot (1- W_4) = 0 \\ W_9:\;\; & W_1 \cdot W_2 = W_9 & \text{ first OR-operator constraint}\\ W_{10}:\;\; & W_3 \cdot (1-W_4) = W_{10} & \text{AND(.,NOT(.))-operator constraints}\\ I_1:\;\; & (W_1 + W_2 -W_9) \cdot W_{10} = I_1 & \text{AND-operator constraints}\\ @@ -1718,7 +1718,7 @@ \subsubsection{The Unsigned Integer Type} Unsigned integers of size \texttt{N}, \end{lstlisting} Let $L_{mask\_merge}$ be the language defined by the circuit. Provide a constructive knowledge proof in $L_{mask\_merge}$ for the instance $I=(I_a, I_b) = (14, 7)$. \end{exercise} -\subsection{Control Flow} Most programming languages of the imperative of functional style have some notion of basic control structures to direct the order in which instructions are evaluated. Contemporary circuit compilers usually provide a single thread of execution and provide basic flow constructs that implement control flow in circuits. In this part we look at some basic control flow constructions and their implementation in circuits. +\subsection{Control Flow} Most programming languages of the imperative or functional style have some notion of basic control structures to direct the order in which instructions are evaluated. Contemporary circuit compilers usually provide a single thread of execution and provide basic flow constructs that implement control flow in circuits. In this part we look at some basic control flow constructions and their implementation in circuits. \subsubsection{The Conditional Assignment} Writing high-level code that compiles to circuits, it is often necessary to have a way for conditional assignment of values or computational output to variables. One way to realize this in many programming languages is in terms of the conditional ternary assignment operator $?:$ that branches the control flow of a program according to some condition and then assigns the output of the computed branch to some variable: \begin{lstlisting} variable = condition ? value_if_true : value_if_false @@ -2127,7 +2127,7 @@ \subsection{Binary Field Representations} In applications, it is often necessary $$ This is because the unsigned integers $2$ and $15$ are both in the modular $13$ remainder class of $2$ and hence are both representatives of $2$ in $\F_{13}$. -To see how circuit the associated circuit works, we want to enforce the binary representation of $7\in \F_{13}$. Since $m=4$ we have to enforce a $4$-bit representation for $7$, which is $<1,1,1,0>$, since $7= 1\cdot 2^0 + 1\cdot 2^1 + 1\cdot 2^2 + 0\cdot 2^3$. A valid circuit assignment is therefore given by $=<1,1,1,0,7>$ and, indeed, the assignment satisfies the required $5$ constraints including the $4$ boolean constraints for $S_0$, $\ldots$, $S_3$: +To see how the associated circuit works, we want to enforce the binary representation of $7\in \F_{13}$. Since $m=4$ we have to enforce a $4$-bit representation for $7$, which is $<1,1,1,0>$, since $7= 1\cdot 2^0 + 1\cdot 2^1 + 1\cdot 2^2 + 0\cdot 2^3$. 
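As a hedged aside, the binary-decomposition constraints discussed here can also be written as a Circom template. The following is an illustrative sketch with assumed names (close in spirit to the \texttt{Num2Bits} template of circomlib), not the circuit derived in this section:
\begin{lstlisting}
// Illustrative sketch: decompose a field element into m boolean signals.
template ToBinary(m) {
   signal input in ;       // the field element to be decomposed
   signal output bits[m] ; // its m-bit binary representation
   var acc = 0 ;
   for (var j = 0; j < m; j++) {
      bits[j] <-- (in >> j) & 1 ;     // <-- only computes the witness value
      bits[j] * (bits[j] - 1) === 0 ; // boolean constraint for every bit
      acc += bits[j] * 2**j ;
   }
   acc === in ;            // recomposition constraint
}
component main = ToBinary(4) ; // m = 4 as in the example above
\end{lstlisting}
With $m=4$ this enforces one boolean constraint per bit plus one recomposition constraint, matching the $5$ constraints that the assignment below is checked against.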
A valid circuit assignment is therefore given by $=<1,1,1,0,7>$ and, indeed, the assignment satisfies the required $5$ constraints including the $4$ boolean constraints for $S_0$, $\ldots$, $S_3$:
 \begin{align*}
 1\cdot (1-1) &= 0 & \text{// boolean constraints}\\
 1\cdot (1-1) &= 0 \\

From 0f60c19445f6e93646b9393a7231fb8bb7fcf382 Mon Sep 17 00:00:00 2001
From: kashepavadan
Date: Tue, 9 Jul 2024 18:24:54 -0400
Subject: [PATCH 6/6] typos in chapter 8

---
 chapters/zk-protocols-moonmath.tex | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/chapters/zk-protocols-moonmath.tex b/chapters/zk-protocols-moonmath.tex
index 3d74cb41..8a135730 100644
--- a/chapters/zk-protocols-moonmath.tex
+++ b/chapters/zk-protocols-moonmath.tex
@@ -32,7 +32,7 @@ \section{Proof Systems}
 A proof system is usually called \term{succinct} if the size of the proof is shorter than the witness necessary to generate the proof. Moreover, a proof system is called \term{computationally sound} if soundness only holds under the assumption that the computational capabilities of the prover are polynomial bound. To distinguish general proofs from computationally sound proofs, the latter are often called \term{arguments}.
 
-Since the term \term{zk-SNARKs} is an abbreviation for "Zero-knowledge, succinct, non-interactive argument of knowledge". These proof systems are able to generate zk-SNARKS therefore have the zero-knowledge property, are able to generate proofs that require less space than the original witness and require no interaction between prover and verifier, other than transmitting the zk-SNARK itself. However those systems are only sound under the assumption that the prover's computational capabilities are polynomial bound.
+The term \term{zk-SNARKs} is an abbreviation for "Zero-knowledge, succinct, non-interactive argument of knowledge". Proof systems that are able to generate zk-SNARKs therefore have the zero-knowledge property, are able to generate proofs that require less space than the original witness, and require no interaction between prover and verifier, other than transmitting the zk-SNARK itself. However, those systems are only sound under the assumption that the prover's computational capabilities are polynomially bounded.
 
 \begin{example}[Constructive Proofs for Algebraic Circuits] We have seen in \ref{circuit-satisfiability} how algebraic circuits give rise to formal languages and constructive proofs for knowledge claims. To reformulate this notion of constructive proofs for algebraic circuits into a proof system, let $\F$ be a finite field, and let $C(\F)$ be an algebraic circuit over $\F$ with associated language $L_{C(\F)}$. A non-interactive proof system for $L_{C(\F)}$ is given by the following two algorithms:
@@ -90,7 +90,7 @@ \section{The ``Groth16'' Protocol}
 \begin{example}[The 3-Factorization Problem in Circom and Snarkjs]
 \label{ex:3-fac-groth-16-params-circom} Snark.js is a JavaScript library that facilitates the development of systems incorporating zero-knowledge proofs (ZKPs), including the Groth-16 protocol. To showcase a practical example of the 3-factorization problem, we utilize our Circom implementation (see \ref{ex:3-fac-circom}), which compiles into a form that is compatible with snark.js.
 
-As of the time of writing, Snark.js supports the elliptic curves \curvename{alt\_bn128}, \curvename{BLS12-381}, and \curvename{Goldilocks}. 
For the purposes of this example, we shall utilize \curvename{alt\_bn128}, and it's associated scalar field $\F_{bn128}$ as introduced in \ref{BN128}. The Groth-16 parameters for this curve, as officially defined for the Ethereum blockchain, can be found in \href{https://github.com/ethereum/EIPs/blob/master/EIPS/eip-197.md}{EIP-197}. Snark.js utilizes those parameters. +As of the time of writing, Snark.js supports the elliptic curves \curvename{alt\_bn128}, \curvename{BLS12-381}, and \curvename{Goldilocks}. For the purposes of this example, we shall utilize \curvename{alt\_bn128}, and its associated scalar field $\F_{bn128}$ as introduced in \ref{BN128}. The Groth-16 parameters for this curve, as officially defined for the Ethereum blockchain, can be found in \href{https://github.com/ethereum/EIPs/blob/master/EIPS/eip-197.md}{EIP-197}. Snark.js utilizes those parameters. \end{example} \begin{exercise} \label{ex:baby-jubjub-circom} Implement the \href{https://github.com/iden3/iden3-docs/blob/master/source/iden3_repos/research/publications/zkproof-standards-workshop-2/baby-jubjub/baby-jubjub.rst}{Baby-JubJub} twisted Edwards curve equation in Circom and compile it into an R1CS and associated witness generator. @@ -151,7 +151,7 @@ \subsection{The Setup Phase} Generating zk-SNARKs from constructive proofs in th \Tau = (6,5,4,3,2) $$ -We keep this secret in order to simulate proofs later on, but we are careful to hide $\Tau$ from anyone who hasn't read this book. Then we instantiate the \concept{Common Reference String} \ref{def:groth16-crs}from those values. Since our groups are subgroups of the \texttt{BLS6\_6} elliptic curve, we use scalar product notation instead of exponentiation. +We keep this secret in order to simulate proofs later on, but we are careful to hide $\Tau$ from anyone who hasn't read this book. Then we instantiate the \concept{Common Reference String} \ref{def:groth16-crs} from those values. Since our groups are subgroups of the \texttt{BLS6\_6} elliptic curve, we use scalar product notation instead of exponentiation. To compute the $\G_1$ part of the \concept{Common Reference String}, we use the logarithmic order of the group $\G_1$ \ref{BLS6-G1-log}, the generator $g_1=(13,15)$, as well as the values from the simulation trapdoor. Since $deg(T)=2$, we get the following: \begin{align*} @@ -267,7 +267,7 @@ \subsection{The Setup Phase} Generating zk-SNARKs from constructive proofs in th \end{example} \begin{example}[The 3-Factorization Problem in Circom and Snark.js] -\label{ex:3-fac-groth-16-setup-circom} The implementation of the Groth\_16 zk-SNARK setup phase in real world applications can be observed through the examination of our Circom implementation of the $3$-factorization problem \ref{ex:3-fac-circom} and the associated parameter set from Snark.js, as outlined in example \ref{ex:3-fac-groth-16-params-circom}. +\label{ex:3-fac-groth-16-setup-circom} The implementation of the Groth\_16 zk-SNARK setup phase in real world applications can be observed through the examination of our Circom implementation of the $3$-factorization problem from \ref{ex:3-fac-circom} and the associated parameter set from Snark.js, as outlined in example \ref{ex:3-fac-groth-16-params-circom}. In accordance with the methodology described in \cite{bowe-17}, the generation of the Common Reference String in Snark.js is comprised of two parts. The first part depends on an upper bound on the number of constraints in the circuit, while the second part is dependent on the circuit itself. 
This division increases the flexibility of the trusted setup procedure, as protocols with Universal Common Reference Strings, such as PLONK, only require the execution of the first phase, while the Groth\_16 protocol mandates the execution of both phases.
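To make the two parts slightly more tangible, here is a hedged sketch that follows the general powers-of-tau approach of \cite{bowe-17} rather than the internals of Snark.js. The circuit-independent part produces encodings of the powers of a secret value $\tau$ in both source groups, for example
$$
(\tau^0\cdot g_1,\ \tau^1\cdot g_1,\ \ldots,\ \tau^{n-1}\cdot g_1) \quad\text{and}\quad (\tau^0\cdot g_2,\ \tau^1\cdot g_2,\ \ldots,\ \tau^{n-1}\cdot g_2)\;,
$$
together with a few related terms for the remaining trapdoor values, where $n$ only needs to be an upper bound on the number of constraints. The circuit-dependent part then combines these encodings with the QAP polynomials of the concrete circuit to derive the remaining elements of the Common Reference String, so the output of the first part can be reused across many circuits.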