6. Algorithms

Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARK)

Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARKs) are a technology used to verify the correctness of computational results without revealing the underlying information. A zk-SNARK is a zero-knowledge proof, meaning the prover can convince the verifier that a statement is correct without revealing anything beyond the statement's validity.

The basic idea of zk-SNARK is to convert the computation into an arithmetic circuit and then encode the circuit as polynomials. By evaluating these polynomials at specially chosen points, a proof is generated that can be compressed into a very short string. The verifier can use the public parameters and the proof to check the correctness of the computation without needing to know the details of the circuit.
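
As a toy illustration of the arithmetization step (this shows only the circuit-to-algebra translation, not any particular SNARK, and the example statement is chosen purely for illustration), the claim "I know x such that x^3 + x + 5 = 35" can be written as rank-1 constraints of the form (A·w) × (B·w) = C·w over a witness vector w:

```python
# Toy arithmetization sketch: the statement "I know x with x^3 + x + 5 = 35"
# written as rank-1 constraints (A.w) * (B.w) = (C.w). A real zk-SNARK would
# go on to encode these constraints as polynomials and produce a short proof.

def dot(row, w):
    return sum(r * v for r, v in zip(row, w))

# Witness layout: w = [1, x, out, v1, v2] with v1 = x*x and v2 = v1*x.
A = [[0, 1, 0, 0, 0],   # x          *  x  =  v1
     [0, 0, 0, 1, 0],   # v1         *  x  =  v2
     [5, 1, 0, 0, 1]]   # v2 + x + 5 *  1  =  out
B = [[0, 1, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [1, 0, 0, 0, 0]]
C = [[0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 1, 0, 0]]

def satisfies(w):
    return all(dot(a, w) * dot(b, w) == dot(c, w) for a, b, c in zip(A, B, C))

x = 3
print(satisfies([1, x, 35, x * x, x ** 3]))   # True: the circuit accepts x = 3
```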

zk-SNARKs are efficient, scalable, and secure, and can be widely applied in various scenarios, such as blockchain, privacy protection, and data verification. They can help reduce network transmission and computational costs, improving privacy and data security.

zk-SNARK is a powerful technology that enhances privacy and data security. Applied to blockchain, it enables more efficient, scalable, and secure computational verification. For transactions and smart contracts, zk-SNARK technology allows the correctness of a computational process to be confirmed without re-executing it or learning its specific content: the verifier runs a short, fixed verification computation on the proof and compares it with the expected result, thereby learning whether the "computational process" was carried out correctly.

zk-SNARKs can provide a proof that a computation produced a specific output, where verifying the proof is much faster than executing the corresponding computation. In terms of scalability, if direct verification of a block takes too long, one party can verify it and generate a proof, and then others in the network can quickly check that proof instead of everyone spending time on direct execution. In terms of privacy, users can prove in transactions that they hold certain unspent assets without revealing the full source and destination of those assets. This addresses the information leakage caused by transaction transparency in blockchain platforms such as Bitcoin, where the identities of transaction parties and the transaction amounts are exposed.

RSA Accumulators

RSA accumulators are a widely used data structure in the blockchain field for verifying transaction validity and protecting blockchain security. They can quickly accumulate data, and membership in the accumulated result can be proven efficiently. Cryptographic accumulators were first proposed by Benaloh and de Mare in 1993.

The basic idea of an RSA accumulator is to map each data element to a prime via a hash function and then fold these primes together by repeated exponentiation modulo an RSA modulus N: starting from a public base g, the accumulator is A = g^(x1 × x2 × ... × xk) mod N. The accumulator value is public, yet there is no way to extract information about the original data elements from it. To verify that a data element belongs to the accumulator, one presents a membership witness w (the accumulator computed over all the other elements), and the verifier checks that raising w to the element's prime reproduces A.
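
A minimal sketch of this flow follows, under loudly toy assumptions: the modulus is built from two known Mersenne primes for brevity, and the hash-to-prime uses naive trial division, whereas a real deployment uses a large RSA modulus whose factorization nobody knows and a stronger hash-to-prime.

```python
# Toy RSA accumulator sketch (illustrative, insecure parameters).
import hashlib

N = (2**61 - 1) * (2**89 - 1)        # toy modulus; real N has secret factors
G = 3                                # public base

def hash_to_prime(data):
    """Map an element to an odd 32-bit candidate, then scan upward for a prime."""
    x = int.from_bytes(hashlib.sha256(data).digest()[:4], "big") | 1
    while any(x % d == 0 for d in range(3, int(x**0.5) + 1, 2)):
        x += 2
    return x

def accumulate(elements):
    """A = G^(x1 * x2 * ... * xk) mod N, folding in one prime per element."""
    A = G
    for e in elements:
        A = pow(A, hash_to_prime(e), N)
    return A

def membership_witness(elements, target):
    """The witness is the accumulator over every element except the target."""
    return accumulate([e for e in elements if e != target])

def verify(A, target, witness):
    """Membership holds iff raising the witness to the target's prime gives A."""
    return pow(witness, hash_to_prime(target), N) == A

items = [b"tx-1", b"tx-2", b"tx-3"]
A = accumulate(items)
w = membership_witness(items, b"tx-2")
print(verify(A, b"tx-2", w))   # True
print(verify(A, b"tx-9", w))   # False
```

Deleting elements and proving non-membership require extra machinery (e.g., Bezout-coefficient witnesses) beyond this sketch.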

RSA accumulators offer advantages such as a constant-size accumulator value, fast addition of new data elements, and membership proofs that disclose nothing about the other elements. Furthermore, RSA accumulators are highly secure: under the strong RSA assumption they withstand attacks such as precomputation and collision attacks, so an adversary cannot forge a valid witness for an element that was never accumulated.

RSA accumulators are widely applied in the blockchain field, for example to verify the validity of transactions. They can also be applied in other areas, such as digital certificate validation and cryptographic protocol verification. Unlike Bitcoin's Merkle trees and Ethereum's Merkle Patricia tries, the Amaze chain uses RSA accumulators for storing, updating, querying, proving, and validating the constantly changing blockchain state.

Kate Polynomial Commitments (KZG)

Kate Polynomial Commitments (KZG, after Kate, Zaverucha, and Goldberg) are a scheme for committing to polynomials and verifying their evaluations, widely applicable in decentralized finance (DeFi) and privacy protection scenarios in cryptocurrency and blockchain.

The basic idea of a KZG polynomial commitment is to bind the prover to a polynomial with a single short commitment, so that evaluations of the polynomial can later be verified without revealing the polynomial itself. Using public parameters from a trusted setup (powers of a secret point s in the group), the prover publishes the commitment C = g^(f(s)). To prove that f(z) = y at some point z, the prover computes the quotient polynomial q(X) = (f(X) − y)/(X − z) and sends the single group element π = g^(q(s)). During validation, the verifier uses a pairing to check that the committed quotient satisfies q(s)·(s − z) = f(s) − y, i.e., that the commitment and the claimed evaluation match.
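
The following is a minimal sketch of this flow under loudly toy assumptions: it works in the multiplicative group of a small prime field and, purely to expose the algebraic identity, lets the verifier use the trusted-setup trapdoor s directly; a real KZG keeps s secret and checks the same identity with a pairing on a pairing-friendly curve.

```python
# Toy KZG sketch. INSECURE: the verifier uses the trapdoor s only to show the
# identity pi^(s-z) == C * g^(-y), which a pairing would check without s.

p = 2**127 - 1            # prime group modulus (toy); exponents live mod p - 1
order = p - 1
g = 3
s = 0xC0FFEE              # trusted-setup trapdoor (discarded after setup in practice)

def setup(max_degree):
    """SRS: the group elements g^(s^i) for i = 0..max_degree."""
    return [pow(g, pow(s, i, order), p) for i in range(max_degree + 1)]

def commit(srs, coeffs):
    """C = g^(f(s)), assembled from the SRS without knowing s."""
    C = 1
    for c, gi in zip(coeffs, srs):
        C = C * pow(gi, c % order, p) % p
    return C

def quotient(coeffs, z):
    """q(X) = (f(X) - f(z)) / (X - z) via synthetic division (coeffs ascending)."""
    n = len(coeffs) - 1
    q = [0] * n
    q[n - 1] = coeffs[n] % order
    for k in range(n - 1, 0, -1):
        q[k - 1] = (coeffs[k] + z * q[k]) % order
    return q

def open_at(srs, coeffs, z):
    """Return the claimed value y = f(z) and the single-group-element proof pi."""
    y = sum(c * pow(z, i, order) for i, c in enumerate(coeffs)) % order
    return y, commit(srs, quotient(coeffs, z))

def verify(C, z, y, pi):
    """Checks pi^(s - z) == C * g^(-y); a pairing does this without seeing s."""
    return pow(pi, (s - z) % order, p) == C * pow(g, (order - y) % order, p) % p

coeffs = [5, 0, 1]                  # f(X) = X^2 + 5
srs = setup(len(coeffs) - 1)
C = commit(srs, coeffs)
y, pi = open_at(srs, coeffs, 7)     # prove f(7) = 54
print(verify(C, 7, y, pi))          # True
```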

KZG polynomial commitments are efficient, scalable, and secure, effectively protecting data privacy and security. Compared to traditional polynomial commitment algorithms, the KZG scheme avoids the high computational cost and complexity of FFT-based approaches. Additionally, the KZG scheme has security properties such as evaluation binding (a commitment cannot be opened to two different values at the same point), resistance to forged proofs, and collision resistance.

A KZG proof consists of a single group element yet can attest to any number of elements in a vector. This makes KZG commitments well suited to validating nodes, light clients, and stateless blockchains, reducing network transmission and enabling fast verification. KZG polynomial commitments therefore have important applications in the Amaze blockchain.

Inner Product Argument (IPA)

The Inner Product Argument (IPA) is a commonly used zero-knowledge proof protocol for proving that the inner product of two vectors of length n equals a claimed value c. IPA is widely used in the cryptocurrency and blockchain domains for privacy protection, secure multi-party computation, and cryptographic verification, among other scenarios. In simplified form, the protocol proceeds as follows:

1. The prover commits to the vectors a and b (e.g., with Pedersen commitments) together with the claimed inner product c = a·b, where a·b = Σ(a[i] × b[i]).

2. The prover splits each vector into halves, (aL, aR) and (bL, bR), and sends the two cross terms L = aL·bR and R = aR·bL.

3. The verifier responds with a random challenge x.

4. Both parties fold the vectors to half their length, a' = x·aL + x^(-1)·aR and b' = x^(-1)·bL + x·bR, which reduces the original claim to the smaller claim a'·b' = c + x^2·L + x^(-2)·R.

5. Steps 2 to 4 are repeated log2(n) times until the vectors have length 1, at which point the prover reveals the final scalars and the verifier checks the last relation directly.

In this protocol, the prover never reveals the vectors a and b themselves: each round discloses only the cross terms, and the random challenges ensure these leak nothing useful, so the protocol protects data privacy. IPA is an efficient, scalable, and secure zero-knowledge proof protocol, widely applicable to multi-party computation, privacy protection, and data verification scenarios in the cryptocurrency and blockchain fields.

IPA requires no trusted setup; it works in ordinary discrete-log groups, allowing provers to convince verifiers of the correctness of the "inner product" of Pedersen commitments. The basic strategy is divide and conquer: the problem is repeatedly split into a sub-problem of the same type but half the size until it becomes trivial. In each round the prover sends some information, the verifier issues a challenge, and both parties move to the next, smaller commitment. The Fiat-Shamir transform turns this interactive proof into a non-interactive one by replacing each verifier challenge with a collision-resistant hash of the transcript so far; the resulting inner product argument can in turn be used to verify polynomial evaluations.

Completeness means that a prover who follows the protocol can always convince the verifier of a true statement. Soundness means that a cheating prover cannot pass the verifier's checks with a false proof, except with a very low success rate.
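
Below is a minimal sketch of the folding recursion in isolation, with the commitment layer omitted for brevity, so it demonstrates the divide-and-conquer reduction and the Fiat-Shamir challenges rather than a full zero-knowledge argument:

```python
# Minimal sketch of the IPA folding recursion (commitments omitted): a length-n
# claim <a, b> = c is halved each round using a challenge x, until length 1.
# A real IPA also folds the Pedersen commitment bases in the same way.
import hashlib

P = 2**61 - 1  # toy prime field

def ip(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def challenge(*vals):
    """Fiat-Shamir: derive the round challenge by hashing the prover's messages."""
    h = hashlib.sha256(repr(vals).encode()).digest()
    return int.from_bytes(h[:8], "big") % P or 1

def fold_and_check(a, b, c):
    """Reduce <a, b> = c to a length-1 claim and check it (len(a) a power of 2)."""
    while len(a) > 1:
        h = len(a) // 2
        aL, aR, bL, bR = a[:h], a[h:], b[:h], b[h:]
        L, R = ip(aL, bR), ip(aR, bL)          # cross terms sent by the prover
        x = challenge(L, R, c)                 # verifier's (hashed) challenge
        xinv = pow(x, P - 2, P)
        a = [(x * u + xinv * v) % P for u, v in zip(aL, aR)]
        b = [(xinv * u + x * v) % P for u, v in zip(bL, bR)]
        c = (c + x * x % P * L + xinv * xinv % P * R) % P   # folded claim
    return a[0] * b[0] % P == c

a, b = [3, 1, 4, 1], [2, 7, 1, 8]
print(fold_and_check(a, b, ip(a, b)))   # True: the folded claim still holds
```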

Blockchain Distribution Network (BDN)

The "Blockchain Distribution Network" is a novel network architecture and technology aimed at enhancing the performance, reliability, and security of blockchain transactions broadcasting, transaction pools, consensus, rapid synchronization of distributed ledgers, and blockchain applications. This technology distributes data across multiple nodes globally, enabling nodes to synchronize faster, more reliably, and more efficiently. When nodes request data, the network selects the nearest node to reduce latency and improve response speed. Specifically, the network caches content on nodes worldwide and offers functions like caching static content, providing load balancing, and preventing DDoS attacks, thereby enhancing the performance and security of blockchains and Dapp applications.

The primary advantages of this technology include:

• Improving blockchain performance and availability: by caching data on nodes worldwide, users can access content faster, enhancing the performance and availability of blockchains and Dapps.

• Reducing network traffic: caching static content lowers the bandwidth usage of blockchains and Dapp applications, thereby reducing network traffic and costs.

• Enhancing security: DDoS protection, secure authentication, and malware prevention strengthen blockchain security.

• Supporting large-scale globalization: rapid consensus and data consistency for global distributed ledgers, achieving data synchronization and rapid global consistency without affecting the network's decentralized characteristics.

Amaze relies on the BDN, a neutral blockchain distribution network, to guarantee data synchronization, rapid global consistency, and continuous operation of the system even if a large-scale network disconnection splits it into several parts, all without affecting the network's decentralization.

Efficient Compressed Bitmap (RoaringBitmap)

RoaringBitmap is a fast and efficient bitmap data structure used for efficiently compressing and processing large-scale binary data. It is mainly suitable for sparse bitmaps, where only a few bits are set to 1 in a dataset. Compared to traditional bitmap data structures, RoaringBitmap offers a higher compression ratio and faster query speed, especially in large-scale data sets.

RoaringBitmap natively stores 32-bit integers, with extensions for 64-bit values; other data types, such as strings or IPv4 and IPv6 addresses, can be indexed by first mapping them to integers. It provides the usual bitwise operations, such as AND, OR, XOR, NOT, and membership (contains) queries, all of which are heavily optimized.

Internally, RoaringBitmap uses a two-level structure with specialized compressed containers built from sorted arrays, bitmaps, and bit-level tricks. It can also be combined with general-purpose compression algorithms (e.g., LZ4) to further improve storage compression.

RoaringBitmap has been widely applied in various data processing scenarios, such as search engines, databases, graphics processing, machine learning, etc. Its efficiency and scalability make it one of the powerful tools for handling large-scale data.

In the blockchain domain, RoaringBitmap can be used for processing and compressing transaction and state data, improving data processing and storage efficiency. Bitmap indexes are widely used in databases and search engines, significantly improving query speeds through bit-level parallelism. However, plain bitmap indexes occupy a lot of memory, so compressed bitmap indexes are adopted. RoaringBitmap is an excellent compressed bitmap index, offering better compression and faster queries than common RLE-based (run-length encoding) schemes such as WAH (Word-Aligned Hybrid) and Concise (Compressed 'n' Composable Integer Set).

RoaringBitmap serves purposes similar to a plain bitmap, such as indexing, but is superior in performance and space utilization, and has been widely adopted in many mature open-source big data platforms. Its main idea is to divide the 32-bit range [0, 2^32) into 2^16 buckets, each bucket holding a Container that stores the lower 16 bits of the values falling into it. To store or query a value k, split k into its high 16 bits (k ÷ 2^16, i.e., k >> 16) and its low 16 bits (k mod 2^16, i.e., k & 0xFFFF); the high 16 bits locate the bucket, and the low 16 bits are stored in that bucket's Container. RoaringBitmap has two main container structures: the Array Container for sparse data and the Bitmap Container for dense data (implementations also add a Run Container for long consecutive runs of values).
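
A minimal sketch of this bucketing idea follows, using a Python set as a stand-in for the sorted-array container and a plain integer as the bitmap container (real implementations use packed 16-bit arrays, fixed 8 KB bitmaps, and run containers):

```python
# Toy sketch of RoaringBitmap's bucketing: split a 32-bit value into a 16-bit
# "high" key selecting a bucket and a 16-bit "low" payload stored in that
# bucket's container, with the container type chosen by density.

ARRAY_LIMIT = 4096   # Roaring switches array -> bitmap beyond 4096 entries

class ToyRoaring:
    def __init__(self):
        self.containers = {}             # high 16 bits -> set (array) or int (bitmap)

    def add(self, k):
        hi, lo = k >> 16, k & 0xFFFF     # bucket key and low-bit payload
        c = self.containers.setdefault(hi, set())
        if isinstance(c, set):
            c.add(lo)
            if len(c) > ARRAY_LIMIT:     # container became dense: convert to bitmap
                bits = 0
                for v in c:
                    bits |= 1 << v
                self.containers[hi] = bits
        else:
            self.containers[hi] = c | (1 << lo)

    def contains(self, k):
        hi, lo = k >> 16, k & 0xFFFF
        c = self.containers.get(hi)
        if c is None:
            return False
        return lo in c if isinstance(c, set) else bool(c >> lo & 1)

r = ToyRoaring()
r.add(100)
r.add(70000)                             # lands in bucket 1 (70000 >> 16 == 1)
print(r.contains(100), r.contains(70000), r.contains(5))   # True True False
```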

Big Data ETL

Extract, Transform, and Load, commonly abbreviated as ETL, is a data processing workflow used to move large volumes of data from one data source to another storage system. The ETL process generally includes the following three steps:

• Extract: Retrieve raw data from the source data sources. This typically involves connecting to one or more data sources, such as databases, log files, APIs, etc., and extracting the data into a unified storage area, like a data warehouse or data lake.

• Transform: Process and transform the raw data to meet the format and quality requirements of the target data storage area. This often includes data cleansing, data merging, deduplication, data conversion, data splitting, and data aggregation.

• Load: Load the transformed data into the target data storage area. This typically involves importing data into databases, data warehouses, data lakes, or data stream processing systems.

The ETL process is crucial for data analysis and data mining: it extracts data from multiple sources, cleanses and transforms the data, and then loads it into a data warehouse or data lake, providing a reliable data foundation for subsequent analysis and mining. It is also a routine task for data scientists and data engineers, and is hence widely applied in the field of data processing. ETL is commonly used in data warehousing, decision support, real-time analytics, data mining, and data intelligence, where its primary role is to integrate dispersed, semi-structured, and non-standardized data into a unified data warehouse that provides a quality-assured data source for analysis and decision-making. Amaze uses ETL to efficiently analyze and process historical ledger data, reconstruct arbitrary spatiotemporal states and account transactions, and work with contract storage.
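
To make the three stages concrete, here is a minimal sketch; the file name, column names, and SQLite target are hypothetical stand-ins chosen purely for illustration, not names used by Amaze:

```python
# Minimal ETL sketch: extract raw rows, transform (clean + dedupe), load.
import csv
import sqlite3

def extract(path):
    """Extract: stream raw rows from a CSV export of ledger data."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Transform: drop malformed rows, normalize addresses, deduplicate by hash."""
    seen = set()
    for row in rows:
        if not row.get("tx_hash") or row["tx_hash"] in seen:
            continue
        seen.add(row["tx_hash"])
        yield (row["tx_hash"], row["sender"].lower(), int(row["amount"]))

def load(records, db_path):
    """Load: write the cleaned records into the target store (here, SQLite)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS tx"
                " (hash TEXT PRIMARY KEY, sender TEXT, amount INTEGER)")
    con.executemany("INSERT OR IGNORE INTO tx VALUES (?, ?, ?)", records)
    con.commit()
    con.close()

load(transform(extract("ledger_export.csv")), "warehouse.db")
```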

Verifiable Delay Function

A Verifiable Delay Function (VDF) is a cryptographic function characterized by the fact that computing its output requires a certain number of sequential steps, but the correctness of the output can be verified relatively quickly. VDFs are designed to prevent cheaters from shortening the computation time through parallel computing or faster hardware, ensuring the computation of the function is fair to some extent. VDFs have multiple applications in blockchain and distributed systems, such as:

1. Random number generation: VDFs can be used to generate unpredictable and fair random numbers, avoiding the risk of manipulation of random numbers.

2. Timestamp services: VDFs can act as a timestamp service, ensuring that in a distributed network, the sequence of messages or events is not tampered with, which is crucial for preventing double-spending and ensuring transaction order.

3. Proof of Sequential Work: VDFs can be used to build a consensus algorithm called "Proof of Sequential Work", which requires participants to execute a series of computational tasks in sequence to prove they have completed a certain amount of work, thus preventing manipulation and cheating and increasing the network's security.

A VDF takes an input value, performs a prescribed amount of delayed computation, and produces a result together with a proof that can be verified easily. VDFs use inherently sequential algorithms whose execution time is predictable and cannot be shortened by parallel computing, while the proof can be verified quickly. VDFs can not only reduce the massive energy consumption caused by Proof of Work (PoW) but also help mitigate attacks based on majority computing power (such as 51% attacks), since an attacker cannot produce the output faster by amassing parallel hardware or monopolize the computation over a long period.
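
A minimal sketch of this eval/prove/verify pattern follows, in the style of Wesolowski's construction; the modulus, delay parameter, and challenge prime are toy values (a real VDF works in a group of unknown order, such as an RSA modulus or a class group, and derives the challenge prime by hashing (x, y, T)):

```python
# Toy VDF sketch, Wesolowski-style. INSECURE parameters, for illustration only:
# the modulus has a known group order, so a shortcut exists here by design.

N = 2**127 - 1            # toy modulus; real VDFs use a group of unknown order
T = 1 << 16               # number of sequential squarings (the enforced delay)

def vdf_eval(x):
    """Slow part: y = x^(2^T) mod N via T sequential squarings (no parallel speedup)."""
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def vdf_prove(x, l):
    """Prover's proof: pi = x^(2^T // l) mod N."""
    return pow(x, (1 << T) // l, N)

def vdf_verify(x, y, pi, l):
    """Fast check: since 2^T = q*l + r, we need pi^l * x^r == y (O(log T) work)."""
    r = pow(2, T, l)
    return pow(pi, l, N) * pow(x, r, N) % N == y

x = 0xAC4E                # arbitrary input
y = vdf_eval(x)           # takes T sequential steps
l = 1000003               # toy challenge prime (really a Fiat-Shamir hash-to-prime)
print(vdf_verify(x, y, vdf_prove(x, l), l))   # True, far faster than re-evaluating
```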

Zero-Knowledge Ethereum Virtual Machine (zkEVM)

zkEVM is an extension of the Ethereum Virtual Machine (EVM) offering enhanced privacy protection and execution efficiency, applicable to a wider range of scenarios. It allows private computations to be executed without revealing their inputs. Unlike traditional smart contract execution, zkEVM does not have to publish the details of execution on the blockchain, thus achieving stronger privacy protection.

zkEVM uses zero-knowledge proof technology to verify the correctness of computations without having to disclose the data used in the calculations. Specifically, when zkEVM executes a computation, it generates a proof to demonstrate the correctness of the computation, containing only the necessary computation results and not leaking details of the computation process or data. This enables zkEVM to prove the correctness of computations to other participants without exposing sensitive computation details.

zkEVM is a virtual machine that generates zero-knowledge proofs to verify program correctness, designed to support zero-knowledge technology when executing smart contracts. Unlike traditional virtual machines, zkEVM proves the correctness of program execution, including the validity of the inputs and outputs used during the run. This means that thousands of transactions and contract executions need not be re-executed by every node; correctness can be ensured through a small amount of verification computation.

zkEVM is divided into three parts: the execution environment, proof circuit, and verification program. The execution environment, similar to EVM, is where programs (smart contracts) run in zkEVM. The proof circuit generates zero-knowledge proofs to verify the validity of computations of transactions in the execution environment, while the verification program submits these validity proofs for verification, including input and output states.

The implementation of zkEVM builds on existing zk-SNARK technology, which allows a verifier to validate the correctness of a proof without learning the underlying computation. Users of zkEVM can therefore prove to others that they have executed a computational task without exposing the computational details and data.

The advantages of zkEVM include higher privacy protection, higher execution efficiency, and broader application scenarios. It's used for protecting personal privacy, safeguarding trade secrets, and ensuring data security.

Batch Range Proofs

Batch range proofs are a method for simultaneously verifying that multiple data points fall within a specific range. By combining multiple proofs, the method improves verification efficiency and is particularly suitable for situations involving large amounts of data, such as transaction verification. Combined with zero-knowledge proof technology, it ensures data conforms to a predetermined range while protecting privacy, enhancing the security and privacy of transactions. The technical implementation typically relies on advanced cryptographic methods capable of aggregating many range proofs into a compact proof whose size grows only slowly with the number of statements. Batch range proofs play a key role in enhancing the efficiency and privacy protection of blockchain transactions, with three main processes:

1. Updating: The updating process often involves adjusting or optimizing algorithms to adapt to new data sets or improve efficiency. In a blockchain environment, this might mean adjusting parameters to handle larger data sets or meet new security requirements. Updates may include improving aggregation methods to handle more proofs without sacrificing security.

2. Proof Generation: Proof generation is the process of creating a cryptographic proof that a set of data falls within a specific numerical range. In batch range proofs, this means combining multiple independent range proofs into a single proof. This process requires using complex algorithms to ensure each individual proof is correctly considered and merged while keeping the overall proof size and complexity manageable.

3. Verification: The verification process involves checking the validity of the batch range proof, ensuring all included data falls within the specified range. This typically involves a series of mathematical and cryptographic checks to ensure the proof's correctness and completeness. In blockchain applications, this step is crucial for maintaining the network's integrity and trust.

The common goal of these three processes is to ensure batch range proofs are both efficient and secure, capable of handling large amounts of data while maintaining user privacy and network security. As blockchain technology evolves and its applications expand, the optimization and innovation of these processes are vital for supporting more complex and secure cryptocurrency transactions.
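
The core aggregation trick behind proof generation and verification can be sketched in isolation: many individual constraints are compressed into one randomized check via a random linear combination. The sketch below omits the zero-knowledge layer entirely (there are no commitments, and the bit decompositions are handed over in the clear), so it illustrates only the batching arithmetic:

```python
# Sketch of the batching idea behind batch range proofs, ZK layer omitted:
# m range claims (each "v is in [0, 2^NBITS)") are compressed into a single
# randomized check. In a real system the prover commits to the bits instead
# of revealing them, and the check happens over those commitments.
import secrets

P = 2**61 - 1    # toy prime field
NBITS = 16       # claimed range [0, 2^NBITS)

def honest_bits(v):
    return [(v >> i) & 1 for i in range(NBITS)]

def batch_verify(values, decomps):
    """One randomized pass over all claims instead of checking each separately."""
    r = secrets.randbelow(P - 1) + 1   # random challenge (Fiat-Shamir in practice)
    acc, rj = 0, 1
    for v, bs in zip(values, decomps):
        # Constraints: recomposition (sum_i b_i*2^i - v == 0) and booleanity
        # (b_i*(b_i - 1) == 0); each gets a fresh power of the challenge r.
        for term in [sum(b << i for i, b in enumerate(bs)) - v] + \
                    [b * (b - 1) for b in bs]:
            rj = rj * r % P
            acc = (acc + rj * term) % P
    return acc == 0   # zero with high probability iff every constraint is zero

vals = [0, 5, 65535]
print(batch_verify(vals, [honest_bits(v) for v in vals]))     # True
print(batch_verify([70000], [[70000] + [0] * (NBITS - 1)]))   # False: fake "bit"
```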

Search for Singularity

"Searching for Singularity" is a key concept that uses advanced mathematical tools to optimize the process of proving balance, executing transactions, and achieving consensus. This concept employs parameter family searches, allowing untrusted provers to commit to a vector a ∈ Fm, and prove that all entries of a belong to a predetermined table t ∈ Fn. The focus here is on improving operational efficiency while maintaining security. Key aspects of "Searching for Singularity" in Amazechain include:

1. Optimization of the Prover's Commitment: For m lookups into a table of size n, Amazechain's prover commits to only m + n field elements. These elements are small, lying in the set {0, ..., m} regardless of the size of the field F.

2. Significant Improvement in Proof Costs: With a commitment scheme based on multi-exponentiation, the prover's cost is dominated by O(m + n) group operations (such as elliptic curve point additions) plus the cost of proving multilinear polynomial evaluations over the Boolean hypercube.

3. Applicability to Large Structured Tables: Unlike previous lookup arguments, Amazechain can use much larger tables (e.g., of size 2^128 or more) without requiring any party to commit to the table t, provided t has suitable structure.

4. Enhanced Security for Spark: Amazechain provides a stronger security analysis of Spark. While Spartan's original analysis assumed the metadata was committed by an honest party, Amazechain extends this result, proving that Spark remains secure even when the metadata is committed by a malicious party.

5. Lookup Arguments for Structured and Unstructured Tables: Amazechain extends Spark to directly support lookup arguments for both structured and unstructured tables, with the efficiency characteristics described above.

Through these features, Amazechain significantly enhances the efficiency and security of lookup operations in blockchain technology, especially when dealing with complex transactions and datasets.
