11. AmazeChain 2024 Roadmap

This roadmap raises public awareness of the AmazeChain development process and its key milestones. By studying it, users can better understand why and how the community sets upgrade priorities. Notably, the stages of the roadmap do not proceed strictly in sequence; multiple development paths progress in parallel. In this document, we examine each stage of the roadmap, its key features, and how you can track their completion.

The overall diagram is as follows:

Color and Shape Explanation of the Roadmap:

• Light green background shapes represent tasks to be done, referred to as “Tasks.” In the diagram, green fill indicates completion: the left third is fully filled, marking “What’s Done,” while the right two-thirds are partially filled, showing progress on “What’s Next.”

• Light blue background shapes represent milestones, referred to as “Outcomes.”

• Light purple background shapes are schemes currently under research.

• Light yellow background boxes are for comments, explanations, or sub-roadmaps.

• White background shapes represent tasks that are pending.

Takeoff

Objective: To achieve 1 million TPS and a capacity of 10 billion accounts.

Importance: Current blockchains struggle to efficiently process a large volume of transactions, significantly impacting user experience. As the number of users and the volume of transactions increase, transaction fees rise sharply, causing network difficulties. Achieving a capacity of one million transactions per second will ensure the public chain can scale to meet the needs of a global user base.

Overview: The "Takeoff" plan aims to pave a development path to unlock higher transaction throughput and capacity for the entire network. This goal will be achieved through the implementation of a series of innovative expansion technologies.

• Sharding technology and data availability sampling play a central role in the development of blockchain technology. They enable additional transaction processing capacity to be seamlessly integrated into the main chain while maintaining decentralization and security. Deep sharding technology, in particular, which is managed by a committee of validators randomly selected to decide transactions for each slot, not only improves data availability but also achieves network consensus, settlement, and data layer unity through the data sampling of consensus nodes. Data availability itself becomes a validity check, as nodes will not follow branches with unavailable data, just as they will not follow branches containing invalid state transitions. This is crucial for achieving fully trustless scaling.

• Parallel transaction execution technology allows different transactions and contracts to be processed simultaneously on multiple nodes. This method increases the overall system efficiency by executing a large number of blockchain transactions in parallel and processing and re-executing conflicting transactions. Each node handles a portion of transactions, resulting in higher throughput and faster processing speeds across the entire network. Parallel execution also enhances network scalability, as more nodes can be added to handle more transactions without affecting overall performance as the network grows.

• Parallel communication technology, through subscriptions, subnets, channels, and communication sharding, divides the network into multiple parts, each processing transactions independently. This not only increases processing speed and system throughput but also effectively addresses the challenge of high transaction volumes.

• Account capacity technology enhances account processing capability by optimizing data storage and processing mechanisms. It supports more efficient transaction processing, further enhancing the overall performance and scalability of the system.

• zk-VM technology combines virtual machines with zero-knowledge proof principles to achieve high privacy and security in transactions. Under this technology, proposers need only execute once to generate a proof, while validators can verify quickly and at a low cost, optimizing the blockchain network's processing speed and efficiency.
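To make the parallel execution idea concrete, here is a minimal Python sketch of optimistic parallelism with conflict detection and re-execution. The ledger model and field names are illustrative assumptions, not AmazeChain's actual engine:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy ledger: account -> balance. All names and structures here are
# illustrative assumptions, not AmazeChain's actual execution engine.
def execute(tx, state):
    """Run one transfer against a state snapshot, recording read/write sets."""
    reads = {tx["from"], tx["to"]}
    writes = {tx["from"]: state[tx["from"]] - tx["amount"],
              tx["to"]: state[tx["to"]] + tx["amount"]}
    return {"tx": tx, "reads": reads, "writes": writes}

def parallel_execute(txs, state):
    # Phase 1: run every transaction optimistically against one snapshot.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda t: execute(t, state), txs))
    # Phase 2: commit in order; any transaction whose reads overlap an
    # earlier committed write saw a stale snapshot and is re-executed.
    touched, conflicts = set(), 0
    for r in results:
        if r["reads"] & touched:
            r = execute(r["tx"], state)   # replay against current state
            conflicts += 1
        state.update(r["writes"])
        touched |= set(r["writes"])
    return state, conflicts

final_state, conflicts = parallel_execute(
    [{"from": "a", "to": "b", "amount": 10},
     {"from": "b", "to": "c", "amount": 5}],
    {"a": 100, "b": 50, "c": 0})
```

Transactions whose read set overlaps state already written earlier in the batch are replayed serially against the up-to-date state, which is the "processing and re-executing conflicting transactions" step described above.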

What’s Done:

• Parallel transaction execution and communication technologies have surpassed the performance of existing chains such as BTC and ETH by thousands to millions of times through parallel execution, parallel communication, account expansion, and related techniques. New blocks execute transactions in parallel at 100K TPS; historical replay runs in parallel at 500K TPS; state reconstruction, L0 settlement-layer change-set state reconstruction, and synchronization reach 20M TPS.

• Account capacity technology: Accounts are expanded through more efficient polynomial commitment structures, data compression, and data cleanup strategies across data storage, network bandwidth, and transaction processing. Sharding, cleanup strategies that retain only final states, and privacy technologies such as zero-knowledge proofs reduce the size of transaction data while maintaining security and privacy.

• zk-VM: Built on the Risc0 proof system, the zk-VM achieves full compatibility with virtual machines such as BTC Script and the ETH EVM. It supports efficient proof generation by proposers and allows validators to verify proofs quickly and at low cost. The system has been running stably on hundreds of thousands of mobile devices for 10 months, demonstrating its reliability and effectiveness.

What’s Next:

Data Availability:

• P2P design for data availability sampling involves all efforts and research required for network sharding.

• The feasible data capacity is bounded by how much data each node is able to download.

• Data availability sampling client: A minimalist-level client that can quickly judge data availability through a few thousand bytes of random sampling.

• Efficient data availability self-repair: Capable of efficiently reconstructing all data even under the worst network conditions (e.g., malicious validator attacks or extensive node downtime).

• Fully decentralized sequencers, trustless fraud provers, immutable contracts, etc.

• Polynomial commitments require a “KZG ceremony” to create a trusted setup, initially reusing the output of the EIP-4844 ceremony.

• Execution Proofs: Transforming smart contract operations into mathematical equations, then generating proofs through zero-knowledge proof algorithms to confirm these operations have been correctly executed without revealing specific execution data. This is a key part of ensuring the correctness of blockchain transactions and smart contract operations.

• Data Sharding: Enhances system scalability and processing capacity. Currently, a transitional technology is being implemented, introducing a data structure called "blob" for temporarily storing large amounts of off-chain data. These blob data are packed into specific blocks but are not directly stored on the Ethereum state tree, thus reducing the storage and processing burden on the main chain. This way, network data throughput and expansion capabilities are significantly improved without sacrificing network security and decentralization principles, paving the way for future full data sharding.

• Decentralized sequencers: Sequencers order transactions or data in a decentralized environment, without a central authority or single point of control, ensuring that ordering is fair and transparent. The sequencing process is coordinated among multiple nodes through algorithms (such as consensus algorithms), so all participating nodes agree on the transaction order. This improves processing efficiency and strengthens security and tamper resistance; decentralized sequencers are crucial for maintaining network stability and consistency when handling a large volume of transactions.

• Optimization of zk-VM input proofs is also crucial, and we use KZG tree structures to improve efficiency and security. This will help process and verify transactions more efficiently.

• Verkle Optimization: This is a cryptographic data structure optimization that uses Kate (KZG) commitments, a commitment scheme based on bilinear pairings, to construct verifiable data structures. The structure lets users quickly generate and verify proofs about data (such as blockchain states or transaction logs) without processing the entire dataset. Optimizing these KZG-based trees reduces the computational resources required for data storage and verification, improving system efficiency and scalability, especially in scenarios with large-scale, high-frequency data updates.

• Pre-execution: Pre-execution of blockchain transactions is used to improve the system's processing speed and throughput. This optimization technique involves simulating the execution of transactions before they are officially recorded on the blockchain. This process includes verifying the validity of transactions, checking smart contract logic, obtaining access lists and their proofs, and predicting transaction outcomes. Pre-execution allows potential issues, such as conflicts or execution errors, to be identified and resolved in advance, thereby improving the efficiency and reliability of the blockchain system. This technique is particularly suitable for complex smart contract operations and high-concurrency transaction environments.

• Batch Execution: Batch execution of blockchain transactions is a processing technique that combines multiple transactions into a batch for simultaneous execution, improving the processing efficiency and throughput of the blockchain network. This method reduces the overhead of processing each transaction individually, optimizing the utilization of blockchain resources. Batch execution allows a large number of transactions to be verified and recorded in a short period, especially suitable for high-frequency transaction environments. This also helps reduce network congestion and improve overall system performance.

• Access Change Sets: In stateless blockchains, proofs of transaction and contract reads of accounts and storage state are based mainly on KZG trees, an efficient data structure that can generate and verify state proofs with minimal data. Stateless nodes can thus quickly verify transactions and contract state reads without storing the entire state history, significantly reducing storage requirements while increasing transaction execution speed and efficiency. This proof technique is especially suited to processing a large number of transactions quickly, enhancing the overall security, reliability, and performance of the network.

• Distributed Execution + Distributed Verification: Transitioning to distributed execution and verification not only enhances the system's decentralization but also its adaptability to different network environments. Through the implementation of these plans, the system's performance is improved while ensuring its security and stability.
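The sampling logic behind a data availability client can be sketched in a few lines. Assuming (as in typical DAS designs) 2x erasure coding, a block is unrecoverable only if more than half of its chunks are withheld, so each uniform random sample hits a missing chunk with probability above one half. The chunk counts and sample sizes below are illustrative:

```python
import random

# Toy data-availability sampling. With 2x erasure coding, a block is
# unrecoverable only if more than half of its chunks are withheld, so
# each uniform random sample hits a missing chunk with probability > 1/2.
def sample_available(chunks, k, rng):
    """True if k random samples all succeed, i.e. the data looks available."""
    n = len(chunks)
    return all(chunks[rng.randrange(n)] is not None for _ in range(k))

def detection_rate(n, withheld, k, trials, seed=0):
    """Fraction of trials in which sampling catches a withholding attack."""
    rng = random.Random(seed)
    caught = 0
    for _ in range(trials):
        chunks = [b"chunk"] * n
        for i in rng.sample(range(n), withheld):  # adversary hides chunks
            chunks[i] = None
        if not sample_available(chunks, k, rng):
            caught += 1
    return caught / trials

# Hiding half of 512 chunks: each of 20 samples misses with prob < 1/2,
# so the attack escapes detection with probability around 2**-20.
rate = detection_rate(n=512, withheld=256, k=20, trials=200)
```

This is why a minimalist client can judge availability from a few thousand bytes: twenty 100-byte samples already push the miss probability to roughly one in a million.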

Boundary

Objective: Simplify block verification to downloading N bytes of data, performing basic computations, and verifying SNARKs, suitable for mobile nodes.

Importance: The complexity and resource intensity of block verification limit the number of entities participating in verification. Simplifying the verification process can attract more network participants to become validators, thereby enhancing the network's decentralization and overall health.

Overview: The implementation of minimalist clients aims to provide an easy-to-operate alternative with low storage and bandwidth requirements for users unable or unwilling to run full nodes. The goal is to provide these minimalist clients with the same security assurances as full nodes. This is all based on zero-knowledge proof technologies, such as SNARKs and STARKs, as well as polynomial commitment schemes. Here are some relevant resources:

• A brief introduction to zk-SNARKs

• Analysis of STARK technology

• Explanation of zkSNARKs for those with a mathematical and programming background

• The role of polynomial commitment schemes in chain scaling

The "Boundary" project aims to change the way network validation nodes operate, avoiding the need to store all transaction history. Currently, most network validators need to run full nodes, which is challenging in terms of time and storage.

As the network expands, storage and data processing requirements are also growing. The Ethereum database covers all smart contract deployments, externally owned accounts, their balances, and related storage. As more users join and more developers deploy new contracts, the network size is growing exponentially. The goal of "Boundary" is to make light clients more viable to accommodate this growth and attract new network participants. These light clients will use zero-knowledge proof technologies, such as zkSNARKs and zkSTARKs, to achieve the same security as full nodes.

While SNARKs and STARKs themselves are worth delving into, the key is to understand that these advanced cryptographic technologies will provide the same robust security as existing systems while using fewer computational resources.

The Verkle trees introduced in “Boundary” provide a more efficient method of data storage: vector commitments (KZG) group up to 4096 state elements per commitment, achieving much smaller proof sizes.

By deploying the latest cryptographic technologies such as SNARKs, STARKs, and Verkle trees, “Boundary” will provide a more efficient state storage structure and enhance the network’s decentralization by attracting new participants.
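A back-of-envelope calculation shows why wide vector commitments shrink proofs. The constants below (32-byte hashes, 48-byte commitments, a branching factor of 256) are common illustrative values, not measured figures:

```python
# Proof-size estimate: a Merkle branch needs one sibling hash per level,
# while a Verkle-style branch needs one commitment per level, and the
# tree is far shallower because each node commits to many children.
def tree_depth(n_leaves, arity):
    bits = (n_leaves - 1).bit_length()     # log2(n) for power-of-two n
    arity_bits = arity.bit_length() - 1    # log2(arity), power-of-two arity
    return -(-bits // arity_bits)          # ceiling division

def merkle_proof_bytes(n_leaves, hash_bytes=32):
    return tree_depth(n_leaves, 2) * hash_bytes

def verkle_proof_bytes(n_leaves, commit_bytes=48, width=256):
    return tree_depth(n_leaves, width) * commit_bytes

merkle = merkle_proof_bytes(2**30)   # roughly a billion accounts
verkle = verkle_proof_bytes(2**30)
```

For about a billion accounts this gives 960 bytes per binary Merkle branch versus 192 bytes per wide-commitment branch, before the further compression that aggregating the per-level commitments into a single proof allows.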

What’s Done:

• Resolving VM DoS Issues: Attackers exploit vulnerabilities or design flaws in VMs, deliberately consuming network resources by sending numerous complex transactions or smart contract operations, causing congestion or reducing node processing capacity. Such attacks can delay or block legitimate transactions, impacting the performance and stability of the entire network. These issues are mitigated by optimizing VM execution efficiency, pricing resources through transaction fees, and limiting transaction complexity, which together improve throughput and attack resistance.

• Supporting Minimalist Clients: These clients do not need to store the complete chain state but instead validate stateful transactions and blocks through zero-knowledge proofs. This technology reduces data storage and processing requirements, making the clients run more efficiently, especially suitable for devices with limited resources.

• Verkle Trees: Determining Verkle tree operations and design decisions implemented within the protocol, along with a secure transition from the current Merkle tree process to fully functional Verkle trees; smaller proof sizes promote the adoption of stateless clients. Validators will be able to rely on block builders to provide Verkle proofs about specific block states and validate these lightweight proofs instead of directly maintaining the state of the chain.

• Access List Proofs: Access list proof technology refers to generating cryptographic proofs to verify the existence and status of specific data elements (such as account balances, transaction records) in the blockchain. This technology ensures data integrity and consistency while improving query efficiency and security.

• L0 Settlement Layer: L0 settlement layer technology is the foundational layer in blockchain architecture, providing core data structures and consensus mechanisms to process and validate transactions. It serves as the bottom layer supporting other application layers (such as L1, L2), achieving efficient, secure transaction settlement, and data maintenance.

• Mobile Nodes: Mobile node technology refers to blockchain nodes running on smartphones. These nodes do not need to store the entire blockchain data but participate in the network through a simplified verification process. They provide mobile device users with the ability to participate in the blockchain network, enhancing the network's accessibility and decentralization characteristics.
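The stateless pattern described above, where a builder supplies a proof and a light validator checks it against a short commitment, can be illustrated with a plain Merkle tree (the structure Verkle trees improve on). This is a generic textbook sketch, not protocol code:

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    """Return all levels of a Merkle tree over a power-of-two leaf list."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect the sibling hash at each level: the membership proof."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    """A stateless verifier needs only the root, not the tree."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"acct0", b"acct1", b"acct2", b"acct3"]
levels = build_tree(leaves)
root = levels[-1][0]
proof = prove(levels, 2)
```

The validator stores only the 32-byte root; the builder ships the leaf and its branch, and verification is a handful of hashes.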

What’s Next:

• SNARK / STARK ASICs: ASICs (Application-Specific Integrated Circuits) are hardware designed for a single computational task, in this case computing ZK proofs. The computationally intensive nature of many parts of the protocol will require this specialized hardware.

• SNARK-based Lightweight Clients: The sync committee structure supports basic lightweight clients. A sync committee is a group of 512 validators, randomly chosen every 256 epochs, that continuously signs the block header of each new slot in the chain, attesting to accurate, verified blocks. The sync committee transition process needs to be SNARKed so that clients can confirm without delay which validators make up the current committee.

• SNARK for L1 VM: With the development of emerging zk-VMs, SNARKs should be integrated directly into L1. Enshrining a zk-VM in the settlement layer will allow a single succinct ZKP to validate the entire ecosystem, potentially including thousands of L2s and millions of TPS.

• SNARK for Verkle Proofs: Compressing Verkle proofs into a single SNARK compounds the efficiency gains of using Verkle trees.

• SNARK for Consensus State Transitions: This will replace the sync committee with SNARKs for trustless verification of all consensus layer activities.

• Full SNARKization: The ultimate goal of "Boundary" is to complete all SNARK-related development paths mentioned above. Full SNARKization paves the way for extremely efficient and trustless block verification.

• Increasing L1 Fee Limits: A fully SNARKized L1 will promote increased fee limits without burdening validators with correspondingly larger block sizes. Larger blocks at the L1 level will, in turn, further benefit L2 expansion.

• Accumulator Technology: Accumulator technology in blockchain is used to efficiently and compactly represent data collections. It allows for the rapid verification of the presence or absence of data members without accessing the entire dataset. This technology is particularly suitable for optimizing the data verification process and improving storage and processing efficiency.

• Fast Lookup Technology: Uses lookup arguments to prove balances, execute transactions, and achieve consensus, with the efficiency of these operations improved through multilinear polynomial commitment schemes. Building on the Spark optimization, it provides a stronger security analysis, remaining secure even if metadata is committed by malicious parties. This greatly improves prover cost efficiency, a significant improvement over previous lookup arguments.

• Zero-Knowledge Elementary Databases (ZK-EDBs): An EDB is a list of key-value pairs. The database is first committed; queries on keys are then answered with zero-knowledge proofs that the returned value correctly corresponds to the commitment.
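Of the research items above, the accumulator idea is easy to miniaturize. Below is a toy RSA accumulator with deliberately tiny, insecure parameters; a real one uses a large trusted modulus or class groups, and maps elements to primes by hashing, whereas here we simply pick small primes directly:

```python
# Toy RSA accumulator (tiny insecure parameters, for illustration only).
N = 61 * 53          # toy RSA modulus
G = 2                # toy generator

def accumulate(primes):
    """Commit to a set of elements, each represented by a small prime."""
    exp = 1
    for p in primes:
        exp *= p
    return pow(G, exp, N), exp

def witness(exp, p):
    """Membership witness for p: the accumulator without p's factor."""
    assert exp % p == 0
    return pow(G, exp // p, N)

def verify(acc, p, wit):
    # Raising the witness back to the p-th power must reproduce the
    # accumulator; without p in the committed set, no such witness exists.
    return pow(wit, p, N) == acc

acc, exp = accumulate([3, 5, 7])
w = witness(exp, 5)
```

The accumulator is a single group element regardless of set size, and each membership check touches only the witness, which is the compactness property the roadmap item refers to.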

Endgame

Objective: To have an ideal, simple, robust, and decentralized Proof-of-Stake (PoS) consensus mechanism.

Importance: Much of the technological infrastructure in other parts of the roadmap depends on consensus. PoS significantly reduces the network's energy consumption, while VDFs (verifiable delay functions) offer the entropy advantage of PoW.

Overview: The Endgame roadmap phase, including its VDF track, is ongoing.

A VDF is essentially a “non-parallelizable proof-of-work.” It will strengthen the randomness used in PoS and elsewhere: producing a proof requires everyone to spend a fixed amount of sequential time, which raises the cost and difficulty for forgers and disruptors. This gives the entropy advantage of PoW at an extremely low relative power consumption.

Staking and withdrawal functionality is in place. Proof of Stake reduces network energy consumption by about 99.95% compared to Proof of Work.

Endgame lays the groundwork for future improvements to the network and makes the scaling work possible. It is crucial for proposer/builder separation, which in turn aids data sharding and other mechanisms critical to network speed, and it will play a role in subsequent upgrades.
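The sequential-time property that gives a VDF its entropy advantage can be sketched with repeated squaring modulo a composite. The modulus below is a toy; real VDFs use a 2048-bit RSA or class-group setup, plus a succinct proof (Pietrzak or Wesolowski) so verifiers need not repeat the squarings. Python's built-in pow in the check below simply stands in for that proof:

```python
# Toy VDF: y = x^(2^t) mod N. Without the factorization of N, no method
# substantially faster than t sequential squarings is known, so producing
# y takes a predictable amount of wall-clock time regardless of how many
# machines the prover has.
N = 1000003 * 1000033     # toy semiprime; insecure, for illustration only

def vdf_eval(x, t):
    y = x % N
    for _ in range(t):    # inherently sequential squarings
        y = y * y % N
    return y

output = vdf_eval(5, 1000)
```

Because every participant needs roughly the same wall-clock time to evaluate it, the output serves as unbiasable delayed randomness for validator selection.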

What’s Done:

• Staking and Withdrawal: PoS staking technology allows users to lock a certain amount of cryptocurrency to participate in network consensus and earn rewards. Withdrawal involves unlocking and reclaiming these assets.

• Reward Distribution: PoS reward distribution mechanism automatically calculates and periodically distributes the transaction fees and block rewards generated by the network based on the amount and duration of cryptocurrency staked by users.

• Distributed Mobile Validators: Distributed mobile validator technology uses mobile networks to build a decentralized verification system, with each phone acting as a node, assisting in transaction verification and maintaining the integrity and security of the blockchain network.

• BLS Signature Aggregation Voting: BLS signature aggregation voting technology reduces data volume and improves processing efficiency by combining multiple signatures into one, used in blockchain network consensus mechanisms to ensure the security and validity of transactions.

• Single Slot Finality: Single slot finality technology in blockchain networks means that transactions confirmed within each slot become unchangeable records, ensuring the immediacy and irreversibility of transactions.

• Settlement Layer Consensus: Settlement layer consensus technology achieves consensus among nodes in the settlement layer of the blockchain, ensuring the uniformity of transaction verification and recording, maintaining the network's consistency and security.
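The aggregation algebra behind the BLS voting item above can be shown with a deliberately insecure linear toy scheme: signatures and public keys are numbers that add up, so one check covers the whole committee. Real BLS works in the pairing-friendly BLS12-381 group, where the secret key cannot be recovered; this sketch only illustrates why one aggregate verification replaces 512 individual ones:

```python
import hashlib, random

# Toy *linear* signature scheme showing only the aggregation algebra of
# BLS. It is completely insecure (the secret key is trivially
# recoverable); it exists purely to demonstrate the one-check-per-
# committee property.
Q = (1 << 61) - 1        # toy prime modulus
B = 7                    # toy public base

def hash_to_scalar(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % Q

def keygen(rng):
    sk = rng.randrange(1, Q)
    return sk, sk * B % Q          # pk = sk * B

def sign(sk, msg):
    return sk * hash_to_scalar(msg) % Q

def verify_aggregate(pks, msg, agg_sig):
    # sum(sk_i) * H(m) * B == sum(sk_i * B) * H(m)   (mod Q)
    return agg_sig * B % Q == sum(pks) % Q * hash_to_scalar(msg) % Q

rng = random.Random(42)
keys = [keygen(rng) for _ in range(512)]    # a sync-committee-sized set
msg = b"block header"
agg = sum(sign(sk, msg) for sk, _ in keys) % Q
ok = verify_aggregate([pk for _, pk in keys], msg, agg)
```

The aggregate is a single value no matter how many validators signed, which is what cuts the bandwidth and verification load of committee voting.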

What’s Next:

• Distributed Validators: "Multi-signature but for staking," where n people share a single validator and m-of-n must agree on its behavior.

• Lookahead Merging: Essentially, honest validators can "impose" their view of the correct head of the chain, reducing the opportunity for malicious validators to split votes and then reorganize blocks in a way that benefits them.

• Increasing Balance Weight Limit: Allows large stakers to merge all validators into one, significantly reducing the weight of proofs.

• Improved Aggregation: The network currently requires all validators to vote on each block and validate other validators' votes, a high bandwidth demand. BLS signature aggregation for voting can increase efficiency and lower the requirements for running a validator, allowing the network to support more validators.

• Secret Leader Election: At the start of each epoch, the network determines the specific block proposer for each slot, which gives malicious actors an attack vector against upcoming proposers. Eliminating the public visibility of block proposers strengthens Ethereum's consensus.

• Supporting More Validators: An ongoing long-term effort: securely supporting more validators is always desirable.

• Proof-Carrying Transaction Pool: Transactions enter the transaction pool carrying SNARK proofs for access list execution and state changes, reducing computation and confirmation workload, accelerating consensus.

• Pre-sorting: Transactions are linked into segments and blocks in advance, improving consensus processing speed.

• VDF Verifiable Delay Function: Provides necessary entropy for the selection of validators and committees. Everyone needs a certain amount of time to produce a proof, increasing the cost and difficulty for all forgers and disruptors, having advantages similar to PoW, with extremely low relative power consumption. Core developers need to research exact specifications and specific VDF hardware requirements.

• Quantum-Secure Aggregate Signatures: The network currently uses cryptographic technology based on the BLS signature scheme, allowing validators to sign and verify messages. However, with the advancement of quantum computing technology, this cryptographic scheme faces the risk of being broken. Therefore, the network must plan and implement a new, quantum-resistant cryptographic signature scheme to replace the existing system.
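Several items above (VDF entropy, leader election) revolve around deterministic, stake-weighted selection from shared randomness. A minimal sketch, with illustrative names and a placeholder seed standing in for VDF/RANDAO output:

```python
import hashlib, bisect

# Toy stake-weighted proposer selection: a shared random seed (in a real
# protocol, VDF or RANDAO output) deterministically picks one validator
# per slot with probability proportional to its stake. Illustrative only.
def pick_proposer(stakes, seed, slot):
    """stakes: {validator: stake}. Deterministic given (seed, slot)."""
    names = sorted(stakes)                       # canonical validator order
    cumulative, total = [], 0
    for n in names:
        total += stakes[n]
        cumulative.append(total)
    digest = hashlib.sha256(seed + slot.to_bytes(8, "big")).digest()
    point = int.from_bytes(digest, "big") % total
    return names[bisect.bisect_right(cumulative, point)]

stakes = {"v1": 32, "v2": 64, "v3": 32}
proposer = pick_proposer(stakes, b"vdf-output", slot=1)
```

Because the seed is unbiasable and the mapping is deterministic, every honest node computes the same proposer; keeping the seed secret until the last moment is what the secret leader election item then adds.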

Shine

Objective: To develop algorithms and protocols that remain secure even in the face of powerful quantum computing capabilities, protecting blockchain networks against attacks from quantum computers.

Importance: Quantum computers can crack cryptographic algorithms like ECDSA, BLS12-381, SNARK, etc. Therefore, developing quantum-safe technologies is crucial to ensure blockchain networks and data remain secure even in the face of advanced quantum computing capabilities.

Overview: Quantum-safe cryptographic technology is part of a long-term effort to protect network security before quantum computers become a realistic threat. Quantum safety refers to a security mode that specifically considers the potential capabilities of quantum computers in its design and implementation. This means that quantum-safe algorithms and protocols will still protect data from unauthorized access or decryption even in a future where quantum computers are available and have powerful computing capabilities. These technologies often involve using quantum key distribution, quantum-resistant cryptography, and other advanced cryptographic methods to ensure the security of blockchain networks for data transmission and storage even under extreme computational power.

What’s Done:

• Hash-based Cryptography: Algorithms based on hash functions, like the Merkle signature scheme, are considered quantum-safe due to the difficulty of solving hash collisions.

• Lattice-based Cryptography: Lattice cryptography algorithms, such as NTRU and lattice-based fully homomorphic encryption methods, are resistant to quantum attacks due to the complexity of lattice problems.

• Multivariate Polynomial Cryptography: Algorithms based on the difficulty of solving systems of multivariate polynomial equations are hard to solve in both traditional and quantum computing models.

• Code-based Cryptography: Systems like the McEliece encryption system and its variants, based on the complexity of decoding random linear codes, can resist quantum attacks.

• Quantum Key Distribution (QKD): Securely transmits keys using the principles of quantum mechanics, able to withstand quantum computing attacks.
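The first item in the list, hash-based cryptography, is concrete enough to sketch in full. Below is a standard Lamport one-time signature, whose security rests only on hash preimage resistance, the property believed to survive quantum attack; each key pair must sign exactly one message:

```python
import hashlib, secrets

# Lamport one-time signature: quantum safety comes from relying only on
# hash preimage resistance. One key pair signs exactly one message;
# Merkle signature schemes combine many such pairs under a single root.
def h(b):
    return hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[h(s) for s in pair] for pair in sk]
    return sk, pk

def sign(sk, msg):
    bits = int.from_bytes(h(msg), "big")
    # Reveal one secret per message-digest bit.
    return [sk[i][(bits >> i) & 1] for i in range(256)]

def verify(pk, msg, sig):
    bits = int.from_bytes(h(msg), "big")
    return all(h(sig[i]) == pk[i][(bits >> i) & 1] for i in range(256))

sk, pk = keygen()
sig = sign(sk, b"tx batch root")
```

Verification is just 256 hash evaluations, with no number-theoretic assumption a quantum computer could attack; the cost is large keys and one-time use, which is what the "practicality and efficiency" research below targets.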

What’s Next:

• Quantum-Safe SNARKs (e.g., STARKs): Quantum computers could crack the cryptography underlying current SNARKs. To achieve quantum resistance, quantum-safe SNARKs such as STARKs are being researched.

• Quantum-Safe Fast Lookup: Quantum-safe lookup algorithms that can search efficiently and effectively in large datasets while ensuring the search process is immune to quantum computing attacks.

• Quantum-Safe Accumulators: Accumulators are data structures used for efficiently accumulating and verifying membership in data sets. Quantum-safe accumulators can prevent quantum attacks, ensuring the accuracy and security of data verification.

• Quantum-Safe Aggregatable Signatures: With the development of quantum computing, current aggregatable signature cryptography (such as BLS signature schemes) might be at risk of being broken. Therefore, alternative, quantum-proof aggregatable signature methods are being researched to maintain the security of validator-signed messages.

• Quantum-Safe and Trustless Setup-Free Commitments: Current polynomial commitments (KZG) are efficient and powerful but lack quantum safety and require a trusted setup. Researchers are seeking more ideal, long-term viable commitment schemes that don't require a trusted setup, aiming to smoothly replace KZG.

• Quantum-Safe Encryption and Hashing: With the development of quantum computers, traditional encryption and hashing algorithms may be broken. Therefore, researchers are developing new algorithms to resist attacks from quantum computers and maintain data security and integrity.

• The next steps focus on further researching and improving these quantum-safe technologies, especially in terms of practicality and efficiency. This includes developing more efficient quantum-safe cryptographic algorithms, building stronger quantum-resistant network protocols, and testing and deploying these technologies in real-world environments. Additionally, enhancing education and training to raise industry and public awareness of the importance of quantum safety is necessary. Moreover, collaboration and international standardization will play key roles in achieving a globally quantum-safe network.

Stellar Array

Objective: To ensure reliable and trustworthy neutral transaction inclusion across multiple ecosystems and clients, with historical archiving and state dormancy, avoiding centralization and other protocol risks.

Importance: Multi-ecosystem protocol support is crucial for compatibility and development vitality. Multi-client support prevents issues like 2/3 chain forks and 1/3 loss of vitality. Proposer Builder Separation (PBS) addresses risks of centralization in block construction and validation and enhances the ability to resist censorship or transaction filtering. Simplified data storage supports future proof protocols.

Overview: We are dedicated to providing a diverse ecosystem and multi-client support to ensure the reliability and trustworthiness of transactions while avoiding centralization and other protocol risks. Multi-ecosystem protocol support enhances compatibility and growth, while multi-client design prevents chain forks and efficiency declines. We use PBS to reduce the risk of centralization in block construction and validation and to improve resistance to censorship and transaction filtering. We also simplify data storage structures to support future-proof protocols. Additionally, we optimize data storage through historical archiving and state dormancy mechanisms. Archive nodes handle historical block storage, reducing the burden of historical data on regular nodes. New synchronization mechanisms, like checkpoint synchronization, enhance overall efficiency, allowing the chain to synchronize from recent checkpoint blocks instead of the genesis block. This ensures that indexing and accessing historical data does not impact the functionality of existing applications while incentivizing network participants to establish decentralized data sources. We also plan to develop simplified virtual machines and state expiry features to reduce the storage burden on clients.
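The checkpoint synchronization mentioned above can be sketched as verifying hash links only from a trusted recent header forward, rather than replaying from genesis. The header layout here is an illustrative assumption:

```python
import hashlib

# Toy checkpoint sync: instead of replaying the chain from genesis, a
# node trusts a recent checkpoint header and verifies only the hash
# links from that checkpoint forward.
def header_hash(header):
    data = header["parent"] + header["number"].to_bytes(8, "big")
    return hashlib.sha256(data).digest()

def make_chain(length):
    chain, parent = [], b"\x00" * 32
    for n in range(length):
        block = {"parent": parent, "number": n}
        parent = header_hash(block)
        chain.append(block)
    return chain

def sync_from_checkpoint(chain, checkpoint_index, checkpoint_hash):
    """Verify only the blocks after the trusted checkpoint."""
    parent = checkpoint_hash
    for block in chain[checkpoint_index + 1:]:
        if block["parent"] != parent:
            return False
        parent = header_hash(block)
    return True

chain = make_chain(100)
cp_hash = header_hash(chain[90])
```

The work done by a syncing node is proportional to the distance from the checkpoint, not to chain length, which is what lets archive nodes shoulder the deep history while regular nodes stay light.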

What’s Done:

• VM Simplification: Implemented Verkle tree technology to simplify transaction execution, including clearing fee refunds, prohibiting the destruction of any created object, simplifying fee mechanisms, and replacing precompiled contracts with direct VM implementations.

• State Dormancy: Determined how to achieve state expiry, including options for weak statelessness rather than pure state expiry.

• Archive Nodes: Developed mobile nodes. Core protocol nodes only need to manage up to one year of historical blocks, without needing to download and provide the entire chain from genesis.

• Multi-Ecosystem Compatibility: Supported various protocols like ETH, BTC, and Sui for increased security, speed, and efficiency.

• Multi-Client Support: Developed multiple clients in Golang, Rust, C, etc., to avoid 2/3 chain forks and 1/3 loss of vitality issues.

What’s Next:

• Super Nodes: Offering archiving, data storage (see “Mirroring”), distributed computing, and profit-generating nodes for WebX.

• Geographic Diversity: Enhancing network resilience and security. When blockchain nodes are distributed across different geographical locations, they are less likely to be simultaneously affected by localized natural disasters, political unrest, or cyber-attacks. This spread helps prevent single points of failure, ensuring continuous and stable operation of the blockchain network. Moreover, geographic diversity helps avoid concentrated legal and regulatory risks, increasing the network's global adaptability and resilience.

• Data Compression: Address indexing, compact formatting, BT-based synchronization, snapshots, and ledger compression; MDBX storage for mutable content (state and indexes) and flat archival files for append-only content (transactions).

• State Availability: After completing state expiry-related development, validators and nodes will be able to operate without storing any state (or at least keeping the amount of stored state constant, not growing over time), inevitably disrupting some existing applications on the network. Researching what will be disrupted and how will enable the upgrade of infrastructure to achieve a stateless blockchain.

• Address Space Expansion: Some proposals advocate expanding the standard size of Ethereum addresses from 20 bytes to 32 bytes, partly to address overall security concerns. Developers need to find a way to make old 20-byte addresses backward compatible.

• LOG Reform: Network developers need to simplify event logs to allow for efficient retrieval of historical events.

• Simple Serialization: Chain objects are currently serialized with two different encodings: Recursive Length Prefix (RLP) for compatibility with other chains, and Simple Serialize (SSZ) internally. The plan is to phase out RLP and support all parts of the protocol with SSZ.

• Inclusion Lists: Allow validators to check the blocks handed to them by builders against an expected list of transactions; if a block omits those transactions, validators can reject it. Inclusion lists are intended to stop builders from censoring or filtering out specific transactions. The maintenance and availability of the inclusion list itself must also be addressed.

• Eliminating Arbitrage: The chain's transaction ordering creates arbitrage opportunities. Addressing them requires eliminating or reducing the variability of block rewards, both through application-layer minimization (e.g., CowSwap and Arbitrum) and through protocol-level burning rules, leading to fairer block rewards and less potential for validator centralization.

• Distributed Construction: Introduce “block builders” and a PBS prototype to minimize validator computational overhead and reduce validator centralization; develop block-construction innovations that lighten hardware requirements; offer pre-confirmation services that could increase chain adoption; and prevent front-running within the protocol to preserve the trust neutrality of block construction.
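
The serialization point above is concrete enough to sketch. Below is a minimal RLP encoder for byte strings and nested lists; the helpers `rlp_encode` and `_length_prefix` are illustrative names, not taken from any client. SSZ differs by using fixed-size layouts with offsets, which makes Merkleization and partial decoding cheaper:

```python
def rlp_encode(item) -> bytes:
    """Minimal RLP encoder covering bytes and nested lists of bytes."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item                      # a single low byte encodes itself
        return _length_prefix(len(item), 0x80) + item
    # Lists: encode each element, then prefix the concatenated payload.
    payload = b"".join(rlp_encode(x) for x in item)
    return _length_prefix(len(payload), 0xC0) + payload

def _length_prefix(n: int, offset: int) -> bytes:
    if n < 56:
        return bytes([offset + n])           # short form: length in the prefix
    n_bytes = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(n_bytes)]) + n_bytes
```

The canonical test vectors show the scheme at work: `b"dog"` becomes `\x83dog` (0x80 + length 3), and `[b"cat", b"dog"]` becomes `\xc8\x83cat\x83dog` (0xC0 + payload length 8).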

Mirroring

Objective: To decentralize content distribution and storage. This includes rapid content retrieval, enhanced data integrity, reduced bandwidth, and high network resilience.

Importance: The significance of distributed storage lies in decentralization and resistance to censorship, maintaining open information dissemination, improving content availability, reducing service interruptions, protecting data integrity through hash values, safeguarding sensitive information, reducing bandwidth waste, enhancing network efficiency, and increasing network resilience against disruptions and faults.

Overview: Distributed storage is aimed at improving internet content distribution and storage. It enhances content availability, reliability, and security by using decentralized content storage and hash addressing. This creates a more open and decentralized internet, reducing the risks of single points of failure and censorship.

Decentralized Content Distribution: Eliminates single points of failure and bottlenecks common in the traditional internet. Storing content across multiple nodes in the network increases content availability and reliability.

Faster Content Retrieval: Content is accessed directly via its hash address, bypassing centralized servers. This reduces latency and speeds up content delivery.

Enhanced Data Integrity: A content's hash value serves as its address, so any alteration in transit is detectable by re-hashing what was received, offering higher data integrity and security.

Reduced Bandwidth Waste: Redundant data transmission on the internet is reduced by allowing nodes to share cached content, thereby saving bandwidth.

Increased Network Resilience: The system is designed to cope with network disruptions and faults. It allows nodes to automatically reconnect and synchronize data when the network is restored.
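
The retrieval and integrity properties above follow directly from hash addressing, and can be shown in a few lines. This is a simplified CID-style sketch (the function names are hypothetical), not a real IPFS implementation:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive the address from the content itself (CID-style, simplified)."""
    return hashlib.sha256(data).hexdigest()

def verify_retrieval(address: str, data: bytes) -> bool:
    # Integrity checking is built in: re-hash what any node served
    # and compare it with the address the content was requested under.
    return content_address(data) == address
```

Because the address commits to the bytes, content can be fetched from the nearest node that has it cached, and a tampered copy is rejected automatically.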

What’s Done:

• Basic functionalities and protocols implemented in 2015.

• Performance and stability enhancements in 2017.

• Compliance with IETF standards and widespread adoption in 2019.

• Increased use by developers, applications, and businesses in 2021.

• WebX integration in 2023.

What’s Next:

• Performance Optimization: Continuous improvement of performance to increase content retrieval speed and network efficiency.

• Ecosystem Expansion: Expanding the user and developer community, encouraging more applications and enterprises to adopt IPFS.

• Protocol Improvement: Ongoing enhancements to the protocol to provide more features and security.

• Enhanced Security and Privacy Protection: Strengthening security to protect user data and digital assets, enhancing data encryption and privacy measures, offering more robust resistance to censorship and data integrity verification.

• Global Deployment: Promoting deployment globally for broader usage and impact.

• Wider Adoption and Integration: Encouraging more applications and services to adopt it for data storage and transmission, integrating more closely with existing internet infrastructure and applications.

• Support for Decentralized Applications (DApps): Serving as a backend storage and distribution platform for DApps, supporting more DApp development and operation, integrating more deeply with blockchain technology, and providing stable data services for DApps.

These efforts improve the decentralization, content availability, and data integrity of the internet, and their continued development will keep shaping its evolution.

Four-Dimensional Computing

Objective: To enhance processing speed, data handling capacity, reliability, and fault tolerance, and reduce costs in blockchain systems through distributed computing collaboration, addressing large-scale, complex problems.

Importance: Parallel processing in blockchain significantly improves the speed of handling large computational tasks. The system ensures business continuity and data security as the failure of one or multiple nodes does not lead to a system collapse. Using existing node resources reduces costs, and the system's scalability allows for the easy addition of more nodes as computational demands grow. Users in different geographical locations can also share data and computational resources, promoting rapid information dissemination and knowledge sharing.

• Increased Processing Capacity: Parallel processing in blockchain allows large computational tasks to be carried out simultaneously across multiple nodes, significantly speeding up processing.

• Reliability and Fault Tolerance: The failure of one or more nodes in a blockchain system does not result in a system collapse, ensuring business continuity and data security.

• Cost-Effectiveness: Utilizing the existing computational resources of nodes avoids substantial investments in single high-performance computers.

• Scalability: More nodes can be easily added as computational demands increase.

• Resource Sharing: Users in different geographical locations can share data and computational resources, facilitating rapid information dissemination and knowledge sharing.

Overview: Distributed computing involves multiple independent nodes connected through a network, collaborating to complete tasks. Each node has its own memory and computational resources and can execute tasks independently or exchange information with other nodes to collectively solve problems.

Application Areas:

• Scientific Research: Processing large-scale data in fields like astrophysics and bioinformatics.

• Business Applications: Scenarios like financial analysis and e-commerce that require processing vast amounts of data.

• Cloud Computing: Providing scalable computing resources and services.

• Internet of Things (IoT): Processing data from numerous devices.

What’s Done:

• Cloud Computing Platforms: Providing robust distributed computing capabilities.

• Big Data Processing: Frameworks enabling the processing of large-scale data sets.

• Scientific Computing Projects: Utilizing volunteers' computing resources for extraterrestrial life searches.

• Distributed Ledgers: Supporting distributed value networks.

• Distributed Databases: Supporting large-scale data storage and access.

What’s Next:

• Enhanced AI Integration: Combining distributed computing with AI technology to improve data analysis and decision-making capabilities.

• Improved Energy Efficiency: Developing more energy-efficient distributed computing technologies to reduce environmental impact.

• Security and Privacy Protection: Strengthening data security and privacy measures to counter growing cybersecurity threats.

• Development of Edge Computing: Bringing data processing closer to the data source, reducing latency, and improving response times.

• Fusion with Quantum Computing: Exploring the combination of quantum computing with distributed computing, offering potential solutions to more complex problems.

Distributed computing, as an efficient and reliable computing model, will play an increasingly important role in future technological developments. As technology progresses and application areas expand, distributed computing will continue to drive technological and societal advancements.

Key Technologies:

• Network Communication: Effective communication between nodes, including data transmission, synchronization, and coordination, is central to distributed computing.

• Data Segmentation: Large tasks are divided into smaller chunks and distributed among different nodes.

• Load Balancing: Distributing tasks so that all nodes carry a comparable load, avoiding overloading some while leaving others idle.

• Fault Tolerance Mechanisms: Ensuring that the failure of individual nodes does not affect the operation of the entire system.
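
The segmentation and load-balancing steps above can be sketched end to end. This is a toy single-process model (threads stand in for nodes; `segment`, `distribute`, and `run` are illustrative names), not a real distributed runtime:

```python
from concurrent.futures import ThreadPoolExecutor

def segment(task: list, chunk_size: int) -> list[list]:
    """Data segmentation: split one large task into independent chunks."""
    return [task[i:i + chunk_size] for i in range(0, len(task), chunk_size)]

def distribute(chunks: list, num_nodes: int) -> list[list]:
    """Round-robin load balancing: spread chunks evenly across nodes."""
    nodes = [[] for _ in range(num_nodes)]
    for i, chunk in enumerate(chunks):
        nodes[i % num_nodes].append(chunk)
    return nodes

def run(task: list, num_nodes: int = 4, chunk_size: int = 10) -> int:
    chunks = segment(task, chunk_size)
    assignments = distribute(chunks, num_nodes)
    # Each "node" sums its assigned chunks in parallel; the partial
    # results are then combined into the final answer.
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:
        partials = pool.map(lambda work: sum(sum(c) for c in work),
                            assignments)
    return sum(partials)
```

A real system would add the network communication and fault-tolerance pieces listed above: retrying a chunk on another node when its assignee fails.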

WebX

Objective: The primary goal of blockchain-decentralized WebX services is to create a secure, transparent, and tamper-proof network service platform using blockchain technology. This platform aims to ensure the authenticity and integrity of data while providing a high level of privacy protection and user control. In this way, WebX services can better safeguard user data from unauthorized access and misuse, while enhancing the reliability and efficiency of network services.

Importance: In the current internet environment, decentralized WebX services offer a more secure, free, and reliable cyberspace through their inherent features, such as data security and privacy protection, resistance to censorship and stability, transparency and trustworthiness, and support for innovation and development. These services effectively prevent data breaches and privacy violations, are difficult for a single entity to control or censor, ensure content freedom and network stability, and enhance platform trustworthiness through the immutability of blockchain. Furthermore, decentralized WebX services provide a fertile ground for developing new applications and services, fostering technological innovation and economic growth.

• Resistance to Censorship and Stability: Decentralized WebX services are more challenging to control or censor by any single entity, thereby enhancing the freedom of network content and service stability.

• Transparency and Trustworthiness: The immutability of the blockchain ensures the transparency of transactions and data records, boosting the platform's trustworthiness.

• Innovation and Development: Decentralized WebX services provide a platform for developing new applications and services, promoting technological innovation and economic growth.

Detailed Description:

Blockchain-decentralized WebX services use distributed ledger technology, distributing data across multiple nodes in the network. Each node maintains a complete copy of the data, and any changes to the data require verification and confirmation by the majority of nodes in the network. This mechanism not only enhances data security but also strengthens the network's resistance to attacks.

Additionally, WebX services support smart contracts, allowing complex business logic to be executed automatically without intermediaries. This feature can be widely applied in areas such as financial transactions, supply chain management, and identity verification.
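
The smart-contract behavior described above — contract terms executing automatically on-condition, with no intermediary — can be sketched with a toy escrow. The `Escrow` class and its method names are hypothetical, and a real contract would run inside the chain's VM with on-chain balances:

```python
class Escrow:
    """Toy smart contract: the deposit releases to the seller
    automatically once delivery is confirmed; no middleman decides."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        # The buyer's deposit is locked in the contract at creation.
        self.balances = {buyer: -amount, seller: 0}

    def confirm_delivery(self, caller: str) -> None:
        if caller != self.buyer:
            raise PermissionError("only the buyer can confirm delivery")
        self.delivered = True
        self._execute()

    def _execute(self) -> None:
        # Contract terms run automatically once the condition holds.
        if self.delivered:
            self.balances[self.seller] += self.amount
```

On a real chain, the confirmation transaction itself would be validated by the majority of nodes described above before the state change takes effect.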

What’s Done:

• Microservice Architecture: Breaking down DAPPs into a series of smaller, independent services to enhance flexibility, scalability, and maintainability. Features include decoupling and independent deployment, fault tolerance and resilience, scalability, technological diversity, continuous integration and deployment (CI/CD), fine-grained resource management, API-first design, rapid iteration, and market adaptability.

• Node Management and Network Monitoring: Developed effective node management systems and network monitoring tools to ensure stable network operation.

• Decentralized Storage Solutions: Implemented decentralized data storage mechanisms to enhance data availability and reliability. Advanced zero-knowledge-proof privacy protection technologies were adopted to prevent user data leakage.

• Identity Authentication and Authorization System (DID): Built more secure and private decentralized identity authentication and authorization mechanisms, allowing users to securely access and use network services while maintaining identity coherence and security across different services and platforms.

• Smart Contracts: Implemented the ability to automatically execute contract terms in a decentralized environment, reducing intermediaries and enhancing efficiency and security.
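
The DID idea above — identity derived from a key the user controls, rather than assigned by a registry — can be sketched as follows. The `did:amaze` method name and `create_did` helper are assumptions for illustration; real DIDs follow the W3C DID syntax and use proper asymmetric keypairs:

```python
import hashlib
import secrets

def create_did(method: str = "amaze") -> tuple[str, bytes]:
    """Sketch of DID issuance: the identifier is derived from key
    material only the user holds, so no central registry assigns it."""
    private_key = secrets.token_bytes(32)   # stand-in for a real keypair
    public_id = hashlib.sha256(private_key).hexdigest()[:32]
    return f"did:{method}:{public_id}", private_key

def prove_control(did: str, private_key: bytes) -> bool:
    # Whoever can re-derive the identifier from the key controls the DID.
    method, public_id = did.split(":")[1], did.split(":")[2]
    return hashlib.sha256(private_key).hexdigest()[:32] == public_id
```

The same DID can then be presented to any service, keeping identity coherent across platforms without each service issuing its own account.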

What’s Next:

The development of decentralized WebX services may focus on the following key technological areas:

• Advanced Blockchain Technology: Including new consensus algorithms to increase transaction speed, improvements in cryptographic technology to enhance network security, and more efficient methods for data storage and processing.

• Widespread Decentralized Support: Including hyper-text, multimedia, transaction and value transfer, personal full blockchain browsers, content creation and information publishing, search and AI chat, and more.

• Cross-Chain Technology Development: Achieving better interoperability between different blockchain platforms, enabling seamless interaction and integration of various decentralized systems.

• Expanded Smart Contract Functionality: Enhancing the flexibility and functionality of smart contracts to support more complex business logic and automated tasks.

• Further Development of Decentralized Financial Services (DeFi): Developing more innovative decentralized financial products and services, such as decentralized lending, insurance, and derivatives markets.

• Advanced Zero-Knowledge Proofs: Enhancing privacy protection, allowing users to verify transactions without revealing personal information.

• Improved Decentralized Storage Technologies: Such as distributed storage optimization, increasing data access speed and reliability.

• Expansion and Improvement of Decentralized Applications (DApps): Enhancing the user experience, performance, and security of DApps to attract more users and developers.

• AI and Machine Learning Integration: Using artificial intelligence and machine learning to optimize network management, security, and user experience.

• Sustainability and Energy Efficiency: Developing more energy-efficient blockchain technologies to reduce the environmental impact of decentralized networks.

• Development of Decentralized Autonomous Organizations (DAOs): Strengthening community governance models, allowing users to directly participate in the decision-making process of decentralized services.

• Quantum Computing Resistance: As quantum computing technology advances, developing quantum-safe cryptographic algorithms to ensure long-term security.

• Account Abstraction and VM Improvement: Incorporating modular arithmetic upgrades directly into the VM, enabling intelligent recovery and key replacement for wallets, ensuring quantum safety.

The development and integration of these technologies will further enhance the functionality, security, and user-friendliness of decentralized WebX services, driving their application and adoption in a broader range of fields.

Foundational Technologies:

• Technology Optimization and Upgrade: Continuous improvement of blockchain algorithms to increase transaction speed and reduce costs.

• Wider Application Scenarios: Exploring the application of blockchain in more fields, such as education, healthcare, and social media.

• Compliance and Standardization: Collaborating with governments and regulatory bodies to ensure decentralized WebX services meet legal requirements and promote the establishment of industry standards.

• User Education and Popularization: Raising public awareness of blockchain technology and increasing user acceptance of decentralized services.

In summary, as an innovative network service model, blockchain-decentralized WebX services not only enhance network security and efficiency but also offer new possibilities for digital transformation in various industries. With continuous technological advancements and expanding application fields, they are poised to become a significant force in driving social and economic development.
