The Ethereum Foundation is preparing for the most anticipated event in the public-blockchain world, despite the pandemic and the ensuing panic on the crypto markets: Ethereum's transition to the Serenity phase, better known as Ethereum 2.0.
Serenity / Ethereum 2.0 is still worth waiting for. The leading test network has been available since January this year, which certainly does not mean that no one has been working on the protocol actively, nor that the plan to take it to the next level has stalled.
In March, as the coronavirus pandemic escalated, Least Authority completed an audit of the entire ETH 2.0 protocol specification (Ethereum 2.0 Specifications Security Audit Report). The review turned out surprisingly well: despite raising significant concerns and revealing weaknesses and limitations in the design, it cleared the way for a production deployment. The auditors' main criticism targeted a small part of the protocol, the peer-to-peer layer, in particular the P2P messaging system, which could represent a new attack vector, and likewise the ENR (Ethereum Node Records) system. The report also noted a caveat: neither proof of stake nor sharding has ever been tested in practice at such a scale.
Prysmatic Labs, one of the teams building the technical infrastructure for Ethereum 2.0, released the third version of its testnet (Topaz) on April 16. If no significant problems appear on it, we can expect the Diamond phase shortly, which is nothing less than the production launch of the Ethereum 2.0 mainnet.
Why expectations for Ethereum 2.0 are so high
Ethereum presents itself as a decentralized, open-source platform for hosting smart contracts. Its blockchain bears many similarities to Bitcoin's but differs in fundamental respects: it is designed to run program code and host arbitrary decentralized applications. Thanks to its popularity, the network hit its limits relatively quickly. Today, many things simply cannot run on it, regardless of users' willingness to pay properly for their operation.
The current Ethereum is the second-largest public blockchain by the market capitalization of its native cryptocurrency. It has roughly twice as many active developers as Bitcoin, and about four times as many as most of its remaining largest competitors (see last year's Dev Report from Electric Capital).
Not only have pioneers of the crypto industry worked with Ethereum for a long time, but more and more applications and extensions for Ethereum are also being built by companies from traditional business. As one example among many, there are two independent implementations of Zero-Knowledge Proofs for Ethereum, the first from EY (part of the EY Ops Chain Public Edition framework), the second from ING Bank, both opening the door to wider deployment of the platform in corporations.
Acronis, the backup and disaster-recovery giant, uses the Ethereum blockchain as the backing layer for its Notary Cloud service, mainly for data notarization, electronic signing, and enterprise-scale file verification. There is even a consortium for the standardization of Ethereum business applications, the Enterprise Ethereum Alliance; today it has hundreds of members, including Microsoft, JPMorgan Chase, Santander, Accenture, ING, Intel, and Cisco.
But all the hopes and energy being invested in Ethereum rest on an unspoken assumption: that Ethereum will solve its long-standing problems with scaling and security. Ethereum's security will never be completely perfect in principle, due to the Turing-complete nature of its programming language (which creates space for a market of professional security audits and insurance). Network scaling, on the other hand, is a solvable problem, just not while maintaining the current monolithic blockchain architecture. And this is where the first major expected change of Ethereum 2.0 comes in: sharding.
Thanks to sharding, and related changes in other layers, Ethereum should be able to support orders of magnitude more traffic and users over the next few years. The current Ethereum processes about 7–25 transactions per second. This is because it was never primarily a system for transferring unspent outputs like Bitcoin; it was conceived from the beginning as a platform for hosting smart contracts and running decentralized applications, a decentralized global computer. Clearly, that throughput is unfortunate and unsustainable in the long run.
The most visible change brought by Ethereum 2.0 is sharding, which should massively improve the scalability of the Ethereum blockchain, alongside the transition to a distributed consensus algorithm called Casper, based on proof of stake. But nothing comes for free. What is it about? How does it work? And why is there such controversy around it?
A small detour to proof of stake
One thing the whole world of public blockchains and cryptocurrencies is curious about is the Ethereum network's transition to proof-of-stake consensus, which has never been deployed to such an extent, or on such a large and active blockchain. Within Ethereum 2.0, however, proof of stake is more a necessary prerequisite for the other changes than the main change in itself (the economics of the ETH 2.0 PoS system deserve a separate article), so we will only discuss it briefly here.
PoS systems are often criticized, especially by people from the Bitcoin community, on the grounds that decision-making power is not bought with costs incurred outside the system but with a deposit that is part of the system from the beginning (typically the native token), which supposedly makes the system less secure. In reality, however, small PoS and small PoW blockchains have proved roughly equally vulnerable. Still, PoS has never been tested at the scale at which Ethereum 2.0 now plans to deploy it.
The fundamental difference between PoW and PoS is that PoW rewards network participants for doing the right thing, while PoS relies more on punishing them for doing the wrong thing. In a PoW system, you must incur high costs (the higher, the more secure the network) to earn the reward the system offers. In PoS, the cost of earning a reward is low, but the system severely punishes any attempt at fraud.
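The reward-versus-punishment asymmetry described above can be sketched in a few lines of Python. This is a hypothetical toy model, not the actual ETH 2.0 reward logic: the reward rate, the slashing fraction, and the 32-unit deposit in the example are illustrative parameters only.

```python
# Toy model of proof-of-stake incentives: honest validators earn a small
# reward on their deposit, while provable misbehavior is punished by
# "slashing" a large fraction of the stake. All parameters are made up.

REWARD_RATE = 0.0005   # per-epoch reward for honest validation (illustrative)
SLASH_FRACTION = 0.5   # fraction of stake destroyed on provable fraud (illustrative)

class Validator:
    def __init__(self, stake: float):
        self.stake = stake

    def apply_epoch(self, behaved_honestly: bool) -> float:
        """Update the stake after one epoch and return the change."""
        if behaved_honestly:
            delta = self.stake * REWARD_RATE       # small, steady gain
        else:
            delta = -self.stake * SLASH_FRACTION   # severe, immediate loss
        self.stake += delta
        return delta

honest = Validator(stake=32.0)
cheater = Validator(stake=32.0)

honest.apply_epoch(behaved_honestly=True)
cheater.apply_epoch(behaved_honestly=False)

print(round(honest.stake, 4))   # 32.016
print(round(cheater.stake, 4))  # 16.0
```

The asymmetry is the point: the upside of honesty is tiny per epoch, but the downside of cheating wipes out much of the deposit at once, so rational validators are pushed toward honest behavior.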
Ethereum 2.0 sharding design at a glance
As the name, borrowed from the database world, suggests, the primary purpose of sharding is to break one fast-growing blockchain, which has become a bottleneck, into several smaller parts (shards). But what exactly does that achieve? When a regular database is sharded, it is split into many smaller fragments that can be managed independently, dramatically increasing the speed of access to individual items. With a blockchain it is not that simple: although it is often referred to as a database, it has a specific structure and works somewhat differently. Two things can be sharded in a blockchain, transaction processing and blockchain state. Ethereum 2.0 strives for both, aiming at the ability to perform over 10,000 transactions per second (with the help of additional layers) without the risk of increasing the centralization of nodes.
To put it very simply, today each node must validate every transaction in turn. This is part of the blockchain's security design and ensures enough redundancy for the network to keep working even if a large part of it fails. However, it also leads to considerable (albeit intentional) waste of resources and creates the preconditions for bottlenecks, places where transactions pile up before being mined. Sharding works with the idea of using nodes more efficiently.
The network will be divided into an optimal number of shards, representing a reasonable compromise between efficiency and security (the current proposal envisages 1024 shards), and each shard will verify a different subset of the pending transactions. Because nothing changes in the actual ordering of pending transactions into a block or in its size, this parallelization increases the throughput of the network more than a thousandfold. With the deployment of additional layers, such as zk-rollups or Plasma, the overall transaction throughput will increase even further.
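The parallelization idea can be illustrated with a minimal sketch: accounts, and hence their transactions, are mapped deterministically to one of the shards, so each shard processes only its own subset. The hash-based mapping below is our own simplification, not the actual ETH 2.0 assignment rule; only the 1024-shard count follows the proposal mentioned above.

```python
# Minimal sketch of splitting pending transactions across shards.
# The sha256-based account-to-shard rule is illustrative, not the ETH 2.0 rule.
import hashlib
from collections import defaultdict

SHARD_COUNT = 1024  # shard count from the current ETH 2.0 proposal

def shard_of(account: str) -> int:
    """Deterministically map an account address to a shard index."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:4], "big") % SHARD_COUNT

def partition(transactions):
    """Group pending transactions by the shard of their sender account."""
    shards = defaultdict(list)
    for tx in transactions:
        shards[shard_of(tx["from"])].append(tx)
    return shards

txs = [{"from": f"0xabc{i}", "to": "0xdef", "value": i} for i in range(8)]
buckets = partition(txs)
# Each shard now validates only its own bucket, in parallel with the others;
# no transaction is lost and none is processed twice.
assert sum(len(b) for b in buckets.values()) == len(txs)
```

Because the mapping is deterministic, every node agrees on which shard is responsible for which transaction without any coordination, which is what makes the parallel validation possible in the first place.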
So far, this is the very simplified theory in which everything works perfectly. Unfortunately, sharding a blockchain is anything but simple. The problem is, for example, how to ensure that processing subsets of transactions in parallel results in a correct update of the network state, and at the same time that the resulting state is both valid (remember the adversarial environment a public blockchain operates in) and final, a problem normally solved by costly cumulative proof of work, which is entirely missing in ETH 2.0. The system must therefore include a mechanism that, on one hand, divides responsibilities and, on the other, ensures that all data remains valid and final, as on the current Ethereum blockchain. At the same time, it must not become another attack vector or a centralized point of failure for the entire system.
In principle, this should work as follows: every user account is always assigned to a specific shard, and within each shard everything runs much as before (the network is effectively divided into several Ethereum mini-blockchains). Things get complicated when the final state is composed. To make that possible at all, transactions are first grouped into packages according to an automatic optimization process; they then go through a double validation process, and only then can they be connected to the main network.
Because proof of work does not apply here, that is, verification is not performed by the miner who guessed the nonce, transactions are instead validated by the voting of assigned validators. The security and impartiality of this process are ensured by several additional means.
First, validator-to-shard assignments are random and change regularly. When a block proposer approves a transaction block, it must additionally be approved by a separate committee on the central (so-called beacon) chain before its inclusion in the main blockchain. This second vote can be carried out by a smart contract called the 'sharding manager'. Only if this vote passes is a cross-link established between the shard and the main chain, and the transaction block becomes a permanent part of the blockchain. If the verification mechanism detects that one of these links is invalid, the chain built on it becomes invalid, just like the hash chain of an unsharded blockchain.
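The double validation just described can be sketched as a toy model: a shard proposes a block, a beacon-chain committee votes on it, and only a passing vote creates a cross-link anchoring the block in the main chain. The 2/3 approval threshold and all names below are illustrative assumptions, not the protocol's exact rules.

```python
# Toy sketch of committee voting and cross-linking, as described above.
# The 2/3 threshold is an assumption for illustration.

APPROVAL_THRESHOLD = 2 / 3

def committee_vote(votes) -> bool:
    """Return True if the fraction of approving votes meets the threshold."""
    return sum(votes) / len(votes) >= APPROVAL_THRESHOLD

def try_crosslink(shard_block, votes, main_chain) -> bool:
    """Append a cross-link to the main chain only if the committee approves."""
    if committee_vote(votes):
        main_chain.append({"crosslink": shard_block["id"]})
        return True
    return False

main_chain = []
approved = try_crosslink({"id": "shard7-block42"}, [True, True, True, False], main_chain)
rejected = try_crosslink({"id": "shard7-block43"}, [True, False, False, False], main_chain)
print(approved, rejected, len(main_chain))  # True False 1
```

The key property is that a shard block has no effect on the main chain until the independent committee signs off, which is what prevents a single compromised shard from corrupting the global state.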
The logic of randomly assigning validators to shards is that attackers who do not know in advance where they will be appointed cannot coordinate an attack in time. In theory, this should enforce sufficient security. The beacon chain itself should eventually take on more functions than managing this process and guaranteeing randomness in verification and transaction finality, but that is still music of the distant future.
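The random, regularly reshuffled validator-to-shard assignment can be sketched as follows. In the real protocol the unpredictability comes from beacon-chain randomness; here a seeded shuffle stands in for it, and the shard and committee sizes are tiny illustrative numbers, not protocol values.

```python
# Sketch of per-epoch random committee assignment. The seeded shuffle is a
# stand-in for beacon-chain randomness; sizes are illustrative only.
import random

SHARD_COUNT = 4
COMMITTEE_SIZE = 3

def assign_committees(validators, epoch_seed: int):
    """Shuffle the validator set with epoch-specific randomness and slice it
    into one committee per shard. Without knowing the seed in advance, an
    attacker cannot predict which shard they will be asked to validate."""
    rng = random.Random(epoch_seed)  # stand-in for beacon-chain randomness
    pool = list(validators)
    rng.shuffle(pool)
    return {
        shard: pool[shard * COMMITTEE_SIZE:(shard + 1) * COMMITTEE_SIZE]
        for shard in range(SHARD_COUNT)
    }

validators = [f"validator-{i}" for i in range(SHARD_COUNT * COMMITTEE_SIZE)]
epoch_1 = assign_committees(validators, epoch_seed=1)
# A new seed each epoch reshuffles the committees, so colluding validators
# cannot pre-position themselves on a target shard.
# Every validator sits on exactly one committee per epoch:
assert sorted(v for c in epoch_1.values() for v in c) == sorted(validators)
```

Regular reshuffling is what turns randomness into a security property: even a sizable minority of dishonest validators is unlikely to be concentrated in one committee long enough to push through an invalid block.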
Why sharding is controversial
In our description of sharding so far, we have focused mainly on the division of the blockchain into shards and on their communication with the main (beacon) chain. Yet for the system to work as fast and efficiently as intended, shards must also be able to execute transactions between one another relatively quickly and securely, and to reference each other's data, all while maintaining the integrity of the whole network. There are several approaches to this problem, but none is entirely universal and practical for all real-life scenarios. The issue of fast and secure cross-shard communication, while maintaining the integrity of the entire network, is so complex that it does not fit into this article; it would have to be at least twice as long.
Sharding also brings other problems. One of them is the challenge of providing clients with accurate information on the state of the entire blockchain network at a given moment, and of detecting fraud in time. Sharding introduces a much higher degree of complexity into the system: we no longer deal with one blockchain but with a whole system of chains, which, from a security point of view and in terms of predictability of behavior, has never been entirely good news.
There is also an interesting paradox here. The official goals of the transition from today's Ethereum to Ethereum 2.0 were greater decentralization, resilience, security, long-term sustainability, and simplicity. The advent of sharding places a small question mark over each of these points.
If you want to read more about how the blockchain network works, go right here.