Demystifying JAM


The following is a ground-up explanation of Polkadot 1, Polkadot 2, and how Polkadot will evolve into JAM. It is targeted at a technical audience: readers who are not necessarily familiar with Polkadot, but who have a good high-level understanding of blockchain-based systems, and are possibly familiar with one other ecosystem at a technical level. I believe reading this is a great prelude to reading the JAM graypaper.

Background Knowledge

This article makes use of, and assumes familiarity with, the following concepts:

Prelude: Polkadot 1

First, a recap of what I consider the top novel features of Polkadot 1.

Let's dive further into sharded execution and what we mean by it.

Sharded Execution: All About Cores

For now, we are talking in the context of an L1 network that hosts other L2 "blockchain" networks, much like Polkadot and Ethereum. Therefore, the words L2 and Parachain can be used interchangeably.

The core problem of blockchain scalability can be stated as follows: there exists a set of validators, whose execution of some code can be trusted through the crypto-economics of proof-of-stake. By default, these validators are expected to re-execute the entirety of each other's work. Therefore, the system as a whole is not scalable so long as we force all validators to (re-)execute everything at all times.

Note that increasing the number of validators in this model doesn't really increase the system's throughput, so long as the above absolute re-execution principle is in place.


This is the monolithic (as opposed to sharded) model: inputs (i.e. blocks) are processed by all of the network's validators, one by one.
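To make the bottleneck concrete, here is a minimal back-of-the-envelope sketch in Rust, with entirely made-up numbers, of why adding validators to a monolithic chain does not add throughput, whereas splitting them across execution shards would:

```rust
// A minimal sketch, with invented numbers, of why monolithic throughput
// is capped by a single validator's capacity.

/// Work units a single validator can (re-)execute per time-slot (assumed).
const PER_VALIDATOR_CAPACITY: u64 = 10;

/// Monolithic: every validator re-executes every block, so adding
/// validators does not add throughput.
fn monolithic_throughput(_num_validators: u64) -> u64 {
    PER_VALIDATOR_CAPACITY
}

/// Idealized execution sharding: validators are split across cores, so
/// throughput grows with the number of cores.
fn sharded_throughput(num_validators: u64, validators_per_core: u64) -> u64 {
    (num_validators / validators_per_core) * PER_VALIDATOR_CAPACITY
}

fn main() {
    for n in [100u64, 500, 1000] {
        println!(
            "validators: {n:>4} | monolithic: {:>3} | sharded: {:>5}",
            monolithic_throughput(n),
            sharded_throughput(n, 5),
        );
    }
}
```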

In such a system, if the L1 wants to host further L2s, all validators now have to re-execute the work of all L2s as well. Obviously, this does not scale. Optimistic rollups are one way to circumvent this issue, in that re-execution (i.e. fraud proofs) only happens if someone claims fraud has occurred. SNARK-based rollups circumvent it by leveraging the fact that verifying a SNARK proof is significantly cheaper than generating it, and therefore it is reasonable to let all validators verify a SNARK proof. More on this in the Scalability Space Map appendix.

A naive solution to sharding is to merely split the validator set into smaller subsets, and have these smaller subsets re-execute L2 blocks. What is the issue with this approach? We would be sharding not just the execution, but also the economic security of the network. The security of such an L2 is lower than that of the L1, and it drops further and further as we carve the validator set into more shards, as the sketch below illustrates.
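Here is a hedged back-of-the-envelope illustration of this security dilution. It assumes adversaries control a fixed fraction of the validator set and that committees are drawn uniformly at random (a binomial approximation); all numbers are illustrative:

```rust
// A toy model of security dilution under naive sharding. Assumptions:
// adversaries are a fraction `p` of all validators, and a committee of
// size `m` is drawn uniformly at random (binomial approximation of
// sampling without replacement).

/// Probability that exactly `k` of `m` committee members are adversarial.
fn binom_pmf(m: u64, k: u64, p: f64) -> f64 {
    // ln C(m, k) as a running sum, to stay dependency-free and stable.
    let mut ln_c = 0.0;
    for i in 0..k {
        ln_c += ((m - i) as f64).ln() - ((i + 1) as f64).ln();
    }
    (ln_c + k as f64 * p.ln() + (m - k) as f64 * (1.0 - p).ln()).exp()
}

/// Probability that adversaries reach at least one third of the committee.
fn capture_probability(committee_size: u64, p: f64) -> f64 {
    let threshold = (committee_size + 2) / 3; // ceil(m / 3)
    (threshold..=committee_size)
        .map(|k| binom_pmf(committee_size, k, p))
        .sum()
}

fn main() {
    // With 30% adversarial stake overall, small committees get captured
    // with non-trivial probability; only large ones stay safe.
    for m in [10u64, 50, 200, 1000] {
        println!(
            "committee of {m:>4}: capture probability ~ {:.2e}",
            capture_probability(m, 0.30)
        );
    }
}
```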

Unlike optimistic rollups, which cannot afford re-execution at all times, Polkadot was designed with execution sharding in mind. It can therefore have a subset of its validators re-execute L2 blocks, whilst providing sufficient crypto-economic evidence to all network participants that the L2 block is as secure as if the entire validator set had re-executed it. This is possible through the novel (and recently formally published) ELVES mechanism.

In short, one can see ELVES as a "cynical rollup" mechanism: through a few rounds of validators proactively asking other validators whether an L2 block is valid, we reach an extremely high probability that it is. Indeed, in case of any dispute, the entire validator set is soon asked to participate. This is explained in detail in an article by Rob Habermeier, Polkadot co-founder.
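The intuition can be captured in a few lines. The sketch below is not ELVES itself (which involves escalation rounds and disputes); it merely shows why an invalid block is overwhelmingly likely to be caught when auditors are chosen at random:

```rust
// The "cynical rollup" intuition, reduced to one line of probability.
// This is NOT the full ELVES protocol; it only shows why random
// auditing works.

/// Probability that an invalid block escapes detection: every one of
/// the randomly chosen auditors must be adversarial, since any honest
/// auditor would object.
fn undetected_probability(adversarial_fraction: f64, auditors: u32) -> f64 {
    adversarial_fraction.powi(auditors as i32)
}

fn main() {
    let p = 0.33; // even a near-one-third adversary...
    for auditors in [5u32, 10, 20, 30] {
        println!(
            "{auditors:>2} auditors -> invalid block escapes with p ~ {:.2e}",
            undetected_probability(p, auditors)
        );
    }
    // ...and a single honest objection escalates the check towards the
    // full validator set, restoring full security.
}
```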

ELVES is why Polkadot can have two properties previously assumed to be mutually exclusive: "Sharded Execution" with "Shared Security". This is the main technological outcome of Polkadot 1 when it comes to scalability.

Now, moving on to the "Core" analogy.

An execution-sharded blockchain is very much like a CPU: in much the same way that a CPU can have many cores that execute instructions in parallel, Polkadot can progress L2 blocks in parallel. This is why an L2 on Polkadot is called a Parachain[1], and the environment in which a smaller subgroup of validators re-executes a single L2 block is called a "core". Each core can be abstracted as "a group of validators working in coordination".

You can imagine a monolithic blockchain as one that ingests a single block per time-slot, while Polkadot ingests one relay-chain block and one parachain block per core, per time-slot.
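A type-level sketch of this analogy (hypothetical, illustrative types; not actual Polkadot code):

```rust
// Hypothetical, illustrative types; not actual Polkadot code.

type ValidatorId = u32;

/// One core: a group of validators working in coordination, which
/// together re-execute one parachain block per time-slot.
struct Core {
    validators: Vec<ValidatorId>,
}

/// Per time-slot: one relay-chain block, plus one parachain block per
/// core, all validated in parallel.
fn blocks_per_slot(cores: &[Core]) -> usize {
    1 + cores.len()
}

fn main() {
    // e.g. 250 validators split into 50 cores of 5 validators each.
    let cores: Vec<Core> = (0u32..50)
        .map(|i| Core { validators: (i * 5..i * 5 + 5).collect() })
        .collect();
    let total: usize = cores.iter().map(|c| c.validators.len()).sum();
    println!("{total} validators across {} cores", cores.len());
    println!("blocks ingested per slot: {}", blocks_per_slot(&cores));
}
```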

Heterogeneous

So far, we have only talked about scalability, and about Polkadot providing sharded execution. It is important to note that each of Polkadot's shards is an entirely different application[2]. This is achieved through the usage of a bytecode-stored meta-protocol: a protocol in which the definition of the blockchain is stored as bytecode in the state of that same blockchain. In Polkadot 1.0, WASM was the bytecode of choice; in JAM, PVM/RISC-V is being adopted.

All in all, this is why Polkadot is called a heterogeneous sharded blockchain. Each of the L2s is an entirely different application.
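To illustrate, here is a minimal sketch of a bytecode-stored meta-protocol over a toy key-value state. In Substrate-based chains the runtime WASM really is kept in state, under the well-known `:code` key; everything else below is a deliberate oversimplification:

```rust
use std::collections::HashMap;

// A toy bytecode-stored meta-protocol over a simple key-value state.

struct State {
    storage: HashMap<Vec<u8>, Vec<u8>>,
}

impl State {
    /// The chain's own definition: the bytecode of its state transition
    /// function, kept inside the very state it governs.
    fn runtime_code(&self) -> Option<&Vec<u8>> {
        self.storage.get(b":code".as_slice())
    }

    /// A runtime upgrade is then an ordinary storage write, meaning the
    /// protocol can evolve without a hard fork.
    fn upgrade_runtime(&mut self, new_code: Vec<u8>) {
        self.storage.insert(b":code".to_vec(), new_code);
    }
}

fn main() {
    let mut state = State { storage: HashMap::new() };
    state.upgrade_runtime(vec![0x00, 0x61, 0x73, 0x6d]); // "\0asm" WASM magic
    println!(
        "runtime bytecode in state: {} bytes",
        state.runtime_code().map_or(0, |code| code.len())
    );
}
```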

Polkadot 2

A big part of Polkadot 2 is about making cores more flexibly usable. In the original Polkadot model, a core could be rented in six-month increments, for up to two years at a time. This is suitable for resourceful businesses, but less so for small teams. The feature that enables Polkadot cores to be used in a more flexible way is called "agile coretime". In this model, Polkadot cores can be rented for as little as one block at a time, and up to a month at a time, with price-cap guarantees for those who want to rent long-term.
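As a rough illustration, here is a toy model of this pricing flexibility. It is purely illustrative: the real mechanism (bulk sales, on-demand orders, price adaptation, renewals) differs in detail, and all numbers below are invented:

```rust
// A toy model of agile coretime ordering; all numbers are invented.

/// How a core can be rented under agile coretime.
enum CoretimeOrder {
    /// Pay-as-you-go: as little as a single block's worth of core time.
    OnDemand { blocks: u32 },
    /// Bulk: up to a month at a time, renewable with a price cap.
    Bulk { months: u32 },
}

/// Six-second blocks: ~432_000 per 30-day month.
const BLOCKS_PER_MONTH: u64 = 432_000;

fn quote(order: &CoretimeOrder, spot_price_per_block: u64) -> u64 {
    match order {
        CoretimeOrder::OnDemand { blocks } => spot_price_per_block * *blocks as u64,
        CoretimeOrder::Bulk { months } => {
            // Long-term renters enjoy a capped per-block price (invented cap).
            let capped = spot_price_per_block.min(100);
            capped * BLOCKS_PER_MONTH * *months as u64
        }
    }
}

fn main() {
    let spot = 120; // current per-block spot price (invented)
    println!("1 block on demand: {}", quote(&CoretimeOrder::OnDemand { blocks: 1 }, spot));
    println!("1 month in bulk:   {}", quote(&CoretimeOrder::Bulk { months: 1 }, spot));
}
```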