Introduction
Throughout its history, Ethereum, like other blockchains, has undergone a number of upgrades aimed at advancing key aspects of how it works. In general, these improvements target the three fundamental pillars of these technologies: security, decentralization, and scalability.
Although significant milestones have been reached in these areas, the work is not yet finished. To understand the current limits of this technology, we need to examine its present state and break down the components involved.
Today, we’ll focus on scalability. This may be the most critical aspect for the evolution of Ethereum—but to explain why, we’ll first need to clarify what we mean by scalability in the context of Ethereum.
What is Scalability?
In general terms, scalability means a system’s ability to handle an increasing amount of work or traffic. For a blockchain, this translates directly into the ability to process a larger number of transactions per second (TPS), while keeping transaction costs and confirmation times low.
Of course, this is crucial for ensuring a good user experience on Ethereum. Currently, the network processes around 15 to 20 transactions per second, which can easily be verified by inspecting blocks on platforms such as etherscan.io (by looking at the number of transactions per block and the block time, which is 12 seconds).
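As a quick sanity check, here is that estimate spelled out. The transactions-per-block figure is an illustrative value of the kind you’d see on etherscan.io, not a live reading:

```python
# Illustrative TPS estimate from public block data:
txs_per_block = 180        # a typical transaction count per block (assumed)
block_time_seconds = 12    # Ethereum's slot time

tps = txs_per_block / block_time_seconds
print(f"~{tps:.0f} transactions per second")  # ~15 TPS
```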
However, there are some complications. Not all transactions are the same: a simple $ETH transfer is very different from a contract call whose parameters (carried in CALLDATA) can take up a large number of bytes. Moreover, all transaction data is stored in the blockchain’s blocks, and these blocks have size limits.
One option to process more transactions is to increase block size. Sounds simple, but this introduces new problems. Remember that for consensus to work, blocks must be transmitted across the network’s nodes—and the larger they are, the slower communications become. The network will have higher latency, and the entire process slows down. It’s a delicate balance.
As if that weren’t enough, certain actors complicate things further: layer 2 solutions, or rollups. These process thousands of transactions per second and then publish the processed transactions onto Ethereum. In other words, they use Ethereum as a settlement layer: by publishing this data periodically, they leave an immutable record on Ethereum. Historically, this data was also published as CALLDATA—so more rollups meant more pressure on Ethereum’s scalability.
This was the state of the network before the Dencun upgrade, which went live on mainnet in early 2024. The upgrade included EIP-4844, also known as Proto-Danksharding, marking the beginning of a path to improve scalability on layer 1.
Data Availability
The key question is whether we really need to store this information on Ethereum through CALLDATA, or if there is another solution. Rollups publish their transactions not only to create an immutable record, but also a verifiable one, allowing actors like nodes and indexers to access transaction data and understand how the network evolves.
Data stored via CALLDATA remains on-chain indefinitely and incurs processing costs, since the EVM treats it like any other payload. Rollups must pay this cost, and ultimately pass it on to their users.
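To make that cost concrete, here is a minimal sketch of how calldata gas is charged, using the per-byte prices set by EIP-2028 (16 gas per non-zero byte, 4 per zero byte); the payloads are made up for illustration:

```python
def calldata_gas(data: bytes) -> int:
    """Gas charged just for including `data` in a transaction's calldata
    (per-byte prices from EIP-2028)."""
    return sum(16 if b != 0 else 4 for b in data)

# A 128 KB payload of zero bytes (the cheapest possible case):
print(calldata_gas(bytes(131_072)))         # 524,288 gas
# The same size made of non-zero bytes:
print(calldata_gas(bytes([1]) * 131_072))   # 2,097,152 gas
```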
In other words, what we really need is to guarantee the availability of this data. And here lies the key question: do these data need to be available forever, or only long enough to be verified?
Proto-Danksharding
To solve this, Proto-Danksharding introduces the concept of blobs: data that is stored for a limited time (4096 epochs, currently about 18 days), and then discarded.
Blobs have a maximum size of 128 kilobytes and can be inspected on sites like blobscan.com.
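For the curious, the “about 18 days” figure follows directly from the protocol’s timing constants:

```python
# Blob retention window: 4096 epochs, 32 slots per epoch, 12 seconds per slot.
epochs = 4096
slots_per_epoch = 32
seconds_per_slot = 12

retention_seconds = epochs * slots_per_epoch * seconds_per_slot
print(retention_seconds / 86_400)  # ≈ 18.2 days
```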
This timeframe is more than enough for anyone who needs the published data to access it. Crucially, blobs are not published via CALLDATA, making them a much cheaper alternative.
Sounds promising. But naturally, the next question is: if blobs are not stored in the blockchain state, where are they stored?
Well, the data is transmitted alongside blocks (as so-called sidecars on the consensus layer), and the nodes that receive it keep it in temporary storage. Since blobs are not part of the blockchain state, there must be a verification mechanism to ensure their integrity, i.e., that they have not been corrupted or tampered with.
The solution is to permanently store a short identifier on the network that allows quick verification of data integrity. This can be done in various ways—for example, storing the blob’s hash, which acts like a unique cryptographic “fingerprint.”
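Here is a toy illustration of the fingerprint idea using SHA-256 (Proto-Danksharding itself uses a different scheme, which we’ll get to shortly):

```python
import hashlib

# A stand-in for a large batch of rollup transaction data:
blob = b"some large batch of rollup transactions..."
fingerprint = hashlib.sha256(blob).hexdigest()

# Any change to the data, however small, produces a different fingerprint:
tampered = blob + b"!"
assert hashlib.sha256(tampered).hexdigest() != fingerprint
```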
Other approaches exist, such as Merkle trees (in fact, this is how Ethereum’s execution layer commits to its state).
In Proto-Danksharding, a polynomial commitment scheme called KZG (Kate-Zaverucha-Goldberg) was chosen. Its function is the same: reducing an entire blob to a small verifiable identifier. This choice is also motivated by potential compatibility with zero-knowledge proofs (ZK proofs).
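In practice, the execution layer doesn’t keep the 48-byte KZG commitment itself on-chain, but a “versioned hash” derived from it. A minimal sketch of that derivation, per EIP-4844; the commitment bytes here are dummy placeholders, since computing a real one requires a KZG library and the trusted setup:

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"  # version byte defined by EIP-4844

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    """Derive the 32-byte versioned hash that identifies a blob on-chain:
    the version byte followed by the last 31 bytes of SHA-256(commitment)."""
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

dummy_commitment = bytes(48)  # placeholder, not a real KZG commitment
print(kzg_to_versioned_hash(dummy_commitment).hex())
```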
Perfect! With this, we can store data on Ethereum cheaply and verifiably, allowing rollups to operate without congesting the network.
It might seem like the problem is solved. But as often happens… it’s not that simple.
Limitations
Suppose we start attaching these blobs to blocks. How many can we add?
Although blobs are not part of Ethereum’s state, the problem is that nodes must transmit them across the network along with blocks. Sound familiar? It’s exactly the same problem we discussed earlier: more blobs mean more network latency. And not only due to transmission—remember, blobs must also be verified.
Additionally, nodes storing blobs could face higher storage (hardware) requirements. As an example, let’s calculate the storage needed: suppose we could include 100 blobs per block. Since blobs are kept for 4096 epochs (131,072 slots), and each blob is up to 128 kilobytes, the required storage would be about 1.6 terabytes just for blobs!
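Spelled out, that back-of-the-envelope calculation looks like this:

```python
# Hypothetical scenario: 100 blobs in every block over the retention window.
blobs_per_block = 100
retained_slots = 4096 * 32       # 4096 epochs * 32 slots = 131,072 slots
blob_size_bytes = 128 * 1024     # 128 KB per blob

total_bytes = blobs_per_block * retained_slots * blob_size_bytes
print(total_bytes / 1024**4)     # ≈ 1.56 TiB, i.e. "about 1.6 terabytes"
```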
So again, there is a limit on how many blobs can be included per block to keep the network running smoothly (with slots remaining at 12 seconds). Currently, the network allows a maximum of 6 blobs per block, with a target of 3.
This creates tension: rollups now compete for this new “blobspace”, and during periods of high activity its price can spike, driving up costs for rollups and their users. Ideally, we’d want to include more blobs per block to ease this tension, but as we’ve seen, that would significantly strain the network.
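That pricing follows an EIP-1559-style mechanism defined in EIP-4844: the blob base fee grows exponentially with the “excess blob gas” accumulated above the per-block target, which is why sustained demand makes prices spike. A sketch of the update rule from the spec (the sample inputs are arbitrary):

```python
MIN_BLOB_BASE_FEE = 1                      # wei per unit of blob gas
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477  # constant from EIP-4844

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator),
    as specified in EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Blob base fee rises exponentially with accumulated excess blob gas."""
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

print(blob_base_fee(0))           # 1 wei when usage is at the target
print(blob_base_fee(50_000_000))  # fee explodes as excess accumulates
```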
So how do we solve this problem?
Danksharding
Proto-Danksharding was designed as an intermediate step between the previous version of Ethereum and the vision known as Danksharding.
The main idea is to increase the number of blobs per block, from 6 to 64. But to reach that point without facing the same problems we discussed earlier, certain conditions must be met, tied to other milestones in Ethereum’s evolution, such as:
Proposer-Builder Separation
Data Availability Sampling
We’ll soon dive deeper into these topics, explaining the role these proposals play in achieving Danksharding.
And Scalability?
It’s worth noting that neither Proto-Danksharding nor Danksharding aims to improve Ethereum’s scalability directly. Neither upgrade changes block production frequency or maximum block size. This means Ethereum itself will continue to have relatively low TPS.
Despite their names, neither of these upgrades is actually sharding.
Sharding refers to a database architecture where data is split into fragments called shards, which can be modified independently. In Ethereum, this would mean dividing validators into groups so they could process transactions in parallel. Currently, all validators assigned to a slot validate the entire proposed block, and the global state remains unique—not divided into shards. This, essentially, would not change with Danksharding. But that’s a story for another time!
The true winners here are rollups, which can process large numbers of transactions and keep their costs low thanks to increased blobspace availability. This is Ethereum’s real (and current) strategy: what Vitalik himself has called the rollup-centric roadmap.
Closing Notes
It’s clear that there is still much to be done. Ethereum aims to be much more than a blockchain—it is on the path to becoming the infrastructure for a future where rollups take center stage.
The road ahead is long, and there are exciting challenges to come. Like other blockchains, Ethereum is a constantly evolving system, and in the coming years we will surely witness revolutionary changes. The history of blockchain is still being written live, and it is important to understand the problems that we as a community must continue to solve.
This article is part of a collaboration for creating educational content about Ethereum between Space Dev and ETH Kipu. We thank Juan Manuel Sobral for making this collaboration possible, and especially Franco Mangone for writing the article.