Cutting Blockchain Storage in Half
How Kaia v2.1 Reclaimed 2TB Through Compression

Running a Node Costs Too Much
Running a blockchain node requires large storage capacity. Chains grow, transactions accumulate, and storage requirements quickly reach several terabytes just to keep a historical record. Solana archive nodes eat up 400TB. Ethereum archive nodes need almost 14TB. Even optimized chains face this limitation.
Node operators pay for this storage. Our RPC service providers (especially archive node operators) also identify disk requirements as a major operational concern. Blockchain data will keep growing — that’s not changing. The question is how to keep storage manageable.
Kaia started working on this back in 2021 with State Migration, a batch pruning feature that let operators delete old states without taking their nodes offline. Then came an efficient migration process that cut downtime. In 2023, StateDB Live Pruning automated the deletion of outdated states in real-time.
Kaia v2.1.0 takes a different route. Instead of deleting data, it compresses block data and cuts full node storage in half. Nothing gets deleted.
The Problem: Repetitive Block Data
A Kaia mainnet full node takes up more than 4.2TB as of July 2025. Here’s where that space goes:

Block data — headers, transaction bodies, receipts — makes up over 3.6TB. This data must be retained. The node needs it to verify old transactions and answer queries.
But there’s something you might not notice at first. Block data repeats itself frequently.
Look at any EVM transaction. Solidity’s ABI encoding pads every value with zeros out to a 32-byte word boundary. Transaction call data ends up full of long zero sequences.

Receipts do the same thing. Event logs, return values, status codes — all packed with repetitive bytes.
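To make that repetition concrete, here is a small sketch in plain Python. It hand-assembles the call data for a typical ERC-20 `transfer(address,uint256)` call and counts the padding zeros; `zlib` stands in for Snappy here, since both are general-purpose byte compressors (the numbers differ, but the effect is the same).

```python
import zlib

# ABI encoding pads every argument to a 32-byte word.
selector = bytes.fromhex("a9059cbb")                # transfer(address,uint256) selector
address_arg = bytes(12) + bytes.fromhex("ab" * 20)  # 20-byte address, left-padded with 12 zeros
amount_arg = (1000).to_bytes(32, "big")             # uint256, 30 of 32 bytes are zero
calldata = selector + address_arg + amount_arg

zeros = calldata.count(0)
print(f"{zeros}/{len(calldata)} bytes are zero")    # 42/68 bytes are zero

# Repetitive bytes compress extremely well; zlib stands in for Snappy.
blob = calldata * 100
compressed = zlib.compress(blob)
print(len(blob), "->", len(compressed))             # shrinks by an order of magnitude or more
```

The exact ratio depends on the compressor, but any general-purpose algorithm collapses those zero runs almost for free.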
Kaia uses Google’s LevelDB for storage, which has built-in compression. But it wasn’t turned on by default. All those repetitive zeros sat there taking up space.
The Fix: Compress What Matters
Kaia v2.1 turns on LevelDB’s Snappy compression, but not everywhere. Only where compression is effective.
The --db.leveldb.compression flag controls this:
0: No compression
1: Just receipts
2: Headers, bodies, and receipts (recommended)
3: Everything including state trie
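The four levels can be read as a simple mapping from flag value to the set of tables that get Snappy compression. The sketch below mirrors the semantics described above; the table names and helper are illustrative, not Kaia’s internals.

```python
# Hypothetical mapping mirroring --db.leveldb.compression semantics;
# names are illustrative, not Kaia's actual internal identifiers.
COMPRESSION_LEVELS = {
    0: set(),                                        # no compression
    1: {"receipts"},                                 # receipts only
    2: {"headers", "bodies", "receipts"},            # recommended default
    3: {"headers", "bodies", "receipts", "state"},   # state rarely worth it
}

def compressed_tables(level: int) -> set:
    """Return which table groups are Snappy-compressed at a given level."""
    return COMPRESSION_LEVELS[level]

print(sorted(compressed_tables(2)))  # ['bodies', 'headers', 'receipts']
```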
The tests showed option 2 gave the best results. Option 3 provides minimal benefit because state data (balances, contract storage, cryptographic hashes) looks random. Random data doesn’t compress well.

Tests used a Kairos testnet snapshot from December 31, 2024. The state-migrated full node showed that compressing the state trie gives you almost nothing. The CPU overhead isn’t justified.
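You can see why state data resists compression with a quick experiment. State trie entries are dominated by 32-byte cryptographic hashes, which are statistically indistinguishable from random bytes, while block bodies are full of zero padding. Again using `zlib` as a stand-in for Snappy:

```python
import os
import zlib

random_like = os.urandom(64 * 1024)  # stand-in for hash-heavy state trie data
repetitive = bytes(64 * 1024)        # stand-in for zero-padded block data

# Random bytes barely shrink (often they grow slightly); zero runs collapse.
print("random-like:", len(zlib.compress(random_like)))  # ~64 KiB, no real gain
print("repetitive: ", len(zlib.compress(repetitive)))   # a few dozen bytes
```

Compressing the first kind of data burns CPU for nothing; compressing the second is nearly free space.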
What Happened: 2TB Saved
When we ran this on a mainnet full node with compression plus manual compaction, the reduction was large:

Storage dropped from 4,215GB to 2,016GB. That’s 2.2TB saved, about 52% smaller. Body and receipts tables shrank the most, exactly where you’d expect based on all those repetitive bytes.
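The headline numbers check out with simple arithmetic:

```python
before_gb, after_gb = 4215, 2016     # measured sizes from the mainnet full node
saved_gb = before_gb - after_gb
print(saved_gb, "GB saved")          # 2199 GB saved, i.e. about 2.2 TB
print(f"{saved_gb / before_gb:.0%} smaller")  # 52% smaller
```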
Compression also applies to all new data: from v2.1.0 onward, new writes are compressed automatically, so storage grows more slowly.
What It Costs: 10 Hours and Heavy Disk I/O
Compaction has costs. The compaction process rewrites existing data to apply compression. On a mainnet full node, this takes about 10 hours, and disk I/O increases sharply for the duration.

Disk throughput peaks above 200 MiB/s for reads and 250 MiB/s for writes, while IOPS frequently exceeds 2,000.
But block sync keeps running. The compaction uses the disk heavily but doesn’t stop the node from processing new blocks.
Why It Matters
Lower storage requirements have several benefits:
- Node operators and RPC providers pay less
- More people can run full nodes
- The network remains decentralized when node operation costs less
- New nodes deploy faster with smaller chaindata snapshots
What to Do
Kaia v2.1.0 turns compression on by default with option 2 (all block data except the state trie). Node operators should follow these steps:
- (For older versions only) Add --db.leveldb.compression 2 to your config.
- Restart the node — new data compresses automatically.
- For old data, trigger compaction via RPC: debug.chaindbCompact({ "preset": "allbutstate" })
- Or wait for compressed chaindata snapshots (available later)
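The console call above can also be issued over HTTP JSON-RPC. This sketch only builds the request payload; the `debug_chaindbCompact` method name follows the usual geth-style console-to-RPC naming convention and the endpoint URL is a placeholder, so verify both against the Kaia docs before relying on them.

```python
import json

# Assumed JSON-RPC equivalent of the console call
# debug.chaindbCompact({"preset": "allbutstate"}); method name inferred
# from the standard namespace_method convention -- verify in Kaia docs.
payload = {
    "jsonrpc": "2.0",
    "method": "debug_chaindbCompact",
    "params": [{"preset": "allbutstate"}],
    "id": 1,
}
print(json.dumps(payload))
# Then POST it to your node's RPC endpoint (URL/port are placeholders):
#   curl -X POST -H 'Content-Type: application/json' \
#        --data '<payload>' http://localhost:<rpc-port>
```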
Compression is part of a bigger storage plan. Full nodes get state pruning. Archive nodes get the upcoming FlatTrie scheme. These changes reduce costs for all operators.
Check the Optimize Node Storage guide in Kaia docs for step-by-step instructions.


