
by ffwd

As the Bitcoin block chain grows, the time required to set up a new node from scratch also increases.

At block height 910,000, just over 16 years into the Bitcoin experiment, more than 1.2 billion on-chain transactions have been recorded. Including witness data, this amounts to almost 700 GiB of data, enough to fill roughly 1000 CDs. Those transactions created over 3 billion UTXOs, of which about 95% have either already been spent or are provably unspendable. One goal of syncing a new node from scratch is therefore to determine which UTXOs survive valid transfers, so that new blocks can be checked quickly against the current UTXO set.
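To make that bookkeeping concrete, here is a minimal Python sketch of how replaying every transaction in order yields the surviving UTXO set. The Tx and OutPoint types are hypothetical simplifications, not Bitcoin Core's data structures:

```python
from typing import Iterable, List, NamedTuple, Set, Tuple

OutPoint = Tuple[str, int]  # (txid, output index)

class Tx(NamedTuple):
    txid: str
    inputs: List[OutPoint]  # outpoints this tx spends (empty for coinbase)
    n_outputs: int          # number of outputs it creates

def surviving_utxos(txs: Iterable[Tx]) -> Set[OutPoint]:
    """Replay transactions in order: add every created output,
    remove every spent one; whatever remains is the UTXO set."""
    utxos: Set[OutPoint] = set()
    for tx in txs:
        for spent in tx.inputs:
            utxos.remove(spent)  # a KeyError here would mean an invalid spend
        for i in range(tx.n_outputs):
            utxos.add((tx.txid, i))
    return utxos
```

It is this set, built up and torn down billions of times over the chain's history, that a syncing node has to keep available for lookups.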

Current implementations locate a valid block header chain and then download and validate each block sequentially. This process is slow; depending on CPU, RAM, disk, and network speed it can take anywhere from a few hours to several days.

Numerous ideas for improving Bitcoin scalability, specifically initial block download (IBD) speed, have been proposed over the years - some have even been implemented.

Earlier this year an observation was made that the current bottleneck is not CPU speed but I/O. Because we process blocks in sequence, we constantly write, read, and delete entries in the on-disk chainstate database. Using an SSD for the chainstate folder and increasing -dbcache can accelerate sync considerably, but the process remains sequential. Some researchers explored a new approach called SwiftSync, originally published by Ruben Somsen. Reading about SwiftSync provides useful background and inspired our work, but it is not required to understand the method described here.
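For flavor, here is a toy sketch of the aggregate trick at the heart of SwiftSync as Somsen describes it; the hint format, hashing scheme, and function names are my simplifications, not the proposal's actual encoding, and the Tx type is reused from the sketch above. Each spend subtracts a hash from an accumulator, each output that a hint file marks as will-be-spent adds one, and a final total of zero confirms the hints were truthful. Because modular addition commutes, blocks can be processed in any order, with no random-access reads into a growing on-disk set:

```python
import hashlib
from typing import Dict, List, Tuple

OutPoint = Tuple[str, int]
MOD = 2 ** 256  # accumulator arithmetic is done modulo 2^256

def h(outpoint: OutPoint) -> int:
    """Hash an outpoint to an integer (toy stand-in for the real scheme)."""
    txid, index = outpoint
    digest = hashlib.sha256(f"{txid}:{index}".encode()).digest()
    return int.from_bytes(digest, "big")

def process_block(txs, hints: Dict[OutPoint, bool],
                  utxo_out: List[OutPoint]) -> int:
    """Return this block's contribution to the global accumulator.
    hints[op] is True if output op is still unspent at the target height."""
    acc = 0
    for tx in txs:
        for spent in tx.inputs:            # every spend subtracts its hash
            acc = (acc - h(spent)) % MOD
        for i in range(tx.n_outputs):
            op = (tx.txid, i)
            if hints[op]:                  # hinted to survive: final UTXO set
                utxo_out.append(op)
            else:                          # hinted to be spent later: add hash
                acc = (acc + h(op)) % MOD
    return acc

# Blocks can be processed in any order, even in parallel; summing the
# per-block results modulo 2^256 must yield zero, which checks that every
# output hinted as spent was in fact spent exactly once.
```

The point of the commutative sum is that it replaces per-spend database lookups with cheap arithmetic, which is what removes the sequential I/O constraint.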

...read more at delvingbitcoin.org
22 sats \ 0 replies \ @k00b 17 Nov
MergeSync demonstrates that, under an assumevalid trust model, parallel processing and simple set operations can reduce wall-clock time for UTXO-set construction.

Where's the demonstration? If I/O is the bottleneck, how would parallelizing the non-bottleneck help?
