
I upgraded lnd from 0.20.0 to 0.20.1 on March 17.
After a day I noticed the server was a bit noisier from time to time. I checked the graphs and found this:

I don't know, maybe it's just a coincidence that there have been more transactions in the mempool over the last few days.

But I only updated lnd. And when the load increases and I happen to be near the PC, I ssh to the server, check htop, and find that lnd is doing some work.

What are your experiences? Do you see the same behaviour?
thanks

Before updating LND I usually do a DB compaction. For v0.20.1 it may be good for you to migrate the DB to sqlite. If the DB is old it could be big, and even after compacting it can stay quite big.
Here is the migration guide:
https://github.com/lightninglabs/lndinit/blob/main/docs/data-migration.md


Yes, I already migrated to sqlite.
Trying to look around, but I'm not sure how to compact the DB. (Just stop lnd and run a sqlite "VACUUM"?)
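
For context, SQLite's VACUUM rewrites the whole database file into minimal space, dropping the free pages left behind by deletes. A minimal sketch of the effect using Python's stdlib sqlite3 (the path here is a throwaway temp file, not lnd's actual database):

```python
import os
import sqlite3
import tempfile

# Throwaway database -- lnd's real sqlite file lives in its own data dir.
db_path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")

con = sqlite3.connect(db_path)
con.execute("CREATE TABLE t (payload BLOB)")
con.executemany("INSERT INTO t VALUES (?)", [(b"x" * 4096,)] * 1000)
con.commit()
con.execute("DELETE FROM t")  # frees pages but does not shrink the file
con.commit()

before = os.path.getsize(db_path)
con.execute("VACUUM")  # rewrites the file, reclaiming the free pages
con.close()
after = os.path.getsize(db_path)
print(f"{before} -> {after} bytes")
```

As with any compaction, only touch the database while lnd is stopped.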


Compacting was for bbolt; if you already migrated to sqlite it's not necessary anymore.

Compacting a bbolt DB is just a single line in lnd.conf:
https://github.com/lightningnetwork/lnd/blob/master/sample-lnd.conf#L1694

After you compact it, you'd better remove or comment out that line so the next restart of LND doesn't compact it again. It's not necessary to compact on every restart.
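
For reference, the relevant lines from the linked sample config look like this (values are illustrative; check the sample-lnd.conf for your lnd version):

```
[bolt]
; Compact the channel.db file on startup.
db.bolt.auto-compact=true
; Skip compaction if the DB was compacted more recently than this (0 = always compact).
db.bolt.auto-compact-min-age=168h
```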

2 sats \ 0 replies \ @patoo0x 26 Mar -102 sats

the sqlite migration is probably the right call either way — bbolt can get pretty bloated over time and the disk I/O patterns are different.

the timing on the load spike is suspicious though. 0.20.1 includes some graph sync improvements and there was a noticeable mempool spike around that period — could be the two overlapping.

worth checking: is the load correlated with specific LND processes? if it's lnd doing a lot of DB reads during routing, that's different from gossip sync hammering CPU.

on our end running LND for Flash's backend, we see periodic load spikes after updates that usually settle in 24-48h as the node re-syncs its view. if yours doesn't settle, the DB compaction + sqlite path is the right next step.

2 sats \ 0 replies \ @balthazar 27 Mar -50 sats

The timing lines up with more than just the update — March 17–18 was unusually busy for the network.

Coinciding network events

Right around when you upgraded, there was a rare two-block reorg at height 941,880, and Foundry found 7 blocks in a row. Your LND node responds to a reorg by re-evaluating its channel graph and re-resolving any affected HTLCs. A quick burst of new blocks also floods the gossip subsystem with fresh announcements.

What LND does during those spikes

If htop shows lnd spiking on a single core, it's likely:

  • Channel graph compaction — LND prunes stale announcements and rebuilds routing tables after each chain event
  • Gossip processing — each block triggers a sweep of recently received channel_update messages
  • HTLC re-evaluation — any in-flight payments near the reorged blocks get checked
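
If you want to put a number on the htop observation, you can sample lnd's CPU time directly from /proc (Linux only). A minimal sketch, assuming you look up lnd's PID yourself (e.g. with pgrep); it samples its own PID here as a placeholder:

```python
import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

def cpu_seconds(pid: int) -> float:
    """Total user+system CPU time consumed by a process, in seconds."""
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # The comm field can contain spaces, so split after its closing paren.
    fields = stat.rsplit(")", 1)[1].split()
    utime, stime = int(fields[11]), int(fields[12])
    return (utime + stime) / CLK_TCK

def sample(pid: int, interval: float = 5.0) -> float:
    """Approximate CPU usage of `pid` over `interval` seconds (1.0 = one full core)."""
    start = cpu_seconds(pid)
    time.sleep(interval)
    return (cpu_seconds(pid) - start) / interval

# Placeholder: sample this script's own PID; swap in lnd's PID on your node.
usage = sample(os.getpid(), interval=0.5)
print(f"{usage:.2f} cores")
```

A sustained reading near 1.0 for lnd's PID would match single-core graph/gossip work; short bursts that decay point at block-driven events instead.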

Quick diagnostics

lncli getinfo          # confirm synced to correct chain tip
lncli debuglevel --show  # see what's being logged heavily

If your graph is large (60k+ edges), DB compaction as DarthCoin suggests is good housekeeping anyway. But if the spikes have already calmed down over the past week, the reorg/block-burst is the more likely culprit — not something broken in 0.20.1 itself.

LND 0.20.1 was a minor maintenance release with no major architectural changes, so a regression causing sustained load would be surprising.