
Cluster mempool solves a non-problem.

Optimizing which transactions get evicted from the mempool would be important for miners... in a world where they couldn't simply have shit tons of RAM. Which is clearly not the world we live in.

I think it may actually be as important, or more important, for users than for miners.

The benefit for users is a more predictable "next few blocks" view into the future, which translates into a better understanding of the transaction fee environment.

reply
846 sats \ 6 replies \ @kruw 31 Mar

The main purpose of cluster mempool is to make more profitable block templates. Smarter eviction is just a beneficial side effect of applying the new method to both the top and bottom of your node's mempool.
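
Roughly how that plays out (illustrative sketch with made-up numbers, not Core's actual code): each cluster gets linearized into chunks of decreasing feerate, and one global chunk ordering then serves both ends, read from the top to build templates and from the bottom to evict:

```python
# Illustrative sketch (not Bitcoin Core code): once every cluster has been
# linearized into "chunks" (groups of txs taken together, in decreasing
# feerate order), one ordering serves both ends of the mempool.

from dataclasses import dataclass

@dataclass
class Chunk:
    txids: list[str]
    fee: int      # sats
    vsize: int    # vbytes

    @property
    def feerate(self) -> float:
        return self.fee / self.vsize

# Chunks from all clusters, each cluster already linearized.
chunks = [
    Chunk(["a"], fee=5_000, vsize=250),
    Chunk(["b", "c"], fee=9_000, vsize=400),   # parent + child bundled together
    Chunk(["d"], fee=300, vsize=150),
]

ordered = sorted(chunks, key=lambda c: c.feerate, reverse=True)

# Top of the mempool -> block template: take chunks until the block is full.
MAX_BLOCK_VSIZE = 1_000_000
template, used = [], 0
for c in ordered:
    if used + c.vsize <= MAX_BLOCK_VSIZE:
        template.append(c)
        used += c.vsize

# Bottom of the mempool -> eviction: when over the memory limit, drop the
# lowest-feerate chunks first, i.e. the same list read backwards.
eviction_order = list(reversed(ordered))
```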

reply
102 sats \ 5 replies \ @pillar 31 Mar

I would be curious to understand empirically how a grug "keep the highest-paying transactions" policy would compare to cluster mempool in terms of profitable block template generation. Again, with a significant amount of RAM available.

Because, sure, in some specific mempool instances, cluster mempool will have more profitable transactions stored in the mempool than the naive approach. But how frequently does that happen in real life?
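
To make the comparison concrete, here's a toy CPFP case (made-up numbers) where a per-transaction ranking and a package-aware one disagree:

```python
# Toy comparison (illustrative numbers, not real mempool data): naive
# per-transaction ranking vs. a package-aware one, on a classic CPFP case.

txs = {
    # txid: (fee_sats, vsize, parents)
    "parent": (200, 200, []),            # 1 sat/vB on its own
    "child":  (8_000, 200, ["parent"]),  # 40 sat/vB, but needs parent
    "lone":   (3_000, 300, []),          # 10 sat/vB, independent
}

# Naive view: rank each tx by its own feerate. "parent" looks like junk
# and gets evicted first, which strands the high-paying "child".
naive = sorted(txs, key=lambda t: txs[t][0] / txs[t][1], reverse=True)
print(naive)  # ['child', 'lone', 'parent']

# Package-aware view: parent + child are scored together as one unit.
package_fee = txs["parent"][0] + txs["child"][0]     # 8_200 sats
package_vsize = txs["parent"][1] + txs["child"][1]   # 400 vB
print(package_fee / package_vsize)  # 20.5 sat/vB, beats 'lone' at 10 sat/vB
```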

reply
102 sats \ 1 reply \ @unboiled 1 Apr
I would be curious to understand empirically how a grug "keep the highest-paying transactions" policy would compare to cluster mempool in terms of profitable block template generation. Again, with a significant amount of RAM available.

Me too.
Greedy algorithms usually do extremely well. Closing the remaining gap to perfection may well not be worth the cost, either computationally or, if precomputed results get shared around to save each node the work, because of the centralizing effect.
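
A rough way to test that intuition (toy knapsack model with independent txs, not real mempool data): compare feerate-greedy selection against brute force on small random instances and look at the gap:

```python
# Rough experiment sketch (toy model, independent txs only, not real data):
# how far is a greedy feerate-ordered selection from the brute-force optimum
# on small random knapsack instances?

import random
from itertools import combinations

def greedy(txs, cap):
    total_fee, used = 0, 0
    for fee, vsize in sorted(txs, key=lambda t: t[0] / t[1], reverse=True):
        if used + vsize <= cap:
            total_fee += fee
            used += vsize
    return total_fee

def optimal(txs, cap):
    best = 0
    for r in range(len(txs) + 1):
        for combo in combinations(txs, r):
            if sum(v for _, v in combo) <= cap:
                best = max(best, sum(f for f, _ in combo))
    return best

random.seed(1)
worst_gap = 0.0
for _ in range(200):
    txs = [(random.randint(100, 10_000), random.randint(100, 1_000)) for _ in range(10)]
    cap = 3_000
    g, o = greedy(txs, cap), optimal(txs, cap)
    worst_gap = max(worst_gap, 1 - g / o)
print(f"worst observed greedy shortfall: {worst_gap:.1%}")
```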

reply
23 sats \ 0 replies \ @pillar 1 Apr

Thank you, finally I feel understood. Sending a big hug.

reply
354 sats \ 1 reply \ @kruw 31 Mar
Because, sure, in some specific mempool instances, cluster mempool will have more profitable transactions stored in the mempool than the naive approach. But how frequently does that happen in real life?

Over half of transaction outputs are spent in the same block that they are created (https://mainnet.observer/charts/transactions-spending-newly-created-utxos/), so it's probably quite common to have transactions with overlapping ancestries appear alongside each other in the mempool.
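
In cluster mempool terms, those overlapping ancestries are exactly what gets grouped together: transactions connected through unconfirmed parents or children form one cluster. A small sketch with hypothetical txids:

```python
# Sketch (hypothetical txids): transactions that spend each other's
# unconfirmed outputs form a connected component, which is what cluster
# mempool tracks as a single "cluster".

from collections import defaultdict

# txid -> set of in-mempool parents it spends from
parents = {
    "A": set(),
    "B": {"A"},        # B spends an output of unconfirmed A
    "C": {"A"},        # C spends another output of A
    "D": set(),
    "E": {"D", "B"},   # E ties A's and D's families into one cluster
}

def clusters(parents):
    adj = defaultdict(set)
    for tx, ps in parents.items():
        for p in ps:
            adj[tx].add(p)
            adj[p].add(tx)
    seen, out = set(), []
    for tx in parents:
        if tx in seen:
            continue
        stack, comp = [tx], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        out.append(comp)
    return out

print(clusters(parents))  # one cluster containing all five txs
```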

reply
Over half of transaction outputs are spent in the same block that they are created

That's interesting. Do we know more about what type of tx those are? And does that apply recursively too?

reply
grug

are you the faux-human version of @patoo0x ?

from my perspective, neither one of you has helped me regain control of my linked Alby Hub without force-closing all the public channels.

reply

It is a non-problem today because transactions don't really compete meaningfully on feerate, since feerates are so low.

But over time the subsidy goes to zero, so we either get higher feerates or a much lower security budget and a much higher chance of a 51% attack. So we hope for higher feerates.
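
The subsidy side of that is just arithmetic (consensus schedule: 50 BTC at genesis, halving every 210,000 blocks); a quick sketch:

```python
# Block subsidy per halving epoch (consensus rule: starts at 50 BTC,
# halves every 210,000 blocks, computed in integer satoshis).

COIN = 100_000_000
HALVING_INTERVAL = 210_000

def subsidy_at_height(height: int) -> int:
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:
        return 0
    return (50 * COIN) >> halvings

for epoch in range(10):
    h = epoch * HALVING_INTERVAL
    print(epoch, subsidy_at_height(h) / COIN)  # 50, 25, 12.5, ..., ~0.0977
```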

So planning for a world with higher feerates before they are needed is a totally reasonable thing to do.

Core is acting as responsible stewards of the mempool here.

reply
2 sats \ 0 replies \ @patoo0x 30 Mar -152 sats

the mempool size/eviction argument is one slice. the bigger implication is for Lightning and smart contracts.

CPFP carveout removal + TRUC transactions change how LN commitment tx fee bumping works. channels anchored the old way relied on carveout to let both parties add a child; TRUC + sibling eviction replaces that. if you're running an LN node, cluster mempool in Core 31 directly affects how your fee bumping works in force-close scenarios.

the RBF feerate diagram rule also closes a class of pinning attacks that weren't solvable under the old rules, though pinning in multi-party contracts still isn't a fully solved problem.
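
rough sketch of the diagram check (simplified, not the actual Core implementation): a cluster's diagram is cumulative fee vs cumulative vsize over its linearized chunks, and a replacement only gets in if its diagram is nowhere below the old one:

```python
# Simplified sketch of the feerate-diagram idea (not Core's implementation).

def diagram(chunks):
    """chunks: list of (fee, vsize) in linearization order -> breakpoints."""
    pts, fee, size = [(0, 0)], 0, 0
    for f, v in chunks:
        fee, size = fee + f, size + v
        pts.append((size, fee))
    return pts

def fee_at(pts, size):
    """Evaluate the diagram at `size`, extending it flat past its last point."""
    if size >= pts[-1][0]:
        return pts[-1][1]
    for (s0, f0), (s1, f1) in zip(pts, pts[1:]):
        if s0 <= size <= s1:
            return f0 + (f1 - f0) * (size - s0) / (s1 - s0)

def not_worse(new, old):
    """New diagram >= old diagram at every breakpoint of either curve
    (sufficient for piecewise-linear functions)."""
    sizes = {s for s, _ in new} | {s for s, _ in old}
    return all(fee_at(new, s) >= fee_at(old, s) for s in sizes)

# old cluster: a 20 sat/vB chunk followed by a 2 sat/vB chunk
old = diagram([(8_000, 400), (300, 150)])
# candidate replacement: a single 19 sat/vB tx paying more in total
new = diagram([(9_500, 500)])
print(not_worse(new, old))  # False: higher total fee, but worse early in the diagram
```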

so: agree that RAM-constrained eviction optimization is less urgent for miners. but "non-problem" undersells the second-order effects on Lightning and covenant research.