@Murch · 3 Oct · on: Sub 1s/vb transactions (~ideasfromtheedge)
I don’t understand why it would be “good to set the mempool to at least 2 GB”. Could you explain what you are trying to achieve with that?
When someone sends you dust outputs, you can simply choose not to spend those outputs, especially if they cost more to spend than they are worth.
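As a sketch (not Bitcoin Core’s coin selection; the ~68 vB input size assumes P2WPKH and the values are made up for illustration), the filter is simple arithmetic: skip any UTXO that costs more in fees to spend than it is worth at your intended feerate.

```python
# Toy illustration (not Bitcoin Core code): skip "uneconomical" dust outputs.
# Assumes ~68 vbytes to spend a P2WPKH input; adjust for your output type.
INPUT_VSIZE_VBYTES = 68

def economical_utxos(utxos, feerate_sat_per_vb):
    """Keep only UTXOs worth more than the fee needed to spend them."""
    keepers = []
    for value_sats in utxos:
        cost_to_spend = INPUT_VSIZE_VBYTES * feerate_sat_per_vb
        if value_sats > cost_to_spend:
            keepers.append(value_sats)
    return keepers

# A 546-sat dust output is not worth spending at 10 sat/vB (68 * 10 = 680 > 546).
print(economical_utxos([546, 10_000, 330], feerate_sat_per_vb=10))  # -> [10000]
```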
Recent blocks were including fewer transactions below 1 s/vB because enough transactions were bidding more than 1 s/vB:
Chart: last 24h, inverted feerate (highest feerate at the bottom), only txs offering 1 s/vB or more.
Bitcoin Core 29.1 already makes up over 13% of the listening nodes according to Clark Moody’s dashboard:
As Laurent has recently demonstrated with his simulation, as long as fewer than 90% of nodes filter, almost all corresponding transactions reliably reach the non-filtering listening nodes (among which I suspect the miners to be). So, at this point any listening nodes that want the low-feerate transactions should be receiving them reliably before they appear in blocks.
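For intuition, here is a toy Monte Carlo along the same lines (this is not Laurent’s simulation; the graph size, connection count, and filter fractions are made-up assumptions, and the outcome depends strongly on the assumed connectivity): with filtering fractions below roughly 90%, the non-filtering nodes almost always remain connected to each other, so a sub-1 s/vB transaction announced by any of them reaches nearly all of the rest.

```python
import random

def reachable_fraction(n=2000, degree=16, filter_frac=0.5, trials=20):
    """Toy model: each node initiates `degree` random connections; filtering
    nodes refuse to relay low-feerate transactions. Measure what share of the
    non-filtering nodes a random non-filtering source reaches when the
    transaction can only travel through non-filtering peers."""
    results = []
    for _ in range(trials):
        filtering = [random.random() < filter_frac for _ in range(n)]
        peers = [set() for _ in range(n)]
        for a in range(n):
            for b in random.sample(range(n), degree):
                if a != b:
                    peers[a].add(b); peers[b].add(a)
        relayers = [i for i in range(n) if not filtering[i]]
        if not relayers:
            continue
        source = random.choice(relayers)
        seen, stack = {source}, [source]
        while stack:
            node = stack.pop()
            for peer in peers[node]:
                if not filtering[peer] and peer not in seen:
                    seen.add(peer); stack.append(peer)
        results.append(len(seen) / len(relayers))
    return sum(results) / len(results)

for frac in (0.5, 0.8, 0.9, 0.95, 0.97):
    reach = reachable_fraction(filter_frac=frac)
    print(f"{frac:.0%} filtering -> {reach:.1%} of non-filtering nodes reached")
```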
Yes, good summary.
The minRelayTxFee and incrementalRelayTxFee both express a minimum cost for relaying data across the network. It would be a bit odd to charge more for the first announcement of a transaction but then make it cheaper to replace the transaction, or vice versa to make the first announcement cheap and replacements expensive. So both mempool policies were lowered from 1 s/vB to 0.1 s/vB in Bitcoin Core 29.1 and the upcoming v30.0.
Bitcoin Core limits the amount of memory dedicated to the mempool data structure to 300 MiB by default. When this memory limit is reached, your node will start evicting transactions with the lowest descendant feerate (a heuristic for what’s going to be mined last), and then increase its minimum feerate to the feerate of the last evicted transaction plus 1× incrementalRelayTxFee.
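A minimal sketch of that eviction rule, assuming simplified data structures (a flat list of entries with precomputed descendant feerates in sat/vB, rather than Bitcoin Core’s actual package-tracking mempool):

```python
# Toy model of trimming a mempool to a memory limit (not Bitcoin Core code).
# Entries are (descendant_feerate_sat_per_vb, memory_usage_bytes).
INCREMENTAL_RELAY_FEERATE = 0.1  # sat/vB, matching the v29.1/v30.0 default

def trim_to_size(entries, max_bytes, mempool_min_feerate=0.1):
    usage = sum(mem for _, mem in entries)
    # Evict the entry with the lowest descendant feerate until we fit.
    entries = sorted(entries, key=lambda e: e[0], reverse=True)
    while usage > max_bytes and entries:
        feerate, mem = entries.pop()  # lowest descendant feerate sits last
        usage -= mem
        # New floor: feerate of the last evicted entry plus one incremental step.
        mempool_min_feerate = max(mempool_min_feerate,
                                  feerate + INCREMENTAL_RELAY_FEERATE)
    return entries, mempool_min_feerate

kept, min_fee = trim_to_size([(0.2, 400), (1.0, 300), (5.0, 250)], max_bytes=600)
print(f"{min_fee:.1f} sat/vB")  # the node now rejects anything below this feerate
```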
If “too many transactions get submitted”, there will still only be one block’s worth of transactions added to the blockchain per block interval. More competition for blockspace means that the minimum feerate in the block is pushed up. Higher feerates mean that the subjective value of transactions must be higher for the senders to bid sufficient fees for the transaction to get into the block.
That’s bad news for transactions whose subjective value is low. I’ll leave it up to you to categorize whether that would apply to LN transactions or not.
Accepting transactions below 1 s/vB means that cheaper transactions can be used to fill up the mempool. It makes the price discovery for blockspace more fine-grained, but doesn’t fundamentally change the mechanism for reaching the equilibrium. If there is low blockspace demand, there might be a few more transactions added to the blockchain that otherwise would not have occurred. It also makes consolidation of a lot of the UTXOs created in the past two years much more financially viable.
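To make the consolidation point concrete, here is a back-of-the-envelope calculation with assumed P2WPKH sizes (roughly 68 vB per input, 31 vB for the output, about 10.5 vB of transaction overhead); the input count is made up for illustration:

```python
# Back-of-the-envelope consolidation cost (assumed P2WPKH sizes, not exact).
def consolidation_fee(num_inputs, feerate_sat_per_vb,
                      input_vb=68, output_vb=31, overhead_vb=10.5):
    vsize = num_inputs * input_vb + output_vb + overhead_vb
    return vsize * feerate_sat_per_vb

for feerate in (1.0, 0.1):
    fee = consolidation_fee(200, feerate)
    print(f"200 inputs at {feerate} sat/vB: ~{fee:,.0f} sats")
# Sweeping the same 200 small UTXOs costs roughly a tenth at 0.1 sat/vB.
```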
I don’t recall claiming that getting rid of data embedding would require a hardfork.
I do however think that effectively combating data embedding would require significant cuts to Bitcoin’s scripting capabilities.
Your first sentence is:
A merkle tree IS A ZK PROOF.
Later you write:
…but they totally do prove a statement without revealing all the knowledge.
A zero-knowledge proof reveals no information except that the statement is true. This is not a property that merkle trees have. Many proofs don’t reveal all the knowledge. E.g., ECDSA signatures don’t reveal the private key, and yet they prove that you are in possession of it.
So no, merkle trees are not ZK proofs, and it’s just false to claim that they are.
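To make the distinction concrete, here is a toy merkle inclusion proof (single SHA-256 over arbitrary leaves, not Bitcoin’s exact double-SHA-256 construction over txids): the verifier must be handed the leaf and the sibling hashes, so the proof necessarily reveals information about the tree beyond the mere truth of the statement.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Generic merkle tree (not Bitcoin's exact construction).
    Returns the root and the sibling hashes proving leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        proof.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(root, leaf, index, proof):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"tx_a", b"tx_b", b"tx_c", b"tx_d"]
root, proof = merkle_root_and_proof(leaves, 1)
# The verifier needs the leaf plus the sibling hashes (real data about the
# tree), so the proof is not zero-knowledge.
print(verify(root, b"tx_b", 1, proof))  # True
```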
RBF only works if you have some way to estimate to begin with.
Depends on what you mean by “works”. Bumping blindly to higher feerates if you didn’t get confirmed in a block does work fine, as long as you are not trying to pay the minimal amount or to make it into the very next block.
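A minimal sketch of what “bumping blindly” could look like in practice (illustrative numbers, not any wallet’s actual fee-bumping logic): start low and multiply the feerate each time a block passes without your transaction confirming. BIP 125 requires each replacement to pay a higher absolute fee and to cover its own bandwidth at the incremental relay feerate, which a multiplicative schedule like this easily satisfies.

```python
# Illustrative blind fee-bumping schedule (not a real wallet's logic).
def blind_bump_schedule(start_feerate=0.5, factor=1.5, max_feerate=50.0):
    """Yield the feerate to use for each successive RBF replacement
    if the previous version was not confirmed."""
    feerate = start_feerate
    while feerate <= max_feerate:
        yield round(feerate, 2)
        feerate *= factor

# e.g. broadcast at 0.5 sat/vB, then replace at 0.75, 1.12, 1.69, ... until mined.
print(list(blind_bump_schedule())[:8])
```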
Got a CA DL that’s not a Real ID in April. I also use my passport card for flying; it’s not an issue.
Thanks. Librehans’s argument is a strawman. He says op_return is not better than inscriptions, so it has no upside. But op_return is slightly cheaper than storing data in payment outputs, and much less harmful. This improvement over data being stored in payment outputs is the central reason for the op_return increase.
The linked tweet by Mononaut describes a scheme that is akin to brc-20 and plans to store data in one or several payment outputs when the op_return limit is exceeded.
If you are curious about running commands on the command line, you could check out the Bitcoin Core RPC documentation. It should mostly match Knots’s behavior:
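For example, you can query the relevant policy values over the JSON-RPC interface with getmempoolinfo, which reports minrelaytxfee and mempoolminfee in BTC/kvB. A minimal sketch (the URL, user, and password are placeholders for your own node’s rpcauth settings):

```python
# Minimal JSON-RPC call to a local Bitcoin Core (or Knots) node.
# The URL, user, and password are placeholders; substitute your own settings.
import base64
import json
import urllib.request

def rpc(method, params=None, url="http://127.0.0.1:8332",
        user="rpcuser", password="rpcpassword"):
    payload = json.dumps({"jsonrpc": "1.0", "id": "example",
                          "method": method, "params": params or []}).encode()
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request = urllib.request.Request(url, data=payload, headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    })
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["result"]

# minrelaytxfee and mempoolminfee are reported in BTC/kvB;
# 0.00000100 BTC/kvB corresponds to 0.1 sat/vB.
print(rpc("getmempoolinfo"))
```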
v29.0 still had the old default; the recent point release v29.1 dropped the minimum relay transaction feerate to 0.1 s/vB.
According to Clark Moody’s dashboard, about 6.6% of listening nodes are now running Bitcoin Core 29.1.
No, the configuration option has been available for a very long time, and there had already been some users that generally accepted transactions even with zero fees. The big change was that miners suddenly started confirming transactions below 1 s/vB, which led to the default value of minRelayTxFee being changed in Bitcoin Core 29.1 and Bitcoin Core 30.0.
The average transaction fees per block were about 0.03 BTC in the past week, or about 0.9% of the total block reward.
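That share follows directly from the 3.125 BTC subsidy after the April 2024 halving; a quick check of the arithmetic:

```python
# Quick check of the ~0.9% figure: fees as a share of the full block reward.
subsidy_btc = 3.125        # block subsidy after the April 2024 halving
avg_fees_btc = 0.03        # average fees per block over the past week
fee_share = avg_fees_btc / (subsidy_btc + avg_fees_btc)
print(f"{fee_share:.2%}")  # ~0.95% of the total block reward
```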
I have no idea about Monero, but in Bitcoin, when there are two blocks found at the same height, they tend to contain largely the same transactions. Any transactions that were included in one block but not its competitor can generally be included in the competitor’s successor block, should the competitor become part of the best chain.
The outcome depends on how much of the hashrate is accepting the offending transactions. If both the majority of the network and the majority of the hashrate are rejecting some transactions, then yes, the miners that include those transactions will see a greater delay for their blocks to reach other miners. This delay causes a higher stale rate, so the lenient miners would be more likely to lose blocks to competitors that can propagate blocks quickly.
However, if the majority of the network rejects some type of transaction, but a majority of the hashrate accepts those transactions, I would expect the effect to present differently: miners are generally well-connected, and the miners with looser mempool policies would likely all be connected via the subgraph of nodes with looser mempool policies. A miner publishing a block with offending transactions would see it propagate quickly to the majority of the hashrate. Meanwhile, the minority of miners not accepting the offending transactions would be the only ones affected by the delay: they would be more likely to still find a competing block after the majority of the hashrate had already switched over to extend the chaintip with the offending transactions. I would argue that in this case the delay actually helps the majority miners with the looser policy, because it causes the stricter minority to waste hashrate, similar to a selfish mining attack.
So in short: yes, accepting unpopular transactions hurts your block propagation if both the network and other miners don’t accept them. However, you may also be collecting additional transaction fees that offset the loss to some extent, as the propagation delay we are talking about would cause an effective revenue loss of around 1%.
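For a rough sense of where the “around 1%” figure comes from, here is a toy Poisson model with assumed delays (not a measurement of the real network): the probability that a competitor finds a block during an extra t seconds of propagation delay is about 1 − e^(−t/600), so a handful of seconds of delay costs on the order of a percent of expected block revenue.

```python
import math

def stale_risk(extra_delay_seconds, mean_block_interval=600.0):
    """Toy Poisson model: probability a competitor finds a block during the
    extra propagation delay (ignores which block ultimately wins the race)."""
    return 1 - math.exp(-extra_delay_seconds / mean_block_interval)

block_reward_btc = 3.125 + 0.03   # subsidy plus the ~0.03 BTC average fees
for delay in (2, 6, 12):
    loss = stale_risk(delay) * block_reward_btc
    print(f"{delay:>2}s extra delay: ~{stale_risk(delay):.2%} stale risk, "
          f"~{loss:.3f} BTC expected loss per block")
```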