
There has been a lot of debate about a recent discussion on the mailing list and a pull request on the Bitcoin Core repository. The two main points are whether a mempool policy regarding OP_RETURN outputs should be changed, and whether there should be a configuration option for node operators to set their own limit. There has been some controversy about the background and context of these topics, and people are looking for more information. Please ask short (preferably one-sentence) questions as top comments in this topic. @Murch, and maybe others, will try to answer them in a couple sentences. @Murch and I have collected a few questions that we have seen being asked to start us off, but please add more as you see fit.
109 sats \ 1 reply \ @anon 6h
Won’t removing the op_return cap definitely remove upward fee market pressure by allowing transferors (who would have otherwise needed to use multiple or more complex transactions) to send all their arbitrary data in a single transaction with a single output?
Assuming people who have been using the witness data discount will continue to do so.
Murch for president.
reply
10 sats \ 0 replies \ @Murch fwd 6h
Bitcoin Core currently considers a single OP_RETURN output with 80 bytes data standard. Above 143 bytes, it’s cheaper to use inscriptions.
There may be some use case that needs multiple OP_RETURN outputs and previously used several transactions to get them. If they would have created, say, three transactions instead of a single transaction with three OP_RETURN outputs, they can achieve the same thing with less blockspace, so they pay a bit less, but they also demand less blockspace.
It might open up the avenue for some people creating larger OP_RETURNs that wouldn’t have bothered to do so if it were non-standard even though they could have just submitted it to Mara’s Slipstream before.
Overall, I think the novelty will wear off, and in the long run, valuable payment transactions will price out frivolous data transactions. We cannot really prevent data transactions that are very valuable to their senders either way.
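To make the cost comparison above concrete, here is a small sketch of the fee arithmetic involved. The weight formulas follow segwit accounting (non-witness bytes count 4 weight units, witness bytes 1 WU); the `envelope` overhead value is an assumption for illustration, and the sketch deliberately ignores the commit/reveal transaction overhead of inscriptions, which is what pushes the real break-even point up toward the ~143 bytes Murch mentions.

```python
# Rough fee-cost comparison: OP_RETURN output vs. witness-embedded data.
# Illustrative only; real inscription costs add commit/reveal transaction
# overhead, which raises the break-even point.

def op_return_output_weight(payload: int) -> int:
    """Weight units (WU) for one OP_RETURN output carrying `payload` bytes.

    Output = 8-byte amount + 1-byte script-length varint + script.
    Script = OP_RETURN (1 byte) + push opcode(s) + payload.
    Non-witness bytes count 4 WU each.
    """
    push_overhead = 1 if payload <= 75 else 2  # direct push vs OP_PUSHDATA1
    script_len = 1 + push_overhead + payload
    output_bytes = 8 + 1 + script_len          # assumes script_len < 253
    return 4 * output_bytes

def witness_data_weight(payload: int, envelope: int = 10) -> int:
    """WU for `payload` bytes embedded in a witness (1 WU per byte).

    `envelope` is an assumed fixed overhead for the OP_FALSE OP_IF ...
    OP_ENDIF wrapper; the real figure depends on the protocol used.
    """
    return payload + envelope

for n in (80, 143, 400):
    print(n, op_return_output_weight(n), witness_data_weight(n))
```

Per marginal byte, witness data is always 4x cheaper; the crossover only appears once the fixed overhead of a separate reveal transaction is included.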
Murch for president.
Hah, thanks for your support. I’m not sure I’d be the right guy, I’d feel bad golfing so much.
reply
63 sats \ 1 reply \ @oklar 7h
If you are happy with the consensus and mempool rules from your own viewpoint, is not upgrading Bitcoin Core until you feel it makes sense a valid action to take at present?
reply
Absolutely. Bitcoin Core does not have an auto-update feature, because the users should always consciously choose which rules they want to enforce.
I would recommend running a maintained version though, if your node is part of any wallet infrastructure. Even then, you can run either of the two latest major branches.
reply
167 sats \ 1 reply \ @flat24 9h
I think that Bitcoin should stay as it is, to be just money. If people 👥 want to do other things, they should use other networks; in the end, those are just the other shitcoins.
Is it true that this type of update could affect Bitcoin's decentralization?
reply
10 sats \ 0 replies \ @Murch fwd 6h
Is it true that this type of update could affect Bitcoin's decentralization?
If there are transactions that regularly get mined but do not get relayed, we are incentivizing the users that are invested in such transactions to run less strict mempool policies, and mining pools to build out tooling for direct submissions. The largest mining pools have the most resources to build out such tooling and are most likely to get direct submissions. Our choice is to let the largest mining pools benefit disproportionately, or to make these transaction fees available to any node that wants to build a block template. So in that sense, yes, this change could slightly improve decentralization by mitigating some centralization pressure. Other than that, I don’t see how it would affect decentralization, and especially claims that it would have a negative effect seem to be based in hyperbole.
reply
I read that the actual types of non-standard txs are discussed here: https://b10c.me/observations/09-non-standard-transactions/
It looks like most types are not relevant to OP_RETURN.
Could this PR be the beginning of reducing other mempool restrictions? I'm also asking myself that. With Libre Relay, it seems not that important now.
reply
10 sats \ 0 replies \ @Murch fwd 5h
Other mempool policies target DoS issues and detrimental use, and protect upgrade hooks. The OP_RETURN limits are the last obviously paternalistic, "we’d rather you don’t use Bitcoin for this thing that some people want to use it for" rules. I don’t anticipate that this is just the start of other mempool policies being removed.
reply
Users should be given clear configurable options to decide what's in their mempool, why were these options taken away?
reply
1568 sats \ 4 replies \ @Murch fwd 4 May
Generally, a configuration option should be provided when one can give recommendations on when it should be used and how it should be adjusted. Historically, it looks like the vast majority of Bitcoin Core nodes use the default configuration. When at least about 10% of nodes accept a transaction, it propagates somewhat reliably to all nodes in the network that accept it.
This means that if the default configuration for the OP_RETURN limit were raised or removed altogether, such transactions would soon after the release reliably propagate and, perhaps a little later, also get mined by a larger proportion of the block authors. Node operators setting a lower limit would do so to their own detriment: they would download the transactions after an announcement from their peers, reject adding them to their mempool, then download the same transactions again when a block includes them, incurring increased bandwidth use and extra latency in updating to the latest chain tip.
Some proponents of the increased limit argue that the configuration option should not have been added in the first place, as it is ineffective to locally configure a different limit, and that the configuration option is even less useful after dropping the limit. Other contributors argue that setting a lower limit only harms the node operator, and removing the configuration option takes away control needlessly.
reply
312 sats \ 3 replies \ @hgw39 13h
With all due respect, I don't think you've answered the question. You make valid points to justify the increase in the limit and explain that individual nodes may be harming themselves, that default settings are the norm, and that the transactions will be propagated anyway. You've even pointed out that the ability to configure this setting was debated and that some consider it shouldn't have been added. But the question asks something different, something I am wondering also. As a sovereign node runner in an open permissionless network, why does this PR propose to remove the ability for me to configure the mempool settings on my own node? If I make a decision that harms myself and doesn't make any difference to the propagation of transactions, isn't that for me to decide and bear the consequences of? It's like re-using addresses: commonly accepted as poor for privacy, but there is no rule in the protocol that prevents it. It's up to me to understand and accept the consequences.
reply
100 sats \ 2 replies \ @Murch fwd 6h
With all due respect, I don't think you've answered the question.
You are right, thanks for bringing this up.
[…] why were these options taken away?
At this time, the configuration option has not been removed. There is a pull request that proposes to increase the limit and to remove the configuration option, but several reviewers have argued that the two changes should be separated or that only the limit should be changed without dropping the configuration option. I would consider this part of the pull request to be still in question.
But the question asks something different, something I am wondering also. As a sovereign node runner in an open permissionless network, why does this PR propose to remove the ability for me to configure the mempool settings on my own node?
Originally, I was in favor of dropping the configuration option, because I consider it harmful and cannot come up with a situation in which I would recommend its use to a node runner, but after talking about this with several people (including one that consulted a psychologist!), I now support leaving the configuration option in the upcoming release, although I would still prefer that it be deprecated and removed eventually.
reply
0 sats \ 1 reply \ @hgw39 2h
Thanks for clarifying this, Murch. You just saved me a visit to a psychologist!
reply
Hahah, thanks for maintaining your humor through this drama.
@Murch has provided an excellent answer, but I would also like to mention that there were also some technical arguments in favour of retaining the option. It could allow an ideological miner with a stratumv2 or datum template provider, or ideological mining pool to exclude these OP_RETURN transactions from their block template. This has sparked a conversation around potentially separating transaction relay and mining policy, or even retaining the option for this reason. Some might argue even further that nodes adopting a stricter OP_RETURN size limit in their policy could be more likely to relay lower fee transactions to these miners or pools. However, I don't think this is a good idea for all the reasons Murch has laid out. The issue was opened here: https://github.com/bitcoin/bitcoin/issues/32401.
A similar option also exists for denying the usage of bare multisig (p2ms) outputs (-permitbaremultisig). These are a type of output intended for multisig, but widely used for data embedding through the Counterparty protocol and Stamps. It is currently default off. Proposals to make it default on, or to remove it, have led to similar controversies in the past. My expectation is this will be revisited again too, depending on the outcome of the current controversy.
reply
252 sats \ 7 replies \ @DarthCoin 14h
  1. What was the main reason /concern to add this PR?
  2. If the concern was that the blockchain size is too big for IBD, can we let things "AS IS" but somehow "archive" the past blocks, let's say pre-2017-fork era? And those who still want to use UTXOs from that era, could just do a simple migration or something ?
  3. What will happen if we do nothing?
reply
  1. The main reason for the PR (in my opinion) is that it achieves harm reduction. Similar to how local governments might provide a drug addict clean needles, giving data embedders the option to utilize OP_RETURN is better than having them stuff witnesses. This is because (1) OP_RETURN is provably unspendable and does not bloat the UTXO set, and (2) it does not leave dust outputs behind like witness stuffing (i.e. inscriptions). Secondly, removing the limit reduces the need for nodes to pull transactions at block-sync time, because the next block will be made up of transactions already in your local mempool. Finally, because mining pools don't care about standardness, relay should not either. I think it was gmax that said what is relayed should closely match what is mined.
  2. IBD solutions already exist today. I forgot the name of one of them, but it essentially has UTXO set hints that allow you to parallelize the process of sync. Lack of parallelization is the main reason why IBD is slow today; if you set up a node from scratch you'll see that CPU/networking are not utilized to their full potential. Libbitcoin is another example of how parallelizing can make it such that your network speed is the bottleneck for sync.
  3. If we do nothing, then block updates are slightly worse for everyone, as mentioned before. Additionally, if your code relies on mempool estimates for fees, it will have a less accurate prediction of what to use for a fee rate. It would be similar to someone running the ordisrespector patch: even if the transactions they don't like are not in their own mempool, they will need to store them anyway when a block is mined that contains them. Finally, if we do nothing, there is a very real likelihood that a third party may resort to VERY BAD data storage methods like Bitcoin Stamps, which utilize fake pubkeys and stuff data in them. These are much worse because they permanently bloat the UTXO set with each transaction created.
reply
21 sats \ 2 replies \ @Murch fwd 6h
Good answer, a couple remarks:
  1. Embedding data in witnesses does not inherently pollute the UTXO set, but it is often (mis-)used in a way that does. Even worse is data embedded in fake pubkeys or fake pubkey hashes as that data needs to be stored in the UTXO set.
  2. You are thinking of SwiftSync.
reply
21 sats \ 1 reply \ @DarthCoin 1h
Isn't that also what Lopp explains in this article with UTXO snapshots? https://blog.lopp.net/bitcoin-node-sync-with-utxo-snapshots/
reply
25 sats \ 0 replies \ @Murch fwd 1h
That article is about "AssumeUTXO", SwiftSync is a new proposal from this year.
reply
Follow up question:
Would phasing out OP_Return limits stop a third party from resorting to VERY BAD data storage methods?
In other words, if there is nothing stopping them from using, for example, fake pubkeys now, what would stop them from using them if we relaxed some OP_Return filters?
reply
We can’t stop anyone from using fake pubkeys, but using OP_RETURN would be cheaper and less malicious, so if they want to be perceived as good actors they would switch. If they continue to use fake pubkeys, even while having other options, they reveal themselves as obviously malicious.
reply
That means if they are malicious we can't do anything at the protocol level, regardless of whether OP_return is there or not, correct?
This is something that is confusing to me, because if that is the case, then no matter what happens with this debate, Bitcoin is just a sitting duck waiting to be defenestrated by someone with the right backing and know-how.
And if there is something we can do against a bad actor, then why not do it now, instead of rolling the red carpet and hoping they will behave themselves?
By the way, the long form replies are very helpful. Thank you for taking the time.
reply
How would someone get around the standardness policy currently for OP_RETURN size?
reply
Some Bitcoin node implementations such as Libre Relay have less strict mempool policies and relay transactions that would be considered non-standard by Bitcoin Core. Some mining pools use similar mempool policies and additionally accept direct submissions out-of-band.
As transactions with multiple OP_RETURN outputs or larger OP_RETURN outputs are permitted by the consensus rules, blocks that include these non-standard transactions are accepted by all nodes.
reply
Also important to mention: Libre Relay automatically peers with other Libre Relay nodes. So even with only a small minority of Libre Relay nodes, transactions still propagate reliably.
reply
So in other words... attempts to 'filter' transactions successfully by running a client with a 'stricter' mempool policy than Core are pretty much hopeless?
reply
It takes only about 10% of nodes to somewhat reliably propagate a transaction. Even fewer are enough if they preferentially peer like Libre Relay does (TIL!). So, yes, it’s not effective.
reply
10 sats \ 1 reply \ @petertodd 22h
You learned that today? Sheesh.
My first full-rbf fork of Bitcoin Core with preferential peering was released in 2014.
Censorship, stifling information, is always more difficult than spreading it.
reply
A similar PR was proposed by Peter Todd 2 years ago, why was it rejected then? What has changed since then, why would this get approved now?
reply
For the record, this pull-req wasn't my idea. I was asked to open it by an active Core dev because entities like Citrea are using unprunable outputs instead of OP_Return, due to the size limits. And yes, that's the thing that has changed since.
reply
With all due respect, and I mean this totally neutrally...
Is there any way to get Citrea to just stop? Like hey, 'cut it out'?
Some of the criticisms I've heard are specifically that one company wants to make use of larger op_return data fields... so why change mempool policy for one company specifically? What if another company in the future wants to do something else, even if in the spirit of harm reduction, should we change mempool policy just for a company to play defi?
Another question: how many unprunable outputs are we talking about from Citrea? Hundreds? Thousands? Tens of thousands? Is this something really core to their business model? Is there any other way of them doing what they want to do, without requiring such a change? How polluting would they (or the others to follow them) really be for the network?
Finally, since LibreRelay works with only a limited number of nodes in the network available... why don't they just use LibreRelay? That way they could have larger op_returns without necessitating unprunable outputs. THANK YOU
reply
112 sats \ 1 reply \ @petertodd 4 May
Is there any way to get Citrea to just stop? Like hey, 'cut it out'?
In this particular case, definitely not. Even with the "prove your bytes aren't arbitrary data" soft fork proposal they're doing something valuable enough to make grinding bytes feasible.
why don't they just use LibreRelay?
Because this type of Citrea transaction is time sensitive. Only two major mining pools mine oversized OP_Returns right now. That means there's a decent chance this type of Citrea transaction – a tx that responds to fraud – would fail to be mined.
reply
21 sats \ 0 replies \ @tlindi 9h
they're doing something valuable enough to make grinding bytes feasible.
To whom it is so valuable, that we noders are forced to store their valuable data bytes forever?
reply
deleted by author
Why wouldn't the core dev open this themself?
reply
26 sats \ 0 replies \ @BITC0IN 23h
which core dev asked you to open this PR ?
reply
62 sats \ 2 replies \ @k00b 4h
Calling it a standardness rule seems... uncomfortable. The term seems too lofty, in that a standard is something generally reached by consensus. I think it should be referred to as a convention, which denotes both that it is commonly adhered to and that deviation is not a significant issue.
Would there be any point in changing the naming from "standardness rules" to "conventions"? "Standardness rules" seems easier to conflate with "consensus rules" which is the most common source of confusion I've seen.
reply
110 sats \ 1 reply \ @Murch fwd 4h
That’s fair. Instead of referring to transactions that fail Bitcoin Core’s mempool policy as non-standard, we should really be talking about "transactions that are [not] accepted by Bitcoin Core’s mempool policy defaults". It’s a bit of a mouthful, though.
reply
42 sats \ 0 replies \ @k00b 4h
True. Explicitness tends to limit outrage memeing though.
"core is changing its tx conventions" vs "core is changing its tx standards"
idk maybe the results are similar.
reply
Won't spammers abuse large OP_RETURNs to bloat the blockchain and make IBD take longer?
reply
Unlikely. Inscriptions / witness stuffing are more economical past about the 100-byte mark, since witness data is discounted 4x while OP_RETURN data is not.
reply
OP_RETURN data appears in output scripts. Output scripts are not subject to segwit’s witness discount. Per the block weight limit of segwit, we can have up to 4 MB of witness data, but only up to 1 MB of any other data in blocks. Output scripts are part of the other data.
Adding more output script data to blocks would result in smaller blocks. As OP_RETURN outputs require little computation to validate, OP_RETURN data would not slow down IBD.
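The weight accounting behind "4 MB of witness data, but only 1 MB of anything else" can be sketched in a few lines. This is the standard BIP 141 formula, not anything specific to this PR: each non-witness byte costs 4 weight units, each witness byte 1 WU, and a block may hold at most 4,000,000 WU.

```python
# Segwit block weight accounting, per BIP 141:
# weight = 4 * non_witness_bytes + 1 * witness_bytes, capped at 4,000,000 WU.

MAX_BLOCK_WEIGHT = 4_000_000

def block_weight(non_witness_bytes: int, witness_bytes: int) -> int:
    return 4 * non_witness_bytes + witness_bytes

# The two theoretical extremes under the cap (real blocks always contain
# some non-witness bytes, so the witness-only bound is never reached):
print(block_weight(1_000_000, 0))  # ~1 MB of non-witness data fills a block
print(block_weight(0, 4_000_000))  # ~4 MB of witness data fills a block
```

Since OP_RETURN data lives in output scripts (non-witness bytes), every added OP_RETURN byte displaces four bytes of potential witness data, which is why more OP_RETURN usage yields smaller blocks overall.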
reply
What does "standardness" mean in reference to OP_RETURNs?
reply
Bitcoin Core does not accept "non-standard transactions" into its mempool. Bitcoin Core 29.0 and earlier consider transactions with more than one OP_RETURN output or an OP_RETURN output of more than 80 bytes of data payload non-standard.
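For illustration, the OP_RETURN part of that standardness check could be sketched roughly like this. The function names are hypothetical and the logic is heavily simplified (the real check lives in Bitcoin Core's C++ policy code, and the `-datacarriersize` default of 83 bytes counts the whole script, which corresponds to an 80-byte payload); it only mirrors the two limits described above.

```python
# Simplified sketch of Bitcoin Core 29.0's default OP_RETURN policy:
# at most one OP_RETURN output, with at most 80 bytes of data payload.
# Helper names are hypothetical, not actual Bitcoin Core identifiers.

OP_RETURN = 0x6a
MAX_DATA_PAYLOAD = 80  # payload portion of the -datacarriersize default

def op_return_payload_len(script: bytes):
    """Return payload length if `script` is an OP_RETURN output, else None.
    Assumes a single direct push or OP_PUSHDATA1 after OP_RETURN."""
    if not script or script[0] != OP_RETURN:
        return None
    body = script[1:]
    if not body:
        return 0
    if body[0] <= 75:                       # direct push opcode
        return body[0]
    if body[0] == 0x4c and len(body) >= 2:  # OP_PUSHDATA1
        return body[1]
    return len(body)                        # fallback approximation

def is_standard_data_carrier(output_scripts: list) -> bool:
    payloads = [p for p in map(op_return_payload_len, output_scripts)
                if p is not None]
    if len(payloads) > 1:                   # at most one OP_RETURN output
        return False
    return all(p <= MAX_DATA_PAYLOAD for p in payloads)
```

A transaction with one 80-byte OP_RETURN passes; one with an 81-byte payload, or two OP_RETURN outputs of any size, fails.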
reply
Will more than 1 OP_RETURN per transaction be possible if this PR gets merged?
reply
Neither the count nor the size of OP_RETURN outputs is limited at the consensus level, so all nodes accept such transactions when they appear in blocks.
Currently, Bitcoin Core does not accept transactions with more than one OP_RETURN output to its mempool.
Peter Todd’s pull request proposes to allow transactions with multiple OP_RETURN outputs into the mempool.
reply
What are the current OP_RETURN limits and what restrictions are being lifted?
reply
Bitcoin Core 29.0 and earlier do not accept transactions with more than one OP_RETURN output or with an OP_RETURN output of more than 80 bytes of data payload to their mempools. Neither the count nor the size is limited at the consensus level. Peter Todd’s pull request proposes to drop both the limit on the count of OP_RETURN outputs and the limit on the data payload.
reply
Shouldn't we be fighting spam, why are we making policies less strict, shouldn't we be making them more strict?
reply
While most Bitcoin Core contributors do not seem particularly excited about ordinals, inscriptions, runes, or similar projects, most of them appear to agree that the ability to embed data in the Bitcoin blockchain is a product of other characteristics of the Bitcoin network such as censorship resistance and a flexible scripting system.
Fighting "spam" transactions at the mempool policy level is ineffective, especially when such transactions have spent over $280M in transaction fees in the past two years which translates to plenty of financial incentive for mining pools to accept such (consensus-valid!) transactions out-of-band to pad their revenue.
The only way to properly make inroads on curbing spam would be to soft fork out the spam mechanisms. However, even going back to a small set of whitelisted output script templates would not prevent data payloads in fake pubkeys or fake public key hashes, in signatures via grinding, or in other transaction fields that can hold arbitrary data.
When inscriptions were discussed as a concern for the Bitcoin network at a Bitcoin Core contributor meeting, the prevalent position was that making this quixotic fight the main priority of the project was not the best use of the project’s limited resources.
reply
Are current relay and mempool policies effective for filtering out spam transactions?
reply
The default mempool policies in Bitcoin Core are effective at preventing OP_RETURN outputs with more than 80 bytes of data. They are NOT effective at preventing OP_FALSE OP_IF ... OP_ENDIF (inscriptions) and fake public keys.
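The "OP_FALSE OP_IF ... OP_ENDIF" pattern mentioned here is the inscription "envelope": because the OP_IF branch is never executed (the condition is false), arbitrary pushes inside it are ignored by script validation while the data rides along in the witness. A minimal sketch of constructing such an envelope, assuming only single-byte and OP_PUSHDATA1 pushes (the real ord protocol additionally tags the envelope and includes content-type fields):

```python
# Building a minimal inscription-style "envelope": an unexecuted
# OP_FALSE OP_IF ... OP_ENDIF branch whose pushes carry arbitrary data.
# Simplified; the real ord protocol adds an "ord" marker and
# content-type fields inside the envelope.

OP_FALSE, OP_IF, OP_ENDIF, OP_PUSHDATA1 = 0x00, 0x63, 0x68, 0x4c

def push(data: bytes) -> bytes:
    if len(data) <= 75:
        return bytes([len(data)]) + data          # direct push
    if len(data) <= 255:
        return bytes([OP_PUSHDATA1, len(data)]) + data
    raise ValueError("use OP_PUSHDATA2 for larger pushes")

def envelope(payload: bytes) -> bytes:
    return bytes([OP_FALSE, OP_IF]) + push(payload) + bytes([OP_ENDIF])

# 250 bytes of data, far past the 80-byte OP_RETURN limit, yet standard
# because it sits in a tapscript revealed in the witness:
script = envelope(b"hello" * 50)
print(len(script))
```

This is why the size filter on OP_RETURN outputs does nothing to stop larger payloads: the envelope lands in the witness, where no comparable default size limit applies.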
reply
Transactions have been shown to somewhat reliably propagate to all nodes that accept them if at least 10% of the node population accepts them into their mempools. Even a strong majority of up to 90% of all nodes filtering would have little to no impact on the propagation of consensus-valid non-standard transactions relayed by the other 10% of the nodes.
Additionally, Libre Relay nodes, an implementation with a less strict mempool policy than Bitcoin Core, preferentially peer with other Libre Relay nodes, which means that they effectively relay transactions among each other even with much smaller network penetration. Some mining pools also accept direct submissions through web interfaces or APIs that do not rely at all on network propagation.
reply
What will be the worst case scenario if users still could set their own limits for OP_RETURN?
reply
Since transactions with larger OP_RETURN outputs or multiple OP_RETURN outputs are consensus valid, they will appear in blocks as long as they find their way to mining pools and mining pools choose to include them. Any nodes that don’t allow such transactions in their mempool will download them twice: once when the transaction is first offered to them by a peer, where they download it, evaluate it, and drop it due to their mempool policy, and then a second time when they receive the block announcement that includes the transaction.
Whenever nodes are missing transactions from a compact block announcement, they have to request the missing transactions which increases the latency until they can relay the block. The relay is delayed more when large transactions are missing. Nodes that run a mempool policy that is stricter than what regularly appears in blocks therefore use more bandwidth and relay blocks more slowly.
Slower block propagation benefits larger mining pools by increasing the number of stale blocks and delaying other miners in switching to the new chaintip. Bigger miners disproportionately win block races, and the block author does not suffer from the propagation delay of a block it found itself.
reply
Whenever nodes are missing transactions from a compact block announcement, they have to request the missing transactions which increases the latency until they can relay the block. The relay is delayed more when large transactions are missing. Nodes that run a mempool policy that is stricter than what regularly appears in blocks therefore use more bandwidth and relay blocks more slowly.
For what it's worth, to people who are far more influential... There is a growing 'army' of folks who want to run more restrictive mempool policies to 'kill the spam'. Anti-ordinals, anti-memecoins, anti-BRCs etc... This is preached nonstop over and over, just do 'x' and you can stop the spam. "It's your memepool do X" etc etc.
What some of these "educators" don't explain however... is what you just said. About increasing latency, delay, bandwidth, speed at which blocks are relayed, miner centralization etc.
People are free to run whatever they want... but some of the "influencers" in the space only tell half the story or don't explain some of the downsides. It's like 'do this it's good' but without explaining why 'that' may not be a great idea. Thank you guys
reply
Right! However, this is not a static situation, as this is fully solvable with a relatively small amount of code: if people are truly serious about filtering, they could simply add a "purgatory" inside the mempool (or outside it, that doesn't matter) where all the crap lives that one does not want to relay or mine, but that is still valid for (a) new txn inputs (where the new txns automatically end up in purgatory too unless the offending parent gets mined) and (b) compact block reconstruction.
reply
104 sats \ 6 replies \ @028559d218 7h
wait, what are you talking about specifically? Is this a mempool policy? or something else?
my understanding is that a 'complete' solution does exist, the 'purifier' solution but of course it is a hard-fork.
reply
I mean: if Knots devs want to not lose performance on block validation due to the filters, this is solvable by not purging policy-offending txs completely, but - in the simplest form - flagging them as "non-standard" and checking for that flag, before relaying or creating a block template.
This would eliminate the downside of having to re-request all the txs again when you get a compact block or a package with a child tx in there and largely remove the filter-specific part of any block withholding issue too.
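The "flag, don't purge" idea could look roughly like this. This is a hypothetical structure for illustration only, not actual Bitcoin Core or Knots code: consensus-valid but policy-offending transactions stay in the pool with a flag, relay and block-template building skip them, and compact block reconstruction sees everything.

```python
# Hypothetical sketch of the "flag, don't purge" mempool idea:
# keep policy-offending transactions for compact-block reconstruction,
# but skip them when relaying or building block templates.
# Not actual Bitcoin Core/Knots code.

from dataclasses import dataclass, field

@dataclass
class MempoolEntry:
    txid: str
    fee_rate: float
    nonstandard: bool = False  # set when a local filter dislikes the tx

@dataclass
class Mempool:
    entries: dict = field(default_factory=dict)

    def accept(self, tx: MempoolEntry) -> None:
        # Accept everything consensus-valid; merely flag filtered txs.
        self.entries[tx.txid] = tx

    def relay_set(self) -> list:
        # Only unflagged transactions are announced to peers.
        return [t.txid for t in self.entries.values() if not t.nonstandard]

    def missing_for_block(self, block_txids: list) -> list:
        # Compact-block reconstruction may use flagged txs too,
        # so only genuinely unseen txs must be re-downloaded.
        return [txid for txid in block_txids if txid not in self.entries]

mp = Mempool()
mp.accept(MempoolEntry("aa", 2.0))
mp.accept(MempoolEntry("bb", 1.0, nonstandard=True))
print(mp.relay_set())                        # only "aa" is relayed
print(mp.missing_for_block(["aa", "bb", "cc"]))  # only "cc" must be fetched
```

The node still refuses to relay or mine what it dislikes, but it no longer pays the double-download and block-relay latency costs described earlier in the thread.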
good point
reply
0 sats \ 0 replies \ @dwami 15h
good point 2
reply
If relaxing op_return standardness limit seeks to make 'spam' prunable, then what are proponents of this change assuming about the long-term feasibility of running a 'full' (unpruned) bitcoin node?
reply
221 sats \ 4 replies \ @Murch fwd 6h
If the blocks were generally not full, and this change would cause the blocks to be full again, I would see how there would be an argument that the blockchain is growing faster than necessary, but blocks have been almost consistently full for the past 27 months. I’m honestly not sure why opponents of this change think that the proposed change would make nodes more expensive to run. If a lot of OP_RETURNs were added to the blockchain, it would make the resulting blocks easier to validate and reduce the overall blocksize compared to blocks with a higher proportion of witness data. It seems to me that there are some misconceptions here.
reply
Thank you for your response.
I’m honestly not sure why opponents of this change think that the proposed change would make nodes more expensive to run.
Sorry, maybe you misunderstood my question, as my assumption has been that the proposed pr will do the opposite.
To perhaps rephrase (respond if you wish): if demand for arbitrary data-anchoring in Bitcoin increases in the near- to long-term, and it seems it might, is this pr a preemptive attempt at defending would-be node runners by making this data prunable?
And if that's the case what would the implications be if average plebs started being unable to run full nodes?
reply
200 sats \ 2 replies \ @Murch fwd 2h
Sorry, I mixed in a bit of what I have been seeing in other discussions here. There seem to be many people claiming that relaxing the OP_RETURN limits would make it more expensive to run nodes. I am trying to push back on that argument, as I don’t consider it tenable.
AFAIA, some people at Ocean feel that the blocksize generally should be reduced to 300 kB because it is already too expensive to IBD.
To perhaps rephrase (respond if you wish): if demand for arbitrary data-anchoring in Bitcoin increases in the near- to long-term, and it seems it might, is this pr a preemptive attempt at defending would-be node runners by making this data prunable?
Yeah, I think that would be a fair characterization of an argument in this debate. There are a bunch of 2nd-layer protocols, sidechains, and rollups, etc. in development that aim to use the Bitcoin blockchain for anchoring, proofs, or data storage in some other way. Dropping the OP_RETURN limit would ensure that they can at least use OP_RETURNs if they must write data to the blockchain instead of writing in more harmful ways like fake pubkeys or fake pubkey hashes. OP_RETURNs would need to be part of a complete copy of the blockchain, but would not live in the UTXO set which needs to be kept highly available. Fake pubkeys/pubkey hashes live in both the UTXO set and the blockchain.
And if that's the case what would the implications be if average plebs started being unable to run full nodes?
A pruned node keeps the last two days of blockchain and the entire UTXO set. A node with a complete copy of the blockchain would retain the data in the blockchain and will also keep the UTXO set of course. The UTXO set is also stored on disk and usually only recent outputs are in the cache in the memory. A bloated UTXO set especially affects IBD and generally slows down loading UTXOs from disk.
reply
100 sats \ 1 reply \ @unschooled 2h
Ok so as I'm understanding you: given increasing interest in data anchoring on Bitcoin, with this merge, block size still doesn't exceed 4 MB, meaning there's no new burden in storage requirements on noderunners. The nature of using op_return instead of other arbitrary data means this merge will mitigate the problem of lengthy network synchronization, as well as improve mempool uniformity and therefore speed up block relaying (which benefits miners) 🫡

I very much appreciate all the back-and-forths you've afforded me here on SN as I try to wrap my mind around all this. Not to mention all your other contributions to Bitcoin.
reply
100 sats \ 0 replies \ @Murch fwd 2h
Yep, that’s my understanding as well.
34 sats \ 1 reply \ @Bitcoiner1 13h
What do you support? Unlimited Core or Knots further restricting OP_RETURN?
reply
10 sats \ 0 replies \ @Murch fwd 6h
I think it is a reasonable idea to drop the OP_RETURN limit at this time. I left the following concept ACK:
Concept ACK on removing the limit on OP_RETURN size and count.
The limit on OP_RETURNs size is ineffectual, causes increased node traffic, and drives mining centralization. While I have no personal desire for more large OP_RETURNs on the blockchain, it is preferable to have data be written into OP_RETURNs than the UTXO set, and it is preferable to see the transactions I’m competing with in my mempool, than to incentivize build-out of mechanisms for direct submission to the largest mining pools.
Regarding Knots, I would not let my money be touched by a node that is maintained by a single developer as a side project via picking open (unfinished) Bitcoin Core pull requests to a 1000+ commit patch set.
reply
Will Taproot wizards and other spam companies and projects start using OP_RETURN to put jpegs on the blockchain?
reply
I can't speak for the "spam companies and projects", but as far as we're concerned:
I'm not going to emphatically say "no" because who knows, maybe in the future we'll have some application where it makes sense but we currently do not. We have released and sold two inscription collections (that both put data in witness data). In both of those cases, it's not clear to me that having the data in OP_RETURNs would have been better. The main reason we put the data in witness data was for compatibility with the ordinals protocol. Our collectors want to be able to manage, store, and trade our collection in wallets they're already using -- and there are several ordinals-specific wallets out there, and any wallet with coin control could technically be used. You could add opreturns to ordinals, but it would be weird. imo, having inscription data in the input instead of the output makes the protocol cleaner and easier to implement.
So for our collections, compatibility is a bigger concern than something like the witness discount. High-value inscriptions are actually reasonably price-insensitive when we're talking about a 4x cost premium. We did a LOT of work on Quantum Cats to bring the cost down, but that was by a factor of about 100, not 4.
Outside of digital collectables, we've done a lot of work with OP_RETURNs holding data commitments for multi-transaction protocols using OP_CAT or other covenants. In those cases, usually we're putting one or two merkle roots and maybe a little metadata tag in an opreturn. So we actually haven't felt any real pain with the 80-byte limit. We actually run into consensus limits more often than we run into the OP_RETURN limit.
In general, if we did want to stick a bunch of data into an opreturn it would probably be something where:
  • we are not inspecting it in a future transaction (a la the OP_CAT state caboose trick)
  • it's something where either it's an op_return-specific protocol or something other than ordinals (maybe something we cook up, maybe something our customers want)
  • it's something where we really care about malleability (opreturns get covered by SIGHASH_ALL, witness data does not)
Nothing currently fits that bill, so I don't think so, but you never know. Maybe some of the people mad about this have a fun idea they can share.
reply
i don’t know… when it comes to our so-called “spam” work, it’s supposed to have cultural/sentimental value to some people
for example, for some people, a wizard jpeg is a way to express their love for bitcoin
it’s hard to predict if under some circumstance OP_RETURN will somehow increase the cultural/sentimental value of some jpeg, but my intuition is it probably will not?
then again, if for some reason it does, you can expect a lot of people will do that. from that standpoint, for those who for some reason want to avoid “spam”, it would make more sense not to remove OP_RETURN limits, to prevent the possibility that collectors ascribe some cultural/sentimental value to that in the future
reply
To add a data payload via OP_RETURN, you have to add an additional output to a transaction. This requires 11 bytes of transaction data overhead. The data is not subject to segwit’s witness discount.
Vojtěch Strnad shows that inscriptions have at least 118.75 vB overhead. However, the data payload of inscriptions is subject to segwit’s witness discount.
According to Strnad’s calculation, data payloads larger than 143 bytes are cheaper with inscriptions. It would therefore only make sense for small payloads to move to OP_RETURN, and it seems economically unattractive to prefer OP_RETURN outputs over inscriptions to embed images in the blockchain.
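A quick sketch of that break-even, assuming the overhead figures cited above (11 vB of output overhead for OP_RETURN, 118.75 vB for the inscription envelope):

```python
# Output-script bytes carry full weight (1 vB each), while witness
# bytes get the 75% discount (0.25 vB each). Overhead figures are the
# ones cited in this thread.

def op_return_vsize(payload_bytes: float) -> float:
    """Data in an output script: no witness discount."""
    return 11 + payload_bytes

def inscription_vsize(payload_bytes: float) -> float:
    """Data in the witness stack: discounted to a quarter."""
    return 118.75 + payload_bytes / 4

# The curves cross where 11 + d = 118.75 + d/4, i.e. d ≈ 143.67 bytes.
break_even = (118.75 - 11) * 4 / 3
```

At 143 bytes the OP_RETURN output is still marginally cheaper (154 vB vs. 154.5 vB); from 144 bytes on, the inscription wins, matching the figure above.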
reply
157 sats \ 1 reply \ @DarthCoin 4 May
If OP_RETURN still cannot stop all the garbage, why is so important to remove it? Does it affect future development / improvements for LN ?
reply
There are currently three ways of embedding data in the Bitcoin blockchain in use.
  1. Inscriptions are written to an unexecuted branch of a taproot leaf script. This data appears in the witness stack of inputs. Witness data is only validated once when a node first validates a transaction’s scripts and is stored as part of the full blockchain record. When a pruning node performs IBD using the assumevalid option, witness data doesn’t have to be downloaded or evaluated at all. Witness data is discounted by 75% compared to other transaction data, but is malleable until a transaction is confirmed.
  2. OP_RETURN data appears in the output script. OP_RETURN outputs are only validated once when the node first sees the transaction. The data in output scripts is not discounted and not malleable, as signatures on the inputs generally commit to the exact output scripts. As OP_RETURN outputs are unspendable, they are stored as part of the transaction in the blockchain data, but do not get added to the UTXO set.
  3. Some schemes (e.g., Stamps) embed data in payment output scripts using either fake public keys or fake hashes. The data for output scripts is not discounted. As these output scripts are not clearly unspendable, these output scripts must be retained in the UTXO set.
Citrea recently announced that they plan to write data to payment outputs because they need non-malleable transaction data, confirmed in a timely manner, in excess of the current OP_RETURN standardness limit. As the use of payment output scripts cannot be easily prevented, encouraging them to use OP_RETURN outputs instead would be harm reduction, as at least the data would not be written to the UTXO set that every node has to retain forever.
The proponents in the mailing list thread and pull request further argue that it is more expensive to write large data payloads to OP_RETURN outputs than inscriptions and therefore use cases that already use inscriptions are not incentivized to use OP_RETURN. The break-even point for that appears to be 143 bytes above which payloads are more expensive to be embedded into OP_RETURN outputs.
I am not aware of any effects on LN development.
reply
152 sats \ 5 replies \ @javier 4 May
Is there any estimation on how much would this affect fees for the average user, considering external projects (like Citrea) using it? Any possibility that this could saturate the mempool and boost fees beyond reasonable?
reply
Citrea is ramping up to launch a rollup on Bitcoin that would use Bitcoin as the data availability layer. It does not require a consensus change to deploy, i.e., they don’t need to ask for permission. I did not study their protocol in detail previously. Skimming their technical specs, they mention that one type of proof is committed every ten minutes (i.e. 144 txs per day), but I also read that they store all necessary data on Bitcoin, therefore it is not clear to me whether there are more transactions that would write to the Bitcoin blockchain.
If it is just those 144 txs per day, their scheme would be unlikely to significantly impact the fees for other users, but I’ve previously seen an overview of dozens of other similar projects in the making (probably with various degrees of success), so there may be multiple other schemes in the future for which we’d prefer that, if they must write data, they only write to the blockchain and not to both the UTXO set and the blockchain.
reply
Ok, so just another speculative question, answer if you want: so the final objective of all of this is to enable proper Citrea operation, which will imply bringing Defi and nUSD (similar to Tether) directly to the Bitcoin blockchain without the need of a sidechain?
reply
No, that is a misunderstanding.
From what I can see per a cursory internet search, their testnet is about a year old and has over 40,000 txs per day. It looks like Citrea is going to launch regardless.
The only aspect of this that ties into the OP_RETURN debate seems to be that the change could convince them to write their data only to the blockchain, whereas they are currently planning to write data to both the blockchain and the UTXO set.
reply
40,000 a day is 1666 an hour or 277 a block. With 3000 other transactions a block that's... >9% of each block?
reply
It's a Rollup, so I'd hope they're not actually storing all data on Bitcoin?
What makes a UTXO unprunable? Which projects are making unprunable UTXOs?
reply
The UTXO set represents all spendable pieces of Bitcoin. A full node must have all UTXOs in order to be able to validate transactions. If some nodes were to discard some UTXOs and these UTXOs later got spent in a transaction, the network would fork as some nodes follow the blockchain in which the coins get spent and others would start forming an alternative chaintip.
There have been several cases of "blockchain graffiti" in which messages were left in pubkey hashes, and Counterparty/Stamps deliberately used 1-of-3 bare multisig outputs to store data in two of the public keys (leaving the UTXO spendable per the third key). Recently, Citrea announced that they would store some data in pubkey hashes to embed non-malleable data in transactions with timely confirmation. Part of the motivation for dropping the OP_RETURN limit is that some consider it harm reduction to allow OP_RETURN payloads of 100 bytes instead of Citrea forging ahead with writing permanently to the UTXO set.
reply
What makes a UTXO unprunable?
Any non-zero unspent transaction output that is not provably unspendable.
An output script starting with OP_RETURN is provably unspendable, so it can be pruned.
But an output with a fake public key or fake script hash (such as produced by STAMPS) is unspendable, but that's not always obvious. Thus these outputs must be retained in the UTXO set. These are the most harmful ones.
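As a sketch, the "provably unspendable" test roughly mirrors Bitcoin Core's `CScript::IsUnspendable()` — a simplified Python rendition (the constant and logic follow the Bitcoin Core source; this is an illustration, not the actual implementation):

```python
# An output is provably unspendable -- and thus prunable from the UTXO
# set -- if its script starts with OP_RETURN or exceeds the maximum
# script size. Anything else must be retained, spendable or not.

OP_RETURN = 0x6A
MAX_SCRIPT_SIZE = 10_000

def is_provably_unspendable(script: bytes) -> bool:
    return (len(script) > 0 and script[0] == OP_RETURN) \
        or len(script) > MAX_SCRIPT_SIZE

# An OP_RETURN output: push opcode (length 4) followed by the payload.
op_return_script = bytes([OP_RETURN, 4]) + b"data"

# A Stamps-style fake-key output looks like any other bare P2PK script
# (33-byte "pubkey" push + OP_CHECKSIG), so this check cannot flag it.
fake_p2pk = bytes([33]) + bytes(33) + bytes([0xAC])
```

The fake-key output is indistinguishable from a genuine payment script, which is exactly why it stays in the UTXO set forever.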
reply
Mostly right, but…
Any non-zero unspent transaction output that is not provably unspendable.
…UTXOs are allowed to have an amount of 0 per consensus rules; the dust limit is also just a mempool policy.
reply
Are you saying that zero amount UTXOs are retained in the UTXO set?
reply
Yes, it is consensus-valid to create and spend UTXOs with an amount of 0. Therefore, any full node must retain them to not potentially be forked off the network.
reply
Culture is what protects Bitcoin from external forces, shouldn't non-technical arguments be valid when considering these types of changes?
reply
100 sats \ 0 replies \ @Murch fwd 8h
Yes, cultural aspects can weigh into this. Funnily enough, most Bitcoin Core contributors would prefer if Bitcoin were only used for monetary transactions.
I think where the main disconnect lies is that there are more goals beyond just preventing data transactions. Mining decentralization, censorship resistance, and having a flexible scripting system are also important aspects of the Bitcoin system, and many Bitcoin Core contributors expect that the arms race of trying to prevent data transactions will take too much time from other important work.
Finally, monetary transactions alone have not been providing enough blockspace demand to keep blocks full, but have still been too expensive for many uses at the same time. It seems to me that overall we still need to find more scalability improvements, especially for small payments. I don’t think the network would thrive in the long run if we were to shrink-freeze consensus rules on the status quo just to combat monkey pictures.
reply
A concern from x from a technical bitcoiner:
"If we open the shitcoin floodgates, which is what removing the opreturn limits does, then fees will go high and stay high forever, drowning out all legitimate onchain activity.
Bitcoin will be impossible to use permissionlessly at that point."
Murch please address this concern.
reply
We have had multiple phases in Bitcoin’s history in which colored coin protocols, NFTs, or other schemes started using Bitcoin as a data layer. While some of them temporarily increased demand for blockspace, the mempool has so far always emptied eventually. Currently, the one-week average feerate in blocks is below 4 satoshis/virtualbyte.
OP_RETURNs are more expensive than inscriptions for larger amounts of data, and the data payload is not subject to segwit’s witness discount. Previous uses of OP_RETURN were short-lived: they became too expensive for their operators, who either switched to other networks or optimized them out of existence. It is not clear to me what scenario the writer is picturing in which loosening the OP_RETURN limits would lead to a substantial amount of OP_RETURN data that could be classified as "opening the shitcoin floodgates".
reply
Is it possible to stop the abuse of payment outputs (i.e., bare multisig, fake pubkeys, and fake pubkey hashes) that are used to embed data, thereby creating unprunable UTXOs that bloat the UTXO set?
reply
I think this answer covers the same topic already: #972077
reply
What can we do to stop spam at the consensus layer of Bitcoin?
reply
tl;dr: you could make data publication much more expensive by requiring transactions to prove that data in them isn't arbitrary. But even that will not totally stop data publication. Also, not all “spam” requires data to be published.
reply
You would have to try to forbid the spam via consensus rules, i.e. soft fork it out.
This would very quickly boil down to us choosing between having a flexible scripting system or not having spam transactions. Even if we restricted ourselves to whitelisting only specific single-sig payment schemes and forbidding all other transaction types, it would be possible to embed data in other transaction fields like signatures, public keys, or pubkey hashes. The trade-off seems rather unattractive to me.
reply
Shouldn't we debate the controversy of this PR on Github since it's where the code gets merged to make these changes?
reply
The GitHub repository is the workspace of the Bitcoin Core contributors. We welcome constructive contributions from anyone, and spend a lot of time encouraging others to also make it their workplace.
Think of the repository like a co-working space: you can rent a desk for the day, you can work on something by yourself or collaborate, but if you hold a riot in the co-working space, shouting at people trying to do work, you are probably gonna be asked to step outside.
We use pull requests to review and evaluate proposed changes to the Bitcoin Core code base. We discuss the proposed change conceptually, evaluate the approach, and collaborate to hone the implementation of the idea. There are about 40 part-time and full-time Bitcoin Core contributors, and maybe a hundred occasional contributors.
GitHub pull requests are linear comment threads and are not at all designed to scale to social-media levels of engagement. A lot of us get an email every time someone comments on a pull request we are participating in. When suddenly hundreds of people start engaging with a pull request, this quickly adds a lot of noise to our inboxes, and the technical review comments quickly get lost in the flood of other comments. The moderators step in to fade out some of the repetitive or abrasive comments in order to increase the visibility of constructive comments. Eventually someone feels mistreated, and this supercharges the parallel discussion on social media.
Overall, there are probably better venues to have this sort of conversation. For example the Mailing List, Delving Bitcoin, Stacker News, or Reddit. (Twitter and Nostr seem to contribute more to the outrage, so I’m not sure discussions are all that productive there. ;))
For the Bitcoin Core repository, if you contribute relevant new arguments or other constructive comments, please by all means, go ahead. But before posting, please try to catch up on the state of the debate by reading the mailing list thread and all (visible) comments on the pull request before yours.
reply
Was this PR initially proposed because of Citrea BitVM needs? If so don't they only need a slight bump in OP_RETURN size, why is it being proposed to make the size unrestricted?
reply
Citrea does not require an increase of the OP_RETURN limit.
AFAIU, Citrea is planning to launch a ZK Rollup on Bitcoin. To that end, they intend to write a proof into the Bitcoin blockchain every ten minutes. This proof takes 100 bytes of data. Their current plan is to write that data into one OP_RETURN output accompanied by a fake pubkey hash. Outputs with fake pubkey hashes cannot be safely discarded, therefore this unspendable payment output would pollute the UTXO set for all times. This would just work for them with the currently established mempool policies.
One of the motivations for the pull request would be that it could be suggested to Citrea that they instead write all data to a single slightly larger OP_RETURN output and we at least do not incur the unspendable outputs in the UTXO set.
Overall, the OP_RETURN limit was introduced to guide the use of Bitcoin away from excessive blockspace use for data while the available blockspace was only partially demanded. In the past 27 months, there have only been a few blocks that have not been full, so undemanded blockspace is no longer a thing. OP_RETURN outputs would have to compete for blockspace with other transactions at all times. The limit is neither as important nor as effective today as when it was introduced. Instead of providing mining pools with a financial incentive to accept oversized OP_RETURN outputs out of band, and to have debates to slightly loosen the limit further every once in a while, it seems easier to get rid of the incentive and debates by dropping it altogether.
reply
To that end, they intend to write a proof into the Bitcoin blockchain every ten minutes. This proof takes 100 bytes of data. Their current plan is to write that data into one OP_RETURN output accompanied by a fake pubkey hash
That's a common misconception. Indeed, Citrea is frequently writing a state diff and a corresponding proof into the blockchain, but they are using inscriptions for that. Those OP_RETURN transactions are something different. They are used only in case of a dispute during pegout for watchtowers to provide a chainstate proof for a heavier chain than the one the operator claimed to be the canonical chain.
These OP_RETURN transactions likely never hit the chain because there are strong disincentives for operators to make any invalid claims. Thus, ironically, lifting the OP_RETURN limit hardly has any effect in practice. It barely affects the Citrea protocol, and it likely doesn't affect what kinds of transactions we see in the chain.
reply
Thanks for adding color.
reply
0 sats \ 0 replies \ @anon 20h
Isn't this why they need it to be standard, though? Because those transactions need to be reliable and can't afford to wait for a friendly miner, which may take longer than the thief.
reply
Is it possible to stop abuse of witness data? If so, how? (i.e ordinal theory inscriptions, "jpegs").
reply
We could:
  • try to forbid the inscription envelope at the mempool level, which would probably lead to more nodes running less strict mempool policies, or more direct submissions to mining pools.
  • soft fork out the inscription envelope specifically, but it would be trivial to come up with another similar construction that allows embedding data in the witness stack, and given the timeline of soft forks, it would be hard to effectively ban new constructions that evade the soft fork rules. Some argue that being willing to ban the construction would be a sufficient signal for NFT activity to move to another network, but I am not convinced that this is true.
  • soft fork in a much smaller limit on witness data, but that would simply increase the overhead for storing data, not actually prevent it.
  • whitelist only single-sig payment output scripts, forbid all other scripts at the consensus level, but that would still allow people to embed data in fake pubkey hashes or fake pubkeys.
  • whitelist only single-sig payment output scripts, and require that signatures prove that the scripts are spendable, but data could still be embedded by grinding signatures or stuffing them into transaction fields that hold arbitrary data like the input sequences.
Overall, it seems unfeasible that we make data payloads prohibitively expensive while payment transactions remain incredibly cheap. In the long run, small valuable payment transactions will outspend frivolous data transactions, and valuable data transactions will outspend valueless payment transactions.
reply
What does it mean when someone says "Fix the Filters"?
reply
The OP_RETURN output type was introduced as a harm reduction to divert users from storing data in unspendable bare multisig outputs that would stay in the UTXO set forever. Bitcoin Core has limited standard transactions to one OP_RETURN output of at most 80 bytes data payload for many years. When people started putting inscriptions into witness stacks, some people argued that such inputs should be "filtered" in a similar manner.
"Fix the filters" is a demand that Bitcoin Core developers amend mempool policy to limit or forbid all data carrying transactions.
reply
I'll add that I verified off-band with someone notable that represents the "Fix the Filters" cause.
The gist I gathered for Fix the Filters: Adjust relay/mempool policies on nodes to hinder spam on the blockchain without removing existing, functioning filters.
reply
Sure, close enough?
reply
If we prevent these transaction from going into our mempools doesn't that prevent or delay these spam transactions from being mined therefore discouraging the spammers?
reply
Yes, in the sense that when you configure your node to have a more strict mempool policy than that of its peers, it will not add offending transactions to its mempool, and therefore also not relay them.
However, each node makes at least 8 outbound connections through which transactions are relayed, and listening nodes can even have more than 100 connections. A single peer announcing transactions is sufficient for a node to hear about them, so it is sufficient for about 10% of nodes to relay transactions for them to reliably reach miners that want them.
In conclusion, your node might not be participating in the relay of such transactions, but the relay of offending transactions is only prevented if almost all nodes on the network refuse to relay them.
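A back-of-the-envelope illustration of why a small relaying minority suffices (this is a simplification: it ignores gossip topology and only asks how likely a node with 8 outbound peers is to hear about a transaction at all):

```python
# If a fraction p of nodes relay a transaction, the chance that at
# least one of a node's outbound peers relays it is 1 - (1 - p)^peers.
# Real propagation compounds hop by hop, so this understates reach.

def hears_tx(relay_fraction: float, peers: int = 8) -> float:
    return 1 - (1 - relay_fraction) ** peers

# With only 10% of nodes relaying, each node already has a ~57% chance
# of hearing the transaction directly, and repeated gossip rounds make
# delivery to interested miners all but certain.
```

This is why filtering only bites when nearly the whole network participates.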
reply
Is there any conflict of interest with Bitcoin Core and companies like Citrea, in ref to this PR?
reply
Not that I’m aware of. Neither Jameson Lopp nor Peter Todd are regular Bitcoin Core contributors. All arguments that I have seen were based on where the data would be stored, whether the OP_RETURN limit is still effective, and whether we want to have this debate every few years.
reply
Why would a spammer use OP_RETURN if it's cheaper to use Witness data to store arbitrary data?
reply
OP_RETURN data is signed by the signatures in inputs, and therefore cannot be malleated, while at least the signatures in witness data are malleable. Some users also perceive OP_RETURN as the "correct/righteous" way to embed data.
OP_RETURN outputs are also slightly cheaper for small data payloads up to 143 bytes, but more expensive for bigger data payloads.
reply
Is allowing standardness for larger OP_RETURNs a slippery slope? If we allow this won't we continue to allow things that make bitcoin less for money and more for arbitrary data?
reply
Larger OP_RETURNs may slightly reduce the friction for users to embed small amounts of data into the blockchain. Since OP_RETURN data is written into an output script, such a data payload is not subject to the witness discount. This means that larger OP_RETURNs would be significantly more expensive than inscriptions which are already happening.
After watching the NFT enthusiasts spend over $280M in transaction fees over the past 27 months, it seems to me that there are strong financial incentives for miners to facilitate such use. As the block subsidy is exponentially shrinking, these incentives will only grow. It will inherently be possible to embed data in the Bitcoin blockchain. We can make it a tiny bit harder by adding friction at the mempool level, which will incentivize mining pools to facilitate private relay and out-of-band submissions, benefiting the largest mining pools. We can make it significantly harder by restricting outputs to a small whitelist of payment output scripts at the consensus level, but that would neuter Bitcoin’s scripting system. The question boils down to what trade-offs we are willing to make and how much overhead there will be to inserting data—we cannot completely prevent it.
Bitcoin’s scripting system is what will allow us to build innovative scaling solutions and is necessary for the network to succeed in the long run. Mining decentralization was at its high point in 2017 and has been trending toward more centralization. Censorship resistance seems more important than fighting a few monkey pictures.
So, to me, it is preferable that a modicum of data be written to OP_RETURN outputs rather than to unspendable payment outputs that will linger in the UTXO set forever, and I don’t think it’s worth making sacrifices regarding the scripting system or mining decentralization just to bring down the hammer on some Bitcoin users who happen to enjoy trading graffiti. Either way, the worst of the most recent colored coin and NFT wave appears to have passed. Feerates are down, and blockspace demand from runes and inscriptions appears to have jumped the shark:
reply
Won't large OP_RETURNs allow people to spam the mempool with 100kb transactions and mess up bitcoin for everyone by bloating the mempool and not allowing legitimate transactions in the mempool?
reply
Bitcoin Core’s mempool data structure is limited to 300 MB by default. If a node’s mempool is full, it will start evicting the transactions with the lowest feerates. Transactions get into the mempools of nodes much the same way as they get into blocks: if they offer a higher feerate, they are added to the mempool, potentially displacing lower feerate transactions, whether they are big or small transactions.
OP_RETURN transactions do not have any advantage in the competition for blockspace, and OP_RETURN data appears in the output script. The output script is not subject to segwit’s witness data discount, and therefore each byte of OP_RETURN data incurs 1 vB (4 WU) of blockspace.
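The weight accounting behind that statement can be sketched with the BIP 141 formula (weight = non-witness bytes × 4 + witness bytes; vsize = weight rounded up to the next multiple of 4, divided by 4):

```python
import math

# BIP 141 weight accounting: non-witness bytes count 4 weight units,
# witness bytes count 1; vsize = ceil(weight / 4). OP_RETURN data lives
# in an output script (non-witness), so each payload byte costs a full
# 4 WU = 1 vB, while witness payload bytes cost only 0.25 vB.

def vsize(base_bytes: int, witness_bytes: int) -> int:
    weight = base_bytes * 4 + witness_bytes
    return math.ceil(weight / 4)

# 100 bytes of OP_RETURN payload add 100 vB; 100 bytes of witness
# payload add only 25 vB.
```

This is the entire reason OP_RETURN data is four times as expensive per byte as inscription data.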
reply
102 sats \ 3 replies \ @ChrisS 4 May
What is the difference in defining a transaction as valid versus defining a transaction as standard and why do we need this difference?
reply
The consensus rules specify what a node must accept in a block. These rules must match perfectly across all node implementations. If a block or a transaction in a block are evaluated differently by some nodes, the network would experience a consensus failure as some nodes accept the block while others fork off.
A node’s mempool policy defines what the node will relay and consider for the block templates it produces. We use mempool policy, for example, to discourage transactions that are expensive to validate, transactions that might trigger security issues for some nodes, and the premature use of upgrade hooks. Generally, mempool policy is more strict than consensus rules and may diverge across nodes. When many nodes agree on mempool policy, it reduces the amount of bandwidth necessary to propagate transactions and blocks across the network, and reduces the overall latency of block propagation, because most nodes can reconstruct the full block from compact block announcements.
reply
63 sats \ 1 reply \ @ChrisS 4 May
Consensus rules define valid transactions and a node's mempool policy defines what it considers standard? Are there any consensus rules surrounding the size of op_return?
reply
Consensus rules define valid transactions and a node's mempool policy defines what it considers standard?
Yes, that’s right. Whatever a node accepts to its mempool, it will use to build block templates and relay to peers. The name is derived from a function called IsStandard(…) in the Bitcoin Core code base, which implemented some of the mempool policies.
Are there any consensus rules surrounding the size of op_return?
OP_RETURN outputs are not limited by consensus rules other than indirectly per the block size limit.
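For contrast, the 80-byte limit lives purely in relay policy. A simplified sketch of the standardness side, assuming Bitcoin Core's default of 83 script bytes (one OP_RETURN opcode, a push opcode or two, and up to 80 payload bytes — configurable via -datacarriersize):

```python
# Policy-side sketch: build an OP_RETURN output script and check it
# against the default datacarrier size. Consensus imposes no such cap;
# only the block size limits OP_RETURN data indirectly.

OP_RETURN = 0x6A
OP_PUSHDATA1 = 0x4C
MAX_OP_RETURN_RELAY = 83  # default policy limit on the whole script

def op_return_script(payload: bytes) -> bytes:
    # Direct pushes cover up to 75 bytes; OP_PUSHDATA1 covers 76..255.
    if len(payload) <= 75:
        push = bytes([len(payload)])
    else:
        push = bytes([OP_PUSHDATA1, len(payload)])
    return bytes([OP_RETURN]) + push + payload

def is_standard_datacarrier(script: bytes) -> bool:
    return script[0] == OP_RETURN and len(script) <= MAX_OP_RETURN_RELAY
```

An 80-byte payload yields exactly an 83-byte script (1 + 2 + 80) and passes; an 81-byte payload fails the policy check while remaining perfectly consensus-valid.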
reply
102 sats \ 1 reply \ @ChrisS 4 May
What is the difference between the utxo set, the mempool and the blockchain. How does that relate to how much "stress" is put on a nodes computing power and storage limitations when considering transactions with larger OP_return data. How does this change when storing data in OP_return output vs storing data in the witness data?
reply
The UTXO set represents all spendable balances of Bitcoin users on the network. Every full node must retain the entire UTXO set.
The blockchain represents the entire transaction journal of the UTXO set. The blockchain is used by new nodes to work out the current UTXO set and converge on the shared state of the network. The network as a whole needs to retain the blockchain, but individual nodes may prune most of the blockchain data after IBD.
Each node maintains its own mempool to keep track of unconfirmed transactions. Nodes use the mempool to reconstruct the full block from compact block announcements, to build block templates for mining, to estimate feerates, and to announce unconfirmed transactions to their peers. Optimally, nodes retain all unconfirmed transactions in their mempools that are expected to appear in blocks to front-load transaction validation to speed up block validation and relay.
Both OP_RETURN outputs and witness data are only relevant to transaction validation and need to be retained for a full copy of the blockchain. Neither of the two is retained in the UTXO set. Data payloads in OP_RETURN outputs and witness stacks are cheap to validate as they both appear in unexecuted sections of scripts. Either will be pruned by pruned nodes when the node moves sufficiently far past the block containing the transaction.
reply
If there will be a hard fork resulted from this PR (split chain like in 2017), what will happen with existing LN channels? Will exist on both chains with 2 LNs ? that will be some kind of madness... It's a rhetorical question, I know.
reply
31 sats \ 2 replies \ @Murch fwd 6h
If there were a hard fork, the outputs funding the lightning channels would exist in the UTXO sets of both networks. If both channel participants run Lightning Network software compatible with each network, they could maintain both versions of the Lightning Network channel. However, it would probably require a novel solution to distinguish the channels on the two separate networks. If at least one of the participants does not want to use the channel on one of the networks, they would likely try to close it. If transactions between the two networks are replay-safe, they could close it on only one network; otherwise, they might close it on both and open a new one on only one network once they have acquired coins that only exist on one network.
reply
0 sats \ 1 reply \ @DarthCoin 6h
it would probably require a novel solution to distinguish the channels on the two separate networks.
That's what I'm talking about. many users will get confused and many mistakes will be made. A mess...
reply
If a hard fork were imminent, it might be the cleanest/safest solution to close down most lightning channels before the fork and until the dust settles.
reply
Thank you for the most balanced, honest, and technically sound discussion on the whole OP_RETURN subject.
reply
Thank you for taking the time to read this. :)
reply
Are there any services/APIs for searching through OP_RETURN text?
reply
A quick google search led me to https://bitcoinstrings.com, but it doesn’t look particularly searchable. Maybe https://opreturn.net/bitcoin/opreturn/ works better?
reply
0 sats \ 1 reply \ @dwami 8h
There might be some different opinions here...
reply
10 sats \ 0 replies \ @Murch fwd 5h
I read your linked post. I agree that both major factions in this debate appear to be arguing earnestly for the most part. There seem to be some misconceptions, but that makes this debate also an education opportunity. I don’t agree that having a few data transactions in Bitcoin spells the doom of the network—that is hyperbolic.
reply
The proposal doesn't adequately address the increased burden on node operators. Node operators bear the costs of storing, processing, and transmitting all blockchain data in perpetuity, yet receive no compensation for storing arbitrary data. This socializes costs while privatizing benefits to data storers. There are numerous off-chain solutions for arbitrary data storage that don't burden the Bitcoin network, including sidechains, layer 2 solutions, and purpose-built data storage blockchains. The proposal doesn't sufficiently explore why these alternatives are inadequate.
More facts would help show whether the OP_RETURN limits are working as designed. Two analogies: “a car seat belt doesn’t work, because someone died with their seat belt on,” or: people can still make the effort to climb over your fence; that doesn’t mean all fences in the community should be removed.
“The audience is too dumb to understand this.” But most of us can appreciate: only 7 OP_RETURN outputs larger than 83 bytes this year, while the remaining millions are ≤ 83 bytes. (Jason Hughes)
reply
Was the @optimism network down?
reply
Only for flight mode a few times per year.
reply
deleted by author