Slop everywhere
There has been some discussion about the use of LLMs in generating BIPs. While LLMs are useful tools, they also remove a proof-of-work moat that used to defend BIP editors' time: proposals at least required some effort to follow BIP guidelines.
The same problem no doubt affects the Bitcoin developers mailing list. This mailing list has traditionally been a space where people working on Bitcoin software and the protocol discuss ideas. Rather than being fully open to anyone who wants to post, the list has editors who attempt to filter out posts that are obvious spam, off topic, or unserious.
Last month, someone named Lazy Fair posted their idea: "A safe way to remove objectionable content from the blockchain." I don't know if this post was generated with the help of an LLM, but it certainly feels like the author doesn't have a good grasp of the subject about which they are writing.
In response, both Greg Maxwell and Saint Wenao pointed out a pretty major flaw with the proposal, while Ethan Heilman suggested that there was a better tool for what Lazy Fair wanted to achieve (deleting content from blocks that are already in the longest chain): ZeroSync, which uses zk proofs to prove the validity of blocks.
The rose growing in manure
At this point, something interesting happened: @petertodd responded and said:
Rather than being a solution, the technology behind Zerosync is a potential threat to Bitcoin.
This is not something I had heard before. Perhaps it is widely known and I'm just revealing my ignorance, but Todd's argument is new to me and I found it pretty interesting to think about:
For Bitcoin mining to remain decentralized, blocks need to be widely propagated in a form suitable for creating new blocks. ZKP/Zerosync makes it possible to prove that a block hash and all prior blocks follow the protocol rules and were thus valid. However, valid block hashes alone are insufficient to mine on top of because they do not contain the UTXO set data necessary to mine a new block.

Why do miners have an incentive to distribute the blocks they find? Ultimately because doing so is necessary for the coins they mined to be valuable. But if full nodes can be convinced of the validity of coins without full block contents --- thus allowing those coins to be sold --- that weakens the incentives to distribute block data in a form that allows other miners to mine.

With regard to HTLCs/Lightning, HTLCs rely on a proof-of-publication to be secure: for the HTLC to be redeemed, the redeemer must publish the pre-image in the Bitcoin chain, allowing the other party relying on the HTLC to recover the pre-image. Again, ZKP/Zerosync weakens this security, as the validity of the transaction spending the HTLC can be proven without actually making the pre-image available.
So, if we were to change Bitcoin to accept ZK validity proofs in place of full block data, we would be weakening the incentives that currently keep that data widely available.
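Todd's HTLC point is concrete enough to sketch. The toy Python below (illustrative names only, not real Bitcoin script or any actual ZK system) contrasts a redeem transaction whose witness publishes the preimage with a ZK-style validity claim that merely attests a valid spend exists, without carrying the witness data:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# --- Hash-locked contract (simplified HTLC hash branch) ---
# The contract commits to H = SHA256(preimage); spending it
# requires revealing a preimage that hashes to H.
secret = b"lightning-payment-secret"
payment_hash = sha256(secret)

def redeem_tx(preimage: bytes) -> dict:
    """A toy 'transaction' whose witness carries the preimage."""
    assert sha256(preimage) == payment_hash, "wrong preimage"
    return {"spends": "htlc-output", "witness": preimage}

# With full block data, the watching counterparty reads the witness
# and learns the secret -- the proof-of-publication property.
tx = redeem_tx(secret)
learned = tx["witness"]
assert sha256(learned) == payment_hash

# With ZK-only validation, a node receives just a validity claim:
# "some witness satisfying the script exists", with no witness
# attached. The counterparty can verify the spend is valid but can
# never extract the preimage from it.
zk_statement = {"spends": "htlc-output", "valid": True}  # no witness field
assert "witness" not in zk_statement
```

The point of the sketch is the asymmetry: validity is preserved either way, but the information the counterparty actually needs (the preimage) only travels with the full transaction data.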
Single rose becomes a garden
Waxwing responded to Todd with some thoughts about this:
I do think, long term that ZKP over history is correct, and that (see typical rollup design) data carrying in state can do the job that you are (correctly) insisting, must be done. (And the corollary: "harmful data on the blockchain" is a wrong mental model and should be abandoned, irrespective of architecture.)
Todd elaborated a little on his concern that ZKP could be harmful to HTLCs and lightning and pointed out:
It's quite possible that ZKP's are, in the context of decentralized blockchains, an exploit that will prove to be impossible to patch. Similar to how merge mining is an economic exploit that may well be impossible to patch.

Sometimes seemingly good ideas are ultimately killed by clever exploits.
To which, waxwing:
I have a sneaking suspicion you're wrong here, but I can't justify it. (Hence 'interesting!'). Would love to hear others opinion on the topic.
(It's also neat to see waxwing say: "apologies to OP; we've drifted off topic here" -- classy guy!)
At this point Boris Nagaev weighs in:
Peter's main concern is that ZKP-only validation can break HTLCs: if a spend is proven valid with a ZK proof but the actual transaction data (including the preimage) never needs to be revealed on-chain, the HTLC payer (the counterparty relying on that preimage) can't learn it and can't claim its incoming HTLC. That undermines Lightning's security because "proof of publication" collapses into "proof of validity without data availability." Any succinctness/ZK approach for Bitcoin has to preserve the guarantee that the preimage is actually published and readable.

Related, if the network drifts toward relying on ZK proofs without simultaneously guaranteeing open access to raw blocks and transactions, block data can become gated by a few data providers. That is a data-availability risk that comes with any ZK deployment that omits strong DA, and it would erode self-sovereignty for routing nodes. Any practical ZK/succinctness design for Bitcoin needs strong data availability, so anyone can fetch raw transactions, not just validity proofs.

To Peter: rather than trying to "defeat ZKP," maybe the pragmatic path is to shape any ZK/succinctness work so that the design itself carries the necessary data (e.g., preimages must be published on-chain and retrievable) and ships with strong data-availability guarantees (so raw tx/block data stays broadly accessible, not just proofs). If the "good" ZK system makes data availability a built-in feature, it can occupy the niche and leave less room for alternative designs that drop those guarantees.
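Boris's "make data availability a built-in feature" suggestion can be sketched as a node-side acceptance rule: a validity proof only counts if it carries, as public data, whatever the protocol requires to be published. This is a hypothetical sketch with made-up names, not any real ZK system's API:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# A toy "proof envelope": the validity claim is only well-formed if
# it carries, as public data, the preimages the protocol requires
# to be published (forced publication).
def make_envelope(proof: bytes, published_preimages: list[bytes]) -> dict:
    return {"proof": proof, "public_data": published_preimages}

def accept(envelope: dict, required_hashes: set[bytes]) -> bool:
    """Node-side rule: reject any proof that omits required public data.

    (A real verifier would also check the ZK proof itself; here we
    only model the data-availability check.)
    """
    revealed = {sha256(p) for p in envelope["public_data"]}
    return required_hashes <= revealed

secret = b"htlc-secret"
h = sha256(secret)

good = make_envelope(b"<zk-proof>", [secret])  # data travels with the proof
bad = make_envelope(b"<zk-proof>", [])         # proof alone, data dropped

assert accept(good, {h}) is True
assert accept(bad, {h}) is False
```

Under a rule like this, "valid but withheld" is simply not a state the network will accept, which is the equilibrium Boris is arguing the design should bake in.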
Waxwing returned to the effect of ZKP on mining in his response:
[I] find myself reflecting more on Peter's original point (how spectacularly deleterious to mining this could be, let alone the data availability stuff), and I'm wondering if we can just go radically in the opposite direction: what if mining was done just on an accumulator over the utxo set, instead of the utxo set itself? Complete redesign of the protocol, but .. possible, I think? Tx inputs would have to have set membership proofs. I wonder if anyone's done this particular analysis.
And Boris again:
The aim is to shape early design choices so the incentive-compatible equilibrium includes DA and forced publication, rather than slipping into a DA-weak equilibrium where only a few parties hold full data.
The thread ends (so far) with a clarification from Todd:
I want to be clear that in this context, the existence of strong ZK math is an exploit on the Bitcoin protocol, in much the same way that a mathematical advancement that could be used to break SHA256 preimage security is also an exploit on the Bitcoin protocol.

It may be the case that the power of ZK techniques is sufficiently strong that Bitcoin needs to be redesigned to mitigate them; there is even a small chance that this is not possible and Lightning/HTLCs eventually become insecure due to it. No different than how there is a small chance that quantum computing relevant to cryptography turns out to be real and numerous protocols become insecure due to it.
A lot of this is on territory about which I know relatively little, but reading along it felt like a few windows or doors were opening through which I could get a glimpse of new aspects of Bitcoin.
It had not previously occurred to me just how important Bitcoin's data availability guarantees are. Naively, I had assumed that immutability was the important quality offered by the blockchain. There is always more to learn, even when the lesson grows out of unpromising beginnings.
Lazy Fair has not added to the thread beyond their original post, but nonetheless, I thank them for being the nutritious bed of bullshit out of which this very interesting conversation has grown.
Lazy Fair is defo bot-assisted, at the very least. I've had many proposals like this on both my repos and my security mailing lists. You cannot ignore them, though: the bot can get it right, or get something right, or, as in this case, highlight a real problem by being wrong.