
I'm sharing this because it's all the rage to demand multiple implementations of the bitcoin protocol. This is a live example of how specifications necessarily come up short and implementations often fill the gaps differently. Fortunately with lightning it merely results in periodic hair pulling because there is a much weaker kind of consensus going on. With bitcoin, unexpected behavior in the consensus protocol is disastrous.
There has to be an existing postulate for this at least, but I'd guess one could prove that a specification, no matter how complete, is always less deterministic than an implementation of said specification.
Apples and Oranges to compare disparate implementations of Lightning vs. Bitcoin
Issue at hand is explicitly due to the nature of it being interactive, where Bitcoin L1 isn't
It's also not a disaster for a minor implementation of Bitcoin to fuck up, that specific implementation either doesn't get its transactions mined or crashes. Network unaffected. Would take a large share to be a disaster, which is precisely why no implementation should have a large share by default. #ArchiveCore
Lightning is much more dangerous for network health to have disparate implementations, one bad mass-upgrade of a given lightning implementation could take a lot of peers down and storm force closures in a cascading shit show.
Fortunately the minor lightning implementations are extremely minor, more so than say knots vs. core... and generally edge nodes vs. backbones of the network.
reply
17 sats \ 3 replies \ @k00b OP 12h
Apples and Oranges to compare disparate implementations of Lightning vs. Bitcoin
I agree, but they're both fruit which is about as much as I was saying.
Issue at hand is explicitly due to the nature of it being interactive, where Bitcoin L1 isn't
You mean peers interact with each other more often and directly in lightning therefore L1 is relatively noninteractive, right? If you mean something else, I need help because I wouldn't say that L1 "isn't" interactive.
It's also not a disaster for a minor implementation of Bitcoin to fuck up, that specific implementation either doesn't get its transactions mined or crashes.
It's probably at least a minor disaster for miners and economic nodes running the minority implementation in a fork. I agree the network could cope though.
Would these accidental forks be more common with a plurality of implementations? If so, wouldn't that be enough to create a network effect that defaults the network to one implementation?
reply
Would the framing of L1 as asynchronous, where L2 is synchronous, help? Or broadcast/unicast?
If you wanted, you could go as far as sending a signed transaction to miners via carrier pigeon, since there isn't any sort of liveness or quorum inherent to that communication.
Even miners, which do need to stay online to function, are dealing only in broadcast traffic; an implementation chooses only to listen and shout correctly, with no implications for its peers.
Even a minor implementation mining a block out of spec (failing to shout correctly) would have that block rejected, and a user whose tx was in that block would remain unaffected since it would still be mined in both forks, all without any handshake happening first.
nodes running the minority implementation
In Lightning it affects all nodes if it's sizeable enough to cascade, given the interconnectedness of the network, not just those running the breaching implementation as would be the case in an L1 situation.
Would these accidental forks be more common with a plurality of implementations?
Likely, with the trade-off being more isolation. Better to have 10x more fork issues at 1/10th the size; like forest fires, controlled burns prevent uncontrolled ones.
90% of the network running one distribution is a risk even relative to its own prior versions; a new version can quickly become a large share of the network just given that distribution's overwhelming share.
If so, wouldn't that be enough to create a network effect that defaults the network to one implementation?
That's effectively the case now, and I'd expect it to remain so. However, it would demarcate implementation from distribution, which is more important given that a large break on either layer likely has to come in the form of an "update".
Archiving Core would decentralize that distribution, but not necessarily the consensus code, since replacement distributions would largely be forks of it.
Any changes to consensus code, or new/ported implementations, would then be either graceful or ungraceful based on their ability to gain distribution first. The more likely a change is to break something, the less likely it is to be widely distributed.
The result of that would, I think, inevitably be library-ification, and a lexicon shift to Bitcoin distributions rather than Bitcoin implementations. To further analogize to Linux, the equivalent of consensus code there is the hardware.
BTCD has shown this is possible: it's a differing implementation with a solid track record of "just working", yet the majority of its users consume it as a library (via LND). Libbitcoin literally intends to be a library.
reply
17 sats \ 1 reply \ @k00b OP 8h
I get your definition of interactivity now. I sense there's something generalizable there that might help me reason about systems better. Like, what can we say about highly interactive systems that isn't true of less interactive systems and vice versa?
In Lightning it affects all nodes if it's sizeable enough to cascade, given the interconnectedness of the network, not just those running the breaching implementation as would be the case in an L1 situation.
tbh I haven't given this much thought nor interactivity vs interconnectedness vs consensus. I think I've underappreciated the risks of having different lightning implementations at the very least.
The result of that would, I think, inevitably be library-ification, and a lexicon shift to Bitcoin distributions rather than Bitcoin implementations. To further analogize to Linux, the equivalent of consensus code there is the hardware.
That certainly seems like a direction we're heading in - more modularization. Even with consensus code isolated and safe from accidental forks, I suspect we'll never break free of a dominant distribution. There are too many network effects at play. Switching costs would be lower which should yield more diversity than we have now at least.
reply
something generalizable
mutual dependency vs. one-way dependency maybe, participant behavior vs. environment behavior, relationships vs. language
the risks of having different lightning implementations
Yea I think it's a bit of a paradox: everyone has taken for granted that it's less risky on the L2 because the L1 is what's notoriously consensus-driven... ignoring that it's a loose consensus on L2 that's inherently more fragile.
Incentives are likely a factor in this narrative remaining undisputed: there are service layers to offer in L2 stacks that influence the implementations, while there aren't really any services coupled that closely to the L1. The Lightning implementations are all very modular, the BOLTs necessitate it, yet they all ship as part of value-added ecosystems (even if those are optional).
we'll never break free of a dominant distribution
Likely, but even Debian, RedHat, Arch and their children are more distributed than Bitcoin is at this stage... all generally dividing up the same hardware.
My issue with Core is that, due to its legacy, very few even think about distribution and are never forced to make a decision. That default distribution has made it a politburo more than an implementation, leaving both ossifists and expressionists dissatisfied.
Archiving it would at least shake that up. Even if it were a 1:1 team/NGO migration to a new repo under a new name achieving a similar share of update distribution, Core loyalists would stand to benefit the most from affirming such a mandate.
This has become apparent with the Nostr NIPs repo already too: the repo is a politburo for the distribution channel that is nostr-tools and dictates by default how any number of things are done, despite only a handful of the NIPs resembling anything like network-wide consensus.
reply
I'm guessing issues with my node are partly what triggered this? Would it make any difference if I receive via a lightning address instead of NWC? Or is the issue independent of that?
It seems like the fact that I have a direct channel is part of what's causing the problem?
reply
The issue is related to the implementation of your Lightning node; the method of requesting an invoice and your channels are not relevant.
I deployed a fallback, but it doesn't work because the first probe attempt temporarily scores the channel as invalid, so the fallback fails too.
I’ll deploy a workaround tomorrow rather than a fallback and that should get LDK-backed Alby Hub folks back to receiving sats for zaps.
reply
Interesting. Still trying to understand the problem better. I was able to receive my rewards from last night, as well as withdraw sats from my SN account. Does that mean you don't use probing for those payments?
reply
0 sats \ 1 reply \ @k00b OP 10h
We only use probing for zaps. Before we give an invoice to the sender we want to be reasonably certain we’ll be able to pay the receiver.
Before, a failing probe indicated there wasn't a route. But now probing is generating false negatives.
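To make that concrete, here's a minimal sketch of the probe-then-invoice flow described above, assuming hypothetical `probeRoute` and `requestInvoice` helpers (these are stand-ins for illustration, not SN's actual code):

```typescript
type ProbeResult = { reachable: boolean; reason?: string }

async function prepareZap(
  receiverNodeId: string,
  amountMsat: number,
  probeRoute: (dest: string, amountMsat: number) => Promise<ProbeResult>,
  requestInvoice: (dest: string, amountMsat: number) => Promise<string>
): Promise<string> {
  // Probe before asking the receiver for an invoice, so the sender
  // isn't handed an invoice we can't actually pay out.
  const probe = await probeRoute(receiverNodeId, amountMsat)
  if (!probe.reachable) {
    // A false negative here (e.g. the receiver's node scoring the probe
    // as a failure) blocks zaps even though a real payment would succeed.
    throw new Error(`no route to receiver: ${probe.reason ?? 'unknown'}`)
  }
  return requestInvoice(receiverNodeId, amountMsat)
}
```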
reply
ah, got it
reply
121 sats \ 0 replies \ @ek 14h
I'm looking into lnprototest. It's a test suite that can be used to check whether an implementation is spec-compliant or the spec needs to be more precise.
But even if every implementation tested itself against it, and the spec left no room for interpretation, we still wouldn't catch issues like this in advance because, as you mentioned, probing isn't part of the spec.
Hmm, tough spot to be in. Here's a hug emoji: 🫂
reply
100 sats \ 9 replies \ @k00b OP 15h
We ran into this because when stackers zap each other, we want to be reasonably certain the zap will succeed so that the sender doesn't have to move money around unnecessarily.
Roughly speaking, LDK changed how they respond to these "estimate requests" in their latest release, which differs from LND's expectations. Matt's position is valid imo, but it amounts to specifying on the basis that reasoning is deterministic.
But, what is most reasonable can change should the context change, so one could argue existing or past behavior should be the implicit specification. Neither is the ideal way to specify something though, because the whole point of specification is that the arguments are had in advance.
Anyway, this is all to say that the dividing line between specification and implementation is blurry, wavy, and unintuitive. Protocols are a beautiful mess.
reply
I suspect probing isn't specified because we all agree that probing is undesirable and hope that MPP and better payment predictions via statistical analysis will decrease the motivation to probe. Yet everyone probes in the meantime because deterministic UX is good UX.
reply
50 sats \ 7 replies \ @xz 12h
Why exactly is probing undesirable? I understand it's definitely optimal. But I mean, we even have probing built into third-party tools like balance-of-satoshis (bos probe).
Serious question though. Not a dev. Is this mainly a privacy concern, or just spammy?
reply
Is this mainly a privacy concern, or just spammy?
Both! (At least I think so.)
It's spammy cause you're briefly using routes and resources without paying for them.
It's a privacy problem too. If you probe enough you can determine channel balances which are otherwise not public info.
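For intuition, here's a hedged sketch of how balance probing can work in principle: binary-search the amount and check whether the fake payment makes it to the destination before failing. The `sendFakePayment` helper and its outcomes are assumptions for illustration, not any real node API:

```typescript
// Binary search on the probe amount: if the fake payment reaches the
// destination, the channel had at least that much outbound balance;
// if it fails mid-route, it had less.
async function estimateChannelBalance(
  channelCapacitySat: number,
  sendFakePayment: (amountSat: number) => Promise<'reached_destination' | 'failed_mid_route'>
): Promise<number> {
  let lo = 0
  let hi = channelCapacitySat
  while (hi - lo > 1000) { // stop at ~1000 sat resolution
    const mid = Math.floor((lo + hi) / 2)
    const outcome = await sendFakePayment(mid)
    if (outcome === 'reached_destination') {
      lo = mid
    } else {
      hi = mid
    }
  }
  return lo // lower bound on the spendable balance in that direction
}
```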
reply
50 sats \ 5 replies \ @xz 12h
Meant to say sub-optimal
I see! But if a node processes a payment through a channel, then you know its capacity is greater than the payment. Isn't this comparable to paying for a chocolate bar in cash with a $20 note and the other witnesses knowing you have $20? I.e. participation requires some kind of reveal?
Does the privacy problem lie with network stats analysis tools?
The spam part is clearer as a problem to me. However, if all channels have a minimum fee, does this not help?
reply
21 sats \ 1 reply \ @xz 12h
Sounds curiously like the L1 spam problem, in that if you have some filters it stops some spam, but if your channels are all zero-fee, it's kind of like leaving your letter box open to mail or anything that will fit in that hole.
Default channel base-fee to 10 sats, what would that do for probing?
reply
Default channel base-fee to 10 sats, what would that do for probing?
Not much, because probing is free except for the time value of the bitcoin locked up during the probe. You may get fewer probes with a high base fee, assuming the probe is designed to find cheap routes. But the spammy/privacy-revealing kinds of probes, the ones that are bad, won't care about your base fee.
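A quick illustration of why: the routing fee (BOLT 7 formula below) is only earned when an HTLC settles, and probes are designed never to settle.

```typescript
// BOLT 7 routing fee: base_fee_msat + amount_msat * fee_rate_ppm / 1,000,000.
// It is only collected when the HTLC settles; probes never settle.
function routingFeeMsat(amountMsat: number, baseFeeMsat: number, feeRatePpm: number): number {
  return baseFeeMsat + Math.floor((amountMsat * feeRatePpm) / 1_000_000)
}

// e.g. a 100k sat probe over a channel with a 10 sat base fee and 100 ppm rate:
const quoted = routingFeeMsat(100_000_000, 10_000, 100)
console.log(quoted) // 20000 msat quoted in the onion...
// ...yet the probe's HTLC fails back along the route, so the router collects
// none of it; only the prober's capital is briefly tied up.
```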
reply
However, if all channels have a minimum fee, does this not help?
Probing is free currently, but there are a few proposals for rate-limiting and charging for abuses like this.
Isn't this comparable to paying for a chocolate bar in cash with a $20 note and the other witnesses knowing you have $20?
It's more like asking everyone in the restaurant to pay $20 for your chocolate bar, recording who could, then telling them never mind. Then asking them to pay $100 for your bottle of wine then never mind. Then asking them to pay for your $500 dinner and so on.
participation requires some kind of reveal?
It does. The problem is that nodes trust that participation is genuine. Probing works by pretending that you're making a payment when you know it will fail once it reaches its destination. There's no way for a node to turn off probing if they want to participate in routing payments.
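Roughly, a probe pays to a hash nobody holds the preimage for, so it is guaranteed to fail at the destination while still occupying every routing node on the way. A hedged sketch, with `sendPaymentToHash` standing in for whatever pay-to-hash RPC a node exposes (not an actual API):

```typescript
import { randomBytes } from 'crypto'

async function probe(
  destNodeId: string,
  amountMsat: number,
  // stand-in for a real pay-to-hash RPC; an assumption, not an actual API
  sendPaymentToHash: (dest: string, amountMsat: number, paymentHash: Buffer) => Promise<void>
): Promise<boolean> {
  const fakeHash = randomBytes(32) // nobody holds the preimage for this
  try {
    await sendPaymentToHash(destNodeId, amountMsat, fakeHash)
    return true // shouldn't happen: an unknown hash can never be settled
  } catch (err) {
    // A failure from the final hop (unknown payment details) means the route
    // worked end to end; any earlier failure means it didn't.
    return String(err).includes('incorrect_or_unknown_payment_details')
  }
}
```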
reply
0 sats \ 1 reply \ @xz 12h
I was thinking many of these tiny ad-like transactions were all of the probes. So they just use an HTLC, then release it. Thanks for the explainer. That's much clearer now.
I wonder whether there'd be some kind of WOT system for nodes that are well-established and known to you. A rule set that might develop, like: has this node opened a channel to you in the past, did you initiate a channel with this node in the past, has one of your trusted nodes opened a channel to this node in the past, and so on.
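Something like that rule set could be as simple as a score over past interactions. A toy sketch with made-up data shapes, purely to illustrate the idea:

```typescript
// Toy trust score over past interactions; the fields are hypothetical.
interface PeerHistory {
  openedChannelToUs: boolean          // has this node opened a channel to you?
  weOpenedChannelToThem: boolean      // did you initiate a channel with this node?
  trustedPeersChanneledToThem: number // how many of your trusted nodes have channels to it?
}

function trustScore(h: PeerHistory): number {
  let score = 0
  if (h.openedChannelToUs) score += 2
  if (h.weOpenedChannelToThem) score += 1
  score += Math.min(h.trustedPeersChanneledToThem, 3)
  return score
}

// Peers above some threshold might, for example, pay lower sybil fees.
```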
reply
Trust is tricky in distributed systems. It introduces subjectivity which can be manipulated and tends to discriminate in uneven and unintended ways. But it could allow trusted peers to pay lower sybil fees which should make the network healthier as a whole.
Brink researchers released a paper using a trust+sybil fee approach. Other proposals have escalating fees when abuse is detected.
Sharing because it’s interesting, but SN uses a trust+sybil fee approach to deal with a similar class of problems. I was surprised to learn people were independently exploring trust+sybil fee for the lightning network itself.
Specifying lightning payments needs a bit of a specialist.
reply