0 sats \ 0 replies \ @LNAnon69420 27 Dec 2023 \ parent \ on: SlushPool(Braiins) is dying. Here's how to save it bitcoin
Fair enough, perhaps it is a negligible concern at Braiins' scale. I am only trying to form a fuller model of the cost inputs for my own understanding, as pools live and die on the margins, particularly in FPPS schemes. I believe you confirmed that there is no way to entirely avoid some pre-commitment of capital, but please correct me if I misunderstood.
A rough cost formula might then look like:
(channel open fee + channel close fee) * (frequency of open/close) + (payout capacity * time-locked value of BTC)
Cost variance appears to hinge largely on the number of channels the pool operator must maintain: one large channel to a large node is cheapest, and per-client channels to individual nodes are most expensive. Payment-delivery reliability follows inversely: highest with per-client channels, lowest through the general network, with some cost of inbound liquidity potentially borne by the client as reliance on the general network increases.
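For my own bookkeeping, that rough formula can be sketched in code. All figures below are made-up placeholders for illustration, not Braiins numbers:

```python
# Hypothetical sketch of the rough per-period cost formula above.
# Every input is an illustrative assumption.

def ln_payout_cost(open_fee_btc, close_fee_btc, opens_per_period,
                   payout_capacity_btc, time_lock_rate):
    """Rough cost of serving payouts over LN for one period.

    time_lock_rate: opportunity cost (as a fraction per period) of the
    BTC locked in channels, i.e. the 'time-locked value' term.
    """
    churn_cost = (open_fee_btc + close_fee_btc) * opens_per_period
    lockup_cost = payout_capacity_btc * time_lock_rate
    return churn_cost + lockup_cost

# Example: 100 channels churned per period at 0.0001 BTC per open and
# per close, 100 BTC of payout capacity, 0.1% opportunity cost:
cost = ln_payout_cost(0.0001, 0.0001, 100, 100, 0.001)
print(round(cost, 4))  # 0.12
```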
An unexpected Peter Todd reply!
I get what you mean; perhaps I should have clarified that I meant the liquidity of the pool operator themselves, as opposed to node or channel liquidity concerns. A newly mined coinbase needs 100 confirmations before it can be spent. For, say, a once-daily payout, not all blocks mined in the cycle can necessarily fund that payout. A pool paying out 100 bitcoin/day needs 100 bitcoin of outbound liquidity ready to serve the payout at the time it is executed (at least in this once-daily example), and they will not necessarily have 100 bitcoin ready to spend per cycle from freshly mined coinbases. To guarantee payouts, it seems to me that the pool operator must pre-commit funds to these channels, implying they hold at least 100 bitcoin before having mined them. Is there something I am missing that avoids this pre-commitment in the business-logic sense?
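As a toy model of this float requirement (my own illustrative sketch, not anything a real pool does): assume a once-daily payout, 100-block coinbase maturity, ~10-minute blocks, and the 6.25 BTC subsidy of the time, ignoring fees and variance. Blocks found in the last ~1000 minutes before the payout are still immature when it runs:

```python
# Toy model of the pre-commitment problem: how much of a day's mined
# coins are still immature (under 100 confirmations) at payout time.
# All numbers are illustrative assumptions, not real pool figures.

MATURITY_BLOCKS = 100   # coinbase outputs need 100 confirmations
BLOCK_MINUTES = 10      # average block interval

def immature_at_payout(blocks_found_per_day, subsidy_btc=6.25):
    """BTC mined inside the maturity window just before the daily payout."""
    window_minutes = MATURITY_BLOCKS * BLOCK_MINUTES          # ~1000 min
    blocks_in_window = blocks_found_per_day * window_minutes / (24 * 60)
    return blocks_in_window * subsidy_btc

# A pool finding 16 blocks/day (~100 BTC/day in subsidy) has roughly
# 69 BTC of that day's coins still locked when the payout runs:
print(round(immature_at_payout(16), 1))  # 69.4
```

That gap between "mined today" and "spendable today" is the float the pool has to cover from its own capital.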
The reasons I listed are generalizations of why pools haven't adopted it yet, from what I've seen in my direct experience ;)
- Relative to onchain payouts, where the pool just needs an address, LN requires additional components on both the pool and client side
- Liquidity does very much matter, in the sense that the pool needs to essentially 'pre-commit' funds to channels as opposed to paying out directly. Coinbase rewards into channel openings would be interesting!
- Existing consumer demand is generally easier to sell to a corporate team than the notion of creating demand
I see, my assumptions hinged on a payout node opening individual channels per client to guarantee reliable payouts. Using the greater network's liquidity could surely reduce the liquidity requirements on the pool side, though clients would then need to source their own inbound?
Sure, the bigger the channels, the less often transactions need to be performed. It seems the pool still needs some hundreds of bitcoin locked up in outbound channels? As payouts occur, you suggest looping rather than reopening, which seems to imply another cost for the client to perform? Would the client not prefer to simply close the channel, claim the funds onchain, and leave the source of their inbound with the closing costs? (Another cost for the pool, if they opened the channel.)
Monetizing the flow of the node makes sense, I'd be interested in seeing what revenues could be expected to offset expected costs. Maybe The Lightning Pulse could open-source a model of how to perform payouts? Definitely interested to see how Nifty would sketch it out
Can you expand on the cheapness?
In my rough estimate, say a pool pays out 100 bitcoin a day across 100 customers, and they want to do this 100% via Lightning. For a week of daily payouts, they need to commit 700 bitcoin, say as a single batch tx with 100 outputs, one to each client's node. Call it 0.01 BTC in fees per week to open these channels.
Seems like it's more expensive to perform than on-chain payouts, or am I missing something? That's not even including the nominal risk of hot wallet infra, payroll for employees to manage it, or the time-locked value of the BTC.
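For concreteness, the arithmetic behind that estimate (every figure is an assumption from my example above, nothing measured):

```python
# Back-of-envelope arithmetic for the estimate above; all figures are
# assumed, not measured.

DAILY_PAYOUT_BTC = 100      # assumed payout per day
DAYS = 7                    # channels funded for a week of payouts
CLIENTS = 100               # one channel per client
BATCH_OPEN_FEE_BTC = 0.01   # assumed fee for the 100-output batch open

committed_btc = DAILY_PAYOUT_BTC * DAYS        # capital locked up front
fee_per_client_btc = BATCH_OPEN_FEE_BTC / CLIENTS

print(committed_btc, fee_per_client_btc)  # 700 0.0001
```

The open fees themselves are tiny; the 700 BTC of pre-committed capital is the term that dominates.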
You're really saying fiatjaf is a "rent-seeker", a "non-technical midwit", and accusing him of a "fiat mindset"?
It is curious to me in this debate how many actual developers fall on one side and influencers on the other
There are a few assumptions that went into this paper to arrive at a rough ~750 BTC number. The 'vulnerability' is better described as an attack vector to steal sats from unwitting node ops. I'll try to boil it down:
- First, a threat actor needs to own at least the 30 largest routers on the network
The top 30 nodes by capacity encompass something like half of network capacity, and remember, that's not just outbound sats but inbound sats too. The attacker needs a significant portion of the network committed towards them.
- They then flood the network with HTLCs and make them unresolvable, triggering force closes (FCs)
- They need to keep the BTC mempool congested above 10 sat/vB, without clearing, for at least 2 weeks
This is because a default-configured watchtower broadcasts at 10 sat/vB, and force closes carry default timelocks of 2 weeks. If they can prevent you and your watchtower from getting the closing state confirmed in time, they can publish their own, ascribing themselves the balance of that channel.
- The threat actor hopes to gain more BTC from vulnerable node ops than they lose to justice txs triggered by well-defended node ops. You could even take into account the value of inbound liquidity and the assumed loss of fee revenue, because they were running a huge router network!
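The bullets above reduce to a simple break-even condition for the attacker. A hedged sketch; every variable name and number is illustrative, none come from the paper itself:

```python
# Break-even condition for the attack described above.
# All inputs are illustrative placeholders, not figures from the paper.

def attack_profit(stolen_btc, justice_losses_btc,
                  forgone_fee_revenue_btc, congestion_cost_btc):
    """Net BTC the attacker clears.

    stolen_btc: balances claimed from node ops who failed to get the
        closing state confirmed in time.
    justice_losses_btc: balances forfeited to justice txs from
        well-defended node ops.
    forgone_fee_revenue_btc: routing fees given up by burning a
        30-largest-routers position.
    congestion_cost_btc: cost of keeping the mempool above ~10 sat/vB
        for two weeks.
    """
    return (stolen_btc - justice_losses_btc
            - forgone_fee_revenue_btc - congestion_cost_btc)

# The attack only makes sense when this is positive:
print(attack_profit(750, 400, 200, 100))  # 50
```

Framed this way, every mitigation below works by shrinking `stolen_btc` or inflating the cost terms.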
How to mitigate?
Increase your default config settings; currently this can mean paying more than you need to for closes. LND is putting effort into improving its fee estimator.
Hopefully that helps y'all think on whether this is an actionable vulnerability or not!
PS for the nerds out there: the authors claim that their k-lopsided weighted max-cut problem hasn't been studied before... I found a Stack Exchange post giving a 1/2-approximation algorithm within 5 minutes of googling. I would take it with a grain of salt.
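For context, the classic randomized 1/2-approximation for plain weighted max cut (the flavor of result that Stack Exchange answer gives) is nearly a one-liner: assign each vertex to a side uniformly at random, so each edge is cut with probability 1/2 and the expected cut weight is at least half the total weight, hence at least half the optimum. A quick sketch on a fake 3-node graph:

```python
import random

# Randomized 1/2-approximation for weighted max cut: put each vertex
# on a random side; every edge crosses the cut with probability 1/2.

def random_cut(edges, vertices, rng):
    side = {v: rng.random() < 0.5 for v in vertices}
    return sum(w for u, v, w in edges if side[u] != side[v])

# Tiny illustrative graph: total edge weight is 6, so the expected
# cut weight under random assignment is 3.
edges = [("a", "b", 3), ("b", "c", 2), ("a", "c", 1)]
vertices = {"a", "b", "c"}
rng = random.Random(42)

avg = sum(random_cut(edges, vertices, rng) for _ in range(100_000)) / 100_000
print(round(avg, 1))  # hovers around 3.0, half the total weight
```

Whether that carries over to the paper's "lopsided" k-way variant is exactly the question, hence the grain of salt.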
A fun little experiment is querying the graph with your own node; using BOS on LND you'll probably get a number around ~1200 BTC
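If you want to reproduce the "top nodes hold half the network" figure yourself, a hedged sketch: dump the graph (e.g. LND's `lncli describegraph`, whose JSON lists channel edges with `node1_pub`, `node2_pub`, and a `capacity` field in sats) and sum the capacity touching the top k nodes. Field names follow LND's output; the tiny graph below is fake, for illustration only:

```python
# Sum channel capacity attached to the top-k nodes of a graph dump.
# Assumes LND-style describegraph JSON: edges carry node1_pub,
# node2_pub, and capacity (a string, denominated in sats).

def top_node_capacity(graph, k=30):
    per_node = {}
    for edge in graph["edges"]:
        cap = int(edge["capacity"])
        for key in ("node1_pub", "node2_pub"):
            per_node[edge[key]] = per_node.get(edge[key], 0) + cap
    top = sorted(per_node.values(), reverse=True)[:k]
    return sum(top) / 1e8  # sats -> BTC

# Fake two-channel graph for illustration:
graph = {"edges": [
    {"node1_pub": "A", "node2_pub": "B", "capacity": "100000000"},
    {"node1_pub": "B", "node2_pub": "C", "capacity": "50000000"},
]}
print(top_node_capacity(graph, k=1))  # B touches both channels: 1.5
```

Note each channel's capacity counts toward both endpoints here, which matches the "outbound plus inbound" framing above; divide differently if you want per-channel attribution.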