315 sats \ 6 replies \ @nullcount 22 May
I have doubts that advanced pathfinding techniques are even that useful in a highly malleable network like LN.
Pathfinding is huge in static networks like roads (traveling salesman, etc.).
But on the LN, if the path from point A to point B is long, you can just open a new path (a.k.a. a channel) directly to the destination, fairly cheaply, and have it operational in less than an hour.
101 sats \ 2 replies \ @ambosstech OP 22 May
Our benchmark testing is ongoing, and the initial results are extremely promising versus off-the-shelf LND.
In a dynamic graph like Lightning, the need for advanced computation is even greater.
10 sats \ 1 reply \ @nullcount 22 May
By "off the shelf LND", do you mean a new LND node that has not yet built a robust probabilistic routing model?
LND uses probabilistic routing to build a local model of payment success probabilities for every attempted path. After many payment attempts, an LND node should optimize its own pathfinding.
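For anyone unfamiliar, here's a rough Python sketch of the idea behind that local model (not LND's actual code; the baseline probability, half-life constant, and data structure are simplified assumptions of mine):

```python
import math
import time

APRIORI_P = 0.6     # assumed baseline success probability for untried hops
HALF_LIFE = 3600.0  # assumed seconds for a failure's penalty to halve

# mission-control-style memory: (src, dst) -> (last_fail_time, failed_amount)
history = {}

def record_failure(src, dst, amount_sats, now=None):
    """Remember that `amount_sats` failed to route over the src->dst hop."""
    history[(src, dst)] = (now or time.time(), amount_sats)

def success_probability(src, dst, amount_sats, now=None):
    """Estimate the chance that `amount_sats` routes over the src->dst hop.

    Fresh failures at or below the requested amount drag the estimate
    toward zero; the penalty decays exponentially as the failure ages,
    which is the flavor of LND's apriori estimator.
    """
    now = now or time.time()
    if (src, dst) not in history:
        return APRIORI_P
    fail_time, failed_amt = history[(src, dst)]
    if amount_sats < failed_amt:
        return APRIORI_P  # smaller amounts may still fit through
    weight = math.exp(-(now - fail_time) * math.log(2) / HALF_LIFE)
    return APRIORI_P * (1.0 - weight)
```

A router then prefers paths whose hops multiply out to the highest success probability, weighed against fees.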
0 sats \ 0 replies \ @ambosstech OP 23 May
We're testing both zeroed-out mission control and a learned mission control.
The apriori estimator is decidedly difficult to benchmark against!
20 sats \ 0 replies \ @kilianbuhn 23 May
I'm suspicious of machine-learning pathfinding in general. Big fan of machine learning, but classical algorithms are already good at these graph problems.
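For context, the classical baseline here is essentially Dijkstra over a fee-weighted channel graph; a minimal sketch (real LN routers add liquidity and success-probability penalties on top of fees):

```python
import heapq

def cheapest_path(graph, source, target):
    """Plain Dijkstra; `graph` maps node -> list of (neighbor, fee_msat)."""
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, fee in graph.get(u, []):
            nd = d + fee
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if target not in dist:
        return None  # no route found
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]
```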
0 sats \ 1 reply \ @nikotsla 26 May
Totally agree with this... what to benchmark will be an issue, and the network is liquid (highly malleable, as you said)... I don't know if the effort will pay off; that's too much computation to adapt to every change in the graph.
5 sats \ 0 replies \ @ambosstech OP 26 May
The computation and retraining are not intensive at the current size of the network graph. As the graph grows, the improvements in payment reliability offered by this type of calculation should increase versus anarchistic, source-based routing.
60 sats \ 0 replies \ @DarthCoin 23 May
BOYCOTT AMBOSS!
YOU HAVE BEEN WARNED.
20 sats \ 2 replies \ @twood 23 May freebie
- How many nodes and channels were in the dataset used to train this?
- I think a proper train/test/validation split is basically impossible here. For example, if one side of a channel lands in the training set, the other side might land in the test set, and you've leaked the answer for your model to learn. You can't fully sequester this information. How did you try to deal with this?
30 sats \ 1 reply \ @random_ 23 May
Group by channel ID and then split.
https://dplyr.tidyverse.org/reference/group_split.html
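In Python terms, a sketch of the same idea, assuming a scikit-learn-style workflow and a per-direction DataFrame with a `channel_id` column (both hypothetical here):

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# hypothetical dataset: one row per channel direction
df = pd.DataFrame({
    "channel_id": [1, 1, 2, 2, 3, 3],
    "direction":  [0, 1, 0, 1, 0, 1],
    "balanced":   [1, 1, 0, 0, 1, 1],
})

# grouping by channel_id keeps both directions of a channel on the
# same side of the split (the sklearn analogue of dplyr's group_split)
splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=42)
train_idx, test_idx = next(splitter.split(df, groups=df["channel_id"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]
```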
10 sats \ 0 replies \ @twood 23 May freebie
That leaks node-level features from the train set into the test/validation set then, no? If 90% of my channels are in the train set and the top-performing feature is "positional encoding", the train set can basically teach the model my node's positional encoding and whether my channels tend to be balanced. Then it just extrapolates that to my channels in the test/validation set.
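One (still imperfect) mitigation would be to partition by node rather than channel and drop any channel that straddles the split; a rough sketch, with the representation of `channels` assumed:

```python
import random

def node_level_split(channels, test_frac=0.2, seed=42):
    """channels: list of (node_a, node_b, features) tuples.

    No node appears on both sides of the split; channels connecting a
    train node to a test node are discarded entirely, which is the
    price of avoiding the node-feature leakage described above.
    """
    rng = random.Random(seed)
    nodes = sorted({n for a, b, _ in channels for n in (a, b)})
    rng.shuffle(nodes)
    cut = int(len(nodes) * (1 - test_frac))
    train_nodes, test_nodes = set(nodes[:cut]), set(nodes[cut:])
    train = [c for c in channels if c[0] in train_nodes and c[1] in train_nodes]
    test = [c for c in channels if c[0] in test_nodes and c[1] in test_nodes]
    return train, test
```

Even this leaks graph-positional information, since the test nodes' neighborhoods overlap the training graph, which is the point about not being able to fully sequester it.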
10 sats \ 0 replies \ @kingzing131 23 May
Any effort to solve scalability is a great opportunity.