
The language it's written in: C.
It is much harder to guarantee security against stack-busting attacks, and it is more prone to errors in memory management, specifically failing to free no-longer-used resources. Both are very common ways in which attackers break the security of network servers. C++ has the same problems, but they are somewhat mitigated by its object-oriented design: destructors and RAII make it harder to forget to write the cleanup code, whereas C gives you nothing at all.
Rust is very trendy at the moment because its memory management is enforced at compile time, through ownership rules and lifetime annotations, which gives very low runtime overhead and stronger protection against the two attack vectors described above. I dislike it because it is also object oriented in style and, much like C++, has far slower compilation as a result.
Go is a slightly older language which dispenses with the complex syntax of object-oriented code, and with it much of the memory-protection and cleanup bookkeeping, replacing it with a fully active garbage collector that provides the same stack and memory-leak protection. Go's throughput and memory management are less efficient because they rely on runtime heuristics instead of explicit memory lifetimes, but because the language is simpler and freeing memory is never the programmer's responsibility, the code is guaranteed free of these two vulnerability classes without either the learning curve of the complex syntax Rust uses for this, or the compilation time cost. Rust also basically copies, wholesale, the lazy compilation scheme Go uses, via Cargo, and adds a macro language on top. For reasons of simplicity, Go doesn't have a macro language; its interfaces (also found in Java) handle the dynamic typing requirements of generic programming.
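To illustrate that last point, here is a minimal sketch of interface-based polymorphism in Go; the `Hasher` name and the toy implementation are invented for the example, not from any real project:

```go
package main

import "fmt"

// Hasher is a hypothetical interface: any type with a matching Hash method
// satisfies it implicitly, with no declaration tying the type to the interface.
type Hasher interface {
	Hash(data []byte) []byte
}

// XORHasher is a toy implementation, just to show the mechanics.
type XORHasher struct{ Key byte }

func (x XORHasher) Hash(data []byte) []byte {
	out := make([]byte, len(data))
	for i, b := range data {
		out[i] = b ^ x.Key
	}
	return out
}

// Digest is written against the interface, so it accepts any Hasher
// without knowing the concrete type.
func Digest(h Hasher, data []byte) []byte {
	return h.Hash(data)
}

func main() {
	fmt.Printf("%x\n", Digest(XORHasher{Key: 0x2a}, []byte("hello")))
}
```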
I can understand why people think Rust is the new hotness, but ultimately the problem space of memory management is not such a big puzzle that it cannot be largely automated. Rust does have one advantage over Go in the throughput of bulk, CPU-bound processing: it works directly with kernel threads. Go instead has goroutines and atomic FIFO queues called "channels", which do not distribute CPU-bound work across processor threads quite as effectively, but which greatly reduce scheduling cost, meaning Go programs can have far lower response latency.
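A minimal sketch of what that looks like in practice (worker and job counts are arbitrary): goroutines fan the work out, a channel carries results back, and the runtime decides which OS threads run what:

```go
package main

import "fmt"

func main() {
	jobs := make(chan int)
	results := make(chan int)

	// A handful of goroutines pull jobs from one channel and push results
	// onto another; the Go runtime multiplexes them onto OS threads, so the
	// programmer never touches kernel threads directly.
	for w := 0; w < 4; w++ {
		go func() {
			for n := range jobs {
				results <- n * n // stand-in for real CPU-bound work
			}
		}()
	}

	go func() {
		for i := 1; i <= 8; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	for i := 0; i < 8; i++ {
		fmt.Println(<-results)
	}
}
```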
Go basically sacrifices some of this parallelism for latency. To get similar bulk, CPU-bound tasks to run as fast in Go, you have to build a subprocess management system: dedicated worker programs that deliver their results back to the controller over IPC. I used this approach on a CPU-mineable proof of work a few years ago in a project I was working on, and it gave a 20% increase in hashrate compared to letting the Go runtime schedule multiple goroutines.
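A rough sketch of the shape of that pattern (the `./worker` binary and the line-based protocol are invented for illustration, not the actual project code): the controller launches a dedicated worker process and talks to it over stdin/stdout pipes; in practice you would keep one such worker alive per CPU core.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

// runWorker launches a hypothetical standalone worker binary, feeds it one
// job over stdin, and reads the result back over stdout.
func runWorker(job string) (string, error) {
	cmd := exec.Command("./worker") // hypothetical dedicated worker program
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return "", err
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	// Send the job and close stdin so the worker knows input is finished.
	fmt.Fprintln(stdin, job)
	stdin.Close()
	// Read a single line of result back over the pipe.
	result, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		return "", err
	}
	return result, cmd.Wait()
}

func main() {
	res, err := runWorker("block-header-to-hash")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(res)
}
```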
The other thing Rust can be a little better at, due to its better parallelism, is high-load, network-heavy tasks like video streaming. To get the same optimisation in Go you essentially have to hijack the runtime's scheduler and replace it with manual scheduling, which clutters otherwise naive Go concurrent code with explicit handling of memory and event priority.
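One lightweight first step in that direction (well short of the full manual-scheduling rewrite described above, and with placeholder names) is pinning a latency-critical goroutine to its own kernel thread with `runtime.LockOSThread`, taking it out of the normal work-stealing rotation:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// streamFrames is a placeholder for latency-sensitive work, e.g. pushing
// video frames onto a socket. Locking it to an OS thread keeps the runtime
// from migrating it between threads while it runs.
func streamFrames(done chan<- struct{}) {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	for i := 0; i < 3; i++ {
		fmt.Println("frame", i)
		time.Sleep(10 * time.Millisecond) // stand-in for real frame pacing
	}
	done <- struct{}{}
}

func main() {
	done := make(chan struct{})
	go streamFrames(done)
	<-done
}
```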
Overall, it is my opinion that Go has got more things right when it comes to building secure servers; Rust's advantage shows only in less common edge cases, and Go's tradeoff between simplicity and security works for the majority of network services. This is why Docker, Kubernetes, IPFS, and several other well known systems are written in Go.
Basically, Go obsoletes C and C++ in every area, whereas Rust adds additional cognitive load to achieve the same end. Go is easy to learn, easy to read, and compiles very fast because its import and processing graphs are strictly acyclic, whereas the structure of OOP languages like Rust and C++ forces the compiler to use complex heuristics to terminate cycles in its processing and import graphs.
Go code is more maintainable when written in accordance with the idioms set out by the Go Authors and codified over the years. It is cheaper (faster) to train Go programmers, and bringing new people onto a project has a much shorter lead time. And lastly, as the Go runtime matures it shaves away more and more of its memory utilisation and parallelisation deficiencies, to the point that it is now closing the performance gap with fiddlier languages like Rust.
Rust is a good language for C++ and Java programmers, but Go is better for beginners and costs less to maintain overall.
But, circling back to lnd... that repository is a hellscape of non-idiomatic code and build steps, and despite it being the most used LN node, it seems like the funds mostly go to marketing while the devs are few in number, overworked, and probably underpaid.
CLN, on the other hand, seems to have plenty of developers, but that doesn't really compensate for the vastly larger risk of remote vulnerabilities in CLN that simply would not exist in either a Go or Rust version.
It's my intention to fork and fix lnd in the future, once Indra launches, because it's a travesty. btcd and neutrino are both well written apps, but lnd is a total dogpile.
Nothing is going to change the balance of threats and advantages between these two most popular LN nodes. Serious business services are never going to use CLN because of the ever-present risk of remote vulnerabilities.
And before you jump in with "LND got broken by those big witness transactions"... those were only DoS vulnerabilities, caused by overzealous resource management policies in the code that differed from Bitcoin Core. The fix was literally just changing a couple of numbers, which is why most LND nodes were down for less than a day. IMO, the lack of a specified, reasonable limit on witness sizes is a resource exhaustion vulnerability in Bitcoin whose attacks are yet to come.
Those attacks will be complex to orchestrate, but they will be devastating to the network. If the protocol and the core implementation adopted the same policies btcd used to have, the problem would be solved; otherwise it raises the spectre of another soft fork, this time to address possibly the first serious vulnerability in the protocol itself. Which is the legacy of the Blocksize Wars, btw.
Ah, I see. Thanks for your in-depth explanation!