
Testing this now but I can't reproduce it with 9735, can you show me the exact error?
reply
I think I got it, merging now...
The new change checks if the port is occupied before installing LND, and bumps to the next port if it is
Take it for another spin; it should work over the existing failed install, but if not, just stop the hung lnd first (systemctl --user stop lnd) so the script can confirm it's not lnd hogging the port
After it's up, if you cat ./lnd/lnd.conf you should see what port it is listening on; it increments, so likely 9736
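Conceptually the check is just "is anything already listening on this port, and if so, try the next one". A minimal sketch of that logic, assuming the ss tool from iproute2 (the actual script may do it differently):

    PORT=9735
    # keep bumping while something is already listening on $PORT
    while ss -ltn | grep -q ":$PORT "; do
        PORT=$((PORT + 1))    # 9735 taken -> try 9736, and so on
    done
    echo "lnd will listen on port $PORT"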
reply
0 sats \ 5 replies \ @ACYK 13h
Thanks for your efforts here. This is what I got:
Have to head back out. Will try again later. I went from Lightning.Pub to Alby Hub, so previous Lightning.Pub files were still around. Seems like the install may have worked? I'll have to figure out how to get the nprofile to pop up again. I have the nprofile saved from the last time I was running Lightning.Pub, but that isn't working as a source in Shockwallet. I get a "Cannot connect to source. error enrolling admin token. Nostr connection timeout" error in Shockwallet. Maybe wiping Lightning.Pub and starting from scratch with a fresh nprofile is the way to go. LND is also hanging around from the previous Shockwallet install. Wiping that and reinstalling might also help.
reply
Doh, I mistyped the cat path for lnd.conf:
cat ~/.lnd/lnd.conf
But yeah, if you installed with the old script there may be some issues with autostart/restart of the services, since that one used sudo; you'd want to clean those out:
Stop existing services: sudo systemctl stop lnd lightning_pub
Disable services: sudo systemctl disable lnd lightning_pub
Remove old systemd units: sudo rm /etc/systemd/system/lnd.service /etc/systemd/system/lightning_pub.service
Reload systemd: sudo systemctl daemon-reload
You can reset the admin credentials with rm ~/lightning_pub/admin.pub; that will create a new connect string, which you can view with cat ~/lightning_pub/admin.connect
If you have nothing you need to keep, you could instead rm -rf ~/lightning_pub ~/.lnd ~/lnd to be fully clean, so you can be sure the script makes fresh autostart services from scratch
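Once you rerun the installer you can sanity-check that it created user-level units this time rather than the old sudo ones (unit names assumed to match the old install):

    systemctl --user list-unit-files | grep -E 'lnd|lightning_pub'
    systemctl --user status lnd lightning_pub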
reply
0 sats \ 3 replies \ @ACYK 3h
Seems to still be having an issue with the previous install even after the steps above (up to and including rm -rf ~/lightning_pub ~/.lnd ~/lnd). It's still finding migration files somewhere. I should just try this on a separate machine at some point.
reply
Those are informational, no worries there; the migrations are the baked-in db schema, and the env is ephemeral
The lack of wallet status though may imply LND isn't happy: systemctl --user status lnd
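If that status shows it failing, the journal usually says why; for user units that's:

    journalctl --user -u lnd --no-pager -n 50    # last 50 log lines from lnd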
Happy to sync with you on Telegram etc.
reply
170 sats \ 1 reply \ @ACYK 1h
I got it working. I was hesitant to stop the AlbyHub LDK node as I wanted to avoid downtime there, but I decided to bite the bullet and do that.
After that, Lightning.Pub installed fine.
Afterwards, I couldn't start AlbyHub again as I got the following error: {"error":"listen tcp :8080: bind: address already in use","level":"error","msg":"echo server failed to start"}
My guess is this is the conflict which was preventing Lightning.Pub from starting properly when AlbyHub was already running.
If I again start AlbyHub first and try the fresh Lightning.Pub install, I get the same result: "Can't retrieve wallet status". Screenshot at bottom. It did have the "Port 9735 is in use. Checking next port..." line, but that must not have been the only issue.
Heading back to AlbyHub for now as I have other family members using it, but I'll get another machine going at some point to test out Lightning.Pub. Or, if you have other ideas on how to make them both work without wasting too much of your time, we could take that offline as well. Thanks for your help!
reply
ah.. 8080... that's the default REST port for LND, either LDK or Alby must also use it...
I think adding a line to lnd.conf to move LND's REST listener off 8080 should be enough to remedy that though, e.g.:
restlisten=localhost:8081
then restart the services: systemctl --user restart lnd && systemctl --user restart lightning_pub
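To confirm which process actually holds 8080 before and after the change, something like this works (assumes ss is available; add sudo to see processes owned by other users):

    ss -ltnp | grep ':8080 '    # -p shows the owning process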
For testing things on this same machine more generally though, you may want to look into using VMs for isolation. Assuming you've got memory to spare, you can boot up full operating systems with virt-manager:
That's actually how I test/dev across OSes/architectures.
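For example, on an apt-based distro (an assumption; the package name may differ elsewhere):

    sudo apt install virt-manager    # GUI front-end for libvirt/QEMU/KVM
    # then create a VM from an ISO via File > New Virtual Machine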
If resources are a little more scarce there are also Linux containers, sort of like Docker in that they're lightweight, but a little easier to use, like a VM. Ubuntu actually maintains the tooling for these:
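If it helps, a hypothetical quick start with LXD, assuming that's the tooling meant here:

    sudo snap install lxd
    lxd init --auto                    # accept default storage/network
    lxc launch ubuntu:24.04 pub-test   # spin up a fresh Ubuntu container
    lxc exec pub-test -- bash          # shell in and try the installer there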