0 sats \ 4 replies \ @drdoom 4 Sep 2023 \ parent \ on: Removing the last few middleman tech
Well I can keep spitballing; if we get too structured here I'll have to bill you by the hour haha
Though I am indeed building on my own right now, just fleshing things out & prototyping; it's why it's great to have conversations with like minds.
Definitely agree with you, let's not reinvent the wheel. In that regard, I've given the backend some thought too, and it ties in a little with what this would look like in terms of UX and how it works. IPFS is an example of something that is rather exotic, so I think we should be looking seriously at other existing file systems/networked systems already in production that guarantee file integrity, specifically the workhorses known as Git and ZFS, which both also happen to be software engineering pinnacles IMHO. The primary reason these two are so rock solid & widespread is that they are extremely focused on a limited feature set and absolutely nail it.
And like you said, DNS/CDN are pretty much fine, so my thinking is you bootstrap on them; nothing is going to go anywhere if it's not resolvable over HTTP. What we could have is 'nodes' that expose a port and are reachable over HTTP, complete with the addressing system, so that any given node can host all or some of the endpoints (content) over existing DNS + HTTP, e.g. mydomain.com/somehash123435dfs/mynote.md
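Just to make the addressing concrete, here's a rough sketch of what one of those nodes could look like (the folder layout, port, and URL shape are placeholders I'm making up here, nothing canonical): the node stores files under their SHA-256 hash and refuses to serve anything that no longer matches its own address.

```typescript
// Minimal sketch of a node serving hash-addressed content over plain HTTP.
// The hash in the URL path is the SHA-256 of the file, so any client can
// re-verify what it downloads. Paths and port are illustrative only.
import { createServer } from "node:http";
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";
import { join } from "node:path";

const CONTENT_DIR = "./content"; // hypothetical local store: ./content/<hash>/<filename>

const server = createServer(async (req, res) => {
  // Expect URLs shaped like /<sha256-hex>/<filename>, e.g. /somehash.../mynote.md
  const [, hash, filename] = (req.url ?? "").split("/");
  if (!hash || !filename) {
    res.writeHead(400).end("expected /<hash>/<filename>");
    return;
  }
  try {
    const data = await readFile(join(CONTENT_DIR, hash, filename));
    const actual = createHash("sha256").update(data).digest("hex");
    if (actual !== hash) {
      // Never serve content that no longer matches its own address.
      res.writeHead(500).end("stored file does not match its hash");
      return;
    }
    res.writeHead(200, { "Content-Type": "application/octet-stream" }).end(data);
  } catch {
    res.writeHead(404).end("not found");
  }
});

server.listen(8080); // reachable via ordinary DNS + HTTP, e.g. mydomain.com:8080/<hash>/mynote.md
```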
Hard pass on NFTs, but methods of authentication/permission are important, and that can effectively make this useful for communications too. Some way to accommodate licensable content may also be worth implementing to help individual creators publish/monetize their endpoints if they so desire.
Like you pointed out, we don't need a new tech stack. I would only suggest a clever content routing/addressing system combined with intuitive UI/UX & some aspect of branding; that aside, we can tackle the more pressing and challenging political, monopolization and man-in-the-middle vulnerabilities you highlight.
Your collaborative/crowdsourced/bot-driven purchasing strategy seems very promising. Perhaps something to decide early on is whether we want to work towards IRL hardware, i.e. getting rack space and/or having a data-center strategy.
Ok…. however you didn’t answer my question :) Let me ask it more succinctly:
What workflow exists today that your system would replace/do differently?
I still have no idea what you want to build. A “clever content routing/addressing system combined with intuitive UI/UX” is a new tech stack in my opinion. Which, as I said, I’m not opposed to. I am just looking at this from a product management / requirements point of view and trying to understand the “why”. The “how” isn’t of particular interest to me (at this moment).
reply
HTTP and the domain name system (and by extension, email) are today the fiat of internet information exchange. The endpoints are not immutable, nor is the content at those endpoints verifiable (in most typical situations the provider does not explicitly provide a checksum; SSL covers some aspect of general end-to-end security, but there's no checksum provided for the site's JavaScript bundle, for example). When an endpoint goes down, unless you have your own copy the content is gone (possibly recoverable on the Wayback Machine / Internet Archive, or, if you are lucky and it was a large file, from a torrent/seeder).
We already have verifiable, checksum-based systems like Git and ZFS (and torrents).
So it's not about replacing the internet 'workflow' but rather integrating these verifiable file systems under the hood to make a better internet, just like Bitcoin made a better money: the exchange of information becomes immutable and the endpoints permanent.
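The client side of "under the hood" could be as simple as this sketch (same made-up URL shape as the node example I gave earlier): the fetched bytes are only trusted if they hash to the address they came from.

```typescript
// Sketch of the client side of the same idea: fetch a hash-addressed URL and
// verify the payload against the hash embedded in the path before trusting it.
// The URL shape is the same illustrative assumption as in the node sketch.
import { createHash } from "node:crypto";

async function fetchVerified(url: string): Promise<Buffer> {
  const expected = new URL(url).pathname.split("/")[1]; // /<sha256-hex>/<filename>
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = Buffer.from(await res.arrayBuffer());
  const actual = createHash("sha256").update(data).digest("hex");
  if (actual !== expected) {
    throw new Error(`integrity failure: expected ${expected}, got ${actual}`);
  }
  return data; // safe to render: the bytes are exactly what the address promised
}
```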
At the risk of being too pie in the sky, another 'workflow' example is email. There are of course a dozen contenders for making the next 'email', including Nostr, but it has its imperfections and I don't necessarily think they are optimizing for what I describe here. When you send an email, you don't want it eaten by a man in the middle. This is a critical flaw with email right now, today: some messages are actually blocked at the provider level, like by Gmail. And for those not blocked/rejected, the providers' solution to spam is to read your emails and filter on your behalf; today, when you send someone an email for the first time you must assume a 10% chance you will go in the spam box, and about a 1% chance every 60 days that the recipient might look there and might actually see you in a sea of other messages.
A hash-based endpoint system (instead of routing through IPs and domains) solves this, and you can verifiably open your inbox, spam or no spam, and rest assured your mail provider or ISP hasn't filtered out messages; it's like a Bitcoin address: if someone sent you BTC, you are not missing that transaction. To prevent/mitigate spam we can use sats to let users set an inbound fee, not unlike how Stackernews does it: set your price for receiving a message from an untrusted sender, and now even if you do get spam, at least they paid you for the privilege.
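To illustrate the "nothing got filtered" part, here's a rough sketch of one way it could work (the message shape and the 'genesis' value are just placeholders I'm assuming): if each message commits to the hash of the one before it, the inbox viewer can detect a dropped or reordered message instead of silently missing it.

```typescript
// One illustrative way the "nothing was filtered out" guarantee could work:
// each message commits to the hash of the previous one, so a single missing
// or reordered message breaks the chain and the client can flag it.
import { createHash } from "node:crypto";

interface Message {
  body: string;
  prevHash: string; // hash of the previous message, "genesis" for the first
}

const hashMessage = (m: Message): string =>
  createHash("sha256").update(m.prevHash).update(m.body).digest("hex");

function verifyInbox(messages: Message[]): boolean {
  let prev = "genesis";
  for (const m of messages) {
    if (m.prevHash !== prev) return false; // a gap means something was dropped
    prev = hashMessage(m);
  }
  return true; // spam or no spam, the inbox is complete
}
```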
At the end of the day, you're going to need that 'killer app' to give these features first-class treatment (not to mention native integration w/ Lightning, for example), so yeah, a browser / inbox viewer is something I have been tinkering on.
I understand you are looking more at the logistical/infrastructure level vs yet more software engineering, but having common goals helps bridge systems, so perhaps we can find ways to overlap our endeavours & keep focused on doing what we respectively do best.
reply
Ok. I got it…. yes.
E-mail over IPFS perhaps? Something like the UUCP of old? :)
I am more “full stack” and definitely recognize the role software engineering principles play, as well as infrastructure.
I have seen many versions of “pay for deliverability” over the years. It already exists today. Go spin up a g-mail account and subscribe to marketing e-mails from say McDonalds/Ford/take your pick of bigco. Check out the headers. Do the same with hotmail/yahoo. You’ll see some interesting bits set. (I worked for an organization that used a particular service to ensure deliverability and I don’t want to get in trouble :) ) .
I have inherent concerns with any pay-to-deliver system being abused as it is now. Now, granted, in theory?, CAN-SPAM etc. requiring a pre-existing business relationship and (at some level) raw self-interest/capitalism prevents too much “abuse” (and a compliant one-click unsubscribe helps).
I like immutable/archivable/copyable end-points. I think that has so many uses. I suppose that’s what the blockchain gives us? Or, like you said, git/zfs (I suppose those are kind of blockchains at their core).
Verifiable deliverability is key. Whether that is “email” (or chat) or for things like digital goods, Gumroad or something. Especially when paying via a non-reversible payment method like BTC.
Thanks for taking the time to present examples. That really helped me understand where you are coming from.
This is exactly the kind of stack I would love to see running on top of the MorseNET system. Expanding access to the existing centralized system isn’t ideal (creating more on-ramps to hell lol).
reply
E-mail over IPFS perhaps? Something like the UUCP of old? :)
Neat idea! It was interesting to read up on UUCP just now. New reinvents old.
Go spin up a g-mail account and subscribe to marketing e-mails from say McDonalds/Ford/take your pick of bigco. Check out the headers. Do the same with hotmail/yahoo. You’ll see some interesting bits set. (I worked for an organization that used a particular service to ensure deliverability and I don’t want to get in trouble :) ) .
Oh, that's a really fascinating (and mildly disturbing yet not surprising) tidbit to see what they are up to there.
Again here we have an example of providers making decisions in the middle of what should otherwise be neutral communication.
However, if the user is the one doing the negotiation, i.e. setting the price they demand for new inbound messages from untrusted parties, does that mitigate your concern? The interface could also build in a zero- or low-fee 'spam' box that ultimately allows anyone to send a message to anybody (or any endpoint), provided there is some way to reliably transmit those messages (which, depending on our design, could meet the same fate as an exceedingly low fee on a BTC tx).
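A rough sketch of that negotiation (thresholds and field names are made up for illustration): the recipient's own policy, not the provider's, decides whether an untrusted message lands in the inbox or the low-fee 'spam' box.

```typescript
// Sketch of the user-negotiated inbound fee: the recipient sets their own
// price, trusted senders bypass it, and underpaid messages still land in a
// zero/low-fee "spam" box instead of disappearing.
interface Inbound {
  sender: string;
  satsAttached: number;
  body: string;
}

interface InboxPolicy {
  priceForUntrustedSats: number; // set by the user, not by the provider
  trustedSenders: Set<string>;
}

function routeMessage(msg: Inbound, policy: InboxPolicy): "inbox" | "spam" {
  if (policy.trustedSenders.has(msg.sender)) return "inbox";
  return msg.satsAttached >= policy.priceForUntrustedSats ? "inbox" : "spam";
}

// e.g. routeMessage({ sender: "unknown", satsAttached: 100, body: "hi" },
//                   { priceForUntrustedSats: 500, trustedSenders: new Set() }) === "spam"
```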
I like immutable/archivable/copyable end-points. I think that has so many uses. I suppose that’s what the blockchain gives us? Or, like you said, git/zfs (I suppose those are kind of blockchains at their core).
Exactly, these are local blockchains (with free addressing/hashing out of the box) that each node can install/include as part of its (Docker and/or perhaps Nix) stack to provide verifiable file integrity. This hypothetical protocol could then be used to reflect that verification in the user's UI, showing them that they are indeed looking at the correct file & that everything is working as intended.
Since each node can run Git or ZFS in isolation, there is not one 'the blockchain' to download; operators are hosting mini-blockchains and can choose to host/sync/replicate as many or as few of them as they desire (i.e. maybe you just want to host your own website and that's it; others might choose to host your site or a number of their favorite sites; a documentary filmmaker might choose to host his films).
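To make the selective hosting concrete, here's a rough sketch (the config shape and status values are my own invention, not a protocol): the operator lists which content sets to replicate, the node re-checks each file against its published checksum, and that result is exactly the signal the UI would surface.

```typescript
// Sketch of a node's selective hosting: the operator lists which content sets
// (their own site, a friend's site, some films) to replicate, and the node
// re-checks every replicated file against its published checksum.
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

interface HostedFile {
  localPath: string;
  expectedSha256: string; // the published checksum for this file
}

interface HostedSet {
  name: string; // e.g. "my-website", "doc-films"
  files: HostedFile[];
}

type Status = "verified" | "corrupt" | "missing";

async function checkSet(set: HostedSet): Promise<Record<string, Status>> {
  const report: Record<string, Status> = {};
  for (const f of set.files) {
    try {
      const data = await readFile(f.localPath);
      const actual = createHash("sha256").update(data).digest("hex");
      report[f.localPath] = actual === f.expectedSha256 ? "verified" : "corrupt";
    } catch {
      report[f.localPath] = "missing";
    }
  }
  return report; // this is what the UI would surface as a green/red indicator
}
```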
Either way, some kind of distributed index/ledger mechanism can be deployed whereby each node broadcasts the last time it performed a proof of confirmation/integrity check for having the correct checksum of a given file published to said index, or broadcasts new files. This is where I have been actively researching how Bitcoin integrates; as you know, it's the world's best timestamping system. And possibly we can even creatively store the index directly on the blockchain too. Tangent: the initial design doesn't necessarily need it, since ZFS/Git + HTTP + torrents and checksumming should make replication/downloading/delivery reliable and hopefully fast enough, but I have even seriously evaluated Bitcoin SV, since you get unlimited block size to work with there and so content could actually be hosted directly on the chain (... Craig's chain, that is... yeah, have pretty much scrapped that idea)
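And a rough sketch of what one of those broadcasts could contain (the record fields, key handling, and the anchoring step are all assumptions on my part): the node signs a small "I checked this hash at this time" record, and a digest over a batch of records is what would eventually get timestamped via Bitcoin (an OP_RETURN output, or an OpenTimestamps-style commitment); that on-chain part is deliberately left out here.

```typescript
// Sketch of an index broadcast: a node signs a record saying "I checked this
// file hash at this time". A digest over many such records could then be
// anchored into Bitcoin for the actual timestamping.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

interface Attestation {
  fileSha256: string;
  checkedAt: string; // ISO timestamp of the local integrity check
  nodeId: string;    // hypothetical identifier for the broadcasting node
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signAttestation(a: Attestation): { record: Attestation; sig: string } {
  const payload = Buffer.from(JSON.stringify(a));
  return { record: a, sig: sign(null, payload, privateKey).toString("hex") };
}

function verifyAttestation(record: Attestation, sigHex: string): boolean {
  const payload = Buffer.from(JSON.stringify(record));
  return verify(null, payload, publicKey, Buffer.from(sigHex, "hex"));
}

// The digest a node might anchor on-chain: a hash over its current batch of records.
const indexDigest = (records: Attestation[]): string =>
  createHash("sha256").update(JSON.stringify(records)).digest("hex");
```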
Extra points, btw: both Git and ZFS have snapshot/revision capabilities that make it trivial to empower the user to roll back to previous versions and verify the hashes/checksums for historical files too. Imagine having that capability built into Wikipedia. Right now we just have to trust that their "History" page is not hiding/skipping certain changes from purview.
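Btw, this is why Git gives you that verification essentially for free. A small sketch of how a blob ID is derived (it mirrors what `git hash-object <file>` prints for a default SHA-1 repo), so any historical version of a file can be re-hashed and checked against the ID recorded in a commit:

```typescript
// A file's Git blob ID is just sha1("blob <byte-length>\0" + content), so any
// historical version can be re-hashed and compared against the recorded ID.
import { createHash } from "node:crypto";

function gitBlobId(content: Buffer): string {
  const header = Buffer.from(`blob ${content.length}\0`);
  return createHash("sha1").update(Buffer.concat([header, content])).digest("hex");
}

// e.g. gitBlobId(Buffer.from("hello\n")) === "ce013625030ba8dba906f756967f9e9ca394464a"
```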
Thanks for taking the time to present examples. That really helped me understand where you are coming from.
No problem, as you can see I have no qualms ranting on here. I'm coding every day on this so it is always fresh on my mind; also working towards a public beta, but one-man product development can be a rather slow pace. The main driver is that I want to share and publish content knowing it's not going into a black hole. And I think a lot of talented devs & content creators alike out there realize they are increasingly playing a rigged game by publishing content on the www; sure, you can avoid the controlled platforms/social media and publish to your own domain (and pay increasingly gouging annual renewal prices). But even SEO spam aside, there's a deeper dark pattern affecting the discovery of blogs and independent sites: search engines are quietly pruning/adjusting their algos to intentionally block these very independent sites by design (YouTube is glaringly bad, but you are seeing the same thing slowly happen to Google results too).
I had a somewhat popular Geocities website back in the day, and maybe you also recall a special charm about that era. The good old Yahoo category index would let you browse the web and discover at your own pace. A non-JavaScript-by-default internet is also something I long for again (despite being a full stack JS coder today).
You are obviously on the ball, seeing similar patterns. As much as I love to optimize for nostalgia and improved UX, it is important to plan ahead for the worst; huge props to you in that regard. Indeed we should take a cue from the signs and have something ready before the next 'thing' happens.
This is exactly the kind of stack I would love to see running on top of the MorseNET system. Expanding access to the existing centralized system isn’t ideal (creating more on-ramps to hell lol).
Same dilemma with money. On one hand, we need to use Bitcoin wherever possible if we are ever to have a chance of swaying the masses. On the other, the masses use fiat, so we can't throw the baby out with the bathwater. I love the notion of hedging a bet on a better internet future, much like Bitcoin is promising for money, while straddling that balance of doing business with the devil for as long as necessary for the rest of the world to transition.
Sounds like you are working with some clientele who need mission-critical end-to-end file & messaging integrity.
Will do some reading up on shortwave; I'm completely new to that stuff. Other Bitcoiners are experimenting with HAM radio too, so you will at least have a small market of guys willing to launch blimps with you :)
reply