It's not that it's not signed, it's that it's TLS-signed as you say... but even a native app store is still using TLS for pretty much everything.
What superiority is there in a checksum or a PGP key vs TLS if you're communicating that checksum or PGP key over TLS in the first place? If TLS is broken, everything is (and I'd agree that's plausible, since I just assume the NSA always has the drop on everyone).
So, all things being equal with TLS, it comes down to trusting the publisher in both scenarios too. I don't think there's a material percentage of people halting auto-updates who aren't just self-deploying; that seems like a bit of a reach.
People do shit up their browsers with extensions and other stuff that exposes their storage; maybe OSes are better at storage separation than browsers, considering that... but extensions are just one form of social-engineering attack a native app could iterate on.
> What superiority is there in a checksum or PGP key vs TLS if you're communicating that checksum or PGP key over TLS in the first place?
It matters a lot because of who (or what) controls the signing key.
Consider bitcoin core. The devs who sign the bitcoin core releases each have a static PGP key which they publish on github. I download and store those PGP keys over TLS, so if my TLS connection is compromised at the time of my PGP key download, then I'm screwed.
But if my initial PGP key download is safe (or if the keys are PGP-signed by keys I already had and trust) then I can use them to verify any future download of bitcoin core, regardless of transport mechanism. I could download a signed bitcoin core from a torrent, or a plaintext FTP bucket, or a sketchy forum, and if the signatures on it are valid, then the download is authentic (assuming the signing devs are honest).
Note how the signing keys are controlled solely by the devs who built/reproduced bitcoin core, and so the signatures could only be made by them. I don't need to care about the provenance of the bitcoin core build I downloaded, as long as I trust the devs who signed off on that build to keep their signing keys safe.
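
As a rough sketch of that verification flow (a minimal example, not the official procedure; the file and artifact names are illustrative, though bitcoin core does ship a signed SHA256SUMS manifest roughly along these lines), assuming gpg is on PATH and the devs' keys are already imported:

    # Verify a release downloaded over ANY transport, given trusted PGP keys.
    # Assumes a SHA256SUMS manifest plus a detached signature SHA256SUMS.asc;
    # file names here are placeholders for whatever the release actually ships.
    import hashlib
    import subprocess
    import sys

    MANIFEST = "SHA256SUMS"           # hashes of every release artifact
    SIGNATURE = "SHA256SUMS.asc"      # detached PGP signature over the manifest
    ARTIFACT = "bitcoin-core.tar.gz"  # whatever was downloaded, from wherever

    # 1. Check the manifest's signature against keys already in the local
    #    keyring. This step doesn't care whether the files came over TLS,
    #    a torrent, plaintext FTP, or a sketchy forum. (A stricter check
    #    would also confirm *which* key signed, e.g. by fingerprint.)
    result = subprocess.run(
        ["gpg", "--verify", SIGNATURE, MANIFEST],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.exit("bad or missing signature:\n" + result.stderr)

    # 2. Hash the downloaded artifact and compare it to the signed manifest.
    sha256 = hashlib.sha256()
    with open(ARTIFACT, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)

    expected = {}
    with open(MANIFEST) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                expected[parts[1].lstrip("*")] = parts[0]

    if expected.get(ARTIFACT) != sha256.hexdigest():
        sys.exit("hash mismatch for " + ARTIFACT)

    print("signature and hash both check out")

The only trust anchor in that flow is the keyring; the transport is irrelevant once the keys are in place.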

Compare that with TLS.
A TLS server must possess two things: a certificate and a secret key. When a user's browser loads a PWA from a server, it checks that the server's certificate chains to one of the certificate authorities (CAs) trusted by the user's browser, confirms via the handshake that the server holds the private key matching the public key in that certificate, and checks that the certificate is issued for the correct domain. If these checks pass, the connection can proceed.
All the TLS certificate really proves, then, is that the server has been authorized by a CA to control a given domain. It says nothing about the origin or authenticity of the data transmitted by that server. It doesn't even prove the user is talking to the same server today as they were yesterday, because certificates are intentionally set to expire and rotate regularly.
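
To make that concrete, here's roughly what those checks amount to, as a minimal sketch using Python's standard ssl module (the hostname is a placeholder):

    # What a TLS client actually verifies: a chain to a trusted CA, validity
    # dates, and a hostname match. Nothing here says anything about who
    # authored the bytes the server sends afterwards.
    import socket
    import ssl

    HOST = "example.com"  # placeholder domain
    ctx = ssl.create_default_context()  # loads the trusted CA bundle

    with socket.create_connection((HOST, 443)) as sock:
        # The handshake checks the certificate chains to a CA in the trust
        # store, hasn't expired, matches server_hostname, and that the server
        # can prove possession of the matching private key.
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            print("issued to:", dict(x[0] for x in cert["subject"]))
            print("issued by:", dict(x[0] for x in cert["issuer"]))
            print("expires  :", cert["notAfter"])
            # That's the whole guarantee: this server is, today, blessed by
            # some CA for this domain. Whatever it serves is taken on faith.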
Note how the signing key with the real power here is controlled by the certificate authority. The server's key is just a secondary key whose bearer has been 'authorized' by the CA to act as steward of a given domain. There are so many attack vectors in this scheme it's difficult to even know where to begin, but the primary one is that the server can just serve malicious data.
That can't happen if you verify a build with a PGP key controlled personally by the developer.