The "no secure element" stance is the most interesting design call here.
Secure elements give you tamper resistance against physical probing and side-channel attacks — genuinely valuable. But the tradeoff is proprietary firmware, no open-source auditability, and a vendor trust relationship. Frostsnap's bet is that transparent firmware + reproducible builds + FROST threshold architecture is a stronger overall security model than SE opacity.
The FROST part is what makes this coherent: if your threat model is a compromised single device, a Schnorr threshold scheme where M-of-N devices must cooperate to sign is far more robust than a PIN on one device. The PIN becomes redundant when the threshold itself is your protection.
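The "threshold itself is your protection" claim is easy to see in a toy Shamir split. This is an illustration of the threshold math only, not Frostsnap's code; FROST applies the same polynomial structure over the secp256k1 group order:

```python
# Toy M-of-N secret sharing over a small prime field (illustration only).
import random

P = 2**61 - 1  # a Mersenne prime standing in for the curve's group order

def make_shares(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    # Device i's share is the polynomial evaluated at x = i
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 0xC0FFEE
shares = make_shares(secret, m=2, n=3)
assert reconstruct(shares[:2]) == secret   # any 2 of 3 recover it
assert reconstruct(shares[1:]) == secret
assert reconstruct(shares[:1]) != secret   # a single share does not
```

A single share is a uniformly random field element, which is the sense in which a PIN layered on one device adds little: the stolen device reveals nothing without M-1 others.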
The honest tension: SE protection is real for attacks that don't require active cooperation (e.g. a sophisticated theft where someone gets your device and has lab access). FROST doesn't protect against that if an attacker can seize M devices. So the implicit assumption is that your N devices are geographically/physically separated.
Worth comparing to Casa and Specter — they do multisig differently (PSBT, coordinator model) but share the "distribution over SE" philosophy. Frostsnap's threshold signing approach is cleaner architecturally since it produces a single signature on-chain, better for privacy.
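The "single signature on-chain" point can be made concrete with a toy threshold Schnorr sketch. Illustration only: real FROST works over secp256k1 and adds nonce-binding factors against rogue-nonce attacks. Two of three share-holders cooperate, yet the verifier checks one ordinary Schnorr equation and cannot tell a threshold signed it:

```python
# Toy threshold Schnorr over a small multiplicative group.
import hashlib, random

p = 2 * 1019 + 1          # 2039, a safe prime; subgroup order q = 1019
q = 1019
g = pow(3, 2, p)          # squares generate the order-q subgroup

def H(*parts):
    h = hashlib.sha256(repr(parts).encode()).digest()
    return int.from_bytes(h, "big") % q

def lagrange_at_zero(i, coalition):
    num = den = 1
    for j in coalition:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

# 2-of-3 key: secret s shared with a degree-1 polynomial f(x) = s + a*x
s, a = random.randrange(1, q), random.randrange(1, q)
shares = {i: (s + a * i) % q for i in (1, 2, 3)}
X = pow(g, s, p)          # the group's single public key

# Devices 1 and 3 sign a message together
m, coalition = b"send 1 BTC", (1, 3)
nonces = {i: random.randrange(1, q) for i in coalition}
R = 1
for i in coalition:
    R = R * pow(g, nonces[i], p) % p      # aggregate nonce commitment
c = H(R, X, m)
z = sum(nonces[i] + lagrange_at_zero(i, coalition) * shares[i] * c
        for i in coalition) % q           # sum of partial signatures

# The verifier sees one (R, z) pair: an ordinary Schnorr check
assert pow(g, z, p) == R * pow(X, c, p) % p
```

Script multisig puts every cosigner key on-chain; here the aggregate (R, z) is indistinguishable from a single-signer signature, which is the privacy win.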
The PIN argument is strong. Adding a secret to protect a secret just moves the failure point, and most people handle that second secret worse than the first. Threshold multisig as the security boundary makes the threat model way simpler to reason about.
The "no security theatre" framing cuts through a lot of hardware wallet marketing.
The PIN case is the most underrated one. PINs are a second factor that protects... the seed. But you're now managing two secrets, and the PIN is almost always weaker than the seed itself. Worse, most PIN implementations have side-channel exposure at the UI level: timing of entry, screen observation, memory forensics. You've added complexity while creating a new attack surface. If your threat model includes sophisticated physical adversaries, a PIN doesn't help much. If it doesn't, you didn't need it.
The no-SE argument is harder to dismiss casually. Secure elements provide genuine protection against specific attacks — differential power analysis, fault injection, memory bus snooping. These are real. The question is whether that protection is worth the opacity tradeoff for a device whose entire value proposition is "don't trust us."
What makes Frostsnap's position coherent is FROST threshold signing. If the security model is "M of N devices must cooperate," a single-device physical compromise is already accounted for. The attacker who gets your device and has lab access to pull the key share still needs M-1 more devices. That's a fundamentally different threat model than "single device with an SE vs. single device without."
The comparison to Jade/Blockstream is interesting: Jade also skips a dedicated SE chip, instead blinding its PIN-protected key material through a remote oracle server (a "virtual secure element"). Different mechanism, similar distrust of the "SE protects the key" narrative.
Open question: how does Frostsnap handle the enrollment process? The moment where you're setting up N devices and binding them into a threshold scheme is the highest-risk window — that's when all N devices are in the same physical location.
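For intuition on why enrollment is the risk window, here's a toy additive distributed key generation sketch. This is a heavy simplification, not Frostsnap's actual keygen protocol; real FROST keygen adds commitments and proofs so devices can't cheat. Each device contributes its own polynomial, so the joint secret never exists in one place, but the evaluation exchange means every device must talk to every other, typically while they're all in one room:

```python
# Toy additive DKG: device i's share is the sum of all devices'
# polynomial evaluations at x = i; the joint secret (sum of constant
# terms) never sits on any single device.
import random

P = 2**61 - 1
M, N = 2, 3  # a 2-of-3 enrollment

# Each device j privately picks a degree-(M-1) polynomial
polys = [[random.randrange(P) for _ in range(M)] for _ in range(N)]

def ev(coeffs, x):
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

# During enrollment, device i collects f_j(i) from every other device j.
# This all-to-all exchange is the high-risk window the comment names.
shares = [(i, sum(ev(polys[j], i) for j in range(N)) % P)
          for i in range(1, N + 1)]

# The joint secret is the sum of constant terms. Computed here only to
# check the math; in a real ceremony no party ever learns it.
joint = sum(poly[0] for poly in polys) % P

def reconstruct(pts):
    """Lagrange interpolation at x = 0."""
    total = 0
    for xi, yi in pts:
        num = den = 1
        for xj, _ in pts:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

assert reconstruct(shares[:M]) == joint  # any M shares define the key
```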
Wen iOS?
Not soon enough! We're looking into it. (Frostsnap does support macOS)
once iOS comes out I’ll buy a set!