100 sats \ 0 replies \ @final 5 Aug \ parent \ on: Searching on Stacker.news and ordering by "recent" - results are not in order meta
I'm in favour of the stricter search from before. The current behaviour makes it harder for me to find a post about a specific thing when the query is a common word. It also made me take extra steps when I needed to do help/support replies about GrapheneOS, as I didn't always see who was discussing it at first.
GrapheneOS is focused on privacy; the security benefits aren't mutually exclusive with that. There are a lot of per-app privacy features, like Storage and Contact Scopes and the Sensors and Network permission toggles for apps, plus OS-level features like per-connection Wi-Fi MAC randomization. Unlike CalyxOS, we also don't connect to Google services by default, which they do for their connectivity check, DNS check, network time, and hardware and DRM attestation services. Their microG service also runs with privileged access and isn't sandboxed.
I posted a link to a comparison table at #1065801 that explains things, but there's a lot more to it.
OP should be aware that CalyxOS development is officially halted and they no longer provide any instructions for new installs.
If they want a comparison, they can check out:
People employed at these organisations have been going to news outlets claiming that when they see a Pixel phone, they assume it belongs to a drug dealer (insane and ludicrous hyperbole) and that this is apparently our fault.
Since then, numerous news sites in different languages have been posting the same story, mainly repurposing the same talking points. It's just a news campaign trying to claim we are enablers of illicit activities.
In a twist of irony, here's an article from Citizen Lab on how the same Spanish government used exploits against political opponents in Catalonia:
Bear in mind that even police using these will almost entirely be using them against people who haven't been convicted of a crime, based on suspicion alone. GrapheneOS would have been more likely to protect them.
I work for GrapheneOS, but I'm not a developer; it would conflict with other work and I don't do any Kotlin app development. That may change soon. As it stands, GrapheneOS has 10 developers, at least 7 of whom work as full-time developers paid by the Foundation. There's also GrapheneOS Foundation staff, OS support, and some volunteer community mods.
Usually I help the team with support matters or any discussion about forensic kits like Cellebrite. I also help proof the more technical posts like #774701, #670170 and #455267.
I may be partially to blame for their interest in posting on Nostr... but it needs to be done right.
I didn't get a notification for this reply - I didn't mean to ignore it! I only just saw it while searching for recent Cellebrite news on here.
As far as I know, most Cellebrite devices work by plugging in the device. If you enable Lockdown Mode and your phone is locked, even after AFU, iOS will refuse any data connections over USB.
They can bypass this restriction. Cellebrite don't mention Lockdown Mode in the Premium documentation because it doesn't change anything for them. Users from a law-enforcement forensics chat room we previously monitored also still say this is the case (they claim to have special cables carrying a payload that bypasses it) and that it isn't exclusive to Cellebrite. Apple could potentially fix this; they did recently add an automatic reboot feature that pissed the forensic companies off.
People there are still leaking chats saying this is the case, like here: (source)
For bespoke cases, the client would pay Cellebrite to have their expert teams find a way in themselves (called Advanced Services).
They’d have to either exploit something from inside the phone or do a memory extraction which isn’t exactly easy.
It's absolutely possible they could do that, but they'd hate to do the former. They're both exploits that would meet the objective, but it's apples and oranges.
They like their exploits to have as minimal a data footprint as possible, because if their extraction methods modify the owner's data, it can be used as a defence in court that the evidence was tampered with, which risks making it inadmissible. For example, Cellebrite have an APK downgrade feature for downgrading apps or OS components to outdated, vulnerable versions on Android to aid extractions. They say it is an absolute last resort when every other method has been exhausted, including attempting physical attacks. They could do it, but remote access à la NSO Group is for a different type of customer than what Cellebrite sells to.
I don’t think graphene can protect from a memory extraction? I haven’t looked at the latter in much detail
The hardened memory allocator in GrapheneOS zeroes memory when it is freed. It protected against a forensic company that exploited the stock OS by RAM dumping via a bootloader exploit to get a derived hash they could brute force the OS PIN/password with. GrapheneOS received bounties and ASB (Android Security Bulletin) credits for reporting it and building a fix (post here), but the stock OS still falls behind what GrapheneOS does.
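To illustrate the zero-on-free idea, here's a minimal conceptual sketch in C. It is not GrapheneOS's actual hardened_malloc; the tracked_buf/tracked_alloc/tracked_free names are made up for the example. The point is just that wiping a buffer before handing it back to the allocator means freed secrets aren't left sitting in RAM waiting for a later memory dump.

```c
/* Conceptual zero-on-free sketch, not real hardened_malloc code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct tracked_buf {
    void  *ptr;
    size_t len;
};

/* Allocate and remember the size so we know how much to wipe later. */
static struct tracked_buf tracked_alloc(size_t len)
{
    struct tracked_buf b = { malloc(len), len };
    return b;
}

/* Zero the memory before freeing it. Real code would use something
 * like explicit_bzero so the compiler can't optimise the wipe away. */
static void tracked_free(struct tracked_buf *b)
{
    if (b->ptr != NULL) {
        memset(b->ptr, 0, b->len);
        free(b->ptr);
        b->ptr = NULL;
        b->len = 0;
    }
}

int main(void)
{
    struct tracked_buf secret = tracked_alloc(32);
    if (secret.ptr == NULL)
        return 1;
    snprintf(secret.ptr, secret.len, "derived key material");
    /* ... use the secret ... */
    tracked_free(&secret);   /* contents are wiped, not just unlinked */
    return 0;
}
```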
Nope, some shitty EncroChat-style service run by some Dutch(?) criminal gangs. They'd sell phones with their own messaging service at an unreasonable price markup. The Matrix they're talking about has had its website seized, by the looks of it.
They're popular amongst targeted (state-level?) attack campaigns with known victims numbering in the tens or low hundreds. They're pretty advanced, but there isn't much benefit to a bootkit beyond absolute persistence compared to a zero-click remote 0-day. Malicious UEFI firmware that infects the OS on reboot has been observed in the wild before, and the NSA also had firmware attacks for weaponizing hard drives back in the Snowden era.
CosmicStrand, MoonBounce, MosaicRegressor and BlackLotus come to mind as the best examples.
They're undetectable by the operating system, since what is executed runs with the highest privilege. Windows Defender can do nothing against unknown malware, and Linux is Linux. Forensic analysis of the device can reveal the attacks, although this is not an automated process and you'd need experience in DFIR to analyse a device and see whether you were infected by this.
Bootkit firmware has to drop malware within the installed OS for command and control and to monitor the victim's activities. Forensic analysis of the disk for potential IoCs, network traffic analysis to find connections to a potential C2 server, and memory dumping (for fileless malware) are some ways to find out. You'd also need reverse-engineering experience to confirm suspicious activity. This is also why people who talk about "hardware backdoors" to security researchers get laughed at: such backdoors are only useful while they remain unknown and limited to a small group of people.
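As a rough illustration of the disk/firmware IoC side of that, here's a toy signature scan in C. It assumes you already have a dump of the SPI flash or UEFI firmware image (e.g. from a hardware programmer) and uses a placeholder byte pattern rather than any real published indicator; actual DFIR work parses the firmware volumes and goes far beyond a raw byte search.

```c
/* Toy IoC scan over a raw firmware dump: report offsets where a
 * known-bad byte pattern appears. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <firmware-dump.bin>\n", argv[0]);
        return 1;
    }

    /* Placeholder signature; a real scan would use published IoCs. */
    const unsigned char ioc[] = { 0xDE, 0xAD, 0xBE, 0xEF };

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    unsigned char *dump = malloc((size_t)size);
    if (!dump || fread(dump, 1, (size_t)size, f) != (size_t)size) {
        fprintf(stderr, "failed to read dump\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    /* Naive sliding-window match against the signature. */
    for (long i = 0; i + (long)sizeof(ioc) <= size; i++) {
        if (memcmp(dump + i, ioc, sizeof(ioc)) == 0)
            printf("possible IoC match at offset 0x%lx\n", i);
    }

    free(dump);
    return 0;
}
```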
Microsoft tried to add boot security (called System Guard) to prevent firmware-based attacks on a line of PCs called "Secured-core", but it's a questionable half-measure that avoids the real solution. Dmytro Oleksiuk (cr4sh) is a very good researcher who develops a lot of PoC UEFI bootkits, and he has some good material on the subject.
Desktop boot security is fucked. "Secure Boot" is nothing like the Verified Boot used in Android, iOS, GrapheneOS, ChromeOS and ARM Macs. Desktop OSes need to move towards being adminless and make features like app sandboxing mandatory. Android works really well with verified boot because most of the OS is an immutable, adminless workspace that keeps everything the user does in its own place. Desktop OSes also trust connected hardware and its drivers far too much.
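To give a feel for what verified boot actually does, here's a heavily simplified chain-of-trust sketch in C. Real implementations use signatures rooted in hardware keys, dm-verity for the system partition and rollback protection; the struct and function names here are invented, and a toy FNV-1a hash stands in for the real cryptography.

```c
/* Simplified chain-of-trust demo: a stage is only "booted" if its
 * measured hash matches the value pinned by the previous stage. */
#include <stdint.h>
#include <stdio.h>

/* Toy FNV-1a hash standing in for a real digest/signature check. */
static uint64_t fnv1a(const unsigned char *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

struct stage {
    const char          *name;
    const unsigned char *image;        /* stand-in for the stage's code */
    size_t               image_len;
    uint64_t             trusted_hash; /* pinned by the previous stage  */
};

/* Refuse to hand off to a stage whose measured hash doesn't match. */
static int verify_and_boot(const struct stage *s)
{
    uint64_t measured = fnv1a(s->image, s->image_len);
    if (measured != s->trusted_hash) {
        printf("%s: hash mismatch, refusing to boot\n", s->name);
        return -1;
    }
    printf("%s: verified, handing off\n", s->name);
    return 0;
}

int main(void)
{
    const unsigned char os_image[] = "immutable system image";
    struct stage os = {
        "OS", os_image, sizeof(os_image),
        /* In a real system this comes from a signed header, not from
         * hashing the image we're about to verify. */
        fnv1a(os_image, sizeof(os_image))
    };

    return verify_and_boot(&os) == 0 ? 0 : 1;
}
```

Tampering with the image (a bootkit modifying the OS) changes the measured hash, so the earlier stage refuses to boot it; that is the property desktop Secure Boot only weakly approximates.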