0 sats \ 0 replies \ @final OP 18 Dec \ parent \ on: REVEALED: Here's the Cellebrite Premium Device Support Matrix for July 2024 security
I didn't get a notification to reply to this - I didn't mean to ignore it! I only just saw it when searching for recent Cellebrite news on here.
They can bypass this restriction. Cellebrite do not mention Lockdown Mode in the Premium documentation because it doesn't change anything for them. Users from a law-enforcement forensics chat room we previously monitored still say this is the case (they claim to have special cables carrying a payload that bypasses it) and that it isn't exclusive to Cellebrite. Apple could potentially fix this; they did recently add an automatic reboot feature that pissed the forensic companies off.
People are still leaking chats from there saying this is the case, like here: (source)
For bespoke cases, the client would pay Cellebrite to have their expert teams find a way in themselves (called Advanced Services).
It's absolutely possible they could do that, but they'd hate to do the former. Both are exploits that would meet the objective, but it's apples and oranges.
They like their exploits to have as minimal a data footprint as possible, because if their extraction methods modify the owner's data then it can be used as a defence in court that the evidence was tampered with, which risks making it inadmissible. For example, Cellebrite have an APK downgrade feature for downgrading apps or OS components to outdated, vulnerable versions on Android to aid extractions. They say it is an absolute last resort when every other method has been exhausted, including attempting physical attacks. They could do it, but remote access à la NSO Group is for a different type of customer than what Cellebrite sells to.
The hardened memory allocator in GrapheneOS zeroes memory when it is freed. It protected against a forensic company that exploited the stock OS by RAM dumping via a bootloader exploit to get a derived hash they could brute-force the OS PIN/password with. GrapheneOS received bounties and ASB credits for reporting it and building a fix (post here), but the stock OS still falls behind what GrapheneOS does.
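For anyone wondering what "zeroes memory when it is freed" looks like in practice, here's a rough sketch of the idea in plain C, assuming an ordinary malloc/free setup. hardened_malloc's real implementation is far more involved; the function name here is just made up for illustration.

```c
#include <stdlib.h>

/* Sketch of the zero-on-free idea only (not hardened_malloc's actual code).
 * Wiping a buffer before handing it back means freed secrets (keys,
 * derived hashes) are no longer sitting in RAM waiting to be dumped. */
static void zero_on_free(void *ptr, size_t len) {
    if (ptr == NULL)
        return;
    /* Write through a volatile pointer so the compiler can't drop the
     * "dead" stores the way it might with a plain memset(). */
    volatile unsigned char *p = (volatile unsigned char *)ptr;
    for (size_t i = 0; i < len; i++)
        p[i] = 0;
    free(ptr);
}
```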
Nope, some shitty EncroChat-style service by some Dutch(?) criminal gangs. They'd sell phones with their own messaging service at an unreasonable price markup. The Matrix they're talking about has had its website seized, by the looks of it.
They're popular amongst targeted (state-level?) attack campaigns with known victim counts in the tens or low hundreds. They're pretty advanced, but there isn't much benefit to a bootkit beyond absolute persistence compared to a zero-click remote 0day. Malicious UEFI firmware that reinfects the OS on reboot has been observed in the wild before, and the NSA also had firmware attacks for weaponizing hard drives during the Snowden era.
CosmicStrand, MoonBounce, MosaicRegressor and BlackLotus come to mind as the best examples.
Both of them are undetectable by the operating system since what is executed runs with the highest privilege. Windows Defender can do nothing against unknown malware, and Linux is Linux. Forensic analysis of the device can reveal the attacks, although this is not an automated process and you'd need experience in DFIR to analyse a device and see if you were infected by this.
Bootkit firmware has to drop malware within the installed OS for command and control and to monitor the victim's activities. Forensic analysis of the disk for potential IoCs, network traffic analysis to find connections to a potential C2 server, and memory dumping (for fileless malware) are some ways to find out. You'd also need reverse engineering experience to confirm suspicious activity. This is also why people who talk about "hardware backdoors" around security researchers get laughed at: backdoors are only useful while they remain unknown and limited to a small group of people.
Microsoft tried to add boot security (called System Guard) to prevent firmware-based attacks on a line of PCs called "Secured-core", but it's a questionable half-solution that avoids the real solution. Dmytro Oleksiuk (cr4sh) is a very good researcher who develops a lot of PoC UEFI bootkits, and he has some good material about the subject.
Desktop boot security is fucked. "Secure Boot" is nothing like the Verified Boot used in Android, iOS, GrapheneOS, ChromeOS and ARM Macs. Desktop OSes need to move towards being adminless and make features like app sandboxing mandatory. Android works really well with Verified Boot because most of the OS is an immutable, adminless workspace that separates everything the user does into its own place. Desktop OSes also trust the hardware connected to them, and its drivers, too much.
This one must be done manually for now; we're just going through potential bugs or issues with the public testers.
There should be an Android 15 release for GrapheneOS in the Stable channel in around 1-3 days. If this update is considered not to have any serious issues, it will be sent out OTA.
Some additional corrections:
While this figure is subject to change, at the time of this video being produced, Memory corruption has been the cause for almost every in-the-wild Zero Day on not just Android but WebKit and Chrome.
Meant to say that "this figure" is the Zero Days "In the Wild" spreadsheet by Google, which I forgot to link as well. I had originally referenced a security researcher's keynote in my drafts, but I thought the spreadsheet would be better.
Here is the link to the spreadsheet: https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtuPLCIl7mlUreoKfSIgajnSyY
Here is the page about it: https://googleprojectzero.github.io/0days-in-the-wild/
My concern with the iPhone has always been similar to that of an ISP. I use a VPN because I don't want a single point (ISP) that gathers up all of my website traffic and knows every site I've visited. I assume they store/sell this data either now or in the future.
Most developed countries typically have data retention policies for ISPs, although they aren't usually very long, around a year or so, because the scope of data to store is humongous. If an ISP says they don't sell or share this data other than with law enforcement, I would trust it since it's a legal obligation, but I know others won't trust that. I think it would be a very controversial move, and data selling is more common with social media / marketing firms.
I figured apple may do something similar. My understanding (could be wrong) is they track and send back to the mothership every screen wake, touch screen interaction, app download, (perhaps internet traffic?) etc... you do with your phone.
That's excessive and unnecessary, and information like that isn't valuable enough to build infrastructure to collect from its billions of users. Important information is what is identifiable or provides details on key events for the person / device; a full chronological timeline down to every screen press is bloated intelligence.
You should expect Apple to know what information you provide on your Apple ID and your activities on Apple services that are not end-to-end encrypted. iCloud provides an end-to-end encryption option, but it's not the default, so the content (files, contacts, messages) can be accessed there. E2EE is the default for iMessage, but if iCloud backups are enabled for iMessage then Apple could provide decryption keys for the content if that backup isn't end-to-end encrypted.
If you are using an Apple service like iCloud Email, Apple Music, the App Store and so on, you should also expect relevant data about your activity in those apps to be collected, like purchase / download history, track history, email history, etc.
Apple's privacy policy is generally good in comparison to many other companies'; the flaws are the lack of end-to-end encryption and the large scope of information they collect: https://privacyspy.org/product/apple/ - Apple provide VERY personalized services which in turn require a lot of information, though many of those features store and manage data on-device, as they say.
For Private Relay you could expect Apple knows who is connecting to it and that's about it: https://support.apple.com/en-us/102602
I don't know if these claims are true or not and I don't know if they apply if you don't use iCloud (I do not).
Then the scope of what they collect is much smaller; core Apple ID information, App Store purchase / download history and device information come to mind.
Do you leave them running concurrently or just use one at a time?
I use the feature that automatically ends a secondary user profile's session when you leave it. Consider it like turning that profile off: you won't get notifications from it and it won't run in the background, but it keeps the data safer.
You mentioned that an iPhone with Signal is probably good enough, I wanted to see if you could elaborate on that.
I meant that in terms of my own requirements; I am a simple person without a lot worth targeting. Some GrapheneOS users may use iPhones for miscellaneous low-risk use like corporate things and use a GrapheneOS phone for other, higher-risk activities instead. I own Apple devices but they aren't used in any meaningful way. iPhones are predominantly targeted by sophisticated actors with moderate success, but the scope of who they target is also extremely limited and not a concern for most. GrapheneOS should be a choice for people who could be a victim of that, but obviously GrapheneOS isn't exclusively for them... else I wouldn't be using it.
Do you have suggestions on what to do about legacy calls/sms?
Don't use them if you can help it. I don't use my number beyond when it's forced to be used, like for accounts, although your use case is a lot different to mine so I can't really stop you. The project doesn't recommend using SMS for communications; cellular networks aren't designed with privacy in mind and that cannot be fixed. A user on mobile data would usually be aware of the privacy shortcomings and use it anyway, like I do.
On the iPhone, it just seems 'easier' to solve because mysudo "just works" and even google voice on the iPhone feels more 'isolated' from the system than when I install google voice on graphene (because the google account at least appears like a "system wide account" and needs google play services so who knows what data google gathers from me).
Play services on GrapheneOS are unprivileged and only access what you allow them to, plus whatever you do in the Google apps themselves (for example, searching for an app in the Play Store may end up in your account activity history somewhere). It appears that way because that's how Play services was originally designed to run, but a compatibility layer essentially tricks Google Play services into believing it is running with privileged access when it isn't.
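Purely as an analogy for that "tricking" idea (GrapheneOS's sandboxed Google Play is not actually built this way, and the target function here is hypothetical): a shim can sit in front of a privilege check and answer it on behalf of an unprivileged caller, which is easy to show with classic LD_PRELOAD interposition on Linux.

```c
/* Analogy only: a preloaded shim that answers a privilege check for a
 * program that merely *asks* whether it is privileged. The process gains
 * no real privileges; code that only checks keeps working. */
#include <unistd.h>
#include <sys/types.h>

/* Because this library is loaded before libc, the program's calls to
 * geteuid() land here and always see "root" (uid 0). */
uid_t geteuid(void) {
    return 0;
}

/* Build: cc -shared -fPIC -o shim.so shim.c
 * Use:   LD_PRELOAD=./shim.so ./program_that_checks_privilege */
```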
You could use an additional user profile to contain Play services and the apps dependent on it to just that profile, and allow it to keep running and forward notifications to other profiles so you still get alerts, like texts from a texting app you use. This is just a suggestion and it may not fit your use case though.
CalyxOS is essentially just an ordinary OS; it's a LineageOS fork with bundled apps. It arguably does things worse due to slow patching and running its Play services component as a privileged process. GrapheneOS doesn't run Play services with privileged access.
GrapheneOS is hardened and more secure. That's what made me choose it all these years ago:
None; those features are mostly built into stock OS apps or are done over the Internet through Google. Some can be used by installing those apps again. Having an NPU on the device does help speed up AI/ML operations and we could take advantage of it. The Camera app uses it for image processing.
A lot of people think it's a hardware thing, but dedicated hardware to do certain processes faster has existed for a long time... like GPUs.