I followed the link to the Pixel 9 bug/exploit and saw this:
"Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user"
Haven't we learned our lesson on this? Don't read and act on my sms messages without me asking you to!
> Haven't we learned our lesson on this?
What is the purported lesson we should have learned? Users choose phones with rich messaging features. This was a major selling point, first for iPhone with iMessage, and later for Android until iOS caught up with RCS.
One of the things Apple's Lockdown mode does is disable previews of images or links that are sent to you.
It seems like the lesson is that you shouldn't be processing data sent to the device by random strangers without the user explicitly choosing to open the file or follow the link.
That should be the default behavior, not a special lock down option that also disables other features.
Why can't they just make it like most email clients? No preview by default, give a banner with an option to explicitly allow a preview for that specific message or conversation?
>That should be the default behavior
It is! The phishers try to socially engineer their way into getting link previews or in fact clickable links period.
Screenshot here of the automatic link/preview disable-
https://www.bleepingcomputer.com/news/security/phishing-text...
I tend to agree.
But how does that prevent one from receiving and opening a malicious message?
Because many people know not to trust unknown senders.
I should have said “a well crafted malicious email” or SMS etc.
No such thing as completely idiot-proof. But I think we can all agree an exploit that requires a click is a lot better for the intended victim than one that doesn't. This way they at least have a chance to not click it. Then we can start tackling the other problems with separate solutions.
Phishing is big business and ways to combat it are not foolproof. Education helps. Spam detection helps.
Education helps, but it puts the burden on the user. The real fix is shutting down the phishing source, not just filtering the symptom.
You know that e-mail clients blocking stuff came after, right?
Sorry, but that is an insanely defeatist attitude blended with a hint of blaming users for wanting features.
Image decoders are pure functions and all should have been rewritten as 100% safe Rust years ago.
Users need functionality.
It’s up to us to figure out how to provide that safely.
Saying to users they shouldn’t have those features isn’t sage advice, it’s admitting failure.
They are actually pushing Rust quite hard now in Android:
https://blog.google/security/rust-in-android-move-fast-fix-t...
Even to the baseband firmware:
https://blog.google/security/bringing-rust-to-the-pixel-base...
Rust won't save you from malicious SVG+JS files, EPS/PostScript files, and so on.
The thing is, nobody's happy just previewing jpegs and pngs.
Before you know it, people want to preview SVGs, PDFs, video, HTML and so on.
And to do that properly means you've got to support obscure formats like JBIG2 and CCITT Fax. Malicious vector images with a billion elements to render. XML that lets one file embed another.
And good luck getting the budget to re-implement them all from scratch in a better language, when the only business value the feature delivers is a postage-stamp-sized preview image.
Perfection is the enemy of the perfectly good.
And let's be honest, you'll have what, 0.0001% of users who want to preview CCITT in 2026? Less? Probably less.
The business value is reduced attack surface which is a marketable attribute. Seems like the exact type of thing Apple would do.
At what point do we just refuse to parse obscure, rarely used formats?
Most of these are solved problems to one degree or another. Web browsers have generally switched over to decoding legacy unsafe formats like PDF using safe managed languages, typically JavaScript.
> JBIG2 and CCITT Fax
Since performance isn't such a critical concern with obscure legacy formats, it really wouldn't be much more than a day or two of work for a competent developer with AI agent tooling to convert an existing decoder to safe Rust.
Meta set nearly a hundred billion dollars on fire for a total failure that everybody saw coming, and a trillion dollars is what the current AI investment craze is pouring into concrete and TSMC chips, but... a couple of days for a developer is asking too much!?
> legacy unsafe formats like PDF using safe managed languages, typically JavaScript.
Are you being ironic? If anything, JS and V8 have tons of CVEs.
Stop being deluded by these hip languages. Rust? You wish. Maybe Inferno with proper namespaces AND in-kernel namespace support. No, not like Linux. Like 9front.
Well, one could argue that the lesson from CVE-2017-0780[1] should've been "don't automatically decode rich messages from untrusted sources".
[1]: https://www.trendmicro.com/en_us/research/17/i/cve-2017-0780...
Stagefright is even older:
Where are users being given an actual choice? There is no option for "iphone without these features", and I would wager that it has 0 bearing on anyone's decision to purchase a new iphone
There is a choice, but almost nobody uses it: https://support.apple.com/en-us/105120
> What is the purported lesson we should have learned?
Not to automatically execute things within data that we have been sent.
The subtle lesson, which we won't learn is [astronaut meme] all communication is potentially remote code execution. This isn't a computer thing, it's in the inherent nature of how communication works for us too. You can be more or less careful, but you can't eliminate the problem entirely or else communicating ceases to be effective.
Hey, you! Stop executing code in my head!
I think it's "don't use parsers written in unsafe languages".
All languages are unsafe. Some just make it less obvious.
I think it's simpler: don't touch untrusted content unless/until you need to.
But that just moves it from 0-touch, to 1-touch (which is of course better).
But users are morons.
We STILL, NOW, have people getting phished and pwning their employers.
Alas, there are a lot of things that you need to touch that are untrusted.
That's easy, and already done. Phones only touch untrusted content when they need to; it's just that they need to touch it immediately upon receipt.
Didn't Android switch their codec stack to rust?
Even that's not sufficient. Consider an email client that doesn't parse images until you interact with the message. So you click on it, realize it's dodgy, but it's too late now because all the complex bug prone machinery has already been triggered.
Or my favorite, I marked an extremely suspicious message with what was almost certainly a malicious attachment as junk in a certain BigTech webmail client (the only other option was phishing which it most certainly was not) and it "helpfully" opened the unsubscribe link in my local browser without first asking me for permission. It's difficult to imagine the level of incompetence and dysfunction required to not only write but review, approve, and deploy such a feature in a security and privacy sensitive context.
The email client I use doesn't display images in an email until I explicitly ask it to.
Which came as a reaction to "tracking cookies" and the like being added to e-mail.
It was a reaction, not a proactive response.
Rather than tracking cookies it's a form of delivery confirmation via unique url. One of the mitigations is to configure the server to unconditionally fetch (and retain) all embedded media immediately on receipt of the message. Which makes the BigTech example all the more egregious.
That has no bearing on the points made in the comment you replied to.
Google owns Android. Google does not care about you or other users. Their customers are ad publishers. 0-days don't matter to them! Because there is hardly an alternative: iPhone (and Huawei, but maybe not everywhere). Not much to care about.
We all need a new phone OS, down to the hardware level. Urgently.
0-days don't matter to them!
This does not make much sense at all and is also not in line with empirics. It does not make much sense, because if flagship Android's security reputation worsens, more high-value customers (which are interesting to ad publishers) will go to iPhone. I think this is already an issue for Google because the most popular iPhones are all flagship models, whereas the most popular Android models are low- to mid-range Samsung A series:
https://counterpointresearch.com/en/insights/global-smartpho...
This reduces the opportunity for Google to extract money from their ecosystem (Ads, Google One, etc.) and gives it to Apple.
Second, it does not line up with empirics, because after Apple, Google has been the manufacturer most aggressively pushing hardware security. E.g. Pixels have had a Titan M secure enclave for a long time now (most Android manufacturers do not have any and rely on TrustZone), Google Pixel was one of the first devices to adopt memory tagging (MTE), etc. They do a lot of work to try to reduce the blast radius of 0-days; there is a reason why e.g. GrapheneOS has so far only supported Google Pixel devices.
The problem is more the lack of privacy.
> Google owns Android. Google does not care about you or other users. Their customers are ad publishers. 0-days don't matter to them
"Google does not care about zero-day vulnerabilities" is an absolutely ludicrous claim.
They care, from day one on.
dude google is the one reporting on themselves here.
I was at an "AI Security" talk recently that centred around "We will blindly ingest inputs to and from AI, and that's a security issue. There's nothing we can do, so just deal with the aftermath".
Including saying "If a threat actor updates your internal documentation, they can use that to influence the AI".
If a THREAT ACTOR IS UPDATING DOCUMENTATION, YOU'RE COMPROMISED!
We're not talking about "Wikipedia Vandals" here
A "threat actor" can be a company employee who is intentionally permitted to update internal documentation, but not intentionally permitted to change the behavior of an LLM whose context window includes that documentation.
I think it's reasonable for a security conference to talk about how if you put the internal documentation in the LLM context, that means you're elevating the permissions of anyone who can edit the documentation by transitively giving them the ability to instruct the LLM in its "actions" (outputs).
While it should be obvious that's what you're doing, I would say most people I talk to about LLMs do not understand that all parts of the context window together shape LLM output, and there is no such thing as "only obey instructions from the system prompt".
My first thought was in agreement, “do they not realize that docs are context, sometimes even prompts, for humans too?”
My second thought was “perhaps they’re just very forward-thinking”, and now I’m sad about the future again.
Getting users to open a message isn’t a terribly high bar. As a user, I would not find it acceptable if I needed to be careful about which messages I open. We tried putting the responsibility on the user with email attachments, and I think it’s fair to say it’s been a disaster. Malicious attachments are probably the most important distribution vector for malware.
This isn't even an exploit if the crappy AI or whatever that's trying to do something fancy never "processes" the message. At least give me a choice before you automatically do that
ESPECIALLY when we're trying to be conscious about the amount of resources that "AI" uses. I don't need to burn GPU cycles on something I can read with my own eyes.
> Don't read and act on my sms messages without me asking you to!
Being an accidental or curious tap away from an RCE isn't actually much better. The fix is sanitizing inputs and using safe parsers.
> requires message media to be decoded before the message is opened by the user
I like seeing thumbnail previews of images in messages
I don’t know about android, but iOS has some pretty interesting architecture to prevent and sandbox that kind of attack
They put a lot of deliberate work to enable this feature in a way that is hard to exploit
And it really sounds like Google is not mentioning that stance
> Don't read and act on my sms messages without me asking you to!
Doesn't that just turn a 0-click exploit into a 1-click exploit? It's unlikely the user can make an informed decision to not process a potentially malicious message, without clicking on the message.
Preferably a two-click exploit. One to view the message and one (if I decide it's safe) to process it through your buggy code.
A 0-click exploit is horrendously worse than even a 1-click one. I often don't even open messages from numbers I don't recognize
Windows had autorun starting with Windows 95, but stopped shipping it as a default in Windows 7 (2009). So, yeah, no, we haven't learned our lesson.
extrapolating that line of thinking: "why does computer run malware, i asked it to not run malware ever!”
another fun parallel: "run this [...] and make no mistake ".
human context is just as bad as llms, i swear
I don't know if that is the right lesson. It's kind of like "don't click on links"... Err, no. You should be able to click any link without getting hacked.
I have always found the whole "Don't trust links" thing a faux pas when it comes to user training, as it just means that securing systems in the first place has already failed...
It's worse, often the saying goes "don't click on suspicious links"/"don't open suspicious attachments". If I (target of such hint) knew the link was "suspicious" I wouldn't click it! Users are not opening suspicious attachments, they open (what they think is) important invoice or message from their boss.
We aren't talking about clicking links even. This is a bug in some stupid code that tries to read your messages for you and act on them. No thank you!
Sure, in an ideal world different from this one. You should be able to do anything on any device and never worry about security.
Unfortunately, since we don't live in that world, we need to not open links, emails, text messages, etc, if they are sketchy.
A better solution may someday exist, but as of yet has not been found.
"Don't click on links" is not a solution, and it's not something people actually do, it's just something they think they do.
Corporate Security will tell you that it's ok to click links to the payroll system or hr or vanta or the 'secure email service' or jira or github or to docusign or the microsoft office document that a partner company sent you or an amazon delivery notification, but not ok to click links in the phishing email that looks exactly like one of those that they sent you.
It's not possible to tell whether a message giving you a link to something is 'sketchy' or not before clicking the link, and any 'security' that relies on people knowing whether a message is malicious or not by magic is broken in the real world.
In my company I regularly see genuine, legitimate emails that carry several huge red flags, like the ones described to us in trainings.
If I can plausibly claim I wasn't sure it was legit (i.e. it was sent from the outside, from a sketchy-looking host), I'd always report it internally as a phishing attempt. Just to make the security team work with it.
There's also something about "admin" and "HR" systems in companies where they ignore everything they told you not to do.
I don't think I've worked anywhere yet where these departments do 2FA, SSO, or even have a vaguely usable system that doesn't look like it was made 30 years ago.
Which is extra troubling as these systems are the ones with the PII!
>It's not possible to tell whether a message giving you a link to something is 'sketchy' or not before clicking the link
Sure it is. It's just not something the average user can do. But what makes the situation worse is that most emails now use click tracking, so ALL links are sketchy. For example, emails from my union all link to 2mv.aplink.red and are 200 characters long and look like /dev/urandom output. No fucking idea what or who controls that domain, but it for sure is not my union. I've complained multiple times, including acting dumb and asking if they've been hacked because their email look shady as hell.
Email with the unsubscribe link wrapped in click tracking gets sent straight to SpamCop. I hate tech more and more every day.
I think you are providing a very good argument for why even technical users cannot distinguish legitimate links from sketchy ones.
> Don't read and act on my sms messages without me asking you to!
Somewhere there's an NSA agent reading this and laughing like a gin addict on payday.
How are they going to make trillions of dollars if not!?
"move fast and break things"
"But the users never know what they want to do! We have to shove suggestions and recommendations at them at every! waking! moment!"
"This is notably fast given that this is the first time that an Android driver bug I reported was patched within 90 days of the vendor first learning about the vulnerability."
This makes me feel better about Google, but also makes me kind of frightened of the rest of Android. I wonder what Apple's response time is?
Android vendors have been notorious about updates for a long time. Part of that is supposedly because all of the phone companies want to distinguish themselves from each other, and so they all want to fork the default Android UI so they can offer some psychedelic UI vision with some brand-specific features. But that means that when an update to stock Android comes out, it's a lot of work to migrate.
I don't think Android UI customization is the main issue. Many vendors are not even able to keep device firmware and Linux kernels in sync. Qualcomm and others are doing monthly bulletins:
https://docs.qualcomm.com/securitybulletin/may-2026-bulletin...
Since a lot of vendors are months or even years behind, their phones are full of known holes.
When it comes to security, basically: GrapheneOS > iOS > PixelOS >> Samsung OneUI >>>>>>>> everybody else.
Sadly, Samsung lets anyone who pays enough push bloatware and analytics on their phones. E.g. AppCloud from an Israeli company, Meta services that stay even when you remove Meta apps (only removable with ADB/UAD), etc. So there are only three somewhat serious options (and for two of them, you still give a lot of analytics to Apple or Google).
How is GrapheneOS able to get around the issue of SoC firmware blobs being slow to roll out?
they aren't, but they often push kernel/system patches faster than Google. they also have more kernel hardening in place, which makes some classes of exploits ineffective.
mainly by only supporting devices with consistently fast fw updates (which is how PixelOS is also on the list) (Samsung is also mostly on top of their shit, but multiple security features are unavailable to third-party operating systems, so unviable)
I've reported security bugs to Apple before. It was a couple years back, but I remember it taking around 6 months to patch (there were a couple of back-and-forths for me to get a more reliable POC). Maybe 2 months from when I submitted a POC with 100% reproducibility.
At least in the past there has been instances where Apple sat on security bugs for years until they were fixed, one example: https://jonbottarini.com/2021/12/09/dont-reply-a-clever-phis...
I've heard they cleaned up their program recently to respond much quicker nowadays
Not sure how much it helps, but I just run all my Apple devices in "Lockdown mode", don't install apps (use Safari), and try to mostly use Safari in private sandboxed mode.
This makes sense if you’re a human-rights journalist working in a dangerous country, with the threat of state-level actors looking to compromise you.
If you’re not then this seems quite paranoid, bordering on LARPing.
I turned it on a week ago to see what it was like. I expected it to be significantly annoying, but I found basically nothing changed other than a bit of text in Safari that says it's in lockdown mode. Otherwise I wouldn't have been able to tell the modes apart. I was expecting the browser to be slower without JIT or to use more battery, but I haven't noticed any change; it's all still snappy.
Apple over hypes the "you need to be in significant danger" part. Basically anyone can turn this on and it's fine. The UX seems mostly exactly the same either way.
I take it that you mostly communicate with other people using services that are not iMessage.
I’m not a heavy iMessage user but I use it a bit and I haven’t noticed a difference there either. Photos still load, maybe pdfs wouldn’t work?
It basically degrades back to SMS if you turn this on. Obviously, this is fine for a lot of things, but most people generally expect more than that out of their messaging app in this day and age.
BRING BACK EMOTICONS!
LARPing is imagining that Lockdown mode protects you from state-level actors. It is frankly baffling why an industry that has been laughing for literal decades at even the possibility of stopping state-level actors just turns around and uncritically believes Apple's marketing team with literally zero support, evidence, or proof except for a long track record of failure. You would think that extraordinary claims would demand extraordinary evidence.
We have seen multiple software hacks resulting in >10 million dollar payouts. Apple's bug bounty program only pays out 4 million dollars (2 million dollars (2x) more than non-Lockdown) for a zero-click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously. Even at the low end of that cyberattack payout range that is still a >2x ROI if your successful cyberattack depends on a iPhone zero-click, with many publicly known attacks being in the 10x ROI range. Lockdown mode, at best, raises the bar slightly for commercial profit-motivated attackers and reduces their profit margin from wildly profitable to slightly less, but still, wildly profitable.
And of course I am using the Apple bug bounty program as merely an available metric with at least some semblance of objective support. There are zero certifications, audits, or analyses that Apple has even attempted that would confirm any claim of protection against state-level actors.
I strongly disagree that there is no evidence that Lockdown mode is effective; there have been numerous exposed, active iOS exploitation campaigns of which none have worked against Lockdown mode. When we're trying to prove a negative, that's actually some of the strongest evidence we can get.
The economics of the device exploitation industry are completely orthogonal from bug bounty payouts; the markets only overlap at the _extreme_ fringes. Trying to use one as a proxy for the other is meaningless.
I don't necessarily disagree but a lot of chains will bail out if they find like the Norton Antivirus app on your phone so
In this case the body of evidence is still quite powerful though, given that not only do we not have any forensic evidence of compromise from a phone with Lockdown Mode, but in all public cases where chains were RE'd back out of the forensic evidence, they don't work when tested on Lockdown Mode! So, there's even signal that the lack of forensics indicating Lockdown Mode compromises is not due to artificial targeting or detonation gates, but rather successful mitigation.
(as an aside): I'm not trying to say Lockdown Mode is infallible; I am sure phones in Lockdown Mode are or will be compromised. But it's clearly a very powerful tool, and to try to argue that it is some kind of marketing-driven conspiracy, against the body of evidence of its success, using bug bounty payout numbers (???), as the grandparent post did, is ridiculous.
That is a total strawman. The standard of “effective” being used by the person I was responding to and Apple themselves is “protects against state actors targeting you”, not “has any benefit whatsoever” or even “has a material benefit”.
Protecting against state actors is not an instantaneous property of the present. It demands durable protection against compromise by state actors who can easily spend tens to hundreds of millions of dollars on teams of hundreds for multiple years to develop novel, durable exploits known only to them. To count as "effective against state actors targeting you", any compromises that do exist would have to require expected resource expenditure in excess of what state actors can deploy, or in excess of the value derivable by state actors, which is going to be in the hundreds-of-millions to billions-of-dollars range.
Protecting against state actors means secure against Iran, Saudi Arabia, China, and the NSA. That is the unsupported marketing bullshit I am calling out.
> We have seen multiple software hacks resulting in >10 million dollar payouts
This sets a nice price bar for exploitation. Is someone willing to pay 10+ million dollars to get access to your phone?
The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode.
> click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously
Where is the profit motive in doing this? Possibility is one thing, but a realistic threat is another.
Is someone willing to pay 10+ million dollars to get access to your phone?
Not yours specifically usually, but there is a lot of money in a general tool that law enforcement can use to read out phones. Of course, most of them focus on physical access. In the few Cellebrite reports/presentations that have leaked, iPhones would fall after a relatively short time (IIRC a few months), but did better than most Android phones (except GrapheneOS).
Also, sometimes you do not need the 10M exploit, you can buy many cheaper exploits and make a chain yourself.
The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode
If they hit you with a metal pipe, it's likely that you won't survive even if you give up your passcode. So most likely you are protecting something or someone else. Set up a duress PIN so that you have options in that case.
... really? Zero-click RCEs can be used on arbitrarily many phones until they are discovered which usually takes on the order of months. You do not need to burn them on every individual target.
As an example of how they might be used in that fashion for profit, NSO Group had a revenue of 240 million dollars in 2020. Many of their customers were governments who wanted to spy on activists and journalists. NSO Group was in the business of economies of scale: democratizing access to journalists' devices by reusing a small stockpile of exploits across many targets, with enough revenue to assure a steady stream of new exploits as fast as they were burned.
You’re right, I misstated. It’s not 10 million per exploitation, it instead limits the pool of people who can exploit you to those willing and have the ability to spend 10 million+ on an exploit.
That is still quite a small pool, and there are other network effects preventing any Joe Bloggs with that much capital from launching an exploitation campaign.
Again, no. You do not need to spend 10 million on an exploit if you are working with a company like NSO Group, which sells white-glove access to target individuals as a service. The cost lower bound is going to be on the order of ((cost of exploit) / (number of times exploit can be used)), and the denominator there is going to easily be in the hundreds to thousands. Of course prices are likely to be higher than the minimum due to profit margins.
To, once again, use the same example of NSO Group, as it is infamous and well-documented [1]: in 2016 it was $500,000 upfront and $650,000/year for 10 devices. That article claims Saudi Arabia was monitoring 15,000 phones at an average cost of $10,000/phone. In [2] it was $7 million for 15 devices, but the upfront versus marginal cost per device is not broken down. And this was a relatively "above-board" company in the sense that they were a legitimate business entity with government deals, which commands a premium relative to a random unknown blackhat organization with no reputation.
And again, my original comment was discussing commercial profit-motivated attackers, for which $1 million is easily within reach and just a cost of doing business to unlock greater amounts of profit. That is less than the cost of setting up a McDonald's. There is a vast, vast gap spanning factors of millions between Joe Schmo and commercial actors, and an even vaster gap to state actors. There is no evidence that Lockdown mode is adequate against even commercial actors, let alone the vastly more capable state actors.
[1] https://prodefence.io/news/pegasus-spyware-operating-costs-c...
[2] https://www.reuters.com/business/media-telecom/meta-suit-aga...
I thought it was common knowledge that all kinds of Americans (not to mention other nations) are routinely compromised with zero-clicks, mostly developed in the US and Israel.
This is the kind of assertion without evidence that just muddies the waters. “All kinds” of people is so vague as to be an almost entirely vacuous category and “routine” means almost nothing without an actual quantification of how prevalent and frequent the problem is.
It’s undeniable that the proverbial guns for hire make it easy (if not cheap) to target basically anyone — but just because the vibes are bad doesn’t mean we can just say “it’s common knowledge that …”
The fact is mitigations are costly in terms of convenience and ease of use. Helping people make informed choices about whether to enable mitigations and bear that cost requires more than platitudes imo
"If you’re not then this seems quite paranoid, bordering on LARPing."
There are sooooooo many other situations where such device lockdown is warranted. Government intrusion, sensitive industry, journalism, anything ITAR/EAR covered, and more. Your reduction to a single issue is absurd.
Are you at an above average risk of being targeted by a state level threat actor?
No, just the usual tax/financial/health data on my devices.
I consider Anthropic's Mythros security bug finder mostly marketing, but other things make me worry that there might be a global hack contagion: for example, a few months ago I saw in the news that an executive at a US security company was caught selling information to a hacking group.
Except for disabled Javascript compilation possibly slowing down web sites, not getting some attachments in messages, and some graphics not showing up on some web sites, having Lockdown mode set doesn't seem to affect anything I do. For dev I use VPSs, with ssh configured so that SSH agent forwarding is strictly disabled, as are reverse tunnels.
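(A sketch of the standard OpenSSH sshd_config options that implement the restrictions described above; this is an assumed illustration, not the actual configuration in use:)

```
# sshd_config on the VPS - standard OpenSSH options, shown for illustration
AllowAgentForwarding no   # refuse SSH agent forwarding
AllowTcpForwarding no     # refuse forward and reverse (-R) tunnels
```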
It seems like doing little things like this makes sense, because it is such a tiny hassle to be a little safer.
For the most part "AI Exploit Research" is just lots of automated fuzzing. It's nothing new, it just takes time, and they're just throwing a lot of CPU/GPU at that
Given that 42% of Android devices are unpatched as of now [1] it's an interesting decision on their part to release their research and make them all vulnerable
[1] https://gs.statcounter.com/android-version-market-share [2] https://www.cybersecurity-insiders.com/survey-reveals-over-1...
That's perennially the case. A big portion of the world buys bargain-basement android devices that are unsupported right out of the box.
Search "android phone" on aliexpress and there's top selling phones on the first page running android 8, android 10, etc. They're not getting security updates of any sort, let alone driver updates.
It frustrates me no end that there are so many fly-by-night Android phones available from China. But with zero way to change the software on them. It's not even like they're running weird chips either.
It would be nice to find one where the bootloader is unlockable, and you can just build a standard Android image and flash it..
The old way of keeping security bugs private is just completely broken now. If you aren't on a device that gets security updates you are in significant danger, regardless of what Google decides to publish. No name hackers are sitting on stacks of exploits these days and are actively using them.
"Now"
Everything you describe is absolutely nothing new. It's literally where the name "0day" comes from.
On brand-name android devices you can count on getting OS security updates. The first-party vendor can build and push these themselves. Driver and firmware security updates are a maybe. These often have to come from an upstream vendor, who may or may not care to fix the issues.
Smaller brands often ship budget android devices and never update them.
Semi-related: has the rate of published exploits picked up as of late, or is it simply that there's hype around AI as a security tool (offense or defense), so it's in the news more often?
Feels like there’s something new every other day - linux, windows, mobile, various commonplace tools used by everybody, the list goes on
If one reads between the lines in part 1, the code in question was introduced due to AI features and the exploit was found by humans:
https://projectzero.google/2026/01/pixel-0-click-part-1.html
So AI usage increases bugs and humans have to weed them out!
These days I'd expect much of Android is vibe coded with minimal review.
I just did some analysis on this last weekend, in 2024 there were roughly 100 CVEs published every day. In April we hit approximately 200 per day.
Going backwards from 2023, the doubling interval for published CVEs was approximately 4 to 4 1/2 years. Since then it’s approximately two years.
There has definitely been a rapid uptick.
Published CVEs seems a bad metric to use for this- unless we assume that the ratio of really nasty vulns/not-too-bad vulns is consistent.
Also the question remains if more CVE laden code was produced in the first place, instead of automated detection improvements.
It's easier to find a needle in the haystack if the haystack is 50% needles.
have the AI vibe code crappy apps so the related AI vuln finder can fix them
just doubled the value and use cases of your AI solution!
They've been doing that for a long while.
Publish something to Github in a public repo? It pulls it, scans it, and reports!
Especially if you accidentally put in keys
Another reason published CVEs isn't a great metric is that one of the largest contributors to the number of CVEs significantly increasing in the past couple years has been that the Linux kernel now submits almost all bugs as CVEs which wasn't the case before.
Good consideration, but I still think there's an uptick. This is all AI-generated, as I'm not in a spot to do anything more at the moment, but this is a chart of 'linux kernel' CVEs rated as high/critical, correlated with NVD.
There have been CVEs published for software that didn't even exist!
I wouldn't look at the numbers. There used to be a lot of "scam" CVEs before LLMs that weren't actual vulns. Nowadays it's more popular to collect CVEs, and there are a lot of people scanning with LLMs and reporting without checking (like in the case of cURL). These CVEs are often not verified by anyone.
There probably are more vulnerabilities being found, but the number of CVEs is not a good metric.
Did you publish this anywhere? Would love to read more.
The rules around CVE reporting changed recently and it would be expected a lot more are accepted.
There are reports from people who manage security bugs in OSS that there has been a big uptick in reports: initially low quality ones that were mostly bogus, but now many more legitimate ones as well.
This is pure guesswork, I am not a security researcher, but my guess would be that AI is increasing the amount of low quality exploitable attack surface available, while simultaneously providing security researchers with an accelerant for their work. Which is to say, its great if you use it well and really bad if you use it poorly.
Not low quality if it works!
The low quality refers to the features with security holes. So no, it didn't work (in this hypothetical).
Those two things have almost nothing to do with one another. Lots of low quality things work they're still low quality.
But it is low quality if it's vulnerable to exploits. And if that's the case, I wouldn't say it really "works".
only until it's ransomware'd
I've reported a few very serious issues to vendors of widely used tools in recent weeks, and it's been even more difficult than usual to get them to be acknowledged - the teams that respond are reportedly swamped.
There definitely is hype around AI as a security tool right now. Someone else pointed out that the rate of CVEs has gone up, but that doesn't tell us why.
This article doesn't mention AI helping find this bug. Seems like humans can still do that on their own.
A bit of both (it finds new things and news is hyped/blown up), and a third factor is that more people are trying to find things. The authors might have been able to do this already, because you still need to have a decent understanding to get useful work out of it and verify the results, but the shiny new toy and FOMO factors make people spend more hours on it that they'd have spent doing something else otherwise
I've seen quite a few people saying that they were inspired by the previous report, which is presented as "the model pointed us to it", and you get FOMO if you don't snatch bugs now as well.
I think AI helped the researchers navigate the codebase better; it's not necessarily that AI is succeeding at exploitation itself.
The Mythos announcement was crazy, I think: "...has already found _thousands_ of severe security vulnerabilities across _all_ OSes"!
Hmmm... I'd like someone to double-check my thinking here. I posted this exact prompt to gpt 5.5 xhigh:
```
does this look right to you? don't do any searches or check memory, just think through first principles
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
	unsigned long pfn;
	struct vpu_core *core = container_of(fp->f_inode->i_cdev, struct vpu_core, cdev);

	vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);

	/* This is a CSRs mapping, use pgprot_device */
	vm->vm_page_prot = pgprot_device(vm->vm_page_prot);
	pfn = core->paddr >> PAGE_SHIFT;

	return remap_pfn_range(vm, vm->vm_start, pfn, vm->vm_end - vm->vm_start,
			       vm->vm_page_prot) ? -EAGAIN : 0;
}
```
And it correctly identified the issue at hand, without web searches. I'd love to try something more comprehensive, e.g. shoving whole chunks of the codebase into the prompt instead of just the specific function, but it seems the latent ability to catch security exploits is there.
So then.... I wonder how this got out in the first place. I know I'm using a toy example but would love to learn more!
That's not really a fair test because you're leading the model pretty hard, even if the prompt doesn't specifically say there's a bug to be found. It's basically the same objections that people raised in the thread where someone claimed current models are just as good as mythos.
I don't agree, and I'd like to understand your point of view.
To me, asking if a function has something wrong with it is just a very basic code review - something that should happen with every function. A competent, security conscious engineer would respond the same way as the model, unsurprisingly, since the model is... modelling competence.
Code review that finds problems in all code is useless.
right exactly, but clearly it's possible to elicit the behavior we want in the model, which means the capabilities are there!
The more interesting question is, how many issues will this prompt report to you in random code that is perfectly fine?
As an anecdote, I provided fragnesia.c and the subsequent proposed patch to fix the issue, and while it was not able to discover an entirely new vulnerability, I think it was able to find 2 new ways of exploiting the same underlying bug.
This is quite impressive considering I’m just a dumbass with a Claude subscription.
On its own we can't judge if this is a workable way to find vulns, as we don't know how many false positives you'd get if you ran it on all the code. (iow might be https://en.wikipedia.org/wiki/Base_rate_fallacy)
How do you know it didn't search the web?
no tool calls!
I pasted the code into Claude Opus 4.7 with no internet access and just asked it to tell me what the function did; it explained it and also called out the bug. I did not tell it to look for bugs:
> Observations & Potential Issues
>
> A few things worth flagging:
>
> 1. No bounds checking on the mapping size. Userspace controls vm_end - vm_start and vm->vm_pgoff. Here vm_pgoff is ignored entirely and the size is trusted blindly. If the VPU's register block is, say, 64KB but userspace requests a 1MB mapping, the driver will happily map 1MB of physical address space starting at core->paddr - potentially exposing whatever hardware happens to live at adjacent physical addresses. A defensive check would be:
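(The quoted reply is truncated here. A minimal sketch of the kind of check being suggested; core->reg_size is a hypothetical field standing in for the real size of the register block:)

```
/* Hypothetical sketch, not the model's actual output: reject offset
 * mappings and any request larger than the CSR block. core->reg_size
 * is an assumed field holding the block's real size. */
unsigned long size = vm->vm_end - vm->vm_start;

if (vm->vm_pgoff != 0 || size > core->reg_size)
	return -EINVAL;
```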
---
70-day release cycles are very quickly not going to be fast enough to stop widespread use of exploits when you have bots able to scan every PR on every open source project as it comes out.
This is a great bug report! I am not a kernel expert by any means even though I have read some about it... 10+ years ago. And I was able to follow along and see what was going on.
It does make me scared for what other dangers lurk since this was a really bad one and it was so little work to find.
Also of note: so many security issues lately have been found using AI. This report makes me think two things:
1. Expertise is still immensely valuable, the more niche, the more valuable.
2. There are lots of niches still where AI doesn't dominate...
Do we have any evidence on how AI has affected NSO et al.'s businesses? Does it render them obsolete? Or are they now superpowered?
Without knowing details, I guess that AI is changing the game a lot, and a lot of 'capital' in the form of zero-days has been destroyed.
If this is the case it's good news for everyone else besides NSO and Co
Where are the iPhone jailbreaks? I haven't seen anything in a long time... What's happening? Did I miss them, or is nothing available? I mean, props to Apple however they do it, but is it just a matter of time, or what is actually going on?
Apple's security posture with lockdown mode, memory tagging, and secure allocators is significantly better than Android. You can read some about it here: https://security.apple.com/blog/memory-integrity-enforcement...
I say this as a decades-long Apple user, but you fell for Apple's marketing. Yes, they do good in-depth security, but Google Pixel also supports memory tagging (MTE), secure allocators (Scudo), and has a mode similar to lockdown (Advanced Protection, which does similar mitigations and enables MTE).
Also, in contrast to iPhones, Android traditionally relies a lot more on safe languages like Java and Kotlin (and now Rust). Of course, iOS is improving there as well with Swift.
The issue is that all other Android vendors outside Google Pixel and, to some extent, Samsung are just terrible when it comes to device security.
Finally, it should be said that iOS was also compromised relatively quickly according to leaked Cellebrite presentations. The only system they could not compromise at the time was GrapheneOS, because they fully use Pixel hardware security features and do a lot of additional mitigations (including many that iOS doesn't use).
Also, any discussion of iOS should come with a fat disclaimer that by default iOS devices have a huge hole: most people use iCloud Backups (and are nudged towards it) without ADP, so their iCloud backups are not end-to-end encrypted and their chats, etc. can be requested by law enforcement. That you yourself use ADP does not really matter if the people you are communicating with don't. Also, Apple manages the key dictionary for iMessage, etc. so they could insert themselves. I would not be surprised if default non-E2E backups are a compromise in the extension of the NSA PRISM program that Apple already participated in before the Snowden leaks.
Of course, Google isn't any better, but just to say that Apple's security/privacy story is selective. Yes, they help protecting against some malicious groups and non-allied states, but they also make sure that US law enforcement (and probably some allied powers) can access most data.
additionally: Google reports on their own jailbreaks (who is project zero?!! lol). apple does not.
in fact apple fixed several high-criticality bugs like these not that long ago - they just don't talk about it other than "you must fix now".
same problems, different comms, and the more people do this, the less transparent google will be.
Exploits that can survive reboots are almost impossible these days. And a jailbreak enabling exploit now requires a whole chain of exploits which are worth significant money and also get patched as soon as they become public.
So something like the old iphone jailbreaking scene is just impossible now.
It's still a thing, but only on older devices. Eventually exploits get published, long after they've been sold many times.
Always annoyed me how "jailbreaks" don't get the same scrutiny as software vulnerabilities do on other platforms, even though that's what they are...
I've run into similar issues before. The solution seems reasonable, but I'm skeptical about the claimed performance improvements.
There have been some V4L2 enhancements to support hardware video decoding pending a merge for a long time; they do seem to be in the mainline kernel now. I guess people didn't want to wait that long.
Project Zero has to report bugs to Android through the front door, and deal with Android VRP severity classification? I always assumed they could just walk over to the Android office and advocate for their bugs, face to face.
If they felt it was too painful to do it the "normal" way, then that would probably be the next thing for Project Zero to try to get fixed.
This assumes that Android would listen to them.
hm. surprised there aren't idioms like copy_(to|from)_user for these kinds of kernel-to-userspace mappings for custom device nodes that ensure bounds are supplied...
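Something like the following, maybe - a sketch of a hypothetical helper (remap_device_region is invented for illustration, not a real kernel API) that forces callers to supply the size of the backing region:

```
/* Hypothetical helper, not an existing kernel API: the caller must pass
 * the size of the physical region backing the device node, and the
 * requested mapping is validated against it before anything is remapped. */
static int remap_device_region(struct vm_area_struct *vm,
			       phys_addr_t paddr, size_t region_size)
{
	unsigned long size = vm->vm_end - vm->vm_start;

	/* Reject offset mappings and requests larger than the region. */
	if (vm->vm_pgoff != 0 || size > region_size)
		return -EINVAL;

	return remap_pfn_range(vm, vm->vm_start, paddr >> PAGE_SHIFT,
			       size, vm->vm_page_prot) ? -EAGAIN : 0;
}
```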
I read about the Pixel 9 Dolby decoder bug, and it is based on an integer overflow. It was a mistake to allow the "+" operator to overflow, and this must be fixed in new languages like Rust, but it is not.
In Rust, the decision about whether to pay for overflow checks or just wrap (because all modern hardware will just wrap if you don't check, and that's cheaper) is a choice you can make when compiling software. By default you get checks except in release builds, but you can choose checks everywhere, even in release builds, or no checks even in debug.
By definition in Rust it's incorrect to overflow the non-overflowing integer types, so if you intend, say, wrapping, you should use the explicit wrapping operations such as wrapping_add, or the Wrapping&lt;T&gt; types, in which the default operators do wrap. But if you turn off checks then it's still safe to be wrong, just as if you'd called the wrapping operations by hand instead of the non-wrapping operations.
That Dolby overflow code looks awkward enough that I can't imagine writing it in Rust even if the checking was off - but I wasn't there. However the reason it's on Project Zero is that it resulted in a bounds miss, and that Rust would have prevented anyway.
I love most of what Rust does, but this is something they just got wrong. The + operator should always trap on overflow. Which Rust kinda wanted to do (hence why it does that in debug builds), but then they chickened out about the performance risk for release builds, undermining the entire thing. The result is just weak lip service to "no UB!", since debug and release still have very different behavior
I think Zig has the most interesting approach here, with 3 different "+" operators (+ aborts on overflow, +% wraps, and +| saturates) along with the @addWithOverflow builtin. It'd probably be a challenge for Rust to adopt that at this point, but it'd be a great improvement.
"If you turn off checks" is misleading, it's just incorrect by default in release mode: https://doc.rust-lang.org/book/ch03-02-data-types.html#integ...
> is a choice you can make when compiling software
That is not a solution, because it means the code can behave differently and expose a vulnerability if the wrong compilation settings are chosen.
Functions like "wrapping_add" have such long names that nobody wants to use them, and they make the code ugly. Instead, "+" should be used for addition with exceptions, and something like "wrap+" or "<+>" or "[+]" for wrapping addition.
That's how people work: they will choose the laziest path (the simplest function name), and this is why you should use "+" for safer, non-wrapping addition and make the symbol for wrapping addition long and unattractive. Make writing unsafe code harder. This is just basic psychology.
C has the same problem: it has functions checking for overflow, but they have long and ugly names that discourage their use.
> modern hardware will just wrap if you don't check and that's cheaper
So you suggest that because x86 is a poorly designed architecture, we should adapt programming languages to its poor design? x86 will be gone sooner or later anyway.
Also, there are languages like JS, Python, and Swift which chose the right path; it is only C and Rust developers who seem to be backwards.
> That is not a solution because it means the code can behave differently, and expose vulnerability if wrong compilation settings are chosen.
If the software is correct, nothing changes. The existence of people who write nonsense but expect you to work around that doesn't change between languages; they write crap in Swift or Python or JavaScript just the same.
The long names are because there are, in fact, a lot of things you might want. Although Swift manages to take several pages and lots of diagrams to explain what wrapping is, that is in fact all their special operators do. What if you don't want wrapping? Too bad.
Rust provides saturating arithmetic, which is almost always what you wanted for signal processing (e.g. audio); separate "carry" booleans to do arithmetic the way you were probably shown in primary school; the wrapping most often provided by hardware and useful in cryptography among other places; and explicitly cheap-but-dangerous and expensive-but-safe options. It also provides both kinds of division (and remainder), which doesn't matter for the unsigned integers but is important for signed integers, and is a source of confusion and woe when languages provide only one kind, or worse, a mixture that makes no mathematical sense. These all need names.
Swift does those too but correctly ranks the options by likelihood of use.
You can’t ergonomically report an error from +. Also it is terrible fundamentals to panic from fundamental operations imo.
So all operations should be function calls imo. There is not much point in having operators
__builtin_add_overflow exists and it’s basically free on most CPUs out there.
> __builtin_add_overflow exists and it’s basically free on most CPUs out there.
This is a very C-flavoured "solution". For those who haven't seen it this involves a pointer (!) and we're going to compute the addition, write the result to the pointed-at integer and then if that didn't fit and so it overflowed we'll return true otherwise false.
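A minimal usage sketch (the builtin is real GCC/Clang; the values here are arbitrary):

```
#include <stdio.h>

int main(void) {
	int a = 2000000000, b = 2000000000, sum;

	/* Computes a + b, writes the (possibly wrapped) result through the
	 * pointer, and returns true iff the result didn't fit in an int. */
	if (__builtin_add_overflow(a, b, &sum))
		puts("overflow");
	else
		printf("sum = %d\n", sum);
	return 0;
}
```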
The closest Rust analogy would be T::carrying_add which returns a pair to achieve a similar result.
And yeah, checking is "basically free" unless it isn't; that's no different here. If you haven't measured, you don't know - same in every programming language.
It's never been true that you can't write correct software in C or C++; the problem is that in practice you won't do so.
Huawei fixed it in their Cangjie language. According to the docs [1][2], it throws an exception by default and you can use an annotation to get wrapping or saturation instead.
(Cangjie seems like a pretty nice language in other ways as well. Similar to Kotlin with some improvements and no Java. Bootstrapping the toolchain from source seems difficult though.)
[1] https://docs.cangjie-lang.cn/en/docs/0.53.13/white_paper/sou...
[2] https://docs.cangjie-lang.cn/en/docs/0.53.13/spec/source_en/...
I've been using this as a touchstone for whether or not we are actually going to take security seriously for a long time.
We've moved slightly closer to this, but in a world where we're still arguing over memory safety being necessary we've probably still got a ways to go before we notice that addition silently overflowing is a top-10 security issue. It's the silent top-10 security issue, I guess.
Isn't it often combined with poor bounds checks to be exploitable? It's not as if Rust or VM-based languages don't help a lot with this.
It isn't because no ISA implements add like that, so there's always performance on the table if you check every time, and people would probably endlessly moan about how Rust is 20% slower than C on this add-heavy microbenchmark.
That said you can enable overflow checks in Rust's release mode. It's literally two lines:
[profile.release]
overflow-checks = true
I wonder if it would make sense for ISAs to have trapping versions of add and subtract. RISC-V's justification for not doing that is that it's only a couple more instructions to check afterwards. It would be interesting to see the performance difference of `overflow-checks = true` on high-performance RISC-V chips once they are available.

I think it is 3 extra instructions on RISC-V if you add signed numbers. So 1 addition (the most popular operation) turns into 4 instructions. What are those people thinking? I generally like RISC-V, but this part, in my opinion, is wrong. They should just have added an "overflow enabled" bit to the add instruction.
In fairness, I don't think it's quite as much of a no-brainer as you'd think. Firstly, when RISC-V was developed, Rust was still pre-1.0. People didn't think it would amount to anything. So most high-performance code was C/C++, which doesn't have checked arithmetic.
Second, it's easy to say "trap on overflow" but traps are super annoying. You really ideally would want to avoid leaving user mode. As soon as you trap to the OS you're now dealing with signals which are pretty much the worst thing in the world. The 4 instruction case at least lets you just branch to other code.
So you ideally want an "add or branch" instruction, but there isn't enough space in the opcodes for that. The fallback is flags, which also massively suck. I don't know if anyone has a great solution to this problem.
> It isn't because no ISA implements add like that
MIPS does (did?). And VAX, IBM/360, ....
Overflow-check insns also exist in 6502 (bvs), x86 (into), m68k (trapv) - but the GP was after something that wouldn't need to be checked every time; if interpreted as "no extra instructions", that's a tall order. But the checks are practically free on modern big-core CPUs, since we have so much spare capacity in issue width in 99% of workloads.
It does seem like "What if we offer checked integer arithmetic operations?" is a cheaper experiment than CHERI's "What if we mechanically reify extent based provenance"?"
But also way less impactful. It would solve maybe 20% of serious security vulnerabilities whereas CHERI solves like 60% at least. More if you use its strong compartmentalisation capabilities (heh).
That said, CHERI is super complicated. Checked integer arithmetic operations would be way simpler.
> This is rendered even easier by the fact that the kernel is always at the same physical address on Pixel
OpenBSD fixed this back in 2017.
KASLR has been supported on Linux for a long time as well (2014). It has been disabled for GKI images for reasons:
https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux....
Most kernels have had KASLR support for well over a decade at this point. Linux does too but Pixel has it misconfigured.
fascinating how GrapheneOS achieves a high security level on the same hardware where Google failed to even randomize Android's kernel location
Randomizing the kernel location is of marginal utility at best. There are so many info leaks that KASLR ends up being only a small speed bump on the way to exploitation.
Here's a cool project that inventories all your KASLR info leaks: https://github.com/bcoles/kasld
Is Graphene vulnerable to these exploits?
The more interesting question is whether GrapheneOS had this vulnerability.
This published exploit sounds to be highly specialized to the specific build options.
It's easy to be secure if you just remove features. There's obvious tension here.
Could you be any more specific about what features they've removed such that the hardening functions work? Because I think there are none
They're quite open about it. https://grapheneos.org/features#attack-surface-reduction
You said removing features. This link is talking about making certain feautures optional and disabled by default, not removing them.
Did you happen to notice the phrase "stripping out code" in the first sentence?
And which features have been removed, as you claim? Removing code is not necessarily removing features. I use GOS and I honestly can't think of a missing feature compared to the stock OS, other than stuff not in AOSP in the first place, like gemini.
Disabled, is removed...
Removed from operation
Don't be ridiculous.
google has lost its focus with pixel phones
On selling ads? Or what do you mean their focus used to be, that they've now lost? I'm not at all negative about the paid features they've been offering over time, from Workspace to YouTube to hardware. I'm still very conflicted about giving Google of all places my custom, but for e.g. phones it's hard to avoid, and second-hand the prices are really quite competitive for a tangible hardware product (not a software subscription that you're stuck on). Not bad to shift focus to making these Pixel devices imo, so long as they remain open, that is.
KASLR isn't an effective mitigation against anything, and to me this is part of GrapheneOS's catalog of superficial but meaningless claims.
I've not seen someone refer to a portion of GrapheneOS's mitigations as superficial and meaningless before. What might an OS with significant improvements to usable attack surface reduction and exploit mitigations look like to you? What sort of things (given a team of less than a dozen contending with OS updates, upgrades and device support) would you have liked to see implemented?
I feel like people who hate on KASLR are basically the IQ bell curve meme but you haven't really provided much evidence to show which tail you are on.
And that is against a device whose BSP is actually open source and available for research!
Now imagine the dark horrors hiding in the BSPs of other Android devices... or embedded devices in general.
Frankly, it should be a requirement of Google's certification process that everything regarding drivers gets upstreamed into the Linux kernel. Yes, even if this adds quite a time delay to the usual hardware development process.
I hope the average person will soon understand the importance of security and will be OK with making the necessary sacrifices to achieve it. Almost everyone has something to protect, be it personal information or property (money, IP).
People love new technologies and features that make their lives easier, but so far only a small subset of these people have made a conscious decision to limit their exposure to risk by depriving themselves of benefits provided by some of these features.
It sure is wonderful to have your whole life digitized on a single computer. You can analyze, share, organize, gamify, record and so on every aspect of your life instantly and effortlessly. It's incredible, really. Technology is amazing. Except for the pesky bad actors who can do the digital equivalent of most physical crime from the other side of the world, anonymously, without you noticing.
It's like germs - if you don't wash your hands after touching something questionable and you don't experience any negative consequences, you'll learn not to wash them most of the time. It's just a waste of time. Maybe if you've touched something really gross, you'd wash them, but that would be the exception. Security is the same. If you've been using computers the same way for years, you'll learn nothing bad happens, so why bother having any hygiene, why bother making any tradeoffs?
Yes, you've heard the news of someone's nudes posted online, of someone's bank account drained or of some company's files ransomed, but you've also heard of someone dying from a brain parasite after touching a muddy puddle and rubbing their eyes afterwards. That happens rarely; we shouldn't worry about it. A car can hit you when you cross the street, lightning can strike you when you're just walking about, an aneurysm can end you at any time. No one is washing their hands all the time or constantly trying to minimize the streets they cross or anything like that. That would be foolish and impractical, and I agree.
That mindset is carried over to digital security, sadly. The risks are higher, the effort to keep good hygiene is lower, the ability for bad actors to completely fuck you is much greater than in meat space. The rewards are seemingly greater, too, until we realize that what we get from technology is just marginally better than what we get without it. Tech is amazing, but it doesn't make us transcend time and space. It lets us organize our schedule, tag people and places in photos and summarize chats. All of that is born out of meat space. Without tech we'd still have conversation, we'd still see new places, we'd still have calendars and todo lists. We get maybe 1% more than we would have if we didn't have any tech, but we let all our information and property sit unsecured for that 1% gain. That's fucked up, because the risks are big and will get bigger. And the tradeoffs we have to make to secure our digital lives may seem annoying, but are actually quite trivial. Less unnecessary sharing, more isolation and compartmentalization, different computers for different tasks, less proprietary hardware and software, etc. We could get 90% of that 1% benefit from tech if we spent just a bit of time and energy on securing our digital lives. But fuck it. Let's buy the latest flagship, let's use it for ID, banking, communication, file storage, camera, health tracking, everything. Because it's a tiny bit more inconvenient to get multiple computers for different purposes, to not get the latest and newest, to not install a bunch of unnecessary shit, to be careful about the digital realm at all.
Not really on topic, but a rant. I'm tired of people (friends and friends of friends) complaining to me that they got majorly fucked one way or another and acting like the universe owes them not to get fucked while they buy a computer that exposes their asshole to the world.
Claude summarises your rant:
TLDR: People are lazy about digital security, get badly burned, then act surprised. Don't be that person.