cross-posted from: https://feddit.org/post/28915273
[…]
That marketing may have outstripped reality. Early reports from Mythos preview users, including AWS and Mozilla, indicate that while the model is very good and very fast at finding vulnerabilities, and requires less hands-on guidance from security engineers (a welcome time-saver for the human teams), it has yet to eclipse human security researchers.
“So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: “We also haven’t seen any bugs that couldn’t have been found by an elite human researcher.” In other words, it’s like adding an automated security researcher to your team. Not a zero-day machine that’s too dangerous for the world.



“Finding” bugs by throwing shit at the wall and assuming people will sort it out provides negative value. You are technically finding bugs, but you could do the same by just assuming every line of your code contains five bugs. The real question is “and then what?”, and the answer is “someone needs to sort them out and deal with them”. And if you have people who can fix a bug, they’re perfectly capable of finding it themselves. The bugs still exist because there aren’t enough people to fix them, and slop gen doesn’t help with that either.
It only provides negative value if the AI+review process takes longer than a human just finding the bugs would.
One of the biggest hurdles in infosec right now is the sheer volume of data. Sifting through mountains of logs and flagging anomalies is something AI actually excels at.
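To make that concrete, here is a minimal sketch of the kind of statistical anomaly flagging that such tooling automates at scale. It uses a median/MAD-based modified z-score (more robust to outliers than the plain mean/stdev version); the function name and the sample request counts are hypothetical, not from any real tool:

```python
import statistics

def flag_anomalies(values, threshold=3.5):
    """Return indices whose modified z-score exceeds `threshold`.

    The modified z-score uses the median and the median absolute
    deviation (MAD), so a single extreme outlier can't inflate the
    spread estimate and hide itself, which happens with mean/stdev.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        # All values identical (or nearly so): nothing to flag.
        return []
    # 0.6745 scales MAD to be comparable to a standard deviation
    # under a normal distribution.
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical per-minute request counts from a server log;
# the spike at index 8 is the anomaly.
requests_per_minute = [52, 48, 49, 50, 50, 51, 47, 53, 900, 51]
print(flag_anomalies(requests_per_minute))  # -> [8]
```

The point of the comment above stands either way: a script like this (or an ML model doing the same job on richer features) surfaces candidates, but a human still has to triage and fix what it flags.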