cross-posted from: https://feddit.org/post/28915273

[…]

That marketing may have outstripped reality. Early reports from Mythos preview users, including AWS and Mozilla, indicate that while the model is very good and very fast at finding vulnerabilities - and requires less hands-on guidance from security engineers, making it a welcome time-saver for the human teams - it has yet to eclipse human security researchers.

“So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: “We also haven’t seen any bugs that couldn’t have been found by an elite human researcher.” In other words, it’s like adding an automated security researcher to your team. Not a zero-day machine that’s too dangerous for the world.

  • MangoCats@feddit.it · +16/−4 · 2 days ago

    In other words, it’s like adding an automated security researcher to your team. Not a zero-day machine that’s too dangerous for the world.

    Missing the point? Hiring an elite human researcher isn’t easy, or cheap - it’s beyond the means of the vast majority of people out there. A $20/month Claude Pro subscription? Not so much.

    The question for me: How much better is Mythos than Opus 4.6 or 4.7, or Sonnet for that matter? Those models, and similar ones from other companies, are already being effectively leveraged by threat actors. If Mythos reduces the time-and-money cost of finding a new zero-day by a factor of 10 vs Opus 4.7 - that’s concerning. If it’s a factor of 1.1 - meh… the world is going to have to learn how to deal with these things sooner rather than later, and that means the “white hats” are going to need superior funding to the “black hats,” along with cooperation to close the gaps they find, or the “black hats” are going to get a lot more annoying than they already are.

    • Nalivai@lemmy.world · +3 · 1 day ago

      People for some reason assume that you can pay $20 for a bot and it will do something. You need a person with a lot of experience to get something useful from this bot, and every time we actually measure, the results show that your experienced person will be quicker and better off not using it at all and doing the same work themselves.
      The corporate solution is to hire an inexperienced person to wrangle the bots, but that’s a sure way to introduce bugs, not fix them.

      • MangoCats@feddit.it · +1 · 1 day ago

        You need a person with a lot of experience to get something useful from this bot,

        Not entirely true. You get a lot more useful things from the bots when they are driven by people with a lot of experience. The problem that’s coming now is a magnified version of the “skript kiddiez” of the early Google days, when inexperienced people could just find exploits on the web and copy-paste them. Today, the LLMs actually can find vulns and develop exploits for people who don’t have any knowledge of the languages the exploits are being written in.

        every time we actually measure, the results show that your experienced person will be quicker and better off not using it at all and doing the same work themselves.

        From my perspective, your data is out of date. I’ve been tracking the “usefulness” of frontier models in accelerating development speed for experienced people over the past 2 years. Two years ago, total waste of time. One year ago - equivocal, sometimes it accelerates an implementation, sometimes not. Six months ago, it was clearly helping more than hurting in most cases, and it has only continued to improve since then.

        Knowing what you are doing helps. Trusting that the LLM will help, helps - if you set out to show it’s a waste of time, a waste of time it will be. Lately, treating the LLM like a consultant - just hired, likely to disappear any day - helps. Take the time to run all the formal processes, develop the requirements documentation, tests, etc. Yes, that “slows things down,” but not in the long run across realistic project life cycles - even with humans doing the work.
        Also along those lines: keep designs modular, with modules of reasonable complexity - monolithic monster blocks of logic don’t maintain well for people either. LLM implementations start falling apart when their effective context windows get exceeded (and, in truth, people do too).

      • MangoCats@feddit.it · +1 · 1 day ago

        no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate

        Yeah, that’s cooked data - it’s too easy to ask the LLM to give you the CVE list, the CVSS distribution / severity buckets, timelines, everything you might want.

        I have LLMs doing pull request reviews, and by default they just take potshots - but if you prompt them, they will point directly to the files and line numbers where the problems they’re flagging reside…
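        The difference between potshots and pinpointed findings is mostly in the prompt. A minimal sketch of the idea - everything here is hypothetical, not a real reviewer API: `build_review_request()` and the stubbed `send_to_model()` stand in for whatever LLM client and model you actually use:

```python
# Hypothetical illustration of prompting a reviewer model for
# file/line-anchored findings instead of generic advice. The
# send_to_model() stub stands in for a real chat-completion call.

REVIEW_INSTRUCTIONS = (
    "Review the following diff. For every problem you report, cite the "
    "exact file path and line number, quote the offending line, and "
    "describe the concrete failure mode. Do not give general style advice."
)

def build_review_request(diff_text: str) -> str:
    """Attach the location-demanding instructions to the diff under review."""
    return f"{REVIEW_INSTRUCTIONS}\n\n--- DIFF ---\n{diff_text}"

def send_to_model(prompt: str) -> str:
    """Stub for a real API call; returns a canned example of the
    file:line-anchored answer this kind of prompt tends to elicit."""
    return "parser.c:142 - `strcpy(buf, input)` can overflow `buf`"

report = send_to_model(build_review_request("+ strcpy(buf, input);"))
print(report)
```

        The point is only the shape of the request: demand a file path, a line number, and a quoted line up front, and vague stylistic commentary largely disappears from the responses.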

    • ashughes@feddit.uk · +8/−2 · 2 days ago

      How much better is Mythos than Opus 4.6 or 4.7, or Sonnet for that matter?

      Opus 4.6 resulted in 22 fixes in Firefox 148, compared to 271 fixes with Mythos in Firefox 150.

      source

        • frongt@lemmy.zip · +4/−2 · 1 day ago

          Firefox is a massive program, so yeah, it’s gonna have a lot of bugs. Even a simple HTML-rendering browser is a complex program.

            • Nalivai@lemmy.world · +1 · 1 day ago

              What do you do with your browsers so they crash? Mine hasn’t done that in at least a decade.

              • MangoCats@feddit.it · +1 · 1 day ago

                More often than crashing outright, I hit situations where the browser just isn’t working - it won’t load pages, or won’t execute button clicks on pages, or similar - and the only thing (on Windows) that will fix it is a reboot. On Linux, usually closing the browser and restarting will get it going again. Yeah, BSODs are rare lately (though not entirely gone), but malfunctions still abound.

                • Nalivai@lemmy.world · +1 · 23 hours ago

                  Interesting. So far, all my experiences with stuff like that turned out to be faulty hardware.

                  • MangoCats@feddit.it · +1 · 22 hours ago

                    My last confirmed faulty-hardware crash (one triggered by user operation - not an outright failure to boot, or a random crash “for no particular reason” like a program hitting a failing SSD) came in the late 90s, with a GPU card that would drag down the system bus voltage in response to certain CAD operations - repeatably: do this rotation, watch the CPU hard-reboot every time. Stay away from the GPU-heavy operations - no problems.

                    These days the browser is the OS for over half of what happens on my work machines. And they’re almost, but not quite, 100% reliable, until they’re not. Working out those rare problems takes a long time, and with “progress” it feels like they’ve reached a kind of equilibrium where the rate of new problem introduction is about the same as the rate of known problem fixes.