• sp3ctr4l@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    1
    ·
    edit-2
    2 hours ago

    Basically anywhere that LLMs are implemented… they are a security vulnerability, for any situation in which they are not sandboxed.

    Anything they can interface with?

    You can probably trick it or exploit it into doing something unintended or unexpected to anything else it is connected to.

    Either that or take advantage of the system that serves as the framework that connects it to other systems.

    Theoretically you could use an LLM to do something like come up with more accurate heuristics for identifying malware…

    But… they’re nowhere near ‘intelligent’ enough to like, give it a whole code base for some kind of software, and thoroughly make that software 100% secure.