• luciole (they/them)@beehaw.org
    22 hours ago

    But will the chat bot understand itself? It’s fun when you start questioning the LLM line by line about its own slop in the same session and it starts flagging all sorts of things it did wrong. Why didn’t it write it correctly in the first place? Or is the fix wrong? Who knows? People, I guess. The model is fed on knowledge, but whether that knowledge will activate in response to your prompt and come back unadulterated is a coin toss.

      • Kichae@lemmy.ca
        4 hours ago

        That’s a problem, but the bigger issue is how the commercial models are tuned to tell you that you are never wrong.

        Or, more to the point, tuned to tell people who don’t know what they’re talking about that they’re never wrong.