• resipsaloquitur@lemmy.cafe
    1 day ago

    At least after hours of arguing with a bot and burning tons of money and energy you have a pile of code you can’t understand without paying a chatbot.

    • luciole (they/them)@beehaw.org
      1 day ago

      But will the chatbot understand itself? It’s fun when you start questioning the LLM line by line about its own slop in the same session and it starts flagging all sorts of things it did wrong. Why didn’t it write it correctly in the first place? Or is the fix wrong? Who knows? People, I guess. The model is fed on knowledge, but whether that knowledge activates in response to your prompt and comes back unadulterated is a coin toss.

        • Kichae@lemmy.ca
          7 hours ago

          That’s a problem, but the bigger issue is how the commercial models are tuned to tell you that you are never wrong.

          Or, more to the point, to tell people who don’t know what they’re talking about that they’re never wrong.