• luciole (they/them)@beehaw.org · 1 day ago

    It’s hard having two decades of experience in a domain I suddenly find myself at odds with. Reading about others having the same qualms reassures me that I’m not going crazy. On the other hand, I feel drawn further into an untenable, contradictory position.

    Once in a while I give in. It’s typically when I’m faced with a non-trivial problem I realize will take me days of learning before I have any chance of tackling it. My colleagues start suggesting it, or share some slop to “help out”. So I think fuck it, I’ll study later; for now AI will solve it, I need this ticket closed asap. I fire up a “decent” paid model and I start feeding it context. Every time it’s a nightmare. Hours of trying stuff that doesn’t stick, of questioning, of arguing with a chat bot, of wading through “here are the facts” and “good catch” and “I owe you an apology”. It’s not a shortcut, it’s a fucking dead end. Then the bitter aftertaste can only be cleansed with cold, hard, time-consuming actual learning.

    • Furbag@pawb.social · 44 minutes ago

      I am so glad to hear that I am not the only one who finds AI coding to be an almost futile exercise. I spend more time talking to the damn robot trying to get it to fix problems than I would if I had just done it more slowly and deliberately in the programming language I am familiar with, or just circumvented the automation effort and done the task manually. All three seem to take about the same amount of time.

    • resipsaloquitur@lemmy.cafe · 1 day ago

      At least after hours of arguing with a bot and burning tons of money and energy, you have a pile of code you can’t understand without paying a chatbot.

      • luciole (they/them)@beehaw.org · 1 day ago

        But will the chat bot understand its own output? It’s fun when you start questioning the LLM line by line about its own slop in the same session and it starts flagging all sorts of things it did wrong. Why didn’t it write it correctly in the first place? Or is the fix wrong? Who knows? People, I guess. The model is fed on knowledge, but whether that knowledge will activate in response to your prompt and come back unadulterated is a coin toss.

          • Kichae@lemmy.ca · 6 hours ago

            That’s a problem, but the bigger issue is how the commercial models are tuned to tell you that you are never wrong.

            Or, more to the point, to tell people who don’t know what they’re talking about that they’re never wrong.