Do you have any ideas or thoughts about this?

  • shalafi@lemmy.world (+3/−11) · edited 1 day ago

    Not in IT, huh? Because you missed my entire point. This isn’t like making a lame email that screams fake.

    I got stuck on a Google Calendar/Sheets integration. Almost no documentation or examples out there. After banging my head for hours it occurred to me to try this new AI thing.

    ChatGPT spit out some code. It didn’t work, of course, but I saw a new path I hadn’t considered, one I never knew existed! Picked out the bits I needed and got the script stood up within an hour, after wasting hours trying to do it from scratch.
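
    For what it’s worth, the glue-script part of that kind of Calendar-to-Sheets job is mostly flattening API event objects into rows. A minimal sketch of that step (hypothetical — the field names follow the Calendar API v3 event shape, but `event_to_row` and the sample data are my own illustration, not the actual script; the API/auth calls are omitted):

```python
# Hypothetical sketch: flatten Google Calendar API v3 event dicts into
# rows suitable for appending to a spreadsheet. The actual API calls
# (google-api-python-client, OAuth credentials) are intentionally omitted.

def event_to_row(event):
    """Turn one Calendar API event dict into a flat row of strings."""
    start = event.get("start", {})
    return [
        event.get("summary", ""),
        # All-day events carry "date"; timed events carry "dateTime".
        start.get("dateTime", start.get("date", "")),
        event.get("location", ""),
    ]

# Example events shaped like a Calendar API v3 response:
rows = [
    event_to_row({
        "summary": "Standup",
        "start": {"dateTime": "2024-05-01T09:00:00-07:00"},
        "location": "Room 2",
    }),
    event_to_row({"summary": "Holiday", "start": {"date": "2024-07-04"}}),
]
print(rows)
```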

    People like you were criticizing the use of fire back in the day. “Oog burned hut new fire thing!” “Oog antelope shit head, no use fire good.” “Fire bad FIRE BAD!”

    • expr@programming.dev (+16/−4) · 1 day ago

      Cute. I’m a senior software engineer who has trained many different models (NLP, image classification, computer vision, LIDAR analysis) before this stupid fucking LLM craze. I know precisely how they work (or rather, I know how much people don’t know how they work, because of the black-box approach to training). From the outset, I knew people believed it was much more capable than it actually is, because that was incredibly obvious to someone who’s actually built the damn things before (albeit with much less data/power).

      Every developer I see who loves LLMs is pretty fucking clueless about them and thinks of them as some magical device that has actual intelligence (just like everybody does, I guess, but I expect better of developers). It has no semantic understanding whatsoever. It’s stochastic generation of sequences of tokens to loosely resemble natural language. It’s old technology recently revitalized because large corporations plundered humanity in order to brute force their way into models with astronomically high numbers of parameters, so they are now “pretty good” at resembling natural language, compared to before. But that’s all it fucking is. Imitation. No understanding, no knowledge, no insight. So calling it “inspiration” is a fucking joke, and treating it as anything other than a destructive amusement (due to the mass ecological and sociological catastrophe it is) is sheer stupidity.
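
      The “stochastic generation of sequences of tokens” point can be made concrete with a toy example (my own illustration, not a claim about how production LLMs are built — they use learned transformer weights over huge vocabularies, but the generation loop has the same shape: sample the next token from a distribution conditioned on what came before):

```python
import random
from collections import defaultdict

# Toy bigram sampler: count which token followed which in a tiny corpus,
# then emit each next token in proportion to those counts. The output
# looks locally fluent, yet nothing here "understands" anything.
corpus = "the cat sat on the mat and the cat sat on the mat".split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)

random.seed(0)
token = "the"
out = [token]
for _ in range(5):
    token = random.choice(counts[token])  # pure frequency-based sampling
    out.append(token)
print(" ".join(out))
```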

      I’m pissed off about it for many reasons, but especially because my peers at work are consistently wasting my fucking time with LLM slop and it’s fucking exhausting to deal with. I have to guard against way more garbage now to make sure our codebase doesn’t turn into utter shit. The other day, an engineer submitted an MR for me to review that contained dozens of completely useless/redundant LLM-generated tests that would have increased our CI time a shitload and bloated our codebase for no fucking reason. And all of it is for trivial, dumb shit that’s not hard to figure out or do at all. I’m so fucking sick of all of it. No one cares about their craft anymore. No one cares about being a good fucking engineer and reading the goddamn documentation and just figuring shit out on their own, with their own fucking brain.

      By the way, no actual evidence exists of this supposed productivity boost people claim, whereas we have a number of studies demonstrating the problems with LLMs, like MIT’s study on its effects on human cognition, or this study from the ACM showing how LLMs are a force multiplier for misinformation and deception. In fact, not only do we not have any real evidence that it boosts productivity, we have evidence of the opposite: this recent METR study found that AI usage increased completion time by 19% for experienced engineers working on large, mature, open-source codebases.

    • Phoenixz@lemmy.ca (+5) · 1 day ago

      I am in IT. CTO, yet still doing development.

      Anyone who delivered a pure AI project to me, I would reject immediately and have them first look at what the hell it actually is.

      That is the biggest issue with AI: people only use it for ready-to-go solutions. Nobody checks what comes out of it.

      I use AI in my IDE exactly like you mentioned; it gives me a wrong answer (because of course) and even though the answer is wrong, it might give me a new idea. That’s fine.

      The problem is the ready-to-go idiots who will just blindly trust AI, i.e., 90% of the humans in this world.

      • FreedomAdvocate (+1) · edited 11 hours ago

        The other problem is idiots (who tend to pretend they’re experts and the only ones who understand what AI is) who don’t understand that AI is a tool to be used, and don’t realise that people who are actually good at their job can figure out how to use it properly to benefit them. They think that because an idiot uses it incorrectly, that’s the only way to use it.