We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which token – a word or word fragment – is most likely to come next in the sequence, based on the data it has been trained on.
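The "statistical parrot" point can be made concrete with a toy sketch. This is purely illustrative – the corpus is made up, and real LLMs operate on sub-word tokens with billions of learned parameters rather than raw counts – but the principle of emitting the statistically likely continuation, with no understanding involved, is the same:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real models train on oceans of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram frequency model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" more often than "mat" or "fish"
```

Nothing in this program knows what a cat is; it only knows which strings tend to follow which. Scaling the same idea up does not, by itself, add understanding.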

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate and sweating).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI (a machine) and consciousness (a human phenomenon).

https://archive.ph/Fapar

  • TheGrandNagus@lemmy.world · 1 day ago

    To be fair, the term “AI” has always been used in an extremely vague way.

    NPCs in video games, chess computers, and other such tech are not sentient and do not have general intelligence, yet we’ve been referring to them as “AI” for decades without anybody taking issue with it.

    • skisnow@lemmy.ca · 11 hours ago

      I’ve heard it said that the difference between Machine Learning and AI is that if you can explain how the algorithm got its answer, it’s ML, and if you can’t, it’s AI.

    • benni@lemmy.world · 18 hours ago

      It’s true that the word has always been used loosely, but there was no issue with it because nobody believed that what was called AI had actual intelligence. Now this is no longer the case, and so it becomes important to be more clear with our words.

        • benni@lemmy.world · 12 hours ago

          I have no idea. For me it’s a “you recognize it when you see it” kinda thing. Normally I’m in favor of just measuring things with a clearly defined test or benchmark, but it is in the nature of large neural networks that they can be great at scoring on any desired benchmark while failing at the underlying ability the benchmark was supposed to test (overfitting). I know this sounds like a lazy answer, but it’s genuinely difficult to define a test for something whose essence is generalizing and reacting to new challenges.

          But whether LLMs do have “actual intelligence” or not was not my point. You can definitely make a case for claiming they do, even though I would disagree with that. My point was that calling them AIs instead of LLMs bypasses the entire discussion on their alleged intelligence as if it wasn’t up for debate. Which is misleading, especially to the general public.

    • MajorasMaskForever@lemmy.world · 1 day ago

      I don’t think the term AI has been used in a vague way. Rather, there’s a huge disconnect between how the technical fields use it and how the general populace does, and marketing groups heavily abuse that disconnect.

      Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it’s a bunny with a costume on. LLMs on a technical level fit this definition.

      The other definition is man-made. Artificial diamonds are a great example of this: they’re still diamonds at the end of the day, with the same chemical makeup and the same chemical and physical properties. The only difference is that they came from a laboratory, made by adult workers, rather than from child slave labor.

      My pet theory is that science fiction got the general populace to think of artificial intelligence using the “man-made” definition instead of the “fake” definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.

      • El Barto@lemmy.world · 1 day ago

        Dafuq? Artificial always means man-made.

        Nature also makes fake stuff. For example, some fish have an appendage that looks like a worm, to attract prey. It’s a fake worm. Is it “artificial”? Nope. Not man-made.

          • atrielienz@lemmy.world · 1 day ago

            Word roots say they have a point, though: artifice, artificial, etc. I think the main problem with how both of the people above you are using this terminology is that they’re focusing on the wrong word, and on how that word is being conflated with something it’s not.

            LLMs are artificial. They are a man-made thing intended to fool people into believing it is something it isn’t. What we’re meant to be convinced of is that they are sapiently intelligent.

            Mimicry is not sapience, and that’s where the argument for LLMs being real, honest-to-God AI falls apart.

            Sapience is missing from generative LLMs. They don’t actually think. They don’t actually have motivation. When we anthropomorphize them, we are fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That’s not what’s happening. But some of us are convinced that it is, or that it’s near enough that it doesn’t matter.