I found the article in a post on the fediverse, and I can’t find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally by tracing its internal paths, but nothing in there looked like performing mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, i.e. what its internal reasoning was. The answer was detailed, step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don’t “know” how they work

  • the second answer was a rephrasing of original text used for training that explains how math works, so the LLM just used that as an explanation

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher@lemmy.world, it’s this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT2: I’m aware LLMs don’t “know” anything and don’t reason, which is exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

  • glizzyguzzler@lemmy.blahaj.zone · 4 days ago

    You can prove it’s not by doing some matrix multiplication and seeing that it’s matrix multiplication. Much easier way to go about it.

    • theunknownmuncher@lemmy.world · 4 days ago

      Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn’t post a relevant or complete thought
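
      (To make “matrix operations” concrete, here’s a toy sketch of a single network layer; every shape is made up and the weights are random stand-ins, not anything from a real model:)

      ```python
      # A single feed-forward layer really is just a matrix multiply, a bias add,
      # and a nonlinearity; an LLM forward pass stacks many of these (attention is
      # more matrix multiplies). All values here are random placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(size=(1, 8))     # one input vector with 8 features
      W = rng.normal(size=(8, 16))    # weights (random stand-ins for learned values)
      b = np.zeros(16)                # biases

      h = np.maximum(0.0, x @ W + b)  # ReLU(xW + b) -- that's the whole "layer"
      print(h.shape)                  # (1, 16)
      ```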

      Your comment is like saying an audio file isn’t really music because it’s just a series of numbers.

      • glizzyguzzler@lemmy.blahaj.zone · 4 days ago

        Improper comparison; an audio file isn’t the basic action on data, it is the data; the audio codec is the basic action on the data

        “An LLM model isn’t really an LLM because it’s just a series of numbers”

        But the action of turning the series of numbers into something of value (an audio codec for an audio file, matrix math for an LLM) is an action that can be analyzed

        And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It’s matrix math; it’s cool, we love matrix math. Really big matrix math is really cool and makes real-sounding stuff. But it’s just matrix math, and that’s how we know it can’t think

        • theunknownmuncher@lemmy.world · 3 days ago

          LOL you didn’t really make the point you thought you did. It isn’t an “improper comparison” (it’s called a false equivalency FYI), because there isn’t a real distinction between information and this thing you just made up called “basic action on data”, but anyway have it your way:

          Your comment is still exactly like saying an audio pipeline isn’t really playing music because it’s actually just doing basic math.

          • glizzyguzzler@lemmy.blahaj.zone · 15 hours ago

            I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)

            There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

            An audio codec (not a pipeline) is just actually doing math - just like the workings of an LLM. There’s plenty of work to be done after the audio codec decodes the m4a to get to tunes in your ears. Same for an LLM, sandwiching those matrix multiplications that make the magic happen are layers that crunch the prompts and assemble the tokens you see it spit out.
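
            (Roughly that sandwich as a toy sketch; every name, shape, and “vocabulary” here is invented for illustration, nothing like a real tokenizer or model:)

            ```python
            # crunch the prompt -> matrix-math middle -> assemble the tokens
            import numpy as np

            rng = np.random.default_rng(0)
            VOCAB = ["7", "+", "4", "=", "11", "12"]     # pretend vocabulary
            EMBED = rng.normal(size=(len(VOCAB), 8))     # pretend embedding table
            W_OUT = rng.normal(size=(8, len(VOCAB)))     # pretend output projection

            def tokenize(text):                          # the "crunch the prompt" layer
                return [VOCAB.index(t) for t in text.split() if t in VOCAB]

            def next_token_id(ids):                      # the matrix-math middle
                h = EMBED[ids].mean(axis=0)              # stand-in for attention/MLP stacks
                logits = h @ W_OUT                       # project back onto the vocabulary
                return int(np.argmax(logits))            # greedy pick of the next token

            ids = tokenize("7 + 4 =")
            ids.append(next_token_id(ids))
            print(" ".join(VOCAB[i] for i in ids))       # the "assemble the tokens" layer
            ```

            (With random stand-in weights the “answer” is gibberish, of course; training is what makes the matrix math land on “11”.)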

            LLMs can’t think, that’s just the fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think, to sell their product! I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it was settled long ago. AI companies will string LLMs together and let them chew for a while to try to make them catch when they’re dropping bullshit. It’s still not thinking and reasoning though. They can be useful tools, but LLMs are just tools, not sentient or verging on sentient.

            • theunknownmuncher@lemmy.world · 9 hours ago

              There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

              Incorrect. You might want to take an information theory class before speaking on subjects like this.

              I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago.

              Lmao yup totally, it’s not like this type of research currently gets huge funding at universities and institutions or anything like that 😂 it’s a dead research field because it’s already “settled”. (You’re wrong 🤭)

              LLMs are just tools not sentient or verging on sentient

              Correct. No one claimed they are “sentient” (you actually mean “sapient”, not “sentient”, but it’s fine because people commonly mix these terms up. Sentience is about the physical senses: if you can respond to stimuli from your environment, you’re sentient; if you can “I think, therefore I am”, you’re sapient). And no, LLMs are not sapient either, and sapience has nothing to do with neural networks’ ability to do mathematical reasoning or logic; you’re just moving the goalpost.

    • whaleross@lemmy.world · 3 days ago

      People who can’t do matrix multiplication don’t possess the basic concepts of intelligence now? Or is software that can do matrix multiplication intelligent?

      • glizzyguzzler@lemmy.blahaj.zone · 14 hours ago

        So close: LLMs work via matrix multiplication, which is well understood by many meat bags, and matrix math can’t think. If a meat bag can’t do matrix math, that’s OK, because the meat bag doesn’t work via matrix multiplication. Lol, imagine forgetting how to do matrix multiplication and disappearing into a singularity or something

        • whaleross@lemmy.world · 14 hours ago

          Well, on the other hand, meat bags can’t really do neuron stuff either, even though it’s essential for any meat bag operation. Humans are still here though, and so are dogs.