I found the article in a post on the fediverse, and I can’t find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally by tracing the paths it took. What they found was nothing like the performance of mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, i.e. what its internal reasoning was. The answer was detailed, step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don’t “know” how they work

  • the second answer was a rephrasing of text from the training data that explains how math works, so the LLM just used that as its explanation
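
To make the shape of the experiment concrete, here’s a rough sketch (mine, not from the paper): `ask` is a hypothetical stand-in for an LLM client, returning canned text here so the snippet runs on its own.

```python
# Minimal sketch of the experiment's shape; NOT Anthropic's actual setup.
# `ask` is a hypothetical stand-in for a real LLM client; it returns
# canned answers so this snippet is self-contained.
def ask(prompt: str) -> str:
    canned = {
        "What is 7 + 4?": "11",
        "Explain step by step how you computed 7 + 4.":
            "I added the units digits: 7 + 4 = 11, wrote down the 1, "
            "and carried a 1 into the tens place.",
    }
    return canned[prompt]

# Step 1: the model produces the right answer.
print(ask("What is 7 + 4?"))

# Step 2: asked to introspect, the model produces a textbook-style
# explanation. The tracing research showed that the internal activity
# which actually produced step 1 looks nothing like this description:
# the explanation is generated text, not a report of the real computation.
print(ask("Explain step by step how you computed 7 + 4."))
```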

I think it was a very interesting and meaningful analysis

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher@lemmy.world, it’s this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT2: I’m aware LLMs don’t “know” anything and don’t reason, and that’s exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

  • patatahooligan@lemmy.world · 3 days ago

    > They work the exact same way we do.

    Two things being difficult to understand does not mean that they are the exact same.

    • Voldemort@lemmy.world · 3 days ago

      Maybe “work” is the wrong word; same output. Just as a belt drive and a chain drive do the same thing, or how fluorescent, incandescent, and LED bulbs all produce light even though they’re completely different mechanisms.

      What I was saying is that one is based on the other, so similar failure modes, like irrational thought even when the right answer is conjured, shouldn’t be surprising. Although an animal brain and a neural network are not the same, the broad concept of how they work is.