I found the article in a post on the fediverse, and I can’t find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and then inspected how it worked internally. What they found resembled pattern-matching over similar paths, but nothing like performing mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it had found the result, i.e. what its internal reasoning was. The answer was detailed, step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed two things:

  • LLMs don’t “know” how they work

  • the second answer was a rephrasing of text from the training data that explains how math works, so the LLM just used that as an explanation

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher@lemmy.world, it’s this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT2: I’m aware LLMs don’t “know” anything and don’t reason, and that’s exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

  • ipkpjersi@lemmy.ml · 3 days ago

    “the ability to satisfy goals in a wide range of environments”

    That was not the definition of AGI even back before LLMs were a thing.

    Whether we’ll ever have thinking, rationalised, and possibly conscious AGI is beyond the question. But I do think current AI is similar to existing brains today.

    That’s doing a disservice to AGI.

    Do you not agree that animal brains are just prediction machines?

    That’s doing a disservice to human brains. Humans are sentient, LLMs are not sentient.

    I don’t really agree with you.

    LLMs are damn impressive, but they are very clearly not AGI, and I think that’s always worth pointing out.

    • Voldemort@lemmy.world · 3 days ago

      The first person recorded talking about AGI was Mark Gubrud. He made the quote above; here’s another:

      The major theme of the book was to develop a mathematical foundation of artificial intelligence. This is not an easy task since intelligence has many (often ill-defined) faces. More specifically, our goal was to develop a theory for rational agents acting optimally in any environment. Thereby we touched various scientific areas, including reinforcement learning, algorithmic information theory, Kolmogorov complexity, computational complexity theory, information theory and statistics, Solomonoff induction, Levin search, sequential decision theory, adaptive control theory, and many more. (Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, p. 232, §8.1.1)

      As UGI largely encompasses AGI, we could easily argue that if modern LLMs are beginning to fit the description of UGI then they’re fulfilling AGI too. Although AGI’s definition in more recent times has become more nuanced, shifting toward replicating a human brain instead, I’d argue that trying to replicate biology would degrade the AI.

      I don’t believe it’s a disservice to AGI because AGI’s goal is to create machines with human-level intelligence. But current AI is set to surpass collective human intelligence, supposedly by the end of the decade.

      And it’s not a disservice to biological brains to summarise them as prediction machines. They work, very clearly. Sentience or not, if you simulated every atom in the brain, it would likely do the same job, soul or no soul. It just raises the philosophical questions of “do we have free will?” and “is physics deterministic?”. So much text exists on brains being prediction machines, and the only time it has recently been debated is when someone tries to differentiate us from AI.

      I don’t believe LLMs are AGI yet either; I think we’re very far away from AGI. In a lot of ways I suspect we’ll skip AGI and go for UGI instead. My firm opinion is that biological brains are just not effective enough. Our brains developed to survive the natural world, and I don’t think AI needs that to surpass us. I think UGI will be the equivalent of our intelligence with the fat cut off. I believe it only resembles our irrational thought patterns now because the fat hasn’t been stripped yet, but if something truly intelligent emerges, we’ll probably see these irrational patterns cease to exist.