A screenshot of this question was making the rounds last week, but this article covers testing it against all the well-known models out there.

It also includes outtakes from the 'reasoning' models.

  • MangoCats@feddit.it · 1 hour ago

    I feel that a lot of the improvement in the recent batch of model releases comes from vetting their training data - basically the opposite of model collapse.

    Nothing requires an LLM to train on the entire internet.