A screenshot of this question was making the rounds last week, but this article goes further and tests against all the well-known models out there.
It also includes outtakes on the 'reasoning' models.
I feel that a lot of the improvement in the recent batch of model releases comes from vetting the training data - basically the opposite of model collapse.
Nothing requires an LLM to train on the entire internet.