I was just looking for the name of a historical figure associated with the Declaration of Independence but not involved in the writing of it. Elizabeth Powel. Once I knew the name, I went through the AI to see how fast it would get it. Duck.ai confidently gave me 9 different names, including people who were born in 1776 or soon thereafter and could not have been historically involved in any of it. I even said not married to any of the writers and kept getting Abigail Adams and the journalist, Goddard. It was continually distracted by “prominent woman” and would give Elizabeth Cady Stanton instead. Twice.
Finally, I gave the AI a portrait. It took three tries to get the name from the portrait, and that portrait is the most-used one under the images tab.
It was very sad. I strongly encourage everyone to test the AI. Easy-to-grab wikis that would be at the top of the search anyway are making the AI look good.
If you understand how LLMs work, that’s not surprising.
LLMs generate a sequence of words that makes sense in context. They’re trained on trillions(?) of words from books, Wikipedia, etc. In most of the training material, when someone asks “what’s the name of the person who did X?” there’s an answer, and that answer isn’t “I have no fucking clue”.
Now, if it were trained on a whole new corpus of data that had “I have no fucking clue” a lot more often, it would see that as a reasonable thing to print sometimes, so you’d get that answer a lot more often. However, it doesn’t actually understand anything; it just generates sequences of believable words. So it wouldn’t generate “I have no fucking clue” when it doesn’t know, it would just generate it occasionally when it seemed like an appropriate time. You’d ask “Who was the first president of the USA?” and it would sometimes say “I have no fucking clue”, because that’s sometimes what the training data says a response looks like to a question of that form.
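To make that concrete, here’s a toy sketch in Python (made-up probabilities and a lookup table standing in for a real model, which works over tokens and billions of parameters) of why sampling plausible continuations produces exactly that behavior:

```python
import random

# Toy stand-in for a trained model: all it "knows" for a given prompt is
# a distribution over responses that plausibly followed similar prompts
# in training. Whether a response is TRUE never enters into it.
response_distribution = {
    "Who was the first president of the USA?": [
        ("George Washington", 0.90),       # overwhelmingly common in training data
        ("I have no fucking clue", 0.05),  # questions are sometimes answered this way
        ("John Adams", 0.05),              # plausible-sounding but wrong
    ],
}

def generate(prompt: str) -> str:
    """Sample a response in proportion to how often it followed
    similar prompts in the (toy) training data."""
    responses, weights = zip(*response_distribution[prompt])
    return random.choices(responses, weights=weights, k=1)[0]

# Even for a question the model "knows" cold, the refusal still gets
# sampled occasionally -- and for a question it doesn't know, a
# confident wrong name comes out just as fluently as a refusal would.
for _ in range(5):
    print(generate("Who was the first president of the USA?"))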
LOL Maybe AI will be the next big job creator. The AI solves a task super fast, but then a human has to sort out the mistakes, which takes twice as long as just doing the task yourself would have.
Thus demonstrating the crux of the issue.
This is what’s happening in computer programming. The booming subfield is apparently slop cleaners.