  • So, I will grant that right now humans are better writers than LLMs. And fundamentally, I don’t think the way that LLMs work right now is capable of mimicking actual human writing, especially as the complexity of the topic increases. But I have trouble with some of these kinds of distinctions.

    So, not to be pedantic, but:

    AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.

    Couldn’t you say the same thing about a person? A person couldn’t write something without having learned to read first, and without having read things similar to what they want to write.

    LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent.

    This is kind of the classic Chinese Room thought experiment, though, right? Can you prove to someone that you are intelligent, and that you think? As LLMs improve and become better at sounding like a real, thinking person, does there come a point at which we’d say the LLM is actually thinking? And if you say no, the LLM is just an algorithm generating probabilities from its training data (or whatever techniques come next), how can you show that your own thoughts aren’t just some algorithm too, formed out of neurons that have been trained on the data passed to them over the course of your lifetime?
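
    To make “just an algorithm generating probabilities” concrete, here’s a toy sketch (everything in it is invented for illustration; a real model learns its probabilities from huge amounts of training data rather than hard-coding them):

        # Toy sketch only: pick the next word by sampling from a probability
        # distribution over a tiny vocabulary. The vocabulary and the numbers
        # below are made up; a real LLM learns its distribution during training.
        import random

        vocab = ["the", "cat", "sat", "on", "mat"]

        def next_word_probs(context):
            # Hard-coded probabilities standing in for billions of learned parameters.
            if context and context[-1] == "the":
                return [0.05, 0.45, 0.05, 0.05, 0.40]  # "cat" or "mat" likely after "the"
            return [0.40, 0.15, 0.15, 0.15, 0.15]

        def generate(context, steps=5):
            for _ in range(steps):
                probs = next_word_probs(context)
                context.append(random.choices(vocab, weights=probs, k=1)[0])
            return " ".join(context)

        print(generate(["the"]))  # e.g. "the cat sat on the mat"

    The point of the sketch is just that “probability-based” describes the mechanism, not the quality of the output; it doesn’t by itself settle whether anything is being understood.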

    And when they start hallucinating, it’s because they don’t understand how they sound…

    People do this too, though… It’s just that LLMs do it more frequently right now.

    I guess I’m a bit wary about drawing a line in the sand between what humans do and what LLMs do. As I see it, the difference is how good the results are.