“Surprising” is perhaps the wrong word. If you have even a vague understanding of how these work, then nothing is really surprising. However, when you use a bot day to day and learn how to integrate it into your workflow, you get used to a certain level of quality, but occasionally (regularly?) run into something that doesn’t meet your expectations.
I agree that the way some people are interacting with these LLMs is… odd. However, people engage in so many odd behaviors that I have to say: if they’re not harming anyone, then have at it.
I agree for the most part.
Don’t Gell-Mann yourself.
If it spits out plausible-looking but incorrect things that you notice with high frequency, how much do you not notice?
I’m just not using Gen AI that way.
Like, I don’t ask it to provide me with technical details; rather, I provide the details and ask it to rephrase them.