If you really want to secure your computer, encase that puppy in concrete (after disconnecting it from power).


Agreed 110%.


I am not.


They’re really good.*
Personally, I think the term “AI” is an extreme misnomer. I am calling ChatGPT “next-token prediction.” This notion that it’s intelligent is absurd. Like, is a dictionary good at words now???


Lol. “I came to break some necks and chew some bubblegum – and I’m all out of bubblegum.”


In fairness, crummy books can hardly be blamed on AI. To quote my mother, “That train’s left the station.”
Like, the AI slop ones will probably have better writing, sadly.


I really don’t have this experience with ChatGPT. Every once in a while, ChatGPT returns an answer that doesn’t seem legitimate, so I ask, “Really?” And then it replies, “No, that is incorrect.” Which… I really hope the robots responsible for eliminating humans are not so hapless. But the stories about AI encouraging kids to kill themselves or citing books that don’t exist seem a little made up. And, like, don’t get me wrong: I want to believe ChatGPT listed glue as a good ingredient for making pizza crust thicker… I just require a bit more evidence.
I agree with you, but something jumped out at me while reading this thread. To a degree, the fear of “breaking something” is completely legitimate, but it stems from not getting quick feedback from systems. For instance, if you are walking in a direction you think is east, but the sun is setting ahead of you, you know you’re headed the wrong way. Computers often don’t provide such useful feedback, which leads users to “break things.”