Some college students are beginning to limit their use of artificial intelligence, so as not to hinder their own creativity, discipline and critical thinking
Lol. Almost no one who has an opinion on AI is a technophobe. I’d argue it’s the opposite: being informed enough to have an opinion means you must like technology. However, LLMs have been proven to be confidently inaccurate and misleading, which creates situations where people believe they’re correct when the model just made shit up.
Sure, it helps you get an answer pretty quickly, but then you have to check that answer for accuracy (if you aren’t a fucking idiot who just trusts the thing innately). Most of the time it doesn’t actually save time over learning to do it yourself. For example, I sometimes use a local model to write boilerplate code for me, but when I ask it to write anything that actually solves a problem, it’s almost never correct. Then I have to parse its output and figure out what it was doing so I can fix it, when I could have just written the thing myself and been done.
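To make “almost never correct” concrete, here’s a made-up sketch of the failure mode I mean (hypothetical code, not output from any particular model): a function that passes the happy path and looks done, but hides exactly the edge cases you only catch by reading it line by line.

```python
# Hypothetical sketch: plausible-looking interval merging with the kind of
# subtle edge-case bugs that LLM output tends to hide.

def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals."""
    intervals = sorted(intervals)
    merged = [intervals[0]]          # crashes with IndexError on an empty list
    for start, end in intervals[1:]:
        if start >= merged[-1][1]:   # '>=' leaves touching intervals unmerged
            merged.append([start, end])
        else:
            merged[-1][1] = max(merged[-1][1], end)
    return merged

print(merge_intervals([[1, 4], [2, 6], [8, 10]]))  # [[1, 6], [8, 10]], looks right
print(merge_intervals([[1, 3], [3, 5]]))           # [[1, 3], [3, 5]], stays split
```

Auditing that for edge cases takes longer than writing the ten lines yourself, which is the whole point.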
Yeah, it’s great if you’re an idiot and just want to sound smart. It’ll give you an answer that seems reasonable, and you can move on with your day. But there are very good odds it isn’t correct if the question is anything complex and/or niche. (I think I saw something not long ago claiming a 70% chance of being wrong.) And if it isn’t complex or is common, you didn’t need a fucking LLM to solve it.