Or you can use AI and FOSS - optimization of both worlds!
I don’t want hallucinations and lies in my open source.
It wouldn’t be used to fact-check, you know.
I rather like AI to fold my proteins.
AI is for lying to investors that your company is going to turn a profit in a few years. FOSS projects don’t need that.
Try to separate the AI hype from AI.
AI has been around for years and we all utilize the results of that research.
Remember that at one time a compiler was seen as AI.
It’s the curse of AI: once a problem is solved, it’s no longer AI. It just becomes a tool, and we adjust what “intelligence” means to exclude the new abilities of computers and code.
Even LLMs have value, just not in how they’re being used. If you carefully curate the training materials, you could have a useful tool.
I’d love to see an LLM trained exclusively on medical records of patients who were successfully diagnosed and treated. I wouldn’t want to give it a medical license, but it could be a useful tool in the hands of a competent physician. It might turn out to be useless, but we need to try it.
Stop referring to LLMs as AI, for starters.
Why? On what basis?
LLMs are AI as much as the enemies in a game are AI. It’s not General AI though, which companies really seem to want people to believe it is.
It was kind of OK to refer to enemies as AI back then, because not a single human (or investor) truly believed that a bot in CS could write text, paint an image, or replace you at your job. Now misusing the term leads to unnecessary and dangerous confusion.
Robotic arms with AI had been taking over jobs well before LLMs were a thing.
No, no they weren’t, not even close. Unless you’re using AI in the loosest way possible, including machine learning algorithms that we were calling AI as a joke in the 90s.