How could an artificial intelligence (as in LLM-based generative AI) be better for information access and retrieval than an encyclopedia with a clean classification model and a search engine?
If we add a processing step – where a genAI “digests” perfectly structured data and tries, as badly as it can, to regurgitate things it doesn’t understand – aren’t we just adding noise?
I’m talking about the specific use-case of “draw me a picture explaining how a pressure regulator works”, or “can you explain to me how to code a recursive pattern matching algorithm, please”.
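To make the second prompt concrete: here is a minimal sketch of the kind of answer one might hope for, a classic recursive matcher supporting just `.` (any character) and `*` (zero or more of the preceding element). This is purely illustrative, not something any particular AI produced.

```python
def match(pattern: str, text: str) -> bool:
    """Recursively match `text` against `pattern` ('.' = any char, 'x*' = zero+ x's)."""
    # An empty pattern matches only empty text.
    if not pattern:
        return not text
    # Does the first pattern element match the first text character?
    first = bool(text) and pattern[0] in (text[0], ".")
    # A '*' lets the preceding element repeat zero or more times:
    # either skip the 'x*' pair entirely, or consume one char and retry.
    if len(pattern) >= 2 and pattern[1] == "*":
        return match(pattern[2:], text) or (first and match(pattern, text[1:]))
    # Otherwise both pattern and text advance by one.
    return first and match(pattern[1:], text[1:])
```

For example, `match("a*b", "aaab")` holds, while `match("ab", "abc")` does not, since the whole text must be consumed.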
I also understand how it can help people who cannot, or do not want to, make the effort to learn an encyclopedia’s classification plan, or how a search engine’s syntax works.
But on a fundamental level, aren’t we just adding an uncontrollable noise-injection step into a decent, time-tested information flow?
To me the value has come mostly from “ok, so it sounds to me like you are saying that…” and the ability to confirm that I haven’t misunderstood something (of course, with current LLMs, both the original answer and the verification have to be taken with a heap of salt). And the ability to adapt it on the go to a concrete example. So, kind of like having a teacher or an expert friend, not just a search engine.
The last time I relied heavily on an LLM to help/teach me something, it was to explain the PC boot process and BIOS/UEFI, and how that applied, step by step, to successfully dealing with USB and bootloader issues on an “eccentric” HP laptop while installing Linux. The combination of explaining, doing, and answering questions was way better than an encyclopedia. No doubt it could have been done with blog posts and textbooks, and I did have to make “educated guesses” on occasion, but all in all it was a great experience.