We have all seen the AI-based search tools available on the web, like Copilot, Perplexity, DuckAssist, etc., which scour the web for information, present it in summarized form, and cite sources in support of the summary.
But how do they know which sources are legitimate and which are simply BS? Do they exercise judgement while crawling, or do they have some kind of filter list around the “trustworthiness” of various web sources?
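For concreteness, I imagine something roughly like the sketch below happening before summarization: retrieved pages get ranked by relevance times some per-domain trust prior, and only the top few survive as citations. This is purely my guess — the domains, weights, and thresholds here are made up, not anything these products actually publish.

```python
# Hypothetical sketch of source filtering in an AI search pipeline.
# The trust weights and scoring rule are illustrative assumptions only.
from dataclasses import dataclass
from urllib.parse import urlparse

# Assumed per-domain trust priors; a real system would more likely combine
# link-graph signals, spam classifiers, and manual curation.
TRUST_WEIGHTS = {
    "wikipedia.org": 0.9,
    "nature.com": 0.95,
    "stackoverflow.com": 0.8,
    "example-content-farm.com": 0.1,
}
DEFAULT_WEIGHT = 0.5  # unknown domains get a neutral prior


@dataclass
class Source:
    url: str
    relevance: float  # 0..1, e.g. from an embedding similarity search


def trust_of(url: str) -> float:
    """Look up a trust prior for the URL's domain (subdomains match too)."""
    host = urlparse(url).netloc.lower()
    for domain, weight in TRUST_WEIGHTS.items():
        if host == domain or host.endswith("." + domain):
            return weight
    return DEFAULT_WEIGHT


def score(source: Source) -> float:
    """Combine retrieval relevance with the per-domain trust prior."""
    return source.relevance * trust_of(source.url)


def select_citations(candidates: list[Source], k: int = 3,
                     min_score: float = 0.3) -> list[Source]:
    """Keep the top-k candidates that clear a minimum combined score."""
    ranked = sorted(candidates, key=score, reverse=True)
    return [s for s in ranked if score(s) >= min_score][:k]


if __name__ == "__main__":
    candidates = [
        Source("https://en.wikipedia.org/wiki/Retrieval-augmented_generation", 0.8),
        Source("https://example-content-farm.com/10-facts", 0.9),
        Source("https://stackoverflow.com/q/12345", 0.6),
    ]
    for s in select_citations(candidates):
        print(f"{score(s):.2f}  {s.url}")
```

With these made-up weights, the content-farm page gets ranked below the Wikipedia and Stack Overflow hits despite its higher raw relevance, which is the kind of behavior the question is really asking about: is there an explicit list like this, or something learned?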
LLMs can’t describe themselves or their internal layers. You can’t ask ChatGPT to describe its censorship.
Instead, you’re getting a reply based on how other sources in the training set described how LLMs work, plus the tone appropriate to your chat.
The illusion is STRONG. I just typed up two draft replies before I realized what you’re actually saying here.