This is a bad faith argument.
Search engines are notoriously bad at finding rare, specialized information and usually return empty results for overly specific requests. Moreover, you need the exact keywords, while LLMs use embeddings to find similar meanings.
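(For concreteness, here is a minimal sketch of what “use embeddings to find similar meanings” can look like in code. The sentence-transformers package and the all-MiniLM-L6-v2 model are illustrative choices of mine, not anything named in the comment above, and real search or retrieval stacks are of course far more involved.)

```python
# Toy sketch: a query that shares no meaningful keywords with the relevant
# document can still be ranked closest by embedding similarity.
# The library and model below are illustrative assumptions, not anything
# endorsed in the thread.
from sentence_transformers import SentenceTransformer, util

docs = [
    "UndefinedError raised when importing a Jinja2 macro without context",
    "How to centre an element horizontally with CSS",
    "Configuring systemd timers as a cron replacement",
]
query = "my template engine complains that a variable doesn't exist inside an included helper"

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every document; higher = closer meaning.
scores = util.cos_sim(query_vec, doc_vecs)[0]
for doc, score in zip(docs, scores):
    print(f"{float(score):.3f}  {doc}")

# A literal keyword filter would miss the first document entirely: the query
# never mentions "Jinja2", "UndefinedError" or "macro", only paraphrases of them.
```

This only illustrates the matching mechanism; it says nothing about whether a chatbot built on top of it makes a good search engine.)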
Search engines haven’t worked reliably for several years now; the top results for almost any search are from social media pages that you can’t even read without an account. The Internet is broken.
No, they didn’t, and they still don’t really do that.
There are too many things (nowadays?) where you have to literally write a question on Reddit, Stack Overflow, Lemmy or the like and explain your situation in minute detail, because what you find online through search engines is only the standard case which just so happens to not work for you for some odd reason.
Believe me when I say that: I always try search engines first, second and third before even thinking of using some bs-spitting AI, but it really helped me with two very special problems in the last month.
what you find online through search engines is only the standard case which just so happens to not work for you for some odd reason
Usually because the highest-rated solution is half-assed bullshit proposed by an overconfident newbie (or an LLM regurgitating it). I mainly use Stack Overflow as a way to become pissed off enough that I’ll go solve the problem myself, like I should have done in the first place. Indignation As A Service.
This is also in part true.
Today I was searching for multiple things regarding jinja2 and kept getting recommended a site that no longer exists, as the top result, mind you.
This is primarily because search engines have become so unreliable and enshittified that they are useless. It’s not a mark in favor of AI as much as a reminder of how bad search engines have become.
For the record I do the same thing after failing to find anything on DuckDuckGo after multiple attempts. Maybe I should give Kagi a try, but AI is making the entire internet worse, so I feel pessimistic about that, too.
It’s not a search engine, it’s a data digester. Don’t use it as a search engine. Despite what alphabet, micro-shit, and DDG think, AI chatbots do not now, nor will they ever, make good search engines.
This is a prime example of why access to these tools should be restricted to computer scientists and research labs. The average person doesn’t know how to use them effectively (resulting in enormous power wasted by ‘prompt engineering’), and the standard available models aren’t good at digesting non-linguistic data.
I’m not gonna downvote you, or be all “AI is the devil and it’s gonna kill us all”, but people need to use it correctly or we ARE going to kill ourselves with its waste heat.
It works wonderfully well as a search engine when I have to find obscure, specialized info. I can always cross-check once I have an idea.
Because companies destroyed actual search engines in the race for billions of dollars.
Kagi and searx are fricken awesome, and much like the web in the mid-2000s before corporations destroyed it.
Regular search engines did that 20 years ago, without blowing out the power grid.
Ah, but you see, they don’t do it now.
Yeah, then people learned how to game it and it’s shit now. Pointing out how something worked 20 years ago does shit all for how it works now.
Speak of the devil. Just a few stories down:
https://www.theverge.com/ai-artificial-intelligence/835839/google-discover-ai-headlines-clickbait-nonsense
And LLMs aren’t gamed? Like Grok constantly being tweaked to not say anything inconvenient about Musk? Or ChatGPT citing absurd Reddit posts deliberately made by users to make AI responses wrong?
AI is built from the ground up to do what they want, and they’re no better than those crappy info-scraper sites like wearethewindoezproz dot com that scrape basic info off every other site and offer it as a solution to your problem with [SOLVED] in the result title. “Did you turn it off and on again?”
The “people learned how to game it” is called SEO, and you’re right, they did.
Guess what, there’s GEO to game the results of LLMs. It works just as well, is harder to spot, and traditional SEO platforms like Ahrefs and SEMRush are already training users on how to do it.
So congrats, the argument that using LLMs for search is a good solution because people learned how to game search engines makes no sense.
And now we have something better. I’m all for a better grid running on renewables though, which is the actual problem.
Edit: ficksed an werd