• sugar_in_your_tea@sh.itjust.works

    There’s a difference between healthy skepticism and invalid, knee-jerk opposition.

    LLMs are sometimes a useful tool. I use them to refine general ideas into specific things to research, and they’re pretty good at that. Sure, their output isn’t trustworthy on its own, but I can easily verify most of what they spit out, and they do a great job of surfacing a lot of material related to what I asked.

    For example, I’m a software dev, so I’ll often ask something like, “compare and contrast popular projects that do X”, and it’ll find a few and give easily verifiable details about each one. Sometimes it’s wrong on a detail or two, but it gives me enough to decide which ones to look into more deeply. Or I’ll do some greenfield research into a topic I’m not familiar with, and it does a fantastic job of pulling out keywords and other domain-specific terms that help refine what I search for.
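    If you wanted to script that kind of “compare and contrast” query instead of typing it into a chat UI, it’s only a few lines. This is just a rough sketch assuming the official OpenAI Python client and a made-up topic; any provider’s API works about the same way:

    ```python
    # Rough sketch: ask an LLM for a project comparison, then verify the claims yourself.
    # Assumes the official OpenAI Python client (pip install openai) and an API key in
    # the OPENAI_API_KEY environment variable; the topic below is a hypothetical example.
    from openai import OpenAI

    client = OpenAI()

    topic = "Rust HTTP client libraries"  # hypothetical example

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "List popular projects and give concrete, checkable details "
                           "(repo URL, license, maintenance status) for each.",
            },
            {"role": "user", "content": f"Compare and contrast popular {topic}."},
        ],
    )

    # Treat the output as a list of leads to research, not as ground truth.
    print(response.choices[0].message.content)
    ```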

    LLMs do a lot less than their proponents claim, but they also do a lot more than their detractors claim. They’re a useful tool if you understand the limitations and have a rough idea of how they work. They’re a terrible tool if you buy into the BS coming from the large corps pushing them. I will absolutely push back against people on both extremes.