• raspberriesareyummy@lemmy.world · 3 days ago

    *LLM assistants

    And no shit, Sherlock. If you thought otherwise, you are living proof of why there should be laws protecting people from such predatory snake oil vendors.

    • XLE@piefed.social · edit-2 · 3 days ago

      I wonder how close we are to getting our details leaked to LLMs because rubes with our contact info are just dumping it on those servers. Kind of like how Facebook maintains a social graph of you even if you don’t use it.

  • 9tr6gyp3@lemmy.world · 3 days ago

    That’s the risk of centralized services. If you’re that worried about it, install a local model and run it offline on your PC instead.

    • P03 Locke@lemmy.dbzer0.com · 3 days ago

      What’s this, you say? Does it involve running LM Studio with a GLM-4.7 Flash model? And it only costs the kilowatts for my video card? And it can only go out to the internet if I allow it to?

      • 9tr6gyp3@lemmy.world · 3 days ago

        Your PC likely won’t handle a million-token context, but that’s the price you pay for being in the peasant class instead of the billionaire class.
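        For scale, a back-of-envelope KV-cache estimate shows why million-token contexts stay out of reach on consumer hardware. The model dimensions below are hypothetical, roughly 7B-class values, not the specs of any model named in this thread:

```python
# Rough KV-cache size for a transformer at a given context length.
# Layer/head/dim numbers are illustrative 7B-class assumptions.

def kv_cache_bytes(n_tokens, n_layers=32, n_kv_heads=8, head_dim=128, dtype_bytes=2):
    """Bytes needed to hold keys and values for n_tokens of context."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * dtype_bytes  # K and V
    return n_tokens * per_token

print(kv_cache_bytes(1_000_000) / 2**30)  # ~122 GiB of cache alone at 1M tokens
```

        Even with quantized weights, that cache dwarfs typical consumer VRAM, which is why local setups cap the context window far below a million tokens.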

  • Jiral@lemmy.org · 3 days ago

    I doubt that mine does. I’m running GPT-OSS-120B and Gemma4 31B on a local llama.cpp server on my PC.
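    A setup like this can be sketched with llama.cpp’s `llama-server`; the model filename below is a placeholder, not the commenter’s actual file, and the parameters are illustrative:

```shell
# Minimal llama.cpp server sketch (assumed paths and values).
# -m points at local GGUF weights, -c sets the context window
# (bounded by your RAM/VRAM), -ngl offloads layers to the GPU,
# and binding to 127.0.0.1 keeps the OpenAI-compatible API local-only.
llama-server -m ./models/gpt-oss-120b-Q4_K_M.gguf -c 32768 -ngl 99 \
  --host 127.0.0.1 --port 8080
```

    Nothing leaves the machine unless you expose that port yourself, which is the whole point of the local-model argument above.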