If an LLM can’t be trusted with a fast food order, I can’t imagine what it is reliable enough for. I really was expecting this to be the easy use case for these things.

It sounds like most orders still worked, so I guess we’ll see if other chains come to the same conclusion.

  • Bronzebeard@lemmy.zip · 2 days ago

    Sure, but how do you distill this into a rule a computer can follow? “Suspicious” is not an objectively measurable thing that a program can just check against.

    • TheRagingGeek@lemmy.world · 2 days ago

      Think the easiest way would be to collect order data for at least a good number of months, if not a couple of years, and use that as a baseline for what a typical human order looks like. Anything that deviates too far from that baseline gets handed to a human until someone can validate it as a good order. You could still get false positives on new menu items, though, unless you set a reasonable rule for items that have never appeared in the dataset before. Something like the sketch below.
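
      In code terms, a toy version of that baseline check might look something like this (everything here is hypothetical: the order format, the function names, the z-score cutoff — not anything a chain actually runs):

      ```python
      from collections import defaultdict
      from statistics import mean, stdev

      def build_baseline(historical_orders):
          """Collect per-item quantity history from past orders.

          historical_orders: iterable of dicts mapping item name -> quantity.
          """
          history = defaultdict(list)
          for order in historical_orders:
              for item, qty in order.items():
                  history[item].append(qty)
          return history

      def needs_human_review(order, history, z_threshold=3.0):
          """Flag an order that strays too far from the baseline.

          Suspicious = an item with no history at all (e.g., a brand-new
          menu item), or a quantity far beyond what people normally order.
          """
          for item, qty in order.items():
              past = history.get(item)
              if not past:
                  return True  # never seen before: let a person look at it
              if len(past) >= 2:
                  mu, sigma = mean(past), stdev(past)
                  if sigma > 0 and (qty - mu) / sigma > z_threshold:
                      return True  # absurd quantity for this item
          return False
      ```

      The false-positive problem is right there in the first check: a new menu item has no history, so every order containing it gets kicked to a human until the baseline catches up.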