• AA5B@lemmy.world · 2 days ago

    The problem is this is the way it’s being pushed. This is how it’s being sold. There are no guardrails.

    …… and that’s the biggest problem. I’m frustrated as hell at the commits I’ve had to unwind because someone doesn’t check the changes before committing, then has the AI try to fix itself, again without checking the changes, then again. It’s horrible.

    …… and I’ve seen it too. Trying to have it do only code reviews - the AI points out useful things, but then wants to commit a crapload of changes without going over them with me first.

    …… and people are playing with MCP agents, which are really great for letting the AI get data from systems and integrate with those systems. But with few to no guardrails: there’s no review, the user doesn’t necessarily follow what’s changing, it just gets done. Sometimes badly, very badly.

    We’re all focused on whether the AI works, and it does do a pretty good job with coding, but the tools don’t keep the human in the loop, or humans don’t know how to stay in the loop.

    • jaykrown@lemmy.world · 2 days ago

      There are no guardrails.

      We CAN set the guardrails; I do it constantly. This technology is very powerful, so it’s up to us to use good practices, and up to business leaders and developers to ensure that precautions are taken.

      My main recommendation, and a hard limit that will never change: do not let the AI make core file changes without human-in-the-loop permission every time.

      If you let an AI agent delete files outside of your project directory without requiring you to review the change and click “I approve,” you’re setting yourself up for a huge mistake. Never give AI agents access to anything outside project scope, and keep project scope tight.
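      As a minimal sketch of that guardrail (the project root and function names here are hypothetical, and this isn’t any particular agent framework’s API - just one way to wire the gate yourself):

```python
from pathlib import Path

# Hypothetical project directory; the agent should never touch anything outside it.
PROJECT_ROOT = Path("/home/user/myproject").resolve()

def is_in_scope(path: str) -> bool:
    """Reject any path that resolves outside the project root."""
    resolved = Path(path).resolve()
    return resolved == PROJECT_ROOT or PROJECT_ROOT in resolved.parents

def approve(action: str, path: str) -> bool:
    """Human-in-the-loop gate: every destructive action needs an explicit yes."""
    if not is_in_scope(path):
        print(f"BLOCKED: {path} is outside project scope")
        return False
    answer = input(f"Agent wants to {action} {path}. Approve? [y/N] ")
    return answer.strip().lower() == "y"
```

      The `resolve()` call matters: it normalizes `../` segments and symlinks, so an agent can’t escape the project directory by constructing a path that merely starts with the project root.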