• wjs018@piefed.social · 1 day ago

      The theory from the lead maintainer (he is an actual software developer; I just dabble) is that it might be a form of reinforcement learning:

      • Get your LLM to create what it thinks are valid bug reports/issues
      • Monitor the outcome of those issues (closed immediately, discussion, eventual pull request)
      • Use those outcomes to assign how “good” or “bad” that generated issue was
      • Use that scoring as a way to feed back into the model to influence it to create more “good” issues

      If this is what’s happening, then it’s essentially offloading your LLM’s reinforcement learning scoring to open source maintainers.
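      The loop described above can be sketched in a few lines. This is purely hypothetical, assuming the commenter's theory is right: the outcome labels, reward values, and function names are all my own illustration, not anything confirmed about a real system.

      ```python
      # Hypothetical sketch of the feedback loop described above: assign a
      # scalar reward to each LLM-generated issue based on how maintainers
      # responded to it. Outcome names and reward values are assumptions.
      OUTCOME_REWARD = {
          "closed_immediately": -1.0,  # maintainer rejected it as noise
          "discussion": 0.5,           # drew engagement, possibly useful
          "pull_request": 1.0,         # led to an actual fix
      }

      def score_issues(issues):
          """Turn (issue_text, outcome) pairs into reward-labeled examples."""
          examples = []
          for text, outcome in issues:
              reward = OUTCOME_REWARD.get(outcome, 0.0)
              examples.append({"prompt": text, "reward": reward})
          return examples

      batch = score_issues([
          ("Possible null deref in parser", "pull_request"),
          ("App is broken plz fix", "closed_immediately"),
      ])
      ```

      Each scored example would then feed a policy-gradient-style update nudging the model toward issues that earn positive reward, which is exactly why the maintainers' reactions become free labeling labor.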

      • SabinStargem@lemmy.today · 4 hours ago

        Honestly, I would be alright with this if the AI companies paid GitHub so the server infrastructure could be upgraded. Having AI that can figure out bugs and error reports could be really useful for society. For example, is your computer rebooting for no apparent reason? The AI could check the diagnostic reports, combine them with online reports, and narrow down the possibilities.

        In the long run, this could help maintainers as well. If they can use AI for testing programs, maintainers won’t have to hope for volunteers or rely on paid QA to detect issues.

        What GitHub and the AI companies should do is offer an opt-in program for maintainers. If a maintainer allows the AI to officially file reports, GitHub should offer a reward of some kind. Allocate each maintainer a number of credits so they can discuss the report with the AI in real time, plus $10 for each hour spent resolving the issue.

        Sadly, I have the feeling that malignant capitalism would demand that maintainers sacrifice their time for nothing but irritation.

      • HubertManne@piefed.social · 1 day ago

        That’s wild. I don’t have much hope for LLMs if things like this are how they’re doing things, and I would not be surprised given how well they don’t work. Too much quantity over quality in training.