• Grimy@lemmy.world · 4 days ago

    I think it will be punished, but not how we hope. The laws will end up rewarding the big data holders (Getty, record labels, publishers) while locking out open source tools. The paywalls will stay and grow. It’ll just formalize a monopoly.

  • mspencer712@programming.dev · 4 days ago

      I think this might be hypocritical of me, but in one sense I think I prefer that outcome. Let those existing trained models become the most vile and untouchable of copyright infringing works. Send those ill-gotten corporate gains back to the rights holders.

      What, me? Of course I’ve erased all my copies of those evil, evil models. There’s no way I’m keeping my own copies to run, illicitly, on my own hardware.

      (This probably has terrible consequences I haven’t thought far enough ahead on.)

      • Grimy@lemmy.world · 4 days ago

        I understand the sentiment but I think it’s foolhardy.

        • The job losses still occur.
        • The handful of companies able to pay for the data gain a de facto monopoly (Google, OpenAI).
        • That monopoly is used to keep the price tag of state-of-the-art AI tools above consumer levels (your boss can afford to replace you, but you can’t afford to compete against him with the same tools).

        And all of that mostly benefits the data holders and big AI companies. Most image data sits on platforms like Getty, DeviantArt, and Instagram. It’s even worse for music and literature, where three record labels and five publishers own most of it.

        If we don’t get a proper open music model before these lawsuits conclude, we will never be able to generate music without being told what is or isn’t okay to write about.