Guess we can always rely on the good old-fashioned ways to make money…

Honestly, I think it’s pretty awful, but I’m not surprised.

  • Halcyon@discuss.tchncs.de · 2 days ago

    There’s loads of hi-res ultra HD 4k porn available. If a professional wants to train on that, it’s not hard to find. And if someone wants to play a leading role in the field of AI training, then of course they’ll invest the necessary money and not use shady material from peer-to-peer networks.

    • tal@lemmy.today · 2 days ago

      There’s loads of hi-res ultra HD 4k porn available.

      It’s still gonna have compression artifacts. The whole point of lossy compression using psychoacoustic and psychovisual models is to degrade the material as far as possible without the loss being noticeable to a viewer. That doesn’t impact you if you’re just viewing the content, but it does become a factor once you transform it or train on it: you’re working with something in a reduced colorspace, with blocking and color shifts and so on.
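      To make that concrete, here’s a rough sketch (Pillow/NumPy; the file name and quality settings are arbitrary) of how much a single JPEG round trip changes the pixels even when the result still looks fine to the eye:

      ```python
      # Round-trip an image through JPEG at a few quality levels and measure the
      # damage with PSNR. PSNR only captures part of it (blocking and chroma
      # shifts are perceptual), but it makes the "invisible" loss concrete.
      import io

      import numpy as np
      from PIL import Image

      def psnr(a: np.ndarray, b: np.ndarray) -> float:
          """Peak signal-to-noise ratio between two uint8 images, in dB."""
          mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
          return float("inf") if mse == 0 else float(10 * np.log10(255.0 ** 2 / mse))

      original = np.asarray(Image.open("frame.png").convert("RGB"))  # hypothetical source frame

      for quality in (95, 80, 60):
          buf = io.BytesIO()
          # Pillow's JPEG encoder applies lossy DCT quantization (and usually
          # chroma subsampling), the same kind of loss found in web-sourced video.
          Image.fromarray(original).save(buf, format="JPEG", quality=quality)
          buf.seek(0)
          degraded = np.asarray(Image.open(buf).convert("RGB"))
          print(f"quality={quality}: PSNR={psnr(original, degraded):.1f} dB")
      ```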

      I can go dig up a couple of diffusion models finetuned off SDXL that generate images with visible JPEG artifacts, because they were trained on a corpus that included a lot of said material and didn’t have some kind of preprocessing to deal with it.

      I’m not saying it’s technically impossible to build something that can learn to process and compensate for all that, just that it adds complexity to the problem being solved. About 20 years back, I spent some time (unsuccessfully) on a personal project to add neural-net postprocessing that reduced the visibility of lossy compression artifacts, which is one way you might mitigate this.
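      For illustration only (this isn’t the project described above), a minimal PyTorch sketch of that kind of learned artifact-reduction postprocessing might look like this:

      ```python
      # A tiny residual CNN trained to map compressed frames back toward their
      # uncompressed originals, i.e. artifact reduction as a postprocessing step.
      # The architecture, loss, and hyperparameters are placeholder choices.
      import torch
      import torch.nn as nn

      class ArtifactReducer(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
                  nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
                  nn.Conv2d(32, 3, kernel_size=5, padding=2),
              )

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              # Predict a correction and add it back to the degraded input.
              return x + self.net(x)

      model = ArtifactReducer()
      loss_fn = nn.L1Loss()
      opt = torch.optim.Adam(model.parameters(), lr=1e-4)

      def train_step(compressed: torch.Tensor, clean: torch.Tensor) -> float:
          """One optimization step on a batch of (compressed, clean) crop pairs."""
          opt.zero_grad()
          loss = loss_fn(model(compressed), clean)
          loss.backward()
          opt.step()
          return loss.item()
      ```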

      • brucethemoose@lemmy.world · edited · 1 day ago

        It’s easy to get rid of that with prefiltering/culling and some preprocessing. I like BM3D+deblock, but you could even run the images through light GAN or diffusion passes (rough sketch below).

        A lot of amateur LoRA makers aren’t careful about that, but I’d hope anyone shelling out for a major fine-tune would be.
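        A rough sketch of that kind of culling/preprocessing pass (OpenCV’s non-local-means is used as a stand-in for BM3D+deblock here, and the paths and thresholds are made up):

        ```python
        # Cull the worst-compressed images with a crude 8x8 blockiness score, then
        # lightly denoise the survivors before they go into a training set.
        import cv2
        import numpy as np
        from pathlib import Path

        def blockiness(gray: np.ndarray) -> float:
            """Ratio of pixel jumps on 8x8 block boundaries vs. elsewhere (~1.0 is clean)."""
            diffs = np.abs(np.diff(gray.astype(np.float32), axis=1))
            on_grid = diffs[:, 7::8].mean()                      # differences across block edges
            off_grid = np.delete(diffs, np.s_[7::8], axis=1).mean()
            return float(on_grid / (off_grid + 1e-6))

        out_dir = Path("filtered")
        out_dir.mkdir(exist_ok=True)

        for path in Path("raw_images").glob("*.jpg"):
            img = cv2.imread(str(path))
            if img is None:
                continue
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            if blockiness(gray) > 1.3:                           # threshold is a guess; tune it
                continue                                         # cull: too much visible compression
            clean = cv2.fastNlMeansDenoisingColored(img, None, 3, 3, 7, 21)  # light denoise
            cv2.imwrite(str(out_dir / path.name), clean)
        ```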

      • brucethemoose@lemmy.world · 1 day ago

        Also, “minor” compression in high-quality material isn’t so bad, especially if you’re starting from a pretrained model. A light denoising step will blend it into nothing.