The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material in 2025, with “the vast majority” stemming from Amazon.

  • ImgurRefugee114@reddthat.com · 3 days ago

    Unlikely IMO. Maybe some… But if they scraped social media sites like blogs, Facebook, or Twitter, or their own CDNs, they would end up with dumptrucks full. Ask anyone who has to deal with UGC: it pollutes every corner of the net and it’s damn near everywhere. The proliferation of local models capable of generating photorealistic material has only made the situation worse. It used to be rare to uncover actionable cases, but the signal-to-noise ratio is garbage now and it’s overwhelming most agencies (who were already underwater to begin with).
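
    To make the scale problem concrete: the usual first-pass defense is to screen a scrape against hash lists of known material before it gets anywhere near a training set. Here’s a rough sketch of that kind of screening pass, using the open `imagehash` library and a made-up blocklist file as stand-ins for the real (non-public) tooling such as PhotoDNA:

```python
# Sketch only: screen a scraped image dump against a blocklist of known-bad
# perceptual hashes before it goes anywhere near a training set.
# `imagehash` (PyPI) and the file paths are illustrative stand-ins for the
# real tooling (e.g. PhotoDNA), which is not publicly available.
from pathlib import Path

import imagehash
from PIL import Image

HASH_DISTANCE_THRESHOLD = 5  # max Hamming distance that counts as a match


def load_blocklist(path: str) -> list[imagehash.ImageHash]:
    """Read one hex-encoded perceptual hash per line."""
    return [
        imagehash.hex_to_hash(line.strip())
        for line in Path(path).read_text().splitlines()
        if line.strip()
    ]


def screen_directory(image_dir: str, blocklist: list[imagehash.ImageHash]) -> list[Path]:
    """Return paths whose perceptual hash is close to any blocklisted hash."""
    flagged = []
    for img_path in Path(image_dir).rglob("*"):
        if img_path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
            continue
        try:
            h = imagehash.phash(Image.open(img_path))
        except OSError:
            continue  # unreadable or corrupt file; skip rather than crash
        if any(h - known <= HASH_DISTANCE_THRESHOLD for known in blocklist):
            flagged.append(img_path)
    return flagged


if __name__ == "__main__":
    hits = screen_directory("scrape/", load_blocklist("blocklist_hashes.txt"))
    print(f"{len(hits)} files flagged for manual review")
```

    Hash matching only catches known material, which is exactly why generated content wrecks the signal-to-noise ratio: there’s no prior hash to match against.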

      • ImgurRefugee114@reddthat.com · 3 days ago

        This is why I use the word ‘proliferation’ in the nuclear sense, though ‘contamination’ may be more apt. Since the days of SD1, these illegal capabilities have become more and more prevalent in the local image model space. The advent of model merging, mixing, and retraining/finetunes has caused a significant increase in the proportion of model releases that are contaminated.

        What you’re saying is ultimately true, but it was more true in the early days. Animated, drawn, and CGI content has always been a problem, but photorealistic capability used to be very limited and rare, usually coming from homebrewed proprietary finetunes published on shady forums. Since then, they’ve become much more prevalent. It’s estimated that somewhere between a quarter and a third of photorealistic SDXL-based NSFW models released on civit.ai during 2025 have some degree of this capability. (Speaking in purely boolean terms… I don’t think anyone has done a study on the perceptual quality of these capabilities, for obvious reasons.)

        Just as LLM benchmark test answers have contaminated open-source models, illegal capabilities gained from illegal datasets have contaminated image models, to the point where there are plenty of well-intentioned authors unknowingly contributing to the problem. There are some who go out of their way to poison models (usually with false-association training on specific keywords), but few bother, or even know, to do so.
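
        To make the merging point concrete: a community “merge” is often nothing more than a weighted average of two checkpoints’ tensors, so whatever either parent can generate tends to survive in the child, whether or not the person doing the merge ever knew it was there. A rough sketch of the mechanism (generic state-dict interpolation; the file names and the 0.5 ratio are made up):

```python
# Sketch only: naive checkpoint merging as linear interpolation of weights.
# This is the generic mechanism behind most community merges; the file
# names and the 0.5 ratio are made up for illustration.
import torch


def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Return alpha * A + (1 - alpha) * B for every tensor the two share."""
    merged = {}
    for key, tensor_a in sd_a.items():
        tensor_b = sd_b.get(key)
        if tensor_b is None or tensor_b.shape != tensor_a.shape:
            merged[key] = tensor_a  # keep A's weights where B has no counterpart
            continue
        merged[key] = alpha * tensor_a + (1.0 - alpha) * tensor_b
    return merged


if __name__ == "__main__":
    sd_a = torch.load("model_a.ckpt", map_location="cpu")
    sd_b = torch.load("model_b.ckpt", map_location="cpu")
    torch.save(merge_state_dicts(sd_a, sd_b, alpha=0.5), "merged.ckpt")
```

        Chain a few of these together (merges of merges, plus baked-in finetunes) and there’s no practical way to audit what a given release inherited.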

    • ColeSloth@discuss.tchncs.de · 3 days ago

      They wouldn’t bother trying to hide it if the images were pulled from those public services.

      They 100% know that if they revealed they used everyone’s private photos backed up to Amazon cloud as fodder for their AI, it would piss people off and they’d lose some business out of the deal.

      • ImgurRefugee114@reddthat.com · 3 days ago

        Well, another factor is provenance: they don’t keep records of exactly where they got their data from (for several reasons, plausible deniability likely among them). Sometimes at the dataset level, but almost never for an individual sample. “We found CSAM somewhere on maybe reddit or imgur or pinterest” is a functionally worthless report.
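
        To put “provenance” in concrete terms: keeping it would mean a per-sample record of where and when each file was fetched plus a content hash, something like the sketch below (field names and the JSON-lines format are made up, not anything any lab actually ships):

```python
# Sketch only: the kind of per-sample provenance record that's almost never
# kept. Field names and the JSON-lines format are made up for illustration.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class SampleProvenance:
    source_url: str      # exact URL the file was fetched from
    source_domain: str   # coarse origin (reddit, imgur, pinterest, ...)
    fetched_at: str      # ISO-8601 crawl timestamp
    sha256: str          # content hash, so the record survives renames
    dataset_shard: str   # which shard/archive the sample ended up in


def make_record(url: str, domain: str, content: bytes, shard: str) -> SampleProvenance:
    return SampleProvenance(
        source_url=url,
        source_domain=domain,
        fetched_at=datetime.now(timezone.utc).isoformat(),
        sha256=hashlib.sha256(content).hexdigest(),
        dataset_shard=shard,
    )


if __name__ == "__main__":
    rec = make_record("https://example.com/img/123.jpg", "example.com",
                      b"...raw image bytes...", "shard-0007.tar")
    # One JSON line per sample: a report can then cite the exact URL, crawl
    # date, and shard instead of "somewhere on maybe reddit or imgur".
    print(json.dumps(asdict(rec)))
```

        With records like that, a hash-list hit could be traced back to a specific URL and crawl date; without them you get exactly the kind of worthless report described above.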