The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material in 2025, with “the vast majority” stemming from Amazon.

    • AmbitiousProcess (they/them)@piefed.social · 3 days ago

      Yep. They are allowed to use your photos to “improve the service,” which AI training would almost certainly qualify as, legally speaking. No notice to you required if they rip your entire album of family photos so an AI model can get 0.00000000001% better at generating fake family photos.

    • phx@lemmy.world · 2 days ago

      Yeah, a lot of people seem to think that these companies built these AIs by buying or building some sort of special training set, when in reality no such thing ever existed.

      They’ve basically just scraped every bit of data they can. When it comes to big corps, at least some of that data is likely from scraping customers’ data. There’s also scraping of the Internet in general, including sites such as Reddit (a big reason Reddit locked down its API: they wanted to sell that data), but many have also been caught with a ton of ‘pirated’ data from torrents etc.

      I’m sure there was a certain amount of sludge in customers’ synced files and on sites like Reddit, but I’d also hazard a guess that the stuff grabbed from torrents etc. included some truly heinous material that simply got added to what was being force-fed to AI, especially the early models.

    • ImgurRefugee114@reddthat.com · 3 days ago

      Unlikely IMO. Maybe some… But if they scraped social media sites like blogs, Facebook, or Twitter, or their own CDNs, they would end up with dumptrucks full. Ask anyone who has to deal with UGC: it pollutes damn near every corner of the net. The proliferation of local models capable of generating photorealistic material has only made the situation worse. It was rare to uncover actionable cases before, but the signal-to-noise ratio is garbage now, and it’s overwhelming most agencies (which were already underwater).

        • ImgurRefugee114@reddthat.com · 3 days ago

          This is why I use the word ‘proliferation,’ in the nuclear sense. Though ‘contamination’ may be more apt… Since the days of SD1, these illegal capabilities have become more and more prevalent in the local image model space. The advent of model merging, mixing, and retraining/finetunes has caused a significant increase in the proportion of model releases that are contaminated.

          What you’re saying is ultimately true, but it was more true in the early days. Animated, drawn, and CGI content has always been a problem, but photorealistic capability was very limited and rare, often coming from homebrewed proprietary finetunes published on shady forums. Since then, such models have become far more prevalent. It’s estimated that roughly a quarter to a third of the photorealistic SDXL-based NSFW models released on civit.ai during 2025 have some degree of capability. (Speaking purely in a boolean metric… I don’t think anyone has done a study on the perceptual quality of these capabilities, for obvious reasons.)

          Just as LLM benchmark test answers have contaminated open source models, illegal capabilities gained from illegal datasets have contaminated image models, to the point where plenty of well-intentioned authors are unknowingly contributing to the problem. Some go out of their way to poison models (usually with false-association training on specific keywords), but few bother, or even know, to do so.
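
          To make the contamination mechanics concrete, here’s a rough sketch of the naive weight interpolation that most community merges amount to. The file names are hypothetical and it assumes plain PyTorch checkpoints; real merge tools add convenience, not safeguards:

          ```python
          import torch

          def merge_checkpoints(path_a: str, path_b: str, alpha: float = 0.5) -> dict:
              """Blend two checkpoints: alpha * A + (1 - alpha) * B."""
              a = torch.load(path_a, map_location="cpu")
              b = torch.load(path_b, map_location="cpu")
              # Every weight of B -- including whatever its training data
              # taught it -- is carried into the result. Nothing here
              # inspects or filters the capabilities being merged in.
              return {
                  key: alpha * a[key] + (1 - alpha) * b[key]
                  for key in a
                  if key in b and a[key].shape == b[key].shape
              }

          # merged = merge_checkpoints("clean_base.pt", "shady_finetune.pt")
          # torch.save(merged, "innocent_looking_merge.pt")
          ```

          The merge author never sees the upstream training data, so anything baked into the shady finetune rides along invisibly.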

      • ColeSloth@discuss.tchncs.de · 3 days ago

        They wouldn’t bother trying to hide that images were pulled from those public services.

        They 100% know that if they revealed they used everyone’s private photos backed up to Amazon’s cloud as fodder for their AI, it would piss people off and they’d lose some business out of the deal.

        • ImgurRefugee114@reddthat.com · 3 days ago

          Well, another factor is provenance: they don’t keep track of exactly where they got their data from (for several reasons, plausible deniability likely among them). Sometimes they track it at the dataset level, but almost never per individual sample. “We found CSAM somewhere on maybe reddit or imgur or pinterest” is a functionally worthless report.
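
          For contrast, here’s a rough sketch of the kind of per-sample provenance record that would make a report actionable. This is a hypothetical schema, not anything these companies are known to keep:

          ```python
          from dataclasses import dataclass

          @dataclass
          class SampleProvenance:
              """Hypothetical per-sample record; with this, a report could
              point investigators at an actual source instead of 'maybe
              reddit or imgur or pinterest'."""
              source_url: str      # exact URL the file was fetched from
              fetched_at: str      # ISO 8601 timestamp of the crawl
              dataset_name: str    # which training set the sample landed in
              content_sha256: str  # hash tying the record to the exact bytes

          record = SampleProvenance(
              source_url="https://example.com/some/post",
              fetched_at="2025-01-15T03:12:45Z",
              dataset_name="webcrawl-2025-q1",
              content_sha256="<sha256 of the file>",
          )
          ```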

    • TheLeadenSea@sh.itjust.works · 3 days ago

      We usually have “innocent until proven guilty”, not the other way around. He’s already guilty of being a billionaire, no need to add charges unnecessarily.

        • Lvxferre [he/him]@mander.xyz · 3 days ago

          “Innocent until proved guilty” is also a rather important moral principle, because it prevents witch hunts.

          Plus we don’t even need to claim he got CSAM in his laptop — the fact that he leads a company covering for child abusers is more than enough.

          • gustofwind@lemmy.world · 3 days ago

            Witch hunts? I think you are misguided here

            Given everything we know about him, it’s a completely reasonable belief that he has access to CSAM and consumes it if he so desires.

            That is a reasonable belief based on his actions and character, but not provable in court.

            The real legal principle you’re looking for here is defamation, and even then it doesn’t protect him, because it’s totally reasonable to conclude he does such a thing.

            • Lvxferre [he/him]@mander.xyz · 3 days ago

              About principles:

              I am talking about presumption of innocence = innocent until proved guilty. Not defamation. More specifically, I’m contradicting what you said in the other comment:

              Innocent until proven guilty is for a court of law not public opinion

              If presumption of innocence is also a moral principle, it should also matter for the public opinion. The public (everyone, including you and me) should not accuse anyone based on assumptions, “trust me”, or similar; we should only do it when there’s some evidence backing it up.

              Not even if the target was Hitler. Because, even if the target is filth incarnated, that principle is still damn important.


              Now, specifically about Bezos:

              I am not aware of evidence that would back up the claim that Bezos has CSAM in his personal laptop. If you have it, please, share it. Because it’s yet another thing to accuse that disgusting filth of. (Besides, you know… being a psychopathic money hoarder, practically a slaver, and his company shielding child abusers?)

              EDIT: let me guess. Epstein files?

              • gustofwind@lemmy.world · 3 days ago

                The evidence is circumstantial, but this is in fact evidence

                1. Amazon has access to CSAM
                2. Bezos has access on his personal laptop to whatever he wants from Amazon

                If that’s not good enough for you then you have more faith in his character than I do

                • Lvxferre [he/him]@mander.xyz · 3 days ago

                  The evidence is circumstantial, but this is in fact evidence

                  No, not really. “He could do it” is not the same as “he did it”.

                  If that’s not good enough for you then you have more faith in his character than I do

                  That would be the case if I said “he didn’t do it”. However, that is not what I’m saying; what I’m saying is more like “dunno”.

                  …I edited the earlier comment mentioning the Epstein files. There might be some actual evidence there.

  • smeg@infosec.pub · 3 days ago

    All of the AI tools know how to make CP somehow - probably because their creators fed it to them.

    • Grimy@lemmy.world · 3 days ago

      If it knows what children look like and knows what sex looks like, it can extrapolate. That being said, because of this I think all photos of children should be removed from the datasets, regardless of sexual content.
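
      A rough sketch of what that removal pass could look like; the minor-detection step is a stand-in, since building a reliable classifier for it is its own hard problem:

      ```python
      from pathlib import Path

      def contains_minor(image_path: Path) -> bool:
          """Stand-in for a real person/age classifier."""
          raise NotImplementedError

      def filter_dataset(src: Path, dst: Path) -> None:
          # Copy only images the classifier clears; drop anything that
          # appears to contain a child, sexual content or not.
          dst.mkdir(parents=True, exist_ok=True)
          for img in src.glob("*.jpg"):
              if not contains_minor(img):
                  (dst / img.name).write_bytes(img.read_bytes())
      ```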

        • Grimy@lemmy.world · 3 days ago

          Thank you, I almost forgot. I was busy explaining to someone else how their phone isn’t actually smart.

    • phx@lemmy.world · 2 days ago

      They fed them the Internet, including libraries of pirated material. It’s like drinking from a fountain at a sewage plant.

    • stoly@lemmy.world · 3 days ago

      There will be a lot of medical literature with photos of children’s bodies to demonstrate conditions, illnesses, etc.

      • Phoenixz@lemmy.ca · 3 days ago

        Yeah, press X to doubt that AI is generating child pornography from medical literature.

        These fuckers have fed AI anything and everything to train them. They’ve stolen everything they could without repercussions, I wouldn’t be surprised if some of them fed their AIs child porn because “data is data” or something like that.

        • vaultdweller013@sh.itjust.works · 3 days ago

          Depending on how they scraped data, they may have just let their crawlers run wild, in which case they would eventually have run into child porn. That’s yet another reason why this tech is utterly shit: if you can’t control your tech, you shouldn’t have it, and frankly speaking, curation is a major portion of any data processing.
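
          A rough sketch of the gate that curation implies, sitting between the crawler and the training set (the allowlist and the content check are hypothetical placeholders):

          ```python
          from urllib.parse import urlparse

          VETTED_DOMAINS = {"en.wikipedia.org", "commons.wikimedia.org"}

          def passes_content_checks(raw: bytes) -> bool:
              """Stand-in for hash matching, classifiers, human review."""
              raise NotImplementedError

          def ingest(url: str, raw: bytes, dataset: list) -> None:
              # Letting crawlers "run wild" means skipping both checks.
              if urlparse(url).hostname not in VETTED_DOMAINS:
                  return
              if not passes_content_checks(raw):
                  return
              dataset.append((url, raw))  # keeping the URL preserves provenance
          ```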

  • FalschgeldFurkan@lemmy.world · 2 days ago

    but isn’t saying where it came from

    Isn’t that already grounds for legal punishment? This shit really shouldn’t fly

  • TheSlad@sh.itjust.works · 3 days ago

    When I hear stuff like this, it always makes me wonder whether the material is actual explicit exploitation of a minor, or just gross anime art scraped from 4chan and sketchy image boards.

    • InFerNo@lemmy.ml · 2 days ago

      And also innocent personal pictures from people photographing their kids without thinking of the implications: dressing at the beach/pool, bath time as a toddler. People don’t always think it through. The pictures get uploaded to a cloud service and then scraped for AI that way, is my guess.

      • furry toaster@lemmy.blahaj.zone · 2 days ago

        Remember when a father took pictures of his child during COVID because the doctor asked for them (they were keeping physical visits to a minimum because of the pandemic), and Google’s automated system flagged them as CSAM? The poor father lost his Gmail and Google account, which ended up fucking up his life, because that was his work email, and his phone number got blacklisted (Google accounts require phone number verification).
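
        Google’s actual pipeline isn’t public. Reportedly it combines PhotoDNA-style hash matching against known material with ML classifiers that can also flag never-before-seen images (likely what caught those photos). A toy sketch of just the hash-matching half shows why false positives happen:

        ```python
        # Toy analogue only; real systems use robust hashes (PhotoDNA)
        # plus classifiers, but the thresholding problem is the same.
        from PIL import Image  # pip install pillow imagehash
        import imagehash

        MAX_DISTANCE = 8  # Hamming-distance cutoff; arbitrary for this sketch

        def looks_like_known_image(path: str, known_hashes: list) -> bool:
            h = imagehash.phash(Image.open(path))
            # Small distances catch re-encoded or resized copies, but any
            # fixed cutoff also sweeps in visually similar innocent photos.
            return any(h - known <= MAX_DISTANCE for known in known_hashes)
        ```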

      • TheSlad@sh.itjust.works · 2 days ago

        Yea that too. I read the article after making that comment wondering if they clarified…

        Amazon stated that their detection/moderation has a very low tolerance, so there were a lot of borderline cases and false positives in their reports…
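
        That tradeoff is just base-rate arithmetic. A quick sketch with invented numbers shows how a low-tolerance detector ends up producing mostly false positives:

        ```python
        # All numbers invented for illustration.
        scanned = 10_000_000          # images scanned
        prevalence = 0.00001          # 1 in 100,000 actually illegal
        sensitivity = 0.99            # low tolerance: catch nearly everything
        false_positive_rate = 0.001   # even 0.1% is huge at this scale

        true_hits = scanned * prevalence * sensitivity                 # ~99
        false_hits = scanned * (1 - prevalence) * false_positive_rate  # ~10,000
        print(f"precision: {true_hits / (true_hits + false_hits):.1%}")  # ~1.0%
        ```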

        In the end though, it seems like all of Amazon’s reports were completely unactionable anyway, because Amazon couldn’t even tell them the source of the scraped images.