I’m not the person you were replying to, so I wasn’t really arguing any of these points; I just saw the request, knew of an example, and provided it.
Just in case this was meant for me specifically, I’ll answer:
Yeah, I have zero issue with accounts that have pictures of children’s genitals on them being referred to the authorities.
Pictures of children’s genitals aren’t inherently CSAM; plenty of parents and family members have entirely innocent pictures of their kids on their phones.
There are examples of this among the reported cases of false positives leading to bad outcomes; they’re easily searchable.
I’m not saying do nothing; I’m saying blanket reporting is an ineffective brute-force approach.
If people want privacy, host the pictures locally.
In theory, yes; in practice, not so much.
On-device scanning exists and is, or has been, in use on phones; examples of this are also easily searchable.
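For anyone curious how that scanning typically works: it’s usually perceptual-hash matching against a database of known images, not software “looking at” your photos. Here’s a deliberately toy sketch in Python; the average-hash function, the threshold, and the hash database are simplified stand-ins, and real systems (PhotoDNA, Apple’s NeuralHash) use far more robust hashes:

```python
# Toy illustration of perceptual-hash matching. Everything here is a
# simplified stand-in, not any vendor's actual algorithm.

def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid (values 0-255)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known illegal images.
KNOWN_BAD_HASHES = {0x0123456789ABCDEF}  # placeholder value

MATCH_THRESHOLD = 10  # bits; "close enough" counts as a match

def should_report(pixels):
    h = average_hash(pixels)
    return any(hamming_distance(h, bad) <= MATCH_THRESHOLD
               for bad in KNOWN_BAD_HASHES)
```

The fuzzy threshold is the whole point (it survives resizing and recompression), but it’s also exactly where the false positives mentioned above come from: any visually similar image can land inside it.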
When you’re storing images with a cloud provider, they become responsible for the images they store. If it’s a photo of a child’s genitals and it’s illegal for them to have it on their servers, they need to protect themselves.
The need for legal protection is valid; scanning cloud-uploaded photos is a user-privacy nightmare, but an expected one.
End-to-end encryption (where only the user’s device can decrypt and see the photo) would probably stand up legally, but then they wouldn’t be able to use the cloud photos to make money.
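As a rough sketch of what that looks like client-side, assuming Python’s third-party cryptography package (pip install cryptography) and a made-up upload_to_cloud() standing in for whatever the provider’s real API is:

```python
from cryptography.fernet import Fernet

def upload_to_cloud(name, blob):
    """Hypothetical stand-in for a real provider's upload API."""
    print(f"uploading {len(blob)} bytes as {name}")

key = Fernet.generate_key()  # stays on the user's device, never uploaded
f = Fernet(key)

photo_bytes = b"raw image bytes here"  # in practice, the photo file's contents
ciphertext = f.encrypt(photo_bytes)

# The provider only ever stores ciphertext it can't read, so it can't
# scan the photo, view it, or mine it for ads.
upload_to_cloud("photo.jpg.enc", ciphertext)
```

That’s the tension in one place: the provider can argue it never possessed the image, but it also loses access to the data it monetizes.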
The problem comes with how “illegal” is recognized and how matches are handled.