Apple has confirmed that it scans user images in an effort to detect evidence of child abuse, but the company has revealed little about how the scans work, raising concerns about data privacy and the reach of intrusive tech firms.
While it’s unclear when the image scans started, Apple’s chief privacy officer Jane Horvath confirmed at an event in Las Vegas this week that the company is now “utilizing some technologies to help screen for child sexual abuse material.”
Apple initially suggested it might inspect images for abuse material last year – and only this week added a disclaimer to its website acknowledging the practice – but Horvath’s remarks come as the first confirmation the company has gone ahead with the scans.
A number of tech giants, including Facebook, Twitter and Google, already employ an image-scanning tool known as PhotoDNA, which cross-checks photos against a database of known abuse images. It is unknown whether Apple’s scanning tool uses similar technology.
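The general technique behind tools like PhotoDNA is perceptual hashing: each image is reduced to a compact fingerprint that survives resizing or recompression, and that fingerprint is compared against a database of fingerprints of known images. PhotoDNA's actual algorithm is proprietary and not public; the sketch below illustrates the idea with a simple "average hash" over an 8x8 grayscale grid, which is an assumption for illustration only, not how PhotoDNA or Apple's tool works.

```python
def average_hash(pixels):
    """Reduce an 8x8 grid of grayscale values (0-255) to a 64-bit hash:
    each bit records whether that pixel is at or above the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (1 if p >= mean else 0)
    return h

def hamming(a, b):
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def matches_database(photo_hash, known_hashes, threshold=5):
    """A photo 'matches' if its hash is within `threshold` bits of any
    known hash; the tolerance lets near-duplicates (recompressed or
    slightly altered copies) still be flagged."""
    return any(hamming(photo_hash, k) <= threshold for k in known_hashes)
```

A slightly altered copy of a known image produces a hash only a few bits away from the original and is flagged, while an unrelated image falls far outside the threshold; real systems use far more robust fingerprints, but the match-by-distance structure is the same.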