TechScape: Is Apple taking a dangerous step into the unknown?


Apple made waves on Friday with an announcement that the company would begin scanning photo libraries stored on iPhones in the US to find and flag known instances of child sexual abuse material (CSAM).

From our story:

Apple’s tool, called neuralMatch, will scan images before they are uploaded to the company’s iCloud Photos online storage, comparing them against a database of known child abuse imagery. If a strong enough match is flagged, then Apple staff will be able to manually review the reported images, and, if child abuse is confirmed, the user’s account will be disabled and the National Center for Missing and Exploited Children (NCMEC) notified.
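To make that pipeline concrete, here is a minimal sketch of the reported flow in Swift. Every name and number in it is invented for illustration; Apple’s actual system is understood to use cryptographic techniques so that matches are revealed to the company only once a threshold is crossed, not a plain counter like this.

```swift
// A minimal sketch of the reported flow, not Apple’s implementation.
// The names and the threshold are invented for illustration.

struct ScanResult {
    var matchCount = 0
    let reviewThreshold: Int
    var needsHumanReview: Bool { matchCount >= reviewThreshold }
}

// Compare each photo’s fingerprint against the known-CSAM database
// before upload; past the threshold, escalate to human review.
func scanBeforeUpload(photoHashes: [String],
                      knownHashes: Set<String>,
                      reviewThreshold: Int) -> ScanResult {
    var result = ScanResult(reviewThreshold: reviewThreshold)
    for hash in photoHashes where knownHashes.contains(hash) {
        result.matchCount += 1
    }
    return result
}
```

If `needsHumanReview` comes back true, that is the point at which, per the story above, Apple staff would review the flagged images and, on confirmation, disable the account and notify NCMEC.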

This is a huge deal.

But it’s also worth spending a bit of time talking about what isn’t new here, because the context is key to understanding where Apple’s breaking new ground – and where it’s actually playing catch-up.


The first thing to note is that the basic scanning idea isn’t new at all. Facebook, Google and Microsoft, to name just three, all do almost exactly this on any image uploaded to their servers. The technology is slightly different (those companies use a Microsoft tool called PhotoDNA), but the idea is the same: compare uploaded images with a vast database of previously seen child abuse imagery and, if there’s a match, block the upload, flag the account and call in law enforcement.

The scale is astronomical, and deeply depressing: in 2018, Facebook alone was flagging about 17m uploads every month against a database of about 700,000 known images.

These scanning tools are not in any way “smart”. They are designed only to recognise images that have already been found and catalogued, with a bit of leeway for simple transformations such as cropping, colour changes and the like. They won’t catch pictures of your kids in the bath, any more than guessing “brucewayne” will give you access to the files of someone whose password is “batman”.
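To see why, it helps to know that perceptual-hash matching of this kind reduces to a distance check against a list of known fingerprints. PhotoDNA’s actual algorithm isn’t public, so the Swift below is a toy stand-in with an invented tolerance, but the shape of the check is the same:

```swift
// Illustrative only: a toy perceptual-hash check, not PhotoDNA,
// whose algorithm is not public. Similar images yield hashes that
// differ in only a few bits; a novel photo has no nearby entry.

func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount  // number of bits where the hashes differ
}

func matchesKnownImage(hash: UInt64,
                       knownHashes: [UInt64],
                       tolerance: Int = 4) -> Bool {  // tolerance invented
    knownHashes.contains { hammingDistance(hash, $0) <= tolerance }
}
```

The system never asks “what is in this picture?”; it only asks “is this fingerprint within a few bits of one we have already catalogued?”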

Nonetheless, Apple is taking a major step into the unknown. That’s because its version of this approach will, for the first time from any major platform, scan photos on the users’ hardware, rather than waiting for them to be uploaded to the company’s servers.

That’s what’s sparked outrage, for a number of reasons. Almost all of the criticism focuses on the fact that the program crosses a Rubicon, rather than on the specifics of the feature per se.

By normalising on-device scanning for CSAM, critics worry, Apple has taken a dangerous step. From here, they argue, it is simply a matter of degree for our digital life to be surveilled, online and off. It is a small step in one direction to expand scanning beyond CSAM; it is a small step in another to expand it beyond simple photo libraries; it is a small step in yet another to expand beyond perfect matches of known images.

Apple is emphatic that it will not take those steps. “Apple will refuse any such demands” to expand the service beyond CSAM, the company says. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands.”

It had better get used to fighting, because those demands are highly likely to be coming. In the UK, for instance, a blacklist of websites, maintained by the Internet Watch Foundation, the British sibling of America’s NCMEC, blocks access to known CSAM. But in 2014, a high court injunction forced internet service providers to add a new set of URLs to the list – sites that infringed on the copyright of the luxury watch manufacturer Cartier.

Elsewhere, there are security concerns about the practice. Any system that involves taking action that the owner of a device doesn’t consent to could, critics fear, ultimately be used to harm them. Whether that’s a conventional security vulnerability, potentially using the system to hack phones, or a subtle way of misusing the actual scanning apparatus to cause harm directly, they worry that the system opens up a new “attack surface”, for little benefit over doing the same scanning on Apple’s own servers.

That is the oddest thing about the news as it stands: Apple will only be scanning material that is about to be uploaded to its iCloud Photo Library service. If the company simply waited until the files were already uploaded, it would be able to scan them without crossing any dangerous lines. Instead, it has taken this unprecedented step.

The reason, Apple says, is privacy. The company, it seems, simply values the rhetorical victory: the ability to say “we never scan files you’ve uploaded”, in contrast to, say, Google, which relentlessly mines user data for any possible advantage.

Some wonder if this is a prelude to a more aggressive move that Apple could make: encrypting iCloud libraries so that it can’t scan them. The company reportedly ditched plans to do just that in 2018, after the FBI intervened.

Parental controls

The decision to scan photo libraries for CSAM was only one of the two changes Apple announced on Friday. The other is, in some ways, more concerning, although its initial effects will be limited.

This autumn, the company will begin to scan texts sent to and from users under 18 through the Messages app. Unlike the CSAM scanning, this won’t be looking for matches with anything: instead, it’ll be applying machine learning to try to spot explicit images. If one is sent or received, the user will be shown a warning.

For teens, the warning will be a simple “are you sure?” banner, with the option to click through and ignore it; but for kids under 13, it’ll be somewhat stronger, warning them that if they view the image, their parents will be notified, and a copy of the image will be saved on their phone so their parents can check.
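Sketched out in Swift, with every name invented (Apple has published no API for this, and the classifier runs entirely on-device), the age-dependent behaviour looks something like this:

```swift
// Hypothetical sketch of the age-dependent response; every name here
// is invented, since Apple has published no API for this feature.

enum ChildAccount { case underThirteen, teen }

func handleFlaggedImage(account: ChildAccount, viewsAnyway: Bool) {
    // Both age groups first see a dismissable “are you sure?” banner.
    showWarningBanner()
    guard viewsAnyway else { return }

    if account == .underThirteen {
        notifyParents()     // parents are told the image was viewed
        saveCopyOnDevice()  // copy kept on the child’s phone for parents
    }
    // Nothing is sent to Apple in either case.
}

// Stubs so the sketch compiles; the real behaviour is Apple’s to define.
func showWarningBanner() { print("This image may be sensitive. Are you sure?") }
func notifyParents() { print("Parents notified.") }
func saveCopyOnDevice() { print("Copy saved for parental review.") }
```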

In both cases, the feature will be opt-in on the part of parents, and turned off by default. Nothing sent through it makes its way to Apple.

But, again, some are concerned. Normalising this sort of surveillance, they fear, effectively undoes the protections that end-to-end encryption offers users: if your phone snoops on your messages, then encryption is moot.

Doing it better

It’s not just campaigners making these points. Will Cathcart, the head of WhatsApp, has argued against the moves, writing: “I think this is the wrong approach and a setback for people’s privacy all over the world. People have asked if we’ll adopt this system for WhatsApp. The answer is no.”

But at the same time, there’s a growing chorus of support for Apple – and not just from the child protection groups that have been pushing for features like this for years. Even people from the tech side of the discussion are accepting that there are real trade-offs here, and no simple answers. “I find myself constantly torn between wanting everybody to have access to cryptographic privacy and the reality of the scale and depth of harm that has been enabled by modern comms technologies,” wrote Alex Stamos, once Facebook’s head of security.

Whatever the right answer, however, one thing seems clear: Apple could have entered this debate more carefully. The company’s plans leaked out sloppily on Thursday morning, followed by a spartan announcement on Friday and a five-page FAQ on Monday. In the meantime, everyone involved in the debate had already hardened into the most extreme versions of their positions, with the Electronic Frontier Foundation calling it an attack on end-to-end encryption, and NCMEC dismissing the “screeching voices of the minority” who opposed the move.

“One of the basic problems with Apple’s approach is that they seem desperate to avoid building a real trust and safety function for their communications products,” Stamos added. “There is no mechanism to report spam, death threats, hate speech […] or any other kinds of abuse on iMessage.

“In any case, coming out of the gate with non-consensual scanning of local photos, and creating client-side ML that won’t provide a lot of real harm prevention, means that Apple might have just poisoned the well against any use of client-side classifiers to protect users.”

If you want to read the complete version of this newsletter, please subscribe to receive TechScape in your inbox every Wednesday.
