News broke last week that Apple had unveiled new capabilities to a set of US-based academics, tools that will be used to help curb the distribution of “child pornography” and related abuse material.
Apple will be building software into a coming iteration of the iPhone operating system (iOS) that will scan for child sexual abuse material (CSAM), protect children from receiving and sharing sexually explicit material, and could be used to flag offenders to the authorities.
I first saw the news in a Financial Times article that attributed it to an August 5 tweet from Johns Hopkins University academic Matthew Green. Green is a security and cryptography professor, and his take on the news was one of profound concern, warning in a long thread of tweets: “This will break the dam — governments will demand it from everyone. And by the time we find out it was a mistake, it will be way too late.”
It’s a fascinating debate that’s been sparked, or rather a fascinating perpetual debate that’s enjoying its brief return to the main stage. Where does the intention to prevent harm conflict with our rights to privacy on our phones, especially when this tool may be designed for one function (detecting CSAM) but could theoretically be used for many others (including wholesale state surveillance)?
On August 6 Apple confirmed the plans, and since then more details and hot takes have been finding their way across my screen. These are two different tools, to be clear. The arguably less offensive one is the Messages (iMessage) functionality. If a parent turns it on for a child on a family iCloud account and that child then receives or sends sexually explicit material, the child will be warned that the content could be flagged and reviewed by the parent. The criteria are slightly different for children older than 12. This is all managed on-device, and no message content is fed back to Apple.
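For the technically minded, here is a minimal, purely illustrative sketch of the kind of on-device decision described above. The type names, the age cut-off handling and the assumption of a pre-existing on-device explicit-image classifier are mine, not Apple’s actual implementation.

```swift
// A purely hypothetical sketch of the on-device decision described above.
// Names and the exact age split are illustrative assumptions, not Apple's code.
struct ChildAccount {
    let age: Int
    let parentalFlaggingEnabled: Bool  // toggled by the parent on a family iCloud account
}

enum Action {
    case deliverNormally                    // feature off: nothing happens
    case warnChildOnly                      // older children: warned, parent not involved
    case warnChildAndOfferParentReview      // younger children: warned, parent may review
}

// Assume some on-device classifier has already judged an incoming or outgoing
// image to be sexually explicit; this function only decides what to do next.
func handleExplicitImage(for account: ChildAccount) -> Action {
    guard account.parentalFlaggingEnabled else { return .deliverNormally }
    // The reported behaviour differs for children older than 12; the exact
    // split used here is an illustrative assumption.
    return account.age > 12 ? .warnChildOnly : .warnChildAndOfferParentReview
}

// Nothing in this flow leaves the device: no call home, no message content uploaded.
print(handleExplicitImage(for: ChildAccount(age: 10, parentalFlaggingEnabled: true)))
```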
The one that really upset the Apple cart — sorry, I couldn’t resist — is the neuralMatch tool, which will be able to scan for CSAM on your device if you opt into iCloud photo storage. In the simplest terms, it uses technology called NeuralHash to convert each of your images into a hash (a numerical fingerprint of the image), which is then compared against databases of known CSAM hashes supplied by institutions including the US National Center for Missing and Exploited Children (NCMEC).
There are further barriers to overcome before you’re flagged and potentially flogged. A certain number of offending images must be matched before an Apple-internal manual review is triggered. If the reviewer finds there is a genuine issue rather than a technical error at play, the information could be handed over to NCMEC and the law-enforcement agencies it works with. Due to jurisdictional matters, this is to be rolled out on a country-by-country basis.
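To make the mechanics concrete, below is a minimal sketch of a hash-match-and-threshold pipeline of this general shape. It is not Apple’s code: NeuralHash is a perceptual hash that matches visually similar images, whereas the SHA-256 stand-in here only matches identical files, and the database contents and threshold value are invented for illustration.

```swift
import Foundation
import CryptoKit  // Apple's cryptography framework (iOS 13+ / macOS 10.15+)

// Stand-in for a perceptual hash such as NeuralHash. A real perceptual hash
// matches visually similar images; SHA-256 here only matches identical bytes,
// but the overall shape of the pipeline is the same.
func fingerprint(_ imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// Hypothetical database of fingerprints of known abuse imagery, standing in
// for the hash lists that bodies such as NCMEC would supply.
let knownFingerprints: Set<String> = [
    fingerprint(Data("known-bad-image-1".utf8)),
    fingerprint(Data("known-bad-image-2".utf8)),
]

// Hypothetical threshold: below this number of matches, nothing is surfaced.
let reviewThreshold = 2

func scan(library: [Data]) {
    let matches = library.filter { knownFingerprints.contains(fingerprint($0)) }
    if matches.count >= reviewThreshold {
        // Only at this point would a human reviewer check the matches before
        // any report could go to NCMEC or law enforcement.
        print("Threshold reached: \(matches.count) matches queued for manual review")
    } else {
        print("Below threshold: \(matches.count) match(es), no report made")
    }
}

// Example "photo library" of raw image data.
scan(library: [
    Data("holiday-snap".utf8),
    Data("known-bad-image-1".utf8),
    Data("known-bad-image-2".utf8),
])
```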
The privacy crowd is quick to point out how this same tech could be used to peer into devices for other materials, and that is not a small claim, even for those of us who are desperately concerned about the proliferation of child abuse material, as we all should be but sadly apparently aren’t. In 2020, the BBC reports, the Internet Watch Foundation (IWF) received 300,000 reports, of which 153,000 were verified as new CSAM content.
I’ve tried to sit with this for a few days to muddle my way out of the quandary. I’m pretty sure I should not be able to simultaneously hold two diametrically opposed views, but here I am, doing just that until further notice. The conversation in my head has been like watching the world’s slowest tennis players gently lobbing “yes, but ...” at each other endlessly: we have a right to privacy on our own devices. Yes, but child abuse materials are an obvious and grotesque harm. Yes, but it’s a slippery slope from this to other privacy violations. Yes, but there are multiple safeguards in place to prevent that slide. Yes, but these all rest on the say-so of a rather large and profit-driven machine that needs to play nice with the governments of its various markets. Yes, but ... and on and on.
This is just one of a mixed bag of tech policy debates coming to a country, phone and gaming console near you, including the debate now raging in China about using facial recognition to prevent children from gaming for “too many” hours. The matters are murky and the trade-offs are neither simple nor clear cut. We will need a proactive, responsive and informed approach from the public sector. This is the new normal that new communications & digital technologies minister Khumbudzo Ntshavheni will have to wade into and direct. The time for dithering endlessly over such implementations and digital policy is long, long gone.
• Thompson Davy, a freelance journalist, is an impactAFRICA fellow and WanaData member.