Apple is introducing several new tools in iOS 15 that provide protection for children. These tools include communication safety in Messages, on-device CSAM detection, and Siri and Search guidance.

Communication safety in Messages

The Messages app will warn children (under 13 years old) and their parents when inappropriate content is received. Detection runs on device, and the image itself is blurred. The child is then shown a dialog with additional resources designed to help them avoid viewing the image. A similar warning is shown if the child attempts to send such content.

If the child decides to view the content anyway, a notification is sent to the parents.
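As a rough illustration of this flow, here is a minimal Swift sketch. The SensitiveImageClassifier protocol, the notifyParents hook, and the blur parameters are assumptions made for the example; Apple has not published the actual on-device model or API.

    import CoreImage

    // Hypothetical on-device classifier; Apple has not published its model.
    protocol SensitiveImageClassifier {
        func isSensitive(_ image: CIImage) -> Bool
    }

    struct IncomingImageHandler {
        let classifier: SensitiveImageClassifier
        let isChildAccount: Bool            // the feature applies to child accounts only
        let notifyParents: () -> Void       // placeholder hook for the parental notification

        // Decides what the Messages UI should show for a received image.
        func prepareForDisplay(_ image: CIImage) -> (display: CIImage, showWarning: Bool) {
            guard isChildAccount, classifier.isSensitive(image) else {
                return (image, false)       // adult account or harmless image: show as-is
            }
            // Blur the flagged image so it is never shown directly.
            let blur = CIFilter(name: "CIGaussianBlur")!
            blur.setValue(image, forKey: kCIInputImageKey)
            blur.setValue(30.0, forKey: kCIInputRadiusKey)
            return (blur.outputImage ?? image, true)
        }

        // Called only if the child chooses to view the image despite the warning.
        func childChoseToView() {
            notifyParents()
        }
    }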

On-device CSAM detection

Child Sexual Abuse Material (CSAM) refers to content that depicts sexually explicit activities involving children.

Apple is adding detection of known CSAM before a photo is uploaded to iCloud, using specially created image hashes. When an account gets several hash hits, its images are marked for human verification and then reported to the authorities.
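The matching logic can be pictured with the simplified sketch below. The real pipeline uses NeuralHash together with private set intersection and threshold secret sharing, so the device never learns which images matched; the plain Set lookup and the threshold field here are illustrative assumptions.

    import Foundation

    // Illustrative only: keeps just the idea of "count matches against a
    // known database, escalate to human review after a threshold".
    struct CSAMMatcherSketch {
        let knownHashes: Set<Data>   // hashes derived from the known-CSAM database
        let reviewThreshold: Int     // number of hits before human review (assumed value)

        func matchCount(in uploadHashes: [Data]) -> Int {
            uploadHashes.filter { knownHashes.contains($0) }.count
        }

        func shouldFlagForHumanReview(uploadHashes: [Data]) -> Bool {
            matchCount(in: uploadHashes) >= reviewThreshold
        }
    }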

Siri and Search guidance

Siri and Search will also intervene when a user searches for CSAM-related content. Siri will explain that such content is problematic and offer guidance that helps the user with this issue. Additional resources are also provided to help users report CSAM content.
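Conceptually, the intervention can be sketched as a query check like the one below; the flagged-term list and the help URL are placeholders rather than Apple's actual data, and real Siri/Search guidance is driven by Apple's own query understanding.

    import Foundation

    // Placeholder guidance check, not Apple's implementation.
    struct SearchGuidanceSketch {
        let flaggedTerms: Set<String>
        let helpResources: URL            // e.g. a reporting page (placeholder)

        enum Outcome {
            case normalResults
            case intervention(resources: URL)
        }

        func evaluate(_ query: String) -> Outcome {
            let tokens = query.lowercased().split(separator: " ").map(String.init)
            if tokens.contains(where: flaggedTerms.contains) {
                return .intervention(resources: helpResources)
            }
            return .normalResults
        }
    }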

Controversy

These protection tools have sparked a lot of discussion in the professional community. While Apple tries to maintain its privacy-first approach, many have suggested that the tools have issues.

The Messages app might see users fleeing to other platforms, as their private, end-to-end encrypted messages are now analyzed and reported. It is worth noting, however, that message analysis is performed only on child accounts (under 13 years old).

CSAM detection seems more privacy-preserving, as images are scanned via hashes against a known database and only flagged entries are subject to decryption. However, researchers suggest that this might open Apple up to more requests from governments across the world, where new types of content could be marked as inappropriate.

These image hashes could also be spoofed: a user might receive legitimate-looking images that nevertheless trigger CSAM detection, causing their account to be checked too often. Apple partially acknowledges this issue and suggests that hash collisions would not be typical.

The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
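To see why perceptual-hash collisions are possible at all, consider the toy hash below: it only records whether each image region is brighter than the average, so two very different images can hash identically. NeuralHash is far more sophisticated, but it is still a lossy perceptual function, which is exactly what the spoofing concern above is about.

    import Foundation

    // Toy 8-bit perceptual "hash" for illustration only.
    func toyPerceptualHash(regionBrightness: [Double]) -> UInt8 {
        let average = regionBrightness.reduce(0, +) / Double(regionBrightness.count)
        var hash: UInt8 = 0
        for (i, value) in regionBrightness.prefix(8).enumerated() where value > average {
            hash |= 1 << i
        }
        return hash
    }

    // Two clearly different "images" (region brightness values)...
    let imageA: [Double] = [0.9, 0.1, 0.8, 0.2, 0.9, 0.1, 0.8, 0.2]
    let imageB: [Double] = [0.6, 0.4, 0.7, 0.3, 0.6, 0.4, 0.7, 0.3]
    // ...produce the same hash value: a collision.
    print(toyPerceptualHash(regionBrightness: imageA) == toyPerceptualHash(regionBrightness: imageB))  // true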

Moreover, it is worth noting that iCloud backups are currently not end-to-end encrypted, and these new measures might actually help Apple bring encryption to backups too.

More information: Expanded Protections for Children (the article includes technical specs and assessments of the CSAM detection technology)

References:
Apple announces new protections for child safety: iMessage features, iCloud Photo scanning, more
Security Researchers Express Alarm Over Apple’s Plans to Scan iCloud Images, But Practice Already Widespread
Apple Confirms Detection of Child Sexual Abuse Material is Disabled When iCloud Photos is Turned Off
Apple’s New Feature That Scans Messages for Nude Photos is Only for Children, Parental Notifications Limited to Kids Under 13
Apple expanding child safety features across iMessage, Siri, iCloud Photos

Recommendations

Developer:

Review the technology summary and the assessments of Apple’s approach. Consider adopting the ideas where relevant.

QA engineer:

Business as usual.

PM/DM:

Consider similar approaches that combine on-device content detection with end-to-end encryption to protect users’ privacy.
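A minimal sketch of that pattern, assuming a hypothetical isFlagged closure standing in for whatever on-device model a product would actually use (key management is out of scope here): detection runs locally first, then the payload is end-to-end encrypted, so the server only ever sees ciphertext plus the locally produced flag.

    import Foundation
    import CryptoKit

    struct OutgoingItem {
        let ciphertext: Data
        let flaggedOnDevice: Bool
    }

    // On-device detection first, then end-to-end encryption before upload.
    func prepareForUpload(_ plaintext: Data,
                          key: SymmetricKey,
                          isFlagged: (Data) -> Bool) throws -> OutgoingItem {
        let flagged = isFlagged(plaintext)                   // runs entirely on device
        let sealed = try AES.GCM.seal(plaintext, using: key) // E2E encryption
        // .combined is non-nil when the default 12-byte nonce is used.
        return OutgoingItem(ciphertext: sealed.combined!, flaggedOnDevice: flagged)
    }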
