Chat Control

It’s more than two years now since the European Commission introduced its draft Child Sexual Abuse Regulation. It proposes sweeping measures to scan private and public communications, universally and without judicial warrant, for known and unknown child sexual abuse material (“CSAM”) and for attempts by abusers to groom children. Messaging services would have to verify the age of their users, and app stores would have to block access to messaging apps for users under 16. The draft has gone back and forth between the Commission, the European Parliament and the European Council, and at the time of writing there is still no version that can form the basis for final negotiation between the three bodies.

The proposal has received a lot of criticism from both privacy activists and anti-child-abuse campaigners, who have dubbed it “chat control”; on closer inspection its problems become clear. First of all, it wouldn’t catch most CSAM: abusers typically exchange links and passwords to encrypted archives on dedicated message boards. Individual files are rarely sent directly, so there’s little point in scanning message attachments.

Moreover, what is proposed would be hugely impractical. Some automated scanning for CSAM already takes place, but about 80% of the reported content turns out to be irrelevant once checked by human beings. If that were scaled up to cover every message sent to or from the European Union, the torrent of false positives would be totally unmanageable. As Edward Snowden pointed out, mass surveillance doesn’t make the world a safer place: it only swamps the agencies tasked with sifting through the information.
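To get a feel for the scale of the problem, here is a back-of-the-envelope calculation. The numbers are illustrative assumptions, not official figures: suppose 5 billion messages a day cross EU borders, that one message in 100,000 is genuinely abusive, and that the scanner is a seemingly impressive 99% accurate in both directions.

```python
# Back-of-the-envelope base-rate calculation with ASSUMED, illustrative numbers.
messages_per_day = 5_000_000_000   # assumed daily message volume
prevalence       = 1 / 100_000     # assumed fraction of messages that are abusive
sensitivity      = 0.99            # assumed chance an abusive message is flagged
false_positive   = 0.01            # assumed chance an innocent message is flagged

true_hits  = messages_per_day * prevalence * sensitivity
false_hits = messages_per_day * (1 - prevalence) * false_positive
precision  = true_hits / (true_hits + false_hits)

print(f"Genuine reports per day: {true_hits:,.0f}")    # ~50,000
print(f"False reports per day:   {false_hits:,.0f}")   # ~50,000,000
print(f"Share of reports that are genuine: {precision:.1%}")  # ~0.1%
```

Even granting the scanner a generous 99% accuracy, human reviewers would face roughly a thousand false alarms for every genuine report.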

It gets worse. Most messaging apps these days provide end-to-end encryption. The content is encrypted on the sender’s device and can only be decrypted on the receiver’s device: the provider’s servers see only encrypted gobbledygook. Scanning messages for CSAM would therefore require developers to build the scanning software into their apps, so that messages are scanned on the device before they are encrypted. That might work if the scanning code were simply comparing a file against a list of hashes of known CSAM files, but the legislation also requires scanning for hitherto unknown material and identifying grooming activity. That would take some pretty sophisticated AI code, and I am sceptical that current smartphones would be up to the task. The only practical way of implementing this without dedicated AI hardware in the phone would be to transmit the data to centralised servers for scanning, thus negating the privacy that end-to-end encryption is meant to provide.
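To see why hash-matching is the easy part, here is a minimal sketch of how such a check might work on the device. It is deliberately simplified: real deployments use perceptual hashes such as PhotoDNA so that re-encoded copies still match, and the hash list here is entirely hypothetical.

```python
import hashlib

# Hypothetical list of hex digests of known illegal files, as it might be
# shipped inside a messaging app. SHA-256 is used only to keep the sketch
# self-contained; production systems use perceptual hashing instead.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_attachment(data: bytes) -> bool:
    """Return True if the attachment matches an entry on the hash list."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(flag_attachment(b"test"))  # True: b"test" hashes to the entry above
```

The contrast is the point: a set-membership test like this runs in microseconds on any phone, whereas classifying previously unknown imagery or detecting grooming in conversations needs a trained model, which is a far heavier workload.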

National governments would be responsible for enforcing this legislation, and it’s an iron law of civil liberties that once governments acquire legal powers for one purpose, they will extend those powers to cover others. The scan lists or AI code in the messaging apps could easily be extended to search for other data, without oversight by either users or messaging developers. We already have the example of the Pegasus scandal, where spyware supposedly for hunting terrorists was deployed against opposition politicians and journalists. The Hungarian government was implicated in this, so it’s no surprise that Hungary has used its presidency of the European Council to press the case for the Child Sexual Abuse Regulation vigorously. Nor would the threat come only from European governments: the latest news about how Chinese hackers exploited the lawful-intercept systems built into U.S. telecommunications networks for government eavesdropping shows how such “back doors” can be exploited by hostile powers.

The revelations about Pegasus caused an uproar three years ago; now the European Commission is proposing to install spyware on every phone in Europe. It is exploiting the emotive subject of child sexual abuse to attack our civil liberties. The European Union should be spending its time on genuinely effective and targeted measures against child sexual abuse.

After Privacy Shield

Most of the big Internet companies are US-based, and it’s likely that a lot of “PII” (personally identifiable information) about Europeans crosses the Atlantic for storage and processing in the US. European data controllers (whether a big company or just an ordinary blogger like me) used to be able to rely on the Privacy Shield agreement between the EU and the US to ensure that they were transferring data to US processors in a way that complied with the European GDPR. US data processing companies could self-certify that they complied with the principles in the Privacy Shield agreement and then be considered safe to handle the personal data of European citizens.

The Schrems II judgement of the Court of Justice of the European Union changed all that. Max Schrems, an Austrian privacy campaigner, brought a case against Facebook for transferring his data to the US. He argued that because US privacy laws were much weaker than European ones (in particular, the powers of the American intelligence agencies were much greater), the data of Europeans was not adequately protected, and the CJEU accepted this argument. This caused a great deal of confusion: in principle, Standard Contractual Clauses were still a valid alternative, but they just passed the buck to individual data controllers and were open to legal challenge on the same grounds.

To clarify the situation, the European Data Protection Board issued recommendations for carrying out data transfers to non-EU countries. The recommendations reflect the outcome of Schrems II and have far-reaching implications. In summary, if the third country does not provide privacy protection equivalent to European law, then the data should be protected with strong encryption, both in transit and at rest. That effectively means that without an EU adequacy decision recognising a third country’s privacy protection as sufficient, European data can only be exported to passive storage providers in that country, who never need to see the content of the data. In practice, the safest way to obtain higher-level services is to use EU-based data processors. So, no US hosting, no Mailchimp, no US-based content security, and so on.
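To make the “passive storage” idea concrete, here is a minimal sketch of client-side encryption before export, assuming the key is generated and kept within the EU and only the ciphertext crosses the Atlantic. It uses Python’s cryptography package; the data and workflow are illustrative, not a prescribed compliance recipe.

```python
# A minimal sketch of "strong encryption at rest" before transfer to a
# non-EU storage provider. Assumptions: the key never leaves EU
# infrastructure, and the provider only ever stores opaque ciphertext.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # generated and stored in the EU only
cipher = Fernet(key)

personal_data = "name=Max Schrems; email=max@example.org".encode()
ciphertext = cipher.encrypt(personal_data)

# Only `ciphertext` is uploaded; without the key it is useless to the
# provider and to any intelligence agency with access to the provider.
assert cipher.decrypt(ciphertext) == personal_data
```

The limitation is obvious: a provider that holds only ciphertext can store it, but it cannot index your mail, send your newsletter or filter your comments. That is why the recommendations rule out most higher-level US services, not just careless configurations.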

Since Schrems II, the EU and US have negotiated a new Data Privacy Framework as a replacement for the Privacy Shield agreement. However, this doesn’t fix the fundamental problems with US privacy law, and a third Schrems case is expected. As a consequence of all this uncertainty, I’ll be reviewing all the data processors that this blog uses and gradually migrating to EU replacements where necessary and possible.