I have been a paid-up member of the EFF for years, and this legislation was called out when it first reared its head as the Online Harms White Paper.
At its core, the OSA embodies a fundamentally broken approach to online governance: rather than addressing the systemic failures of Big Tech, it attacks encryption, undermines privacy, and empowers corporate platforms to dictate what constitutes "safety." The government’s proposals to force companies to weaken end-to-end encryption are among the most extreme anti-privacy measures ever attempted in a democracy. The mere idea that "client-side scanning" or government-mandated backdoors can exist without compromising security is a delusional fantasy pushed by officials with no understanding of cryptography and a deep hostility toward an unmonitored public sphere.
Encryption is not some sinister tool for criminals - it is the bedrock of digital security. It protects everything from personal conversations to financial transactions, from whistleblowers exposing corruption to journalists reporting from authoritarian regimes. The UK government, much like its counterparts in the United States, refuses to accept that breaking encryption for the "good guys" means breaking it for everyone. Once a backdoor exists, it is only a matter of time before it is exploited, whether by malicious actors, hostile states, or even the very governments demanding its creation.
The supposed "balance" that lawmakers propose - where encryption can remain "safe" while still allowing government-mandated access - is a fiction. Security is binary.
Either everyone is protected, or no one is. The push to break encryption under the OSA is not about safety; it is about expanding state surveillance and increasing corporate control over user data.
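To make that point concrete, here is a minimal sketch of end-to-end encryption using PyNaCl - my choice of library for illustration, and the names and scenario are my own assumptions, not anything specified by the Act. The point it demonstrates: only the holder of the matching private key can read a message, and the only way to grant a platform or regulator access is to hand them a key that decrypts everything.

```python
# Minimal sketch of end-to-end encryption (pip install pynacl).
# Alice, Bob, and Eve are illustrative; nothing here is from the OSA itself.
from nacl.public import PrivateKey, Box
from nacl.exceptions import CryptoError

alice_key = PrivateKey.generate()   # Alice's private key never leaves her device
bob_key = PrivateKey.generate()     # Bob's private key never leaves his device

# Alice encrypts a message that only Bob can read
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# Bob decrypts with his private key
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the usual place"

# A third party - regulator, platform, or attacker - holding any other key
# simply cannot read it. There is no partial access.
eve_key = PrivateKey.generate()
try:
    Box(eve_key, alice_key.public_key).decrypt(ciphertext)
except CryptoError:
    print("No key, no plaintext.")

# The only way to give that third party access is to give them a key that
# decrypts the traffic - and whoever holds (or steals) that key reads everything.
```

That last comment is the whole backdoor debate in one line: whatever you call the extra access mechanism - key escrow, "lawful access", client-side scanning - it is a single point of failure that every hostile actor now has an incentive to find.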
The irony is that the UK’s Online Safety Act, for all its posturing, does nothing to address the actual failures of platform governance. It focuses on forcing compliance through mass surveillance, rather than holding platforms accountable for the design choices that foster online harm. The problem was never encryption - it was never the inability to monitor users. The problem is that companies like Meta, X, and Google prioritise engagement over well-being.
These platforms are fundamentally incapable of self-regulation because their profit model depends on amplifying harm. Our government's solution? Instead of taking on the trillion-dollar tech empires that deliberately allow misinformation, extremism, and abuse to flourish, it is demanding universal surveillance of private communications.
Instead of requiring platforms to be transparent about their algorithms, it is forcing companies to weaken the strongest protections users have against authoritarian overreach, cybercrime, and corporate exploitation.
It's just maddening.