TL;DR
An amendment to the UK’s Online Safety Act, in force since January 8, 2026, designates cyberflashing and encouraging or assisting serious self-harm as priority offences, forcing platforms to detect and block such content before users see it. The regulations push firms toward large-scale automated scanning and risk expanding routine surveillance into private communications.
What happened
On January 8, 2026, the UK brought into force the Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025, creating new ‘priority offence’ categories that trigger the strictest duties under the Online Safety Act. The amendment specifically lists cyberflashing and the encouragement or assistance of serious self-harm as priority offences. Services that permit user interaction — including messaging apps, forums and search engines — are now legally required to take proactive steps to prevent that material from reaching users. The government says platforms must stop this content before it is seen, and expects firms to deploy automated scanning, content-detection algorithms and AI models to evaluate text, images and video in real time. The Department for Science, Innovation and Technology (DSIT) illustrated the requirement with a video showing a smartphone detecting an unwanted nude. Noncompliant companies face fines up to 10% of global turnover or £18 million (whichever is greater) and potential blocking of services in the UK.
Why it matters
- Shifts the legal burden onto platforms to detect and stop specified harms before users encounter them, changing the operational model for many services.
- Increases routine, automated scanning of user communications, including spaces traditionally regarded as private.
- Creates risks for lawful speech and user privacy because automated filters can misclassify content and lack full context; the worked example after this list shows how small error rates add up at scale.
- Exposes platforms to significant financial penalties and the possibility of being blocked if they fail to meet compliance duties.
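The misclassification point is easiest to see with back-of-the-envelope arithmetic. The sketch below uses purely hypothetical volumes and error rates (illustrative assumptions only, not figures from the source or from any platform) to show how even a small false-positive rate produces a large absolute number of wrongly flagged lawful messages:

```python
# Hypothetical base-rate arithmetic for scanning at platform scale.
# Every number here is an illustrative assumption, not a measured value.

daily_messages = 1_000_000_000   # assumed daily volume scanned by a large service
prevalence = 0.0001              # assumed share of messages containing flagged material
true_positive_rate = 0.95        # assumed detection rate for genuinely flagged material
false_positive_rate = 0.01       # assumed rate at which lawful messages are misclassified

offending = daily_messages * prevalence
lawful = daily_messages - offending

true_positives = offending * true_positive_rate
false_positives = lawful * false_positive_rate

print(f"Correctly flagged:  {true_positives:,.0f}")   # ~95,000 per day
print(f"Lawful but flagged: {false_positives:,.0f}")  # ~10,000,000 per day
```

Under these assumptions, wrongly flagged lawful messages outnumber correct detections by roughly 100 to 1, which is exactly the kind of ratio that independent assessments of error rates would need to measure.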
Key facts
- The amendment took effect on January 8, 2026.
- Named law: Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025.
- Designated priority offences include cyberflashing and encouraging or assisting serious self-harm.
- Covered services include messaging apps, forums and search engines that allow user interaction.
- Platforms are expected to use automated scanning, detection algorithms and AI to evaluate content in real time.
- DSIT released a video demonstrating a smartphone detecting an ‘unwanted nude’ to illustrate the requirement.
- Enforcement measures include fines up to 10% of global turnover or £18 million (whichever is greater) and possible blocking of services in the UK.
- Advocates warn the rules embed proactive monitoring at the infrastructure level, which may capture lawful communications.
What to watch next
- How major platforms design and disclose their technical approaches to meet the preemptive scanning requirement.
- Accuracy and error rates of automated systems used to detect priority offences, and independent assessments of false positives and negatives.
- Possible legal challenges to the regulations or their implementation (none are reported in the source).
- Specific enforcement actions by regulators, such as fines or blocking notices, and how those are applied in practice.
Quick glossary
- Online Safety Act (OSA): A UK law setting duties for online services to protect users from harmful content; the 2025 amendment creates higher obligations for designated priority offences.
- Cyberflashing: The unsolicited sending of explicit images to another person, typically over digital channels.
- Preemptive scanning: The automated inspection of communications and uploads to detect and block flagged material before it is delivered to users; the sketch after this glossary shows the general shape of such a pipeline.
- Automated content detection: Use of algorithms or machine learning models to analyze text, images or video and classify content according to set criteria.
- Priority offence: A category of harm designated by regulation that triggers the most stringent compliance duties under a law or code.
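To make the terms ‘preemptive scanning’ and ‘automated content detection’ concrete, here is a minimal sketch of such a pipeline. Everything in it is hypothetical: the labels, thresholds and stub classifier are illustrative assumptions, not the approach of any real platform or any technology named in the regulations.

```python
# Minimal sketch of preemptive scanning: an upload is scored by a classifier
# before delivery and blocked above a confidence threshold.
# The classifier is a hypothetical stand-in, not a real detection model.

from dataclasses import dataclass

@dataclass
class ScanResult:
    label: str         # e.g. "explicit_image", "self_harm", "clean"
    confidence: float  # 0.0 to 1.0

BLOCK_THRESHOLD = 0.9   # hypothetical policy threshold for outright blocking
REVIEW_THRESHOLD = 0.6  # below this, deliver; in between, hold for human review

def classify(payload: bytes) -> ScanResult:
    """Hypothetical model call; a real system would run an ML classifier here."""
    # Placeholder result so the sketch is runnable end to end.
    return ScanResult(label="clean", confidence=0.0)

def handle_upload(payload: bytes) -> str:
    """Decide what happens to an upload before it ever reaches the recipient."""
    result = classify(payload)
    if result.label != "clean" and result.confidence >= BLOCK_THRESHOLD:
        return "blocked"            # never delivered
    if result.label != "clean" and result.confidence >= REVIEW_THRESHOLD:
        return "queued_for_review"  # held pending human moderation
    return "delivered"

if __name__ == "__main__":
    print(handle_upload(b"example upload bytes"))  # -> "delivered" with the stub classifier
```

The key design point is the ordering: the classifier runs before delivery, so a false positive silently blocks or holds a lawful message rather than merely flagging it after the fact, which is why threshold choices and error rates matter so much.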
Reader FAQ
When did the new rules come into force?
They came into force on January 8, 2026.
Which offences are designated as priority offences?
The amendment names cyberflashing and encouraging or assisting serious self-harm as priority offences.
Do platforms have to scan private messages?
The source says compliance will require mass scanning of messages, images and uploads, including in spaces traditionally regarded as private.
What penalties can platforms face for noncompliance?
Fines up to 10% of global turnover or £18 million (whichever is greater) and potential blocking of services in the UK.
Will companies have to use AI to comply?
The source indicates firms are expected to rely heavily on automated scanning systems, detection algorithms and AI models, but the regulation does not mandate specific technologies.

Sources
- UK Expands Online Safety Act to Mandate Preemptive Scanning
- Briefing: Online Safety Act 2023
- UK Online Safety Act: What does it mean for …
- Online Safety Act: explainer