Key Moments
- Britain teams up with Microsoft, academics, and experts to create a standardized deepfake detection system.
- The framework will test detection tools against real-world risks, including sexual abuse, fraud, and impersonation.
- Government data shows deepfakes shared online rose from 500,000 in 2023 to 8 million in 2025.
UK Moves to Standardize Deepfake Detection
Britain will collaborate with Microsoft, academics, and industry experts to develop a system that identifies deepfake content online. The government aims to set clear standards for detecting harmful AI-generated material.
Manipulated media has circulated online for years. However, the rise of AI tools such as ChatGPT has increased both the volume and realism of deepfakes. Authorities are concerned about the potential harms these tools create.
The UK recently criminalized the creation and sharing of non-consensual intimate images. Now, it is building a deepfake detection evaluation framework to provide consistent benchmarks for testing detection technologies.
“Deepfakes are being weaponized by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” said Technology Minister Liz Kendall.
Framework Targets Real-World Threats
The evaluation framework will assess how technology can detect and interpret harmful deepfake content, regardless of its source. It will specifically target real-world risks such as sexual abuse, fraud, and impersonation.
The framework will also help policymakers and law enforcement understand the weaknesses of current detection tools, and give industry clear guidance on deepfake detection standards.
Deepfake Volumes Rising Rapidly
Government data shows the number of deepfakes shared online jumped to 8 million in 2025 from just 500,000 in 2023, a sixteenfold increase in two years. The surge underscores the growing scale of the problem.
| Year | Estimated Deepfakes Shared |
|---|---|
| 2023 | 500,000 |
| 2025 | 8,000,000 |
Regulatory Actions and AI Risks
Governments worldwide struggle to keep up with rapid AI developments. In 2026, concerns grew after Elon Musk’s Grok chatbot reportedly produced non-consensual sexualized images, including those involving children.
The UK communications watchdog, Ofcom, and the privacy regulator, the Information Commissioner's Office, are investigating Grok. These moves reflect a broader push to ensure AI technologies are used safely and responsibly.