More than 70 million warnings were sent to people searching for child sexual abuse material (CSAM) online in the last two years, according to new data.
That is more than 95,000 alerts triggered a day, according to figures from the Lucy Faithfull Foundation, the British charity that set up the system.
The alerts are sent when someone looks for child abuse content on platforms like TikTok, Meta products, ChatGPT, Google and pornography sites.
The warning carries four key messages: viewing sexual images of children and having online sexual conversations with children are crimes; they cause harm to children; there are consequences for offenders; and help is available to stop and change.
"Tens of millions of people have been reached through just 22 targeted interventions on tech platforms," said Deborah Denis, chief executive of the Lucy Faithfull Foundation.
"That makes one thing clear - the potential to scale this approach is enormous.
"By placing more warnings across more online spaces, we can disrupt harmful behaviour at the moment it's happening and prevent countless children from being harmed.
"The need has never been more urgent, particularly as new AI technologies accelerate the spread of online child sexual abuse."
Of the tens of millions of alerts, just under 700,000 prompted people to follow links to further support, where they were encouraged to address their behaviour through online learning modules.
One pornography site user said he found the modules useful after his searches triggered the warnings.
"I found the modules on addiction and pornography very helpful," he said. "About two months ago, I gave up those sites. I want to keep my mind occupied and more productive."
There are ongoing efforts to tackle the growing problem of child sexual abuse crimes.
Organisations like the Internet Watch Foundation track down and remove abusive imagery, then tag the images so they cannot be reuploaded, while a growing number of VPN companies have announced they will block access to websites containing CSAM.
End-to-end encrypted services, where only the sender and recipient can see what is being sent, are often seen as a particularly difficult area to tackle when it comes to CSAM.
However, the chief technology officer of Mega, an encrypted cloud storage provider that uses the alert system, said companies like his could take "meaningful action".
"We recognised that it isn't enough to reactively or even proactively remove material, we also needed to intervene earlier in the path towards harmful behaviour, before patterns become entrenched," said Andre Meister.
"Through our work with Project Intercept, we are delivering well-timed deterrence messaging and self-help resourcing that interrupts harmful behaviour right at the point of intent, and we are pleased with the level of engagement the intervention has been driving.
"It is a more complete, science-backed approach, and we are grateful for the partnership in our continued fight against CSAM."
(c) Sky News 2026: More than 70 million warnings sent to people searching for child sexual abuse content