
Over the years, digital platforms and social media unlocked powerful new possibilities for connection, learning, and creativity. But as these technologies developed, and as their use expanded, a darker reality emerged.
Offenders learned to exploit the same tools designed for good, using online spaces to coordinate, commercialize, and conceal child sexual exploitation at a scale previously unimaginable. The very features built for connection, community, and creativity also made abuse easier to organize, easier to monetize, and far harder to detect in real time, and offenders co-opted them faster than safeguards could be built to protect the most vulnerable users.
Big Tech has come to recognize this reality. Platforms now work with NGOs, governments, and advocates to spread prevention messaging, support survivor-centered campaigns, and disrupt known abuse networks. These efforts matter, and they represent genuine progress.
Yet one of the most severe and urgent forms of harm remains insufficiently addressed: live online child sexual exploitation and abuse via video calls and livestreaming. In these moments, the abuse happens in real time and is invisible to anyone outside the call. For years, it escaped detection because platforms did not have built-in detection to recognize or stop this kind of harm, especially in spaces designed to feel private and lightly governed.
Today's situation calls for a layered online safety approach, where responsibility is shared among tech industry players. Every layer of technology—manufacturers, operating systems, app stores, messaging platforms, and livestreaming services—has a duty of care to the children using their products.
Device manufacturers, in particular, are uniquely positioned to implement on-device CSAM prevention as a baseline safety layer built directly into phones, tablets, and computers. By integrating privacy-preserving detection at the device level, tech companies can help interrupt abuse no matter which platform offenders try to exploit. This complements, rather than replaces, platform-level interventions.
On-device safety must be part of the global solution, not an afterthought. The industry has both an opportunity and an obligation to build child protection into the foundation of the digital ecosystem.
If technology is effectively harnessed in this way, it could dramatically change the landscape, cutting off offenders’ access, disrupting the crime in real time, and protecting children before the harm ever occurs.
Companies in the digital ecosystem already have the resources and the technical ability. The real question is whether they will use the power they have to protect children.
In its most recent report on child sexual exploitation and abuse material and activity, eSafety highlighted that while most service providers showed some improvement in online safety measures compared to the previous reporting period, significant gaps remain in how platforms address child sexual exploitation and abuse (referred to in Australia as CSEA, and in the Philippines as OSEC or OSAEC). Providers have invested in better tools to detect harmful content, yet eSafety identified critical areas where industry action is still lacking: proactive detection of new CSEA material, prevention of livestreamed CSEA in video calls, and tackling sexual extortion involving children and adults.
Although some companies have implemented tools to detect live CSEA in public livestreams—such as Meta on Facebook and Instagram Live, and Google on YouTube—none of these measures extend to their private or one-to-one video calling services, including Messenger and Google Meet. Likewise, Apple (FaceTime), Discord, Microsoft (Teams), Snapchat, and WhatsApp have not developed or deployed tools to detect CSEA in video calls. eSafety emphasized that any service enabling live video—whether one-to-many broadcasts or one-to-one calls—must address the risk of live online CSEA and take stronger steps to prevent harm.
Research and regulatory analysis increasingly point to a clear reality: live online child sexual exploitation is not happening everywhere online, but it is happening consistently within specific platform features, particularly video calling and livestreaming.
Live abuse often occurs in one-to-one video calls, where perpetrators can control the interaction, evade detection, and exploit the immediacy of real time harm. These environments are attractive precisely because they are perceived as private, exclusive spaces.
Importantly, this trend is not new or disputed. It has been documented by law enforcement, researchers, and regulators alike. The risk is well understood. What remains unresolved is how platforms translate that understanding into consistent product-level safeguards.
Platforms have already demonstrated that privacy-respecting safety interventions can be deployed without constant monitoring or recording of user conversations. On-device nudity classifiers, anomaly detection in child accounts, and live-video safety prompts are already operational and widely used. These tools show that safety protections can be designed to safeguard both children's rights and user privacy.
Applying these proven, risk-based approaches to live child sexual exploitation would not represent a departure from existing practice. It would simply require tech companies to apply the same seriousness, resourcing, and innovation to the most urgent and severe form of online harm.
The challenge, then, is not whether Big Tech can act. It is how risk is assessed and prioritized across different product spaces.
One of the most persistent barriers to progress is the assumption that proactive detection of live abuse inevitably undermines user privacy. This framing has stalled innovation and polarized debate.
Today, we know that privacy-preserving child-safety technology already exists and is already in use by the world's largest tech companies. Meta, Google, and Apple all deploy on-device nudity-detection tools that protect minors by identifying risky content in real time on child accounts, including during live video calls. These models run locally on devices, meaning the content never leaves the user's phone or tablet. Real-time, privacy-preserving detection is not theoretical; it is already happening at scale.
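The privacy property described above can be illustrated with a minimal sketch. The structure below is an illustration of the general pattern, not any vendor's actual implementation: a classifier stub stands in for a trained on-device model, each frame is scored locally, and only a small decision signal (never the frame itself) is surfaced. All names and the scoring heuristic are hypothetical.

```python
# Sketch of an on-device safety pipeline (illustrative only).
# A local classifier scores each video frame; only the resulting
# intervention decision leaves this layer, never the frame content.
from dataclasses import dataclass


@dataclass
class SafetyEvent:
    risk_score: float
    action: str  # e.g. "blur", "none"


def classify_frame_locally(frame: bytes) -> float:
    """Stand-in for a trained on-device classifier.

    Returns a risk score in [0, 1]. The frame bytes are consumed here
    and are never transmitted or stored; a real deployment would run
    an ML model on the device's neural accelerator instead.
    """
    # Purely hypothetical heuristic for demonstration purposes.
    return 0.9 if b"risky" in frame else 0.1


def process_call_frame(frame: bytes, threshold: float = 0.8) -> SafetyEvent:
    """Score one frame locally and decide on an intervention."""
    score = classify_frame_locally(frame)
    if score >= threshold:
        # Only the decision is acted on (blur the feed, show a warning);
        # no pixels or call content ever leave the device.
        return SafetyEvent(risk_score=score, action="blur")
    return SafetyEvent(risk_score=score, action="none")
```

The design choice worth noting is the boundary: everything above the `SafetyEvent` return value stays on the device, which is what allows real-time intervention without server-side monitoring of the call.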
The same companies also use real time classifiers to detect child sexual exploitation and abuse (CSEA) on public platforms, including image, video, and livestream detection. Microsoft has similarly piloted CSEA detection models in Teams, proving that abuse occurring during live video can be identified and disrupted while the call is happening.
These examples show what is technically possible: live online sexual exploitation of children can be prevented before the abuse begins, if technology companies choose to apply the tools they already possess. This is no longer a question of capability. It is a question of will.
Protecting children from live abuse and respecting user privacy are not competing goals, a point underscored at the Virtual Tech Safety Roundtable.
Investing in safeguards for video calling and livestreaming is not only a moral imperative; it is also strategically sound.
First, brand trust. Parents, educators, and regulators increasingly view child safety as a baseline expectation. Platforms that can demonstrate proactive, transparent action strengthen long-term trust.
Second, regulatory readiness. Authorities such as Australia’s eSafety Commissioner are moving from voluntary guidance toward clearer expectations of duty of care. Early leadership allows platforms to shape standards, rather than scramble to meet them under pressure.
Third, product sustainability. Video calling is now a core digital infrastructure. Ensuring it is safe by design future-proofs these services against reputational harm and abrupt regulatory intervention.
And finally, meaningful impact. Few areas of technology offer such a direct opportunity to prevent real-time harm to children. This is a space where platform decisions can have immediate, measurable human outcomes.
Ending OSEC will not be achieved by any single sector acting alone. Governments, civil society, law enforcement, and industry must work together, as many already do in awareness and prevention efforts. The next step is extending that collaboration into product design and safety architecture.
Big Tech companies have the scale, expertise, and influence to lead this shift. Many are already part of the solution. The opportunity now is to apply that leadership consistently, including in the digital spaces where harm is hardest to see, but most urgent to stop.
The power to significantly reduce live online child abuse already exists.
What remains is the collective decision to fully use it.