
Over the years, digital platforms and social media have unlocked powerful new possibilities for connection, learning, and creativity. But as these technologies developed and their use expanded, a darker reality emerged. Offenders learned to exploit the same tools designed for good, using online spaces to coordinate, commercialize, and conceal child sexual exploitation in ways that were previously unimaginable. The very features that enabled global community and innovation also created environments where abuse could scale rapidly and remain hidden from view.
Over time, those same features, built for connection and creativity, were co-opted by offenders, making abuse easier to organize, easier to monetize, and far harder to detect in real time, often faster than safeguards could be built to protect the most vulnerable users.
Big Tech has come to recognize this reality. Platforms now work with NGOs, governments, and advocates to spread prevention messaging, support survivor-centered campaigns, and disrupt known abuse networks. These efforts matter — and they represent genuine progress.
Yet one of the most severe and urgent forms of harm remains insufficiently addressed: live online child sexual exploitation and abuse via video calls and livestreaming. In these moments, the abuse happens in real time and is invisible to anyone outside the call. For years, it escaped detection because platforms lacked built-in tools to recognize or stop this kind of harm, especially in spaces designed to feel private and lightly governed. Today, the technology exists, and tech companies are very much capable of building detection into their products to help prevent this abuse before it happens.
If tech companies effectively harness this technology to stop abuse from happening, it could dramatically change the landscape: cutting off offenders' access, disrupting the crime in real time, and protecting children before the harm ever occurs.
Platforms already have the resources and the technical ability — the real question is whether they will use the power they have to protect children.
In its most recent report on child sexual exploitation and abuse material and activity on online services, eSafety highlighted that while most service providers showed some improvement in online safety measures compared to the previous reporting period, significant gaps remain in how platforms address child sexual exploitation and abuse (referred to in Australia as CSEA, and in the Philippines as OSEC or OSAEC). Providers have invested in better tools to detect harmful content, yet eSafety found critical areas where industry action is still lacking, particularly in the proactive detection of new CSEA material, the prevention of livestreamed CSEA in video calls, and the tackling of sexual extortion involving children and adults.
Although some companies have implemented tools to detect live CSEA in public livestreams—such as Meta on Facebook and Instagram Live, and Google on YouTube—none of these measures extend to their private or one-to-one video calling services, including Messenger and Google Meet. Likewise, Apple (FaceTime), Discord, Microsoft (Teams), Snapchat, and WhatsApp have not developed or deployed tools to detect CSEA in video calls. eSafety emphasized that any service enabling live video—whether one-to-many broadcasts or one-to-one calls—must address the risk of live online CSEA and take stronger steps to prevent harm.
Research and regulatory analysis increasingly point to a clear reality: live online child sexual exploitation is not happening everywhere online, but it is happening consistently within specific platform features — particularly video calling and livestreaming.
Live abuse often occurs in one-to-one video calls, where perpetrators can control the interaction, evade detection, and exploit the immediacy of real-time harm. These environments are attractive precisely because they are perceived as private, exclusive spaces.
Importantly, this trend is not new or disputed. It has been documented by law enforcement, researchers, and regulators alike. The risk is well understood. What remains unresolved is how platforms translate that understanding into consistent product-level safeguards.
Major technology companies have demonstrated the ability to develop and operate sophisticated safety systems at scale, with capabilities to detect and take down hate speech, extremist content, and other harmful material.
Visual moderation is also enforced: these companies moderate violence, nudity, and graphic content, as well as child abuse material in static formats.
In some cases, platforms also deploy tools to detect live abuse in public livestreams. However, similar protections are largely absent in private video calling services, even when operated by the same companies.
This disparity allows harm to migrate. Offenders take advantage of the lack of protection as these platforms unintentionally reinforce the very conditions that allow abuse to continue undetected. The challenge, then, is not whether Big Tech can act — it is how risk is assessed and prioritized across different product spaces.
One of the most persistent barriers to progress is the assumption that proactive detection of live abuse inevitably undermines user privacy. This framing has stalled innovation and polarized debate.
But privacy-preserving safety has already been demonstrated. Platforms already rely on non-content signals — such as behavioral patterns, usage anomalies, and network indicators — to manage other forms of harm. These approaches do not require persistent recording or surveillance of conversations.
Applying similar, risk-based methodologies to child safety would not represent a fundamental departure from existing practice. It would represent a shift in where those practices are deemed necessary.
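To make the idea of non-content, risk-based detection concrete, the sketch below scores a video call using only metadata-style signals, never the call's audio or video. All signal names, weights, and thresholds here are hypothetical and purely illustrative; real platform systems are far more sophisticated and are not described in the source.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    """Non-content signals about a call. No conversation content is inspected.
    These fields are hypothetical examples of behavioral and network indicators."""
    account_age_days: int   # very new accounts are a common risk indicator
    calls_past_hour: int    # unusual bursts of outbound calls
    prior_reports: int      # prior user reports against the account
    cross_region: bool      # caller and callee in notably different regions

def risk_score(s: CallSignals) -> float:
    """Combine non-content signals into a 0..1 risk score.
    Weights and cutoffs are illustrative, not drawn from any real system."""
    score = 0.0
    if s.account_age_days < 7:
        score += 0.3
    if s.calls_past_hour > 20:
        score += 0.3
    score += min(s.prior_reports, 3) * 0.1  # cap the contribution of reports
    if s.cross_region:
        score += 0.1
    return min(score, 1.0)

def should_escalate(s: CallSignals, threshold: float = 0.6) -> bool:
    """Above the threshold, a platform might route the account to human
    review or show safety interstitials, without recording the call."""
    return risk_score(s) >= threshold
```

The point of the sketch is the design choice the passage describes: the score is built entirely from signals platforms already collect for abuse management, so applying it to child safety changes where such methods are used, not what kind of data they touch.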
Protecting children from live abuse and respecting user privacy are not competing goals, as explained at the Virtual Tech Safety Roundtable.
Investing in safeguards for video calling and livestreaming is not only a moral imperative; it is also strategically sound.
First, brand trust. Parents, educators, and regulators increasingly view child safety as a baseline expectation. Platforms that can demonstrate proactive, transparent action strengthen long-term trust.
Second, regulatory readiness. Authorities such as Australia’s eSafety Commissioner are moving from voluntary guidance toward clearer expectations of duty of care. Early leadership allows platforms to shape standards, rather than scramble to meet them under pressure.
Third, product sustainability. Video calling is now core digital infrastructure. Ensuring it is safe by design future-proofs these services against reputational harm and abrupt regulatory intervention.
Finally, meaningful impact. Few areas of technology offer such a direct opportunity to prevent real-time harm to children. This is a space where platform decisions can have immediate, measurable human outcomes.
Ending OSEC will not be achieved by any single sector acting alone. Governments, civil society, law enforcement, and industry must work together, as many already do in awareness and prevention efforts. The next step is extending that collaboration into product design and safety architecture.
Big Tech companies have the scale, expertise, and influence to lead this shift. Many are already part of the solution. The opportunity now is to apply that leadership consistently — including in the digital spaces where harm is hardest to see, but most urgent to stop.
The power to significantly reduce live online child abuse already exists.
What remains is the collective decision to fully use it.