About Us

In November 2020, the United Nations Interregional Crime and Justice Research Institute (UNICRI), through its Centre for Artificial Intelligence and Robotics, and the Ministry of Interior of the United Arab Emirates launched the joint initiative known as ‘AI for Safer Children’.

This initiative seeks to explore the positive potential of artificial intelligence (AI) to support law enforcement and related authorities in preventing a wide range of forms of violence, exploitation and abuse against children in the digital environment. In addition to network-building, awareness-raising and advocacy, AI for Safer Children is developing this AI for Safer Children Global Hub, a platform designed to support law enforcement in leveraging AI to combat child sexual exploitation and abuse. The platform is being developed under the umbrella of the United Nations.

While the full scale of online child sexual exploitation and abuse remains unknown, existing evidence gives considerable cause for concern. According to data from the National Center for Missing and Exploited Children (NCMEC), the number of reports of suspected child exploitation continues to grow at an exponential rate: reports to NCMEC’s CyberTipline rose from an average of 100,000 in 2010 to over 36 million in 2023. It is important to underscore that these figures capture only the reported child sexual abuse material in circulation; the full scope and extent of the threat remains unknown.

Not only has the scale of abuse increased, but so have its severity and complexity. Exploitation now includes new forms of large-scale abuse such as online grooming and sexual extortion, and is further exacerbated by perpetrators’ increased avenues for anonymization (e.g., VPNs and the dark web) and lower barriers to creating AI-generated child sexual abuse material. The global scale of the internet presents added challenges: children suffer re-victimization every time the content is shared, and investigations face increased complexity due to the transnational nature of these crimes.

With law enforcement investigators grappling with burgeoning caseloads and growing backlogs, attention is increasingly turning to new tools and technologies that can help turn the tide in the fight to protect children from sexual exploitation and abuse. As one of the defining emerging technologies of our time, AI is at the very core of this fight. It is a powerful technology that is rapidly reshaping how traditional problems are approached, shedding new light on possible solutions, including:

  • Speeding up investigations
  • Triaging devices
  • Prioritizing files for review
  • Translating and transcribing files
  • Managing and visualizing cases
  • Safeguarding officers’ wellbeing

The potential of AI to support law enforcement and related authorities in preventing crimes against children has, in fact, already been demonstrated. For instance, facial recognition technology has enabled the identification of numerous missing children, and national authorities are actively exploring how machine learning can be used to, for example, identify child abuse images on confiscated devices or rapidly analyze the vast number of reports of potential child sexual abuse material online in order to swiftly identify children in real danger. Notwithstanding this potential, AI equally presents a plethora of challenges, not only from a technical perspective but also from legal, ethical and societal perspectives, which must be addressed in order for the positive potential of the technology to outweigh its risks.

In 2020, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the Ministry of Interior of the United Arab Emirates (UAE) launched the AI for Safer Children initiative with the aim of tackling child sexual exploitation and abuse through the exploration of new technological solutions, specifically AI. To do so, UNICRI and the UAE Ministry of Interior sought to support law enforcement agencies to tap into the potential of AI through a unique, centralized platform containing information on various AI tools that could be used at each stage of investigations into child sexual exploitation and guidance on how to navigate AI’s evolving legal and ethical challenges. Ultimately, the AI for Safer Children initiative seeks to contribute to realizing Target 2 of Goal 16 of the 2030 Agenda for Sustainable Development, which envisages an end to abuse, exploitation, trafficking and all forms of violence and torture against children.

Beyond this platform, the initiative seeks to explore the positive potential of AI to prevent child sexual exploitation and abuse in the digital environment through network-building, awareness-raising and advocacy activities, also organized under the umbrella of the United Nations.

The AI for Safer Children initiative envisions a future in which law enforcement agencies and related organizations build the required capacity and collaborate effectively to better prevent, detect and prosecute child sexual exploitation and abuse with the use of relevant AI tools. To this end, the proposed solution was to create this online Global Hub, which presents available AI tools and other resources for combatting child sexual exploitation and abuse.

The Global Hub is accessible via a secure log-in for law enforcement agencies and has the following goals:

  • Provide law enforcement agencies with information on the range of AI tools that exist;
  • Help law enforcement agencies identify potential AI tools that meet their specific needs;
  • Enable law enforcement agencies to learn about and share their experiences in leveraging AI tools;
  • Strengthen communication and networking among law enforcement agencies on the use of AI to combat child sexual exploitation and abuse.

In other words, the Global Hub aims to foster a community of practice around the use of AI tools to prevent, detect and prosecute child sexual exploitation and abuse.

The target audience of the Global Hub is law enforcement officers from any United Nations Member State interested in or working with AI tools for combatting child sexual exploitation and abuse – including but not limited to investigators, forensics experts, procurement officers, information technology (IT) experts and senior police officers – regardless of their level of technical skill or prior experience with AI tools.

The Global Hub aims to be user-friendly, secure, and tailored to the needs of law enforcement agencies. The following core features are currently available:

  • Tools Catalogue: a database of existing AI tools;
  • Tools Filter: a selection feature to help choose the appropriate AI tool for specific contexts and needs;
  • Learning Centre: learning resources on AI tools to fight child sexual exploitation and abuse and guidance on how to use this technology responsibly;
  • Meet Other Officers: a networking platform that puts law enforcement agencies interested in AI tools in contact with one another.

The AI for Safer Children initiative counts on an Advisory Board, a select group of experts from specialized organizations who provide advice and guidance to the project partners on a regular and ongoing basis throughout the implementation of the project.

The Advisory Board is composed of global leaders in child protection, law enforcement and AI, including representatives from SafeToNet; UNICEF; INTERPOL; Europol; the Virtual Global Taskforce; Safe Online; Red Papaz; ECPAT; International Justice Mission (IJM); the National Center for Missing and Exploited Children (NCMEC); Thorn; Magnet Forensics; World Childhood Foundation; Bracket Foundation; RATI Foundation; Prerana; the Canadian Centre for Child Protection; WePROTECT Global Alliance; the European Commission; the Gucci Children’s Foundation; Project VIC International; International Centre for Missing & Exploited Children (ICMEC); the Organisation for Economic Co-operation and Development (OECD); and Bodhini.

The insights provided by this multi-stakeholder Advisory Board are highly valued in steering the initiative’s development in the right direction. Child safety, protection and dignity have been identified as top priorities by project partner the UAE, a country already well known for embracing technology and exploring how it can serve the safety of communities. In this regard, the partnership between the UAE and UNICRI on AI for Safer Children channels this vision and builds upon the work of UNICRI’s Centre for AI and Robotics to continue enhancing the effectiveness and efficiency of law enforcement agencies in safeguarding children by leveraging AI responsibly.

As an initiative under the auspices of the United Nations, the AI for Safer Children initiative strives to uphold the principles and obligations set out in the Charter of the United Nations and the Universal Declaration of Human Rights and to fulfil the United Nations’ purpose to promote and encourage respect for human rights and fundamental freedoms for all. 

The ethical and legal process of the AI for Safer Children initiative was therefore established to ensure that the initiative remains consistent with all applicable ethical and legal principles while engaging with highly sensitive and complex topics, namely the use of AI in law enforcement and online child sexual exploitation and abuse.

To find out more about the ethical and legal process of the initiative, click here.
