TECHNOLOGICAL SOLUTIONS FOR PROTECTING CHILDREN FROM ONLINE PREDATORS: CURRENT TRENDS AND FUTURE DIRECTIONS

BY

QUEENNETTE ESSE ODUDU LLB, LLM, BL


ABSTRACT

The rapid proliferation of digital technologies has fundamentally altered how children interact
with the world, offering unprecedented opportunities for learning, socializing, and
entertainment. However, this digital landscape also harbors significant risks, particularly the
threat of online predators. This research paper explores current trends and future directions
in technological solutions designed to protect children from online predators, addressing a
critical gap in the literature regarding the efficacy and ethical implications of these protective
measures.

The study employs a comprehensive literature review methodology, examining various technological interventions, including artificial intelligence (AI) and machine learning,
parental control software, blockchain technology, biometric authentication, and augmented
and virtual reality (AR/VR) applications. By synthesizing findings from diverse fields such as
computer science, child psychology, and cybersecurity, this research provides a holistic
understanding of the complex challenges in safeguarding children online.

Key findings reveal that while AI and machine learning offer promising capabilities in
detecting and mitigating online threats, they face limitations such as false positives and
negatives and privacy concerns. Parental control software, while widely adopted, often
struggles to keep pace with tech-savvy children and may inadvertently infringe on children's
autonomy. Emerging technologies like blockchain and biometric authentication present novel
approaches to identity verification and data security but raise ethical questions regarding data
privacy and the long-term implications of collecting biometric data from minors.

The research also highlights the potential of gamification in online safety education, noting its
effectiveness in engaging children but cautioning against oversimplification of complex issues.
AR and VR technologies emerge as double-edged swords, offering immersive educational
experiences while introducing new exploitation vectors. This study underscores the need for a
multi-faceted approach to online child protection, integrating technological solutions with
robust academic programs and policy frameworks. It emphasizes the importance of balancing
security measures with children's rights to privacy and autonomy, calling for greater
collaboration between technologists, policymakers, educators, and child protection advocates.

The paper concludes by proposing future research directions, including developing more
transparent AI systems, exploring blockchain applications in age verification, and creating
ethical guidelines for using immersive technologies in child-centric environments. It advocates
for a proactive stance in anticipating and mitigating emerging online threats, ensuring that
protective measures evolve with technological advancements. As online predators adapt their
methods, the proactive identification and development of advanced protective technologies
remain crucial in mitigating risks and ensuring the well-being of young internet users.

Keywords: Online child protection, Artificial intelligence (AI), Machine learning, Blockchain technology, Biometric authentication, Augmented and virtual reality (AR/VR),
Gamification, Cybersecurity, Digital literacy, Sextortion, Grooming, Privacy, Online
predators.
INTRODUCTION
The rapid evolution of digital technologies has significantly altered how children interact with
the world (Keeley & Little, 2017). The internet, social media platforms, and online gaming
environments provide unparalleled learning, socializing, and entertaining opportunities.
However, these same technologies have also introduced serious risks. For example, the emergence of the dark web—a segment of the internet
distinguished by its high level of encryption and anonymity—has allowed individuals with
predatory tendencies to interact covertly. With the internet and technology, these people can
now exchange indecent images, seek advice, and recruit others for abusive activities with
previously unattainable anonymity. These spaces and opportunities validate deviant behaviors
and provide resources that enhance offenders' sophistication in abuse tactics and security
measures.
The advancement of technology, as well as global incidents and developments that emphasize
and promote reliance on internet technology, have highlighted the critical necessity for solid
protection measures. For example, the COVID-19 pandemic exacerbated the situation by
forcing youngsters to rely more on digital platforms for education and social contact
(Amankwah-Amoah et al., 2021). As educational institutions moved to virtual classrooms and
children's screen time increased, their vulnerability to digital threats grew. The pandemic-
induced shift to online engagement heightened these vulnerabilities, leaving children more exposed to exploitation by online predators who take advantage of their increased online presence through deceptive and harmful tactics (Sharifi et al., 2021).
ONLINE PREDATION AS TECHNOLOGY-FACILITATED VIOLENCE AND ABUSE
(TFVA)
Technology-Facilitated Violence and Abuse (TFVA) refers to a wide range of digital threats
and abuses, such as technology-facilitated sexual assault, image-based sexual abuse,
cyberstalking, unwanted sexual solicitations, image-based harassment, hate speech, and
various forms of technological coercion and isolation (Bailey, Henry, & Flynn, 2021). TFVA
can appear as text, photos, and sophisticated surveillance technologies ranging from basic
digital communication tools to complex systems such as artificial intelligence (AI), GPS
tracking, and drones (Flynn, 2019; Henry et al., 2018; Wong, 2019; Thomasen, 2018). TFVA
occurs in both public and private settings, influencing a wide range of child-adult relationships,
including strangers, acquaintances, friends, family members, and intimate partners (Citron,
2014). For example, AI-driven algorithms might generate deepfake photos or videos, which
can be exploited for malicious purposes, such as spreading false information or manipulating
images to harass individuals. Similarly, drones can be employed for invasive surveillance,
capturing private moments without consent.
While these modern technologies provide significant benefits, they create new opportunities
for misuse and control. For example, the FBI documented nearly 3,000 cases of sextortion
against juveniles in 2022 (U.S. Attorney's Office, Southern District of Indiana, 2023).
Sextortion entails coercing victims into sending graphic photographs and then extorting them
for additional material or money. Tragically, there have been occasions where victims, overcome by pressure and shame, have taken their own lives (Wang, 2024). While TFVA affects people of all ages, genders, races, and socioeconomic backgrounds, it is more than just a series of isolated hostile acts (Sheikh & Rogers, 2024). Research has shown that it reflects deep-rooted structural and systemic inequalities such as misogyny, homophobia, transphobia, racism, colonialism, and ableism (Southern & Harmer, 2019; Henry et al., 2020; Green, 2019; Colliver, Coyle, & Silvestri, 2019; Kerrigan, 2019; Carlson, 2019; Makinde et al., 2021). Predators frequently use these
systemic inequities to target vulnerable children and teenagers. For example, online sexual
harassment and exploitation disproportionately harm girls and young women, reflecting social
misogyny. The National Centre for Missing and Exploited Children (NCMEC) reports that 98%
of internet enticement victims are girls (Equality Now, 2023). LGBTQ+ youth are likewise at
a greater risk. According to a Human Rights Campaign survey, LGBTQ+ youth are more likely
to face online harassment than their straight counterparts (HRC Foundation, 2024). This
reflects the persistent homophobia and transphobia in society, which predators utilize to
deceive and abuse these young people. Children with disabilities are an additional vulnerable group; predators may exploit their perceived vulnerability and prevailing ableist attitudes
(Fang et al., 2022). With these realities, there is a need for a new dimension to the ecology of
child protection, which should be a focal point for practitioners, including law enforcement,
when addressing digital media's role in the online predation of minors (Quayle & Koukopoulos,
2019). Although it is debatable whether online abuse and exploitation are inherently more severe than offline crimes (this research takes the position that neither is, since the consequences of both can be equally devastating), the unique dynamics of social
media and online interactions present distinct risk factors for children. The anonymity and
extensive reach of the internet complicate efforts to safeguard children from predators. In
addition, researchers such as Dombrowski et al. (2004) have observed that despite the development
of various technological solutions designed to protect minors, predators continuously adapt
their strategies, challenging the effectiveness of existing measures. This ongoing adaptation
highlights the urgent need for innovative and evolving technological solutions to counteract
online threats. Addressing these challenges demands a thorough understanding of current
technologies, their effectiveness, and the potential for future advancements in protective
measures.
SIGNIFICANCE OF THE STUDY
The significance of this study is multifaceted, reflecting its potential impact on enhancing child
safety in the rapidly evolving digital landscape. This research aims to provide a general analysis
of current technological solutions and explore emerging trends, offering insights that can drive
meaningful progress in the field.
Firstly, this study sheds light on existing technologies designed to protect children online by
critically examining their effectiveness and limitations. Understanding these aspects is essential
for stakeholders—including technology developers, educators, parents, and policymakers—to
refine and improve current protective measures. For technology developers, this research offers
an assessment of existing tools, identifying gaps and areas for improvement that can guide the
creation of more sophisticated and practical solutions.
The findings offer educators and parents practical insights into the benefits and drawbacks of
current protective technology. This information allows them to make informed choices
regarding the tools and techniques they use to protect children. By emphasizing effective practices and common pitfalls, the study helps build evidence-based educational programs and parental guidance that are sensitive to evolving digital risks.
Policymakers will benefit from a clearer understanding of the current state of technology and
emerging trends, which can inform the development of more robust and adaptive policies. By
incorporating the study's findings, policymakers can create regulations and guidelines that
address the specific challenges identified, ensuring that protective measures remain relevant
and effective in the face of new and evolving threats.
Furthermore, this research contributes to the broader discourse on child safety in the digital age
by identifying emerging trends and future directions. By highlighting these trends, the study
offers a forward-looking perspective that can anticipate and mitigate potential risks before they
become widespread issues. This proactive approach is crucial for staying ahead of online predators and
ensuring that protective measures evolve with technological advancements.
Finally, the study's significance stems from its ability to improve the overall safety of the online
environment for children. It helps to design more effective strategies and policies to safeguard
children from digital risks, resulting in a safer and more secure online experience.

LITERATURE REVIEW
FORMS OF ONLINE PREDATIONS
Online predation of minors encompasses a range of harmful activities. These include the production, dissemination, and possession of child sexual abuse materials; online grooming; 'sexting'; 'sextortion'; revenge pornography; commercial sexual exploitation; online prostitution; live streaming of sexual abuse using voice-over-internet protocols such as Skype; trafficking; and bullying, among others. Many of these abuses involve sexual imagery
of children, obtained either through direct abuse or by persuading the child to create and share
such images. While these forms of abuse existed before the internet, technological
advancements have profoundly shaped and amplified their manifestation and global reach.
Stakeholders must understand these various forms of online predations, and many researchers
have concentrated on these multiple forms in their research works. Research by Whittle et al.
(2013) on grooming as a form of online predation reveals that online predators employ a variety of deceptive tactics to build trust with their victims. According to the authors, grooming involves manipulating a child's emotions to gain their confidence, ultimately leading
to exploitation. This process is complex and multifaceted. Finkelhor & Hotaling (1984) and
Craven et al. (2006) have developed theories on the stages of sexual offending and grooming.
Craven et al. (2006) adapted Finkelhor & Hotaling's (1984) preconditions for sexual offending
to outline a threefold grooming process: grooming the self, grooming the surroundings
(including significant others), and grooming the child. These stages facilitate the predator’s
control over the child and their environment (Whittle et al., 2013). O’Connell (2003) proposed
a five-stage model of online grooming: 1) Friendship forming, 2) Relationship forming, 3) Risk
assessment, 4) Exclusivity, and 5) Sexual stages.
This model is widely used to explain how offenders manipulate children online (Black et al.,
2015). Each stage involves specific strategies to engage and exploit the child. The initial stage
of friendship building often involves casual conversation and questions about the child's life,
establishing a foundation of trust crucial for progressing to offline meetings. In 2016,
Barnardo’s conducted a survey of its sexual exploitation services in the UK, which included
702 children who had received support in the previous six months. Of these, 297 disclosed
being groomed online, with two-thirds having met the perpetrator and being sexually exploited.
The majority of these children were female, aged 14-17, and over half reported involvement
with multiple perpetrators. While this survey is not representative of all online abuse cases, it
highlights the pervasive nature of online grooming and exploitation, underscoring the need to
reconsider the contexts in which abuse occurs due to the omnipresence of technology.
Martin & Alaggia (2013) argue that 'cyberspace' introduces a new dimension to child protection
and should be a key consideration for practitioners and law enforcement addressing digital
media's role in child sexual abuse. The proliferation of the internet since the early 1990s has
transformed the dynamics of grooming. Traditionally, offenders would groom children in
familial, workplace, or care settings. Today, the internet and social media have made it easier
for offenders to access and target youths. Social media platforms allow offenders to select
potential victims based on their online profiles, facilitating more targeted and effective
grooming (Quayle et al., 2014). The relationship-building stage involves making the child feel
unique and offering gifts, a critical aspect of online grooming. This stage helps establish
exclusivity, where the predator creates a sense of special connection with the child, often
leading to offline meetings. Rapport building is facilitated through instant communication tools
and chat rooms, enhancing the predator's ability to form a bond with the child. A distinguishing
feature of grooming is the introduction of sexualized content in communications with the child.
This process normalizes inappropriate behaviour and prepares the child for physical contact.
Sexualization may manifest as flirting, discussions about sexual activity, or enacting sexual
fantasies. The rate at which sexualization occurs varies depending on the offender’s
motivations and strategies. Although the speed of grooming makes it challenging to quantify
the number of children solicited online at any given time, repeated and sustained contact is
crucial for the success of the grooming process.
According to O'Connell (2003), risk assessment is an important part of the grooming process.
This assessment includes considering risks associated with both the chance of detection online
and the potential dangers of meeting the youngster offline. According to Williams et al. (2013),
risk assessment is a continuous process throughout the grooming cycle rather than a single
occurrence. Predators frequently use covert tactics to ensure their safety, such as searching for
information about the child's computer setup or the presence of carers. Although O’Connell
(2003) initially described the stages of grooming as largely sequential, subsequent research has
demonstrated that these stages can be non-sequential, depending on the offender's
characteristics.
Açar's (2016) research explores sextortion as a form of online predation of minors. According
to the author, sextortion is a form of sexual exploitation where perpetrators coerce victims into
providing explicit images or sexual favors by threatening to release existing private material.
Wittes et al. (2016) similarly stated that sextortion is a subset of broader cyber-enabled
crimes, such as child exploitation, that typically involves the threat of releasing sensitive or
compromising material unless the victim complies with demands, which can range from
sending more explicit content to engaging in offline sexual encounters. The International
Centre for Missing and Exploited Children (ICMEC) defines sextortion as a form of sexual
exploitation that relies on coercion rather than force (Baker, 2022). This coercion is typically
achieved through blackmail, where offenders threaten to distribute intimate images or videos
unless additional sexual material is provided, or sexual favours are granted.
Researchers have noted this as a growing threat, particularly to children and adolescents in the
digital age. Studies show that perpetrators often use social media platforms, gaming sites, and
instant messaging services to initiate contact with children (Faraz et al., 2022). In some cases,
the initial images may have been shared voluntarily by the child during online interactions, or
they may have been obtained through hacking, impersonation, or other forms of online
deception. Research into sextortion cases highlights the increasing prevalence of this crime
against children. According to a report from the NCMEC, there was a significant rise in
sextortion cases reported in recent years, with most victims being minors, particularly teenage
girls (Henry & Umbach, 2024). According to the NCMEC's report, financial "sextortion" schemes
have increased since 2020, with offenders primarily targeting teenage boys via Instagram and
other social media platforms, threatening victims with compromising imagery in exchange for
cash. From 2020 to 2023, the organisation examined more than 15 million reports to the
NCMEC's hotline. The investigation discovered that sextortion instances had considerably
increased in recent years, with reports of online enticement growing by 82% between 2021 and
2022, when the NCMEC's hotline received over 80,500 reports. According to statistics
reviewed between August 2022 and August 2023, the number of incidents increased over the previous year, averaging 812 reports per week. According to Thorn CEO Julie
Cordua, financial sextortion is a serious and growing menace to children, particularly
adolescent boys (Nguyen, 2024). Unlike classic types of sextortion, these offenders use fear
and the threat of publishing intimate photographs to extort victims before they can seek help.
Girls have always been the most common targets of juvenile sextortion scams, according to the
survey. These schemes frequently involved requests for intimate imagery, sexual acts, or a romantic relationship. However, with financial sextortion, most victims were teenage
boys, with 90% falling between the ages of 14 and 17. The psychological and emotional toll
on victims of sextortion is profound, often leading to anxiety, depression, social withdrawal,
and even suicide (O’Malley, 2023). The combination of sexual exploitation, fear of exposure,
and shame can push victims into a state of mental health crisis, where they feel trapped and
powerless to seek help (Wolak & Finkelhor, 2016). As sextortion cases frequently go
unreported due to fear and embarrassment, many children endure this abuse in silence, with
long-term psychological consequences.
Researchers like Howard (2019) and Wittes et al. (2016) recognised that children are particularly
susceptible to sextortion due to various factors, including their developmental stage, naivety
about online risks, and the desire for social validation. The anonymity provided by the internet
enables offenders to misrepresent themselves and establish trust with their victims by posing
as peers or influential figures (Quayle & Taylor, 2001). Research shows that children are often
groomed over time and lured into sending explicit content before realizing the gravity of the
situation. According to Ortega-Barón et al. (2022), adolescents who are particularly sensitive
to peer pressure and societal expectations are more likely to comply with requests to send
images or videos, unaware of the potential for abuse. Various countries have sought to combat
sextortion through legislative reforms and cybercrime laws aimed at protecting children from
exploitation. For instance, the United States' Protecting Against Child Exploitation Act of 2017
was introduced to address the growing issue of online sexual coercion. International efforts,
including those led by organizations like Interpol and Europol, have also resulted in the
identification and prosecution of perpetrators involved in child sextortion. Despite these efforts,
a significant challenge remains in policing the digital space. The anonymity of the internet and
the ability for offenders to operate across borders complicate law enforcement efforts to track
and apprehend offenders. The lack of a unified global legal framework and the variations in
child protection laws across different jurisdictions further hinder the ability to effectively
combat sextortion on a large scale (Flynn, 2019). Technology companies have been urged to
develop more robust safeguards, including privacy protections, AI-driven content monitoring,
and more secure platforms to reduce children's risk exposure. Organizations such as the
National Cyber Security Alliance should partner with schools and communities to deliver age-
appropriate content on online safety (Smith, 2018). Law enforcement authorities, in partnership
with non-profit organizations and the business sector, have been trying to strengthen victim
reporting systems and provide prompt assistance via helplines and counseling programs
(Immigration & Customs Enforcement, 2023). Despite these attempts, many academics, like
Henry et al. (2018), believe that better cross-sector coordination, more resources for victim
care, and a holistic approach to combating sextortion are still required (Henry & Umbach,
2024).
Cyberbullying, another prevalent form of online predation, has been defined by Sezer & Tunçer
(2021) as intentional and repeated harm inflicted through digital platforms, and it has emerged as a significant issue affecting children and adolescents in the digital age. Cyberbullying
involves the use of digital technologies to harass, threaten, or humiliate others. According to
Smith et al. (2008), it encompasses spreading rumors, sending threatening messages, sharing
embarrassing images or videos, and exclusion from online groups. What distinguishes
cyberbullying from traditional bullying is its pervasive nature: it can occur 24/7, reach a large
audience quickly, and often remains permanently accessible online. Children and adolescents
are particularly susceptible to cyberbullying due to their frequent use of digital platforms and
their limited understanding of the risks associated with online interactions. The anonymity
afforded by the internet can embolden perpetrators, as they are less likely to face immediate
consequences for their actions. Moreover, the lack of physical proximity between the bully and
the victim can make it difficult for parents, teachers, and guardians to detect cyberbullying in
real-time.
Numerous studies have sought to quantify the prevalence of cyberbullying among children and
adolescents. In a large-scale study by the Cyberbullying Research Centre, about 30% of the
surveyed teens over the last 12 years reported having experienced cyberbullying at least once
in their lifetime (Patchin & Hinduja, 2024). Other research indicates that girls are more likely
to be victims of cyberbullying than boys, particularly in relation to appearance-based insults and social
exclusion. However, boys are more likely to engage in direct forms of cyberbullying, such as
physical threats and harassment in online gaming environments. Cyberbullying can have
profound psychological, emotional, and social consequences for children. Research shows that
victims of cyberbullying often experience higher levels of anxiety, depression, low self-esteem,
and suicidal ideation compared to their non-bullied peers. The continuous nature of
cyberbullying—where victims may be subjected to harassment even within the perceived
safety of their homes—intensifies feelings of hopelessness and isolation. Moreover, the public
nature of cyberbullying, where hurtful content can be shared widely and rapidly, exacerbates
the emotional distress felt by victims. In some cases, the fear of social rejection or retaliation
prevents victims from reporting incidents of cyberbullying, leading to prolonged suffering.
Research has shown that these emotional scars can persist long after the bullying stops,
affecting children’s academic performance, social relationships, and mental health into
adulthood (Kowalski et al., 2014).
According to Alleva (2019), social media platforms are the primary arenas for cyberbullying
among children, with sites such as Instagram, TikTok, Snapchat, and Facebook frequently
implicated in research. As these platforms facilitate sharing and interaction among peers, they
are also venues for harmful behaviors like exclusion, public shaming, and harassment. Studies
have found that cyberbullying often takes place on platforms where children can create
anonymous or pseudonymous profiles, allowing perpetrators to engage in bullying without fear
of identification (Cassidy et al., 2013). Platforms that feature "likes," comments, or direct
messages also make it easy for bullies to target victims and for harmful content to gain viral
momentum. The design of these platforms, which encourages constant engagement and the
curation of one’s digital image, has been criticized for creating an environment where bullying
can thrive (George & Odgers, 2015). Addressing cyberbullying through legal and policy
frameworks has been a complex challenge. While many countries have enacted laws against
traditional bullying, the enforcement of cyberbullying laws has lagged due to difficulties in
defining jurisdiction, issues with anonymity, and the cross-border nature of online platforms.
Some regions, such as the European Union and certain U.S. states, have enacted specific laws
that criminalize online harassment, but enforcement is still uneven. Schools are progressively
implementing anti-cyberbullying rules, recognizing that the effects of online bullying extend
into the classroom. According to research, schools that promote digital literacy and anti-bullying education programs see fewer cases of cyberbullying among students (Marzano, 2021).
However, successfully adopting these policies necessitates collaboration among schools,
parents, and law enforcement, as well as a thorough understanding of the cyberbullying
dynamics in each community.
Child trafficking is another heinous crime that exploits vulnerable children for various forms
of labor, sexual exploitation, and abuse. Researchers have identified online child trafficking to
be the use of digital platforms, social media, and the dark web to facilitate illegal activities
related to the exploitation of children. According to Sarkar (2015), online child trafficking
refers to the use of the internet and digital technologies by traffickers to recruit, exploit, and
traffic children. According to the United Nations Office on Drugs and Crime (UNODC),
trafficking in children through online platforms can occur for various reasons, including sexual
exploitation, forced labor, and illegal adoption. The International Labour Organization (ILO)
estimates that more than 3 million children are being exploited in sex and labor trafficking,
with an increasing number being lured and exploited through online means (ILO, 2022; Uitts,
2022). The internet allows traffickers to not only groom victims but also distribute child sexual
abuse materials (CSAM), further commodifying the exploitation of children. Research shows
that online child trafficking often begins with grooming, where traffickers establish
relationships with children through online platforms (Winters et al., 2022). Social media, chat
rooms, and online gaming platforms are common spaces where traffickers can pose as peers or
trusted adults to build rapport with children. Once trust is established, traffickers can
manipulate, coerce, or threaten children into exploitative situations. Traffickers use
sophisticated methods to avoid detection, often employing encryption technologies, dark web
forums, and anonymous payment systems like cryptocurrencies to conduct illegal activities
(Adel & Norouzifard, 2024). Online classified advertisements, fake job offers, and modeling
scams are commonly used tactics to lure children into trafficking networks. A study by ECPAT
International (2021) highlighted how traffickers leverage digital platforms to recruit children
for both local and cross-border trafficking. Children are often deceived by promises of work, education, or romantic relationships; once recruited, they may be exploited for
pornography, prostitution, or forced labor. The anonymity provided by the internet enables
traffickers to operate across international borders, making the problem difficult to regulate or
control.
Measuring the exact prevalence of online child trafficking has proven difficult due to
the hidden nature of the crime and the evolving tactics used by traffickers. However, studies
indicate that the problem is widespread and growing. A report by Europol found that reports of
child exploitation online surged during the COVID-19 pandemic, with an increase in both the
production and dissemination of CSAM (Europol, 2020). The report found that online
platforms are increasingly being used to traffic children, particularly for sexual exploitation. In
addition, data from Interpol and Europol show an increasing number of online cases related to
child trafficking and exploitation across multiple regions, including Europe, North America,
and Southeast Asia (Europol, 2020; INTERPOL, 2023). These cases often involve international
trafficking rings that use online platforms to distribute explicit content and coordinate the
movement of trafficked children.
The psychological and physical impact of online child trafficking on victims has been found to
be profound and long-lasting. Children trafficked for sexual exploitation endure repeated abuse, coercion, and violence; as a result, victims experience
significant mental health issues, including depression, anxiety, PTSD, and suicidal ideation
(Hopper & Hidalgo, 2006). Trafficked children also suffer from social isolation, trust issues,
and difficulties forming healthy relationships in the future. Moreover, the digital nature of
online trafficking means that explicit content involving trafficked children can be shared and
circulated indefinitely, causing ongoing harm to the victims long after their immediate
exploitation ends (Raines, 2022). This "digital permanence" exacerbates the trauma and
humiliation experienced by victims, who may fear re-victimization every time images or videos
resurface online.
There has been growing recognition of the need for a multi-faceted approach to combat online
child trafficking, involving prevention, intervention, and collaboration between various
stakeholders. For instance, Todres (2010) advocates that prevention efforts should focus on
educating children, parents, and communities about the risks of online exploitation and
grooming. Schools and organizations have developed digital literacy programs to help children
recognize and avoid online threats. Similarly, international organizations like ECPAT
International and UNICEF have advocated for stronger laws and policies to protect children
from online trafficking and exploitation (Rebhi, 2023). Writer Williams (2013), also noted and
commended that many countries have passed legislation aimed at criminalizing online child
trafficking and imposing stricter regulations on internet service providers (ISPs) and social
media platforms. However, as the author noted, the enforcement of these laws remains
inconsistent, and traffickers often exploit legal loopholes and jurisdictional issues to evade
prosecution. Another critical aspect of intervention is providing support and rehabilitation for
child victims. Specialized trauma-informed care, including counselling and mental health
services, is essential for helping victims recover from the emotional and psychological damage
caused by trafficking. Long-term support, such as access to education, healthcare, and social services, is required to reintegrate trafficked children into society. Furthermore, Williams
(2013) and Gezinski & Gonzalez-Pons (2024) argued that future studies on online child
trafficking should focus on understanding traffickers' developing techniques, particularly
considering technical improvements. Research into the intersections of human trafficking,
technology, and international law is critical for building more effective policies and
interventions. Furthermore, more data on the frequency and trends of online child trafficking
are needed, particularly in areas where internet connection is quickly rising. Comparative
research between nations with diverse legal regimes could provide insights into the most
effective measures for preventing and prosecuting online trafficking crimes (Ambagtsheer,
2021).
The various literature reviewed reveals that online predators utilize sophisticated psychological
manipulation techniques, including flattery, gift-giving, and threats, to isolate their victims
from protective social networks and coerce them into exploitative situations. Wortley (2013)
critiques the notion that the Internet merely serves as a platform for offending, proposing
instead that it fundamentally engenders these crimes. He argues that relying solely on
traditional tertiary prevention strategies—such as arrest and rehabilitation—fails to address the
environmental and situational factors that underpin these offenses. Wortley (2013) also
highlights that while most individuals are not inherently attracted to children, situational factors
can induce exploitative behavior. To effectively combat abuse and exploitation of children,
Wortley (2013) advocates for a paradigm shift towards addressing environmental cues that
facilitate offending, rather than solely focusing on offender rehabilitation. Public health
models, which emphasize altering environmental conditions to prevent crime, offer a more
comprehensive approach. This approach acknowledges the role of prosecution and treatment
but prioritizes the removal of hazards to reduce risk.
Generally, the review of various literature reveals a predominance of quantitative and
qualitative research on online child sexual exploitation (e.g., Karayianni et al., 2017). The
studies also include recommendations for responding to TFVA, encompassing legal,
technological, and educational approaches and support initiatives for victims. Various studies
focus on specific forms of TFVA, such as image-based sexual abuse (e.g., Citron & Franks,
2014), online hate speech (e.g., Bailey, 2010; Citron, 2014), online harassment and trolling
(e.g., Bailey, 2017; Pavan, 2017), etc. Each study contributes to a broader understanding of
how to address and mitigate the impacts of TFVA and online predation of minors.
CURRENT TECHNOLOGICAL SOLUTIONS
Researchers have identified various technological solutions aimed at mitigating the risks posed
by online predators, ranging from parental control software to advanced AI-driven systems
(Mwijage & Ghosh, 2024). According to their developers and to researchers, these tools are designed to protect children from harmful content and predatory behavior, but their
effectiveness remains a subject of ongoing debate.
PARENTAL CONTROL SOFTWARE
As children increasingly rely on the internet for education, social interaction, and
entertainment, the risks associated with online exposure, such as cyberbullying, exposure to
inappropriate content, and online predation, have also risen. Parental control software has
emerged as a technological tool designed to safeguard children from these threats by enabling
parents to monitor, filter, and limit their children's online activities. Tools such as Net Nanny,
Qustodio, and Norton Family offer features, including content filtering, screen time
management, and location tracking. A large body of evidence indicates that these tools are
primarily intended to block access to hazardous or inappropriate content, limit screen time, and
track online interactions in order to detect potentially risky behavior. Livingstone et al. (2017)
contend that these methods can effectively reduce children's exposure to hazardous online
information. According to a 2019 study by Green et al., many parents consider these
technologies to be vital in today's highly digitalized environment, where children face a wide
range of online risks. Parental control software is frequently viewed as a prophylactic strategy
that allows parents to maintain control over their children's digital behaviors and intervene
when necessary. However, while such software can block explicit content and limit harmful
exposure, this research has found no evidence that it is foolproof, and its actual effectiveness is a
subject of ongoing debate. Several studies suggest that parental control software can help
reduce the risk of children encountering harmful online content. For example, Dombrowski et
al. (2007) concluded that children whose parents used filtering and monitoring software were
less likely to access sexually explicit content or be exposed to online grooming. Moreover,
these tools can provide peace of mind to parents who struggle to keep up with the evolving
nature of online platforms and the increasing amount of time children spend online (Helsper et
al., 2024).
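
To make the mechanics of such tools concrete, the following minimal sketch combines a domain blocklist with a daily screen-time budget. It is an illustrative assumption about how these features might be composed, not a description of how Net Nanny, Qustodio, or Norton Family are actually implemented; the domain names, time limit, and function name are hypothetical.

```python
# Illustrative sketch only: a minimal rule-based parental control check.
# It is NOT drawn from Net Nanny, Qustodio, or Norton Family; the blocked
# domains, time limit, and function name are hypothetical placeholders.
from datetime import timedelta
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-adult-site.com", "example-gambling-site.net"}  # hypothetical
DAILY_LIMIT = timedelta(hours=2)  # hypothetical screen-time budget


def is_request_allowed(url: str, time_used_today: timedelta) -> tuple:
    """Return (allowed, reason) for a single web request."""
    if time_used_today >= DAILY_LIMIT:
        return False, "daily screen-time limit reached"
    domain = urlparse(url).netloc.lower()
    if domain in BLOCKED_DOMAINS:
        return False, f"domain '{domain}' is on the blocklist"
    return True, "allowed"


if __name__ == "__main__":
    print(is_request_allowed("https://example-adult-site.com/page", timedelta(minutes=30)))
    print(is_request_allowed("https://en.wikipedia.org/wiki/Internet_safety", timedelta(minutes=30)))
```

In practice, as the following paragraphs note, determined children can bypass such rules with VPNs or alternative devices, which is why the literature treats these tools as one layer among several rather than a complete solution.
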
However, the effectiveness of parental control software in safeguarding children is not
universally supported. Research has shown that children often find ways to circumvent these
controls, especially older children and teenagers who are more digitally savvy (Sun et al., 2021). Children may use VPNs, proxy servers, or alternative devices to bypass restrictions, thus rendering the software ineffective. Yu et al. (2024) argue that while parental control software
can be helpful for younger children, it is less successful in addressing the online behavior of
adolescents, who are more likely to find loopholes or view restrictions as a challenge. Another
significant criticism of parental control software, stated by researchers, is its potential to
infringe on children's privacy and autonomy (Erickson et al., 2016). Surveillance features, such
as monitoring browsing history, recording keystrokes, and tracking location, have sparked
ethical concerns about the balance between protection and invasion of privacy (Livingstone et
al., 2019). Critics argue that constant surveillance can undermine trust between parents and
children, potentially leading to secrecy, rebellion, or strained family relationships. Moreover,
the literature suggests that overly restrictive use of parental control tools can stifle children's
ability to explore and learn independently in the digital world. A study by Livingstone et al.,
(2019) found that while filtering and monitoring can protect children from harmful content, it
can also limit their opportunities to develop critical digital skills and literacy. Children who are
overly shielded by parental controls may be less prepared to navigate the online world
independently and may lack the resilience needed to deal with online risks. The tension
between protecting children and allowing them the freedom to explore the internet has been a
central theme in much of the research on parental control software. Scholars such as
Livingstone & Blum-Ross (2020) argue that when used excessively or without dialogue,
parental control tools can undermine children's agency and sense of responsibility. Instead, they
suggest a more balanced approach that incorporates both technological safeguards and open
communication between parents and children.
Studies have also called into question the reliance on parental control software as a stand-alone
solution for online safety. For instance, Ali et al. (2020) suggest that the effectiveness of these tools can be significantly enhanced when combined with active parental mediation and digital literacy education. This criticism also underscores that teaching
children how to navigate the internet safely, recognize online risks, and practice responsible
digital behavior is crucial in preparing them for independent internet use. Parental control
software can be helpful in providing immediate protection, but it does little to promote long-
term digital literacy or critical thinking about online content. As Finkelhor et al. (2021)
emphasize, digital parenting should go beyond setting up software to include regular
conversations about online experiences, risks, and responsibilities. Parents who engage in
active mediation are more likely to raise digitally resilient children capable of handling online
challenges, such as cyberbullying or exposure to harmful content. Hence, much literature
argues that digital literacy programs within schools and communities should complement
parental efforts to safeguard children. Wang et al. (2021) corroborated this in their study, noting that children who participated in digital safety education programs were better
equipped to identify online risks and employ self-regulation strategies, regardless of whether
parental control software was in place. Therefore, education's role at home and in formal
settings is crucial in fostering safer online environments for children.
AI-DRIVEN CONTENT FILTERING
AI-driven content filtering is a technology that utilizes AI to identify, monitor, and block
harmful content in real time (Marsoof et al., 2023). This tool is used across platforms such as
social media, search engines, and educational websites to prevent children from accessing
inappropriate materials, including violence, explicit sexual content, and other forms of online
harm. AI-driven content filtering uses machine learning algorithms, natural language
processing (NLP), and computer vision technologies to analyze and classify content based on
its risk to children (Muthazhagu et al., 2024). Unlike traditional rule-based filtering systems,
AI filtering models adapt and learn from vast amounts of data, improving their ability to detect
harmful content in various forms, including text, images, videos, and live streams. This
dynamic adaptability allows AI tools to keep pace with the constant evolution of online threats.
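
As a simplified illustration of the classification step described above, the sketch below trains a small text classifier and flags messages whose predicted probability of harm exceeds a threshold. The training examples, labels, and threshold are toy placeholders assumed for illustration; production filters are trained on far larger curated datasets and combine text, image, and video signals.

```python
# Toy sketch of the text-classification step behind AI-driven content filtering.
# The labelled examples and threshold are placeholders; real systems train on
# large curated datasets and also analyse images, video, and live streams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = harmful, 0 = benign.
texts = [
    "send me your private photos or else",
    "you are worthless and everyone hates you",
    "great goal in the match last night",
    "here is the homework for tomorrow",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)


def should_block(message: str, threshold: float = 0.5) -> bool:
    """Flag a message when its predicted probability of being harmful exceeds the threshold."""
    harm_probability = classifier.predict_proba([message])[0][1]
    return harm_probability >= threshold


# With such a tiny toy dataset the scores are only illustrative.
print(should_block("send me photos or else"))
print(should_block("see you at football practice"))
```
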
The literature on AI-driven content filtering shows promising results regarding its ability to
shield children from a wide range of online harms. Jordan (2024) argues that AI systems are
particularly effective in large-scale platforms like YouTube, Facebook, and TikTok, where
human moderation alone would be inadequate due to the sheer volume of content generated.
AI tools can process vast quantities of user-generated material in real time, identifying potential
threats before they reach children. This level of automation allows for faster and more
comprehensive coverage than manual monitoring. A study by Singh and Nambiar (2024)
highlights that AI-driven filtering systems can significantly reduce children's exposure to
explicit content, especially on platforms frequented by younger users. For example, Google's
Safe Search (Okeh, 2023) and YouTube's Restricted Mode (Moxley, 2023) rely heavily on AI
to block inappropriate search results or videos, reducing the likelihood of children accidentally
encountering harmful materials.
Despite these successes, the reviewed literature also points to several limitations of AI-driven
filtering in practice. One issue is over-blocking, where legitimate content is mistakenly flagged
and filtered out. For instance, Marsoof et al. (2023) pointed out that educational content
related to sexual health or social justice movements may be censored, as AI algorithms
sometimes struggle to interpret nuanced contexts. This can limit children's access to valuable
information, particularly in school settings or research environments. In addition, Udupa et al. (2023) noted the inability of AI systems to account for all cultural and linguistic nuances. AI
filters are often trained on datasets that reflect dominant cultural norms and languages, which
can lead to bias or inaccuracies in detecting harmful content in non-Western contexts. The
reviewed literature increasingly recognizes that AI-driven content filtering, while effective,
cannot operate in isolation. Many scholars advocate for a hybrid approach that combines AI
tools with human moderators to ensure better accuracy and context sensitivity. AI systems are
highly efficient at identifying large volumes of problematic content quickly, but they often
require human input to assess edge cases or content that requires nuanced judgment, such as
satire, irony, or artistic expression. Gorwa et al. (2020) argue that the integration of human
oversight is essential in mitigating issues like over-blocking and false positives. Human
moderators can review flagged content and make informed decisions about whether it
genuinely poses a threat to children's safety. This collaborative approach leverages the strengths
of AI—speed, scale, and pattern recognition—and human moderators' capacity for
understanding complex social and cultural contexts. Nevertheless, commentators such as
Manne et al. (2022) and Gillespie (2018) note that the reliance on human moderators presents its own challenges. Given the overwhelming amount of content generated daily, human moderation
can be costly and resource-intensive, making it difficult for smaller platforms or schools to
implement this approach effectively. Furthermore, human moderators are susceptible to
psychological harm from being exposed to disturbing content, a problem exacerbated by the
sheer volume of material that requires review.
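
The hybrid workflow described above can be pictured as a simple triage rule: content the model scores as clearly harmful is removed automatically, ambiguous content is escalated to a human moderator, and low-risk content is published. The sketch below illustrates this routing logic; the thresholds and data structure are assumptions made for illustration, not any platform's actual policy.

```python
# Sketch of the hybrid moderation workflow discussed above: model confidence
# routes content to automatic removal, a human review queue, or publication.
# The thresholds and field names are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical: near-certain violations
HUMAN_REVIEW_THRESHOLD = 0.60  # hypothetical: ambiguous cases need human judgment


@dataclass
class ModerationQueue:
    pending_human_review: List[str] = field(default_factory=list)

    def triage(self, content_id: str, harm_score: float) -> str:
        """Route one piece of content based on the classifier's harm score."""
        if harm_score >= AUTO_REMOVE_THRESHOLD:
            return "removed automatically"
        if harm_score >= HUMAN_REVIEW_THRESHOLD:
            self.pending_human_review.append(content_id)
            return "escalated to human moderator"
        return "published"


queue = ModerationQueue()
print(queue.triage("video-123", 0.97))  # clear violation: removed without review
print(queue.triage("video-456", 0.72))  # ambiguous (satire? education?): human review
print(queue.triage("video-789", 0.10))  # low risk: published
```
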
EDUCATION-BASED SOLUTIONS
While technological solutions like monitoring systems and AI-driven content filtering provide
some level of protection, education-based solutions have gained attention as a critical tool for
safeguarding children online. Education-based solutions focus on equipping children with the
knowledge, skills, and behaviors needed to protect themselves from online risks. Various
literature suggests that education-based solutions are vital to online safety strategies. According
to authors such as Weru et al. (2017), they foster critical thinking, self-regulation, and the
capacity for independent decision-making—skills that children need to navigate the
complexities of the digital world. Staksrud & Livingstone (2009) state that children who
participated in digital literacy programs were more likely to avoid risky behaviours, such as
sharing personal information or engaging in conversations with strangers online. Furthermore,
these children were better equipped to recognize online threats and respond appropriately, such
as reporting suspicious behavior or seeking help from a trusted adult. Hence, unlike passive forms of protection, such as filtering or blocking content, these approaches emphasize
proactive engagement. Researchers state that digital literacy is a cornerstone of education-
based solutions. According to Livingstone et al. (2017), digital literacy goes beyond teaching
children how to use digital devices; it includes understanding online privacy, identifying
credible sources, and recognizing inappropriate content or interactions. Key components
include digital literacy programs, online safety curricula, and awareness-raising initiatives that
teach children how to recognize threats, avoid risky behaviors, and seek help when needed.
While education-based solutions focus on behavioural change, technology has been recognized
to play a critical role in delivering and enhancing these programs. Interactive platforms, e-
learning modules, and gamified learning experiences can engage children in a way that
traditional education methods might not. Tools like Be Internet Awesome by Google (Seale &
Schoenberger, 2018) and Common Sense Media (Wiggers, 2024) use digital games and
interactive modules to teach children how to manage online risks effectively. These tools
provide real-life simulations where children can practice safe behaviors in a controlled
environment, helping them develop the necessary skills to apply in actual online interactions.
Polak et al. (2022) indicated that education-based solutions delivered through technology are
more likely to capture children's attention, leading to better retention of key safety concepts.
Children are more engaged when lessons are interactive, relevant, and integrated with the
technology they already use daily (Blackwell, 2013). Additionally, the use of AI-driven
educational platforms can tailor content to different age groups and learning styles, making
these solutions more accessible and effective for diverse audiences.
Despite their potential, education-based solutions face several challenges. One significant issue
identified by researchers is accessibility and equity. Not all children have access to high-quality
digital literacy programs, particularly in low-income or underserved communities. According
to Dodel & Mesch (2018), the digital divide exacerbates this issue, as children without regular
internet access or technological resources are less likely to benefit from online safety education.
As a result, these children may be more vulnerable to online risks compared to their more
digitally literate peers. Additionally, the rapid pace of technological change makes it difficult
for education-based solutions to stay current. As new platforms, apps, and online behaviours
emerge, digital literacy programs must continuously evolve to address these developments.
Livingstone & Stoilova (2019) emphasize that education-based solutions need to be flexible
and regularly updated to remain relevant to the latest online trends and risks. However,
developing and distributing up-to-date educational content can be resource-intensive, and
many schools or educational programs may lack the necessary funding or expertise to keep
pace with these changes. Another challenge is the variability in program quality and content.
Falloon (2020) noted that not all digital literacy programs are created equal, and there is no
universal standard for what constitutes adequate online safety education. Some programs may
focus primarily on basic internet usage, neglecting more nuanced topics such as online privacy, ethical online behaviour, and recognizing sophisticated forms of manipulation, such as
online grooming. This variability in content and quality can lead to gaps in children's
understanding of online risks.
The role of parents in supporting education-based solutions is discussed as being crucial.
According to commentators, parents can reinforce the lessons learned through digital literacy
programs by discussing online safety with their children, monitoring their internet usage, and
setting clear guidelines for acceptable online behaviour. However, research by Altuna et al.,
(2020) shows that many parents feel ill-equipped to guide their children in navigating the
digital world, either due to a lack of technical knowledge or because they underestimate the
risks their children face online. This gap in parental involvement underscores the importance
of providing digital literacy education for both children and parents. Parents should be educated
on how to recognize online risks, how to talk to their children about internet safety, and how to
use available tools, such as parental controls, to create a safer digital environment at home
(Livingstone & Blum-Ross, 2020). Programs that incorporate parental education alongside
children's digital literacy training tend to have better outcomes in terms of overall online safety.
GAPS IN THE LITERATURE
Despite the growing body of research on safeguarding children online, this research notes that
there is a notable scarcity of studies specifically focused on technological solutions designed
to protect minors. Existing research often emphasizes legal aspects, policy frameworks, or
behavioural interventions, while technological solutions receive comparatively less attention.
This lack of focus is significant because understanding the efficacy of these technologies is
crucial for ensuring they are genuinely effective in safeguarding children from online threats.
It is noticed, however, that the few studies that do examine technological solutions frequently
fall short in several areas. For instance, there is a gap in understanding the long-term
effectiveness of these technologies, as many studies focus on their immediate impacts rather
than their sustainability over time. Additionally, as online predators continuously adapt their
tactics, the existing research does not sufficiently address how well these technologies evolve
to counter new and emerging threats.
Furthermore, there is limited exploration of the impact of advanced technologies such as
blockchain and biometric authentication on child safety. While these technologies hold promise
for enhancing security, their specific applications and effectiveness in the context of child
protection remain underexplored. Blockchain, for instance, could offer novel ways to verify
identities and ensure data integrity, but its potential benefits and limitations in this field have
not been thoroughly investigated (Alotaibi, 2019). Similarly, biometric authentication could
provide more secure access controls, yet its implications for child safety are still unclear.
Moreover, there is a lack of comprehensive studies on the global applicability of technological
solutions. Research often concentrates on developed regions with well-established
technological infrastructures and regulatory frameworks. However, studies that address how
these solutions can be adapted or scaled for use in developing regions, where access to
technology and the enforcement of regulatory measures may be inconsistent, are needed.
Understanding these differences is crucial for creating effective, universally applicable
solutions that can protect children regardless of their geographic location.
Addressing these gaps is essential for developing a more nuanced understanding of how
technological solutions can enhance child safety online. Future research should focus on
evaluating the long-term effectiveness of existing technologies, exploring emerging
advancements, and considering the global applicability of protective measures to ensure a
comprehensive approach to safeguarding children in the digital age.

CURRENT TRENDS IN THE TECHNOLOGICAL SOLUTIONS TO ONLINE PREDATION OF MINORS
ADVANCEMENTS IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
AI and machine learning (ML) have become pivotal in developing tools to protect children in
the increasingly digitalized world. These technologies are used to detect, predict, and mitigate
online risks, such as exposure to inappropriate content, cyberbullying, grooming, and even
online child trafficking. AI and ML technologies function by analyzing large amounts of data
to detect patterns, make decisions, and predict outcomes (Jordan & Mitchell, 2015). In
safeguarding children online, these technologies are applied to various tasks, such as content
filtering, detecting harmful behaviour, and providing real-time alerts. AI-driven tools can
identify risks more efficiently and at a scale unattainable by human moderators, making them
essential in environments with high user volumes, such as social media platforms, gaming
environments, and educational websites (Marsoof et.al, 2023).
Content Filtering is one of the primary applications of AI for protecting children. These systems
can automatically detect and block harmful content, such as violent images, sexual material,
and hate speech, before children are exposed. Machine learning algorithms can recognize new
threats by learning from vast datasets, continuously improving their accuracy in identifying
harmful content. A study by Gorwa et al., (2020) highlights the importance of AI-driven content
moderation on platforms like YouTube, where millions of videos are uploaded daily.
Algorithms can swiftly remove or flag content that violates child protection policies, ensuring
a safer digital environment. AI and ML are also used for behavioural analysis. These
technologies can detect patterns of suspicious or harmful interactions, such as grooming or
cyberbullying (Perasso, 2020; Singh & Nambiar, 2024). By analysing communication patterns
and behaviour, AI systems can flag unusual activity, providing real-time alerts to parents,
moderators, or law enforcement agencies. Research by Ybarra et al., (2011) showed that
machine-learning models could successfully identify conversations where an adult was
attempting to groom a child online. These systems offer preventive measures by identifying
risks before they escalate.
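To illustrate the general approach, the following minimal sketch (written in Python with the open-source scikit-learn library) shows how a supervised text classifier could assign a risk score to chat messages and escalate high-scoring ones for review. The training messages, labels, and threshold are hypothetical and purely illustrative; they do not represent the datasets or models used in the studies cited above.

```python
# Minimal sketch: flagging potentially predatory messages with a text classifier.
# The training data below is hypothetical; real systems learn from large, labelled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_messages = [
    "what school do you go to? don't tell your parents we talk",
    "can you send me a photo? it will be our secret",
    "good luck on your exam tomorrow!",
    "did you finish the homework for maths class?",
]
train_labels = [1, 1, 0, 0]  # 1 = potentially predatory, 0 = benign

# TF-IDF features plus logistic regression yield a probability for each message.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_messages, train_labels)

def flag_message(text: str, threshold: float = 0.7) -> bool:
    """Return True if the message should be escalated for human review."""
    risk = model.predict_proba([text])[0][1]
    return risk >= threshold

print(flag_message("keep this between us, ok?"))
```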
Another significant development in online safety is the advent of real-time monitoring and alert
systems designed for parents, guardians, and authorities (Faraz, et.al., 2022). Unlike traditional
parental controls, which often rely on static filters and periodic checks, these advanced tools
provide immediate notifications about potentially harmful interactions. This real-time
capability is crucial in today's fast-paced digital world, where threats can emerge and escalate
rapidly. Applications such as Bark, Qustodio, and Net Nanny utilize sophisticated AI
algorithms to scrutinize digital communications—including text messages, social media posts,
and emails—for signs of predatory behaviour, cyberbullying, or suicidal ideation. These
systems analyse language patterns, context, and even emojis to detect potential risks. For
instance, if a child receives a message that contains threatening language or indications of self-
harm, the system can instantly alert the parent or guardian. The immediacy of these alerts
allows for swift intervention, which is particularly crucial in dynamic online environments
where harmful interactions can quickly spiral out of control. Parents and guardians can take
prompt action to protect their children, whether by initiating a conversation, blocking a contact,
or seeking professional help. This proactive approach contrasts with traditional methods, where
harmful content might go unnoticed until it's too late. Moreover, these real-time monitoring
systems often include comprehensive reporting features that provide insights into a child's
online activity. Parents can review detailed logs and summaries of interactions, helping them
understand their child's digital behaviour and identify any recurring issues. This holistic view
enables more informed decision-making and fosters open communication between parents and
children about online safety. In addition to protecting individual children, real-time monitoring
and alert systems can also benefit schools and communities. Educational institutions can use
these tools to safeguard students on school networks, ensuring a safer online environment for
learning. Authorities and child protection agencies can leverage these technologies to identify
and respond to broader patterns of online abuse, enhancing community-wide safety efforts.
Moreover, facial recognition and natural language processing (NLP) technologies powered by
AI help monitor children’s online presence (Pea, 2023). Facial recognition is often used to
detect whether a child is engaging with strangers through video platforms, while NLP can
analyze text interactions for signs of predatory behavior or harassment (Meurens et al., 2022).
These combined approaches ensure that potential threats are identified across various mediums,
from images to text.
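A simplified sketch of the kind of alerting pipeline described above is given below. The risk patterns, category names, and notification function are assumptions made for illustration; they are not the actual implementation of Bark, Qustodio, Net Nanny, or any other product, which rely on far richer models and curated lexicons.

```python
# Illustrative real-time alerting pipeline (not any vendor's actual code).
import re
from dataclasses import dataclass

# Hypothetical risk patterns; commercial tools combine ML models with curated lexicons.
RISK_PATTERNS = {
    "self_harm": re.compile(r"\b(hurt myself|end it all|kill myself)\b", re.I),
    "grooming": re.compile(r"\b(our secret|don't tell your (mom|dad|parents))\b", re.I),
    "threat": re.compile(r"\b(i will hurt you|you better watch out)\b", re.I),
}

@dataclass
class Alert:
    category: str
    message: str

def scan_message(message: str) -> list[Alert]:
    """Return an alert for every risk pattern matched in an incoming message."""
    return [Alert(cat, message) for cat, pat in RISK_PATTERNS.items() if pat.search(message)]

def notify_guardian(alert: Alert) -> None:
    # In a deployed system this would push a notification or email to the parent or guardian.
    print(f"[ALERT] {alert.category}: {alert.message!r}")

for alert in scan_message("remember, this is our secret"):
    notify_guardian(alert)
```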
The application of AI and ML to online child safety has proven effective in several areas.
Scalability is one of the key strengths of these technologies. AI-driven systems can monitor
and analyze vast amounts of data in real time, ensuring quick responses to potential threats and
thereby facilitating quicker content moderation (Yaseen, 2023). This is crucial for platforms
with millions of users, where manual moderation would be slow and insufficient. Gorwa et al.,
(2020) emphasize that AI-driven moderation has significantly reduced the amount of harmful
content seen by children on platforms like YouTube and Instagram. Another advantage is the
speed of intervention. AI-powered tools can analyze interactions and flag harmful content
much faster than human moderators. Real-time monitoring systems provide instant alerts when
risks are detected, allowing for immediate action. As noted by Edwards et al. (2021), the
application of AI in identifying child sexual abuse material (CSAM) has led to faster responses
from law enforcement and service providers in removing illegal content. In addition to content moderation, AI has been
shown to be adaptive. As threats evolve, machine learning algorithms can learn from new
patterns and behaviours, improving over time. For instance, Facebook’s content moderation
system uses machine learning algorithms that continuously learn from flagged content and user
reports. As new forms of harmful content emerge, such as new slang or coded language used
by predators, the AI algorithms adjust to recognize and filter these threats. In 2021, Facebook's
AI was updated to better detect and moderate hate speech and misinformation, showing its
ability to adapt to new patterns of harmful behaviour. According to one study, by the end of
2020 Facebook’s AI was able to detect 97% of hate speech content before any human flagged it, a
substantial increase from previous years. Similarly, in the online gaming industry, adaptive AI
is used to detect and prevent toxic behaviour, including harassment and cheating. Games like
Fortnite and Roblox employ AI-driven systems that analyse player behaviour and interactions
to identify new forms of abusive language or cheating techniques. As players develop new
methods for evading detection, these AI systems adjust their algorithms to address these new
tactics. For instance, Epic Games, the developer of Fortnite, uses AI to monitor and respond to
emerging patterns of harassment and disruptive behaviour in real time. One notable feature
is voice reporting, which allows players to submit audio clips as evidence when reporting in-
game abuse. This system continuously records voice chats in 5-minute segments, enabling
players to provide moderators with concrete evidence of any harmful behaviour (Connellan,
2023). This adaptability is especially crucial as online risks continue to change
with new digital trends and emerging platforms; as Meurens et al. (2022) argue, adaptive
AI systems are more likely to remain effective in combating online grooming and other
predatory behaviours.
Despite its effectiveness, AI and machine learning in safeguarding children online come with
several limitations. One significant limitation identified in the research is false positives. AI-driven
systems, while designed to detect harmful content, can sometimes overreach, flagging innocent
interactions or safe content as dangerous. This can result in unintended consequences, such as
restricting access to legitimate content or censoring important discussions, which can frustrate
users and lead to a loss of trust in the technology. For example, an AI system designed to filter
out cyberbullying might mistakenly flag supportive or positive interactions as harmful if the
language used is similar to abusive terms. This could occur in scenarios where discussions
about sensitive issues, such as mental health, are incorrectly categorized as harmful due to the
presence of certain keywords or phrases. A person seeking help or offering support could find
their content removed or restricted, potentially stifling valuable conversations and access to
critical resources.
Gorwa et al., (2020) highlight that this challenge is particularly pronounced when dealing with
nuanced topics. For instance, content moderation systems may struggle with distinguishing
between genuine expressions of distress and benign discussions on mental health, leading to
the inadvertent suppression of essential support networks. This issue underscores the
limitations of current AI technology, which may not always accurately interpret the context or
intent behind user interactions. Additionally, false positives can result in a "chilling effect,"
where users become hesitant to share their thoughts or participate in online communities for
fear of being misinterpreted and penalized by the AI system (Vese, 2022). This can create an
environment where open communication is stifled, and users might be less likely to engage in
important discussions or seek help when needed. Addressing the challenge of false positives
requires ongoing refinement of AI algorithms and incorporating more sophisticated contextual
understanding (Llansó et.al., 2020). This involves not only improving the accuracy of content
filtering but also developing mechanisms to review and appeal decisions made by automated
systems. By addressing these limitations, AI and machine learning technologies can become
more effective tools for safeguarding children online without unduly restricting access to
valuable information and support.
On the other hand, false negatives pose a more severe risk in the realm of online safety for
children. False negatives occur when AI systems fail to identify and flag harmful content or
behavior that does not align with the patterns they were trained to recognize. This gap in
detection can leave children vulnerable to exposure to threats or exploitation, despite the
presence of sophisticated monitoring tools designed to protect them. For instance, AI systems
may not always catch subtle or nuanced forms of online abuse outside their training data.
Grooming behaviors by predators can be particularly challenging to detect. Predators often use
sophisticated tactics, such as embedding manipulative or coercive language within seemingly
innocuous conversations, to avoid detection. These tactics might include using coded language,
engaging in gradual desensitization, or creating scenarios that appear benign but are intended
to build trust and manipulate the child.
Meurens et al. (2022) highlight that machine learning models often struggle with contextual
analysis, making it difficult for these systems to identify such covert grooming behaviours.
Moreover, AI systems are typically trained on historical data and patterns, which means they
may not be equipped to handle emerging threats or new forms of exploitation that have not
been previously encountered. For example, a new trend in online predation or a novel method
of coercion might not be included in the training data, leading to a failure in detection. This
limitation means that AI systems may lag behind the evolving tactics predators use, making it crucial for
continuous updates and improvements in these systems. False negatives also pose a significant
challenge in scenarios involving complex emotional manipulation. Predators often exploit the
emotional vulnerabilities of children, using tactics that are difficult to quantify and analyze.
For example, a predator might engage in long-term grooming that involves creating a false
sense of security and building an emotional bond with the child. This type of manipulation can
be subtle and gradual, and AI systems may not always recognize the gradual progression of
these behaviors as harmful.
The risk of false negatives also extends to various online platforms, including social media,
chat apps, and gaming environments. Each platform has its own set of communication styles
and interactions, which can complicate detecting harmful behaviors. For example, the way
predators interact with children in a gaming environment may differ significantly from their
behavior on social media, requiring AI systems to adapt and be trained across diverse contexts.
To address the risk of false negatives, it is essential to complement AI-driven tools with human
oversight and intervention (Karunamurthy et al., 2023). Human moderators and experts can
provide context and judgment that AI systems may lack, helping to identify and address threats
that automated systems might miss. Additionally, continuous updates to AI algorithms,
incorporating new data and emerging threat patterns, are necessary to improve detection
capabilities. Collaboration with experts in child safety and online behavior can further enhance
the effectiveness of AI systems in identifying and mitigating risks.
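The trade-off between over-blocking and missed detections can be made concrete with a small worked example. The counts below are invented solely for illustration; they are not drawn from any of the studies cited in this paper.

```python
# Hypothetical moderation outcomes over 10,000 items (illustrative numbers only).
true_positives = 180    # harmful content correctly flagged
false_positives = 320   # benign content wrongly flagged (over-blocking)
false_negatives = 45    # harmful content missed (children left exposed)
true_negatives = 9455   # benign content correctly left alone

precision = true_positives / (true_positives + false_positives)  # how trustworthy a flag is
recall = true_positives / (true_positives + false_negatives)     # how much harm is caught

print(f"precision = {precision:.2f}")  # 0.36: most flags land on benign content
print(f"recall    = {recall:.2f}")     # 0.80: one in five harmful items slips through
```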
Another significant limitation is the lack of transparency in AI decision-making processes.
Many AI-driven systems operate as "black boxes," meaning the criteria they use to flag or block
content is not always clear. This opacity can lead to several issues, particularly in terms of
accountability and fairness. As Nahmias & Perel (2021) note, users often find
themselves in situations where they do not understand why certain content was removed or
why they were flagged, leading to frustration and mistrust. For instance, flagging or removing
a user's content without a clear explanation can feel arbitrary and unjust. This lack of clarity
can be especially problematic in sensitive areas such as social media moderation, where the
stakes are high and there is a significant impact on users' freedom of expression. Without
transparency, users cannot effectively challenge or appeal decisions, which can lead to a sense
of helplessness and disenfranchisement.
Mensah (2023) argues that the opacity of AI systems can significantly undermine trust in these
tools. This is particularly true for parents who may feel uncomfortable relying on automated
systems without understanding their inner workings. Parents are often concerned about the
safety and appropriateness of content their children are exposed to, and the lack of transparency
in AI decision-making can exacerbate these concerns. When parents do not understand how
decisions are made, they may be less likely to trust and use these systems, potentially limiting
AI's benefits in terms of content moderation and safety. Moreover, the lack of transparency can
also hinder the development and improvement of AI systems. When users do not understand
how decisions are made, they cannot provide meaningful feedback, which is essential for
refining and enhancing AI algorithms. Transparency is crucial for fostering a collaborative
relationship between AI developers and users, enabling continuous improvement and ensuring
that AI systems are aligned with users' needs and expectations.
Moreover, AI and machine learning technologies raise ethical concerns about privacy and
surveillance. Monitoring children's online activities can help protect them, but it can also
infringe on their privacy rights. The use of facial recognition, data collection, and real-time
monitoring can result in excessive surveillance, where children’s digital interactions are
constantly scrutinized (Wylde et.al., 2023). Striking a balance between effective protection and
respecting children’s privacy is a critical challenge for AI-driven solutions. AI systems often
rely on the collection of vast amounts of personal data, including browsing habits,
communication records, and sometimes biometric data. While these systems aim to protect
children, they can also compromise their privacy and autonomy by subjecting them to constant
surveillance. There are also concerns about data security. AI systems that process sensitive data
about children must ensure that this information is protected from breaches or misuse; otherwise,
the protective purpose is defeated if predators hack into or otherwise access the personal data
the AI system has collected. Moreover, the growing reliance on cloud-based AI services
increases the risk of data exposure (Dhinakaran et al., 2024).
In 2019, several high-profile data breaches demonstrated the vulnerabilities of AI systems to
hacking and exploitation. For instance, in May 2019, Canva, a popular graphic design tool
website, suffered a data breach that affected 139 million users. The exposed data included
usernames, real names, email addresses, and passwords (Henriquez, 2021). According to Risk
Based Security, 2019 saw a total of 5,183 data breaches, exposing 7.9 billion records (Hodge,
2019). This marked a 33% increase in the number of breaches compared to the previous
year. The breaches affected various sectors, including medical services, retailers, and public
entities.
Another significant legal concern is the potential for AI bias. Machine learning algorithms are
trained on datasets that may contain inherent biases, which can lead to unfair and
discriminatory outcomes. For instance, Tejani et.al. (2024) noted that if the training data
predominantly represents certain demographics, the AI system might disproportionately flag
content from those groups while overlooking similar content from underrepresented
demographics. This can result in certain groups being unfairly targeted or scrutinized, while
others may not receive adequate attention or protection. Such biases can have profound
implications, especially when protecting children online. If an AI system is biased, it might fail
to identify harmful content or risks for children from specific backgrounds, thereby providing
unequal protection. For example, children from minority communities might be more exposed
to inappropriate content or online predators if the AI system is not adequately trained to
recognize threats in diverse contexts. Hafner et.al., (2024) highlight that these biases are not
just technical issues but also legal and ethical concerns. The unequal treatment resulting from
biased AI systems can lead to violations of anti-discrimination laws and principles of fairness.
Moreover, it can erode trust in AI technologies, as users may perceive these systems as unjust
or unreliable. Addressing AI bias requires a multifaceted approach (Tejani et al., 2024). This
includes ensuring that training datasets are diverse and representative of all user groups. Additionally, continuous
monitoring and auditing of AI systems are essential to identify and mitigate biases. Legal
frameworks and guidelines can also play a crucial role in enforcing standards for fairness and
accountability in AI systems.
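One simple form of the auditing recommended above is to compare flag rates across user groups and investigate large disparities. The sketch below assumes a hypothetical log of moderation decisions labelled by demographic group; real audits require far more careful statistical, ethical, and legal analysis.

```python
# Sketch of a disparate-impact style audit over a hypothetical moderation log.
from collections import defaultdict

# Each record: (demographic_group, was_flagged) -- invented data for illustration.
moderation_log = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, flagged = defaultdict(int), defaultdict(int)
for group, was_flagged in moderation_log:
    totals[group] += 1
    flagged[group] += was_flagged

flag_rates = {group: flagged[group] / totals[group] for group in totals}
print(flag_rates)  # {'group_a': 0.25, 'group_b': 0.5} -- a gap that warrants investigation
```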
Despite the limitations and ethical concerns, AI and machine learning hold immense potential
for safeguarding children online, particularly as these technologies continue to evolve.
Researchers suggest combining AI with human oversight may provide a more balanced
approach. By having AI systems flag potential risks for human moderators to review, platforms
can ensure a more nuanced and accurate response to online threats (Gorwa et al., 2020).
Additionally, collaborative AI systems—where multiple platforms share data and threat
detection models—could improve the scope and accuracy of protection measures. By sharing
insights across platforms, AI systems can detect cross-platform risks and mitigate them more
effectively. Also, developing more transparent AI systems that allow user input and feedback
could also improve trust and accountability in these tools. Platforms should aim to create
explainable AI-driven systems and allow parents, educators, and children to understand how
and why decisions are made (Wylde et.al., 2023). As digital environments evolve, AI and ML
offer sophisticated mechanisms for safeguarding children, leveraging pattern recognition,
automated moderation, and real-time monitoring.
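A minimal sketch of such a human-in-the-loop arrangement is shown below: content the model is highly confident about is actioned automatically, while borderline scores are routed to a human moderator. The threshold values and function names are illustrative assumptions rather than any platform's actual policy.

```python
# Sketch: routing AI risk scores to automatic action or human review.
# Threshold values are illustrative; platforms tune them per policy and content type.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

review_queue: list[dict] = []

def triage(item_id: str, risk_score: float) -> str:
    """Decide what happens to a piece of content given the model's risk score."""
    if risk_score >= AUTO_REMOVE_THRESHOLD:
        return "removed automatically"
    if risk_score >= HUMAN_REVIEW_THRESHOLD:
        review_queue.append({"item": item_id, "score": risk_score})
        return "queued for human moderator"
    return "no action"

print(triage("post-123", 0.72))  # borderline content goes to a person, not the algorithm
```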
INTEGRATION OF THE INTERNET OF THINGS (IOT) AND SMART DEVICES
The proliferation of the Internet of Things (IoT) and smart devices has transformed the
landscape of digital interactions, particularly in safeguarding children online. As these
interconnected technologies become increasingly integrated into homes, schools, and other
environments where children spend time, they present new opportunities and challenges for
ensuring children's online safety. IoT devices such as smart speakers, connected toys,
wearables, and home security systems offer a blend of convenience and monitoring capabilities
that could enhance protective measures. IoT devices can potentially create safer online
environments for children by providing real-time monitoring and control over digital
interaction. For example, smart home systems can alert parents to unusual activities, restrict
access to certain websites or platforms, and offer insights into their children's online behavior.
Wearable devices, such as smartwatches, can help monitor children's location and activity,
providing an additional layer of physical and digital protection. The integration of parental
control features in IoT devices, including setting time limits on device usage or restricting
access to inappropriate content, has been lauded as a practical solution for managing children's
screen time and online exposure. Connected toys and educational devices, often designed to
engage children in learning and play, can also serve as tools for monitoring. These devices
allow parents to control and monitor interactions, limiting the risk of exposure to inappropriate
content or online predators. For instance, certain smart toys may allow communication only
with approved contacts, adding an extra layer of security to children’s online experiences.
However, despite these advantages, the use of IoT and smart devices carries notable
flaws and limitations.
The security of connected devices has been a significant concern, as many of these products
have been found to lack robust encryption and are vulnerable to hacking. Singh et.al. (2024)
commented that many IoT devices either use weak encryption algorithms or, in some cases, no
encryption. This makes it easier for attackers to intercept and read the transmitted data. In 2022,
researchers found that millions of IoT devices were using outdated encryption protocols,
making them susceptible to man-in-the-middle attacks where hackers could intercept and alter
communications. Also, devices often have default usernames and passwords, which users
rarely change. These default credentials are widely known and can be easily exploited by
attackers; for instance, the Mirai botnet attack in 2016 exploited default credentials on IoT
devices to create a massive botnet that launched distributed denial-of-service (DDoS) attacks,
disrupting significant websites and services (Padua, 2023). If a smart device is compromised,
personal information about the child, including location, habits, and preferences, could be
accessed by malicious actors.
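As a concrete illustration of the default-credential problem, a household or school could audit its own devices against a list of widely known factory defaults, as sketched below. The device inventory and the credential list are hypothetical examples rather than a real vulnerability database.

```python
# Sketch: auditing a home's device inventory for unchanged factory credentials.
# Both lists are hypothetical and for illustration only.
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "12345")}

device_inventory = [
    {"name": "baby monitor", "username": "admin", "password": "admin"},
    {"name": "smart speaker", "username": "family01", "password": "S0mething-L0ng!"},
]

for device in device_inventory:
    if (device["username"], device["password"]) in KNOWN_DEFAULTS:
        print(f"WARNING: {device['name']} still uses factory credentials -- change them.")
```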
The increased adoption of smart devices in households has led to a growing body of research
highlighting their vulnerability to cyberattacks. For example, hacked baby monitors or smart
speakers could be used to eavesdrop on private conversations or manipulate devices for
malicious purposes (Buil-Gil et al., 2023). Moreover, IoT devices often collect vast amounts
of data on children, raising ethical privacy concerns. Many devices require collecting personal
information for functionality, such as location tracking or voice recognition, which can lead to
sensitive data being shared with third-party companies. Parents might not always be fully aware
of the extent to which their children’s data is collected and shared, raising questions about
consent and the adequacy of data protection laws concerning minors (Livingstone, Stoilova &
Nandagiri, 2019). For example, the Children’s Online Privacy Protection Act (COPPA) in the
United States provides guidelines for the collection of data from children under 13; however,
as identified by Lupton & Williamson (2017), the rapid growth of IoT devices has outpaced
regulatory measures in many regions. While some companies assure parents that their data
collection processes are secure, several incidents have demonstrated that vulnerabilities
remain. For example, security breaches in connected toys and devices have led to leaked data,
including personal information and recordings of children’s conversations. VTech, based in
Hong Kong, agreed to pay the United States Federal Trade Commission (FTC) $650,000 in
2018 for violations of the COPPA related to a 2015 cyber-attack and data breach that targeted
VTech's Learning Lodge Navigator online program, Kid Connect app, and Planet VTech
gaming and chat platform. These incidents highlight the tension between these devices'
perceived safety and the risks they may introduce.
In addition to concerns about privacy breaches, the ethical implications of constantly
monitoring children through smart devices warrant critical attention. Researchers such as
Westin (1966) point out that continuous surveillance can affect children's development, limiting
their sense of autonomy and privacy. Similarly, Hoge et al., (2017) noted that children who
grow up under constant digital observation may experience heightened anxiety or develop an
unhealthy relationship with technology and authority. Moreover, reliance on IoT devices to
monitor children’s online activities may reduce the emphasis on teaching children digital
literacy and safe online practices, essential skills for navigating the internet independently
(Zhu, 2024).
As IoT technology continues to evolve, the potential for these devices to safeguard children
online will likely increase. However, developers and regulators must address the existing
vulnerabilities and ethical concerns to create safe and practical tools. There is a growing need
for greater transparency from manufacturers regarding the data collected by these devices and
how it is used. Moreover, as advocated by some scholars, a more balanced approach is needed
to use IoT for child safeguarding. By taking the limitations and risks of IoT as a protective tool
into account, stakeholders can empower children to avoid, or be protected from, various online
threats.
SOCIAL MEDIA AND PLATFORM POLICIES
The rapid expansion of social media has revolutionized communication, entertainment, and
education for children and young people, yet this digital evolution has also introduced many
risks. In response, platform tools and policies developed by social media companies have
emerged as one of the technological tools to protect children online. These tools and guidelines
aim to protect young users from exposure to harmful content, cyberbullying, exploitation, and
inappropriate interactions. Social media companies such as Facebook, Instagram, TikTok, and
YouTube have introduced various safety measures, including content filters, age verification
systems, and reporting tools for abusive behavior. However, studies have found these tools and
policies to be often inconsistent, insufficiently enforced, and limited in their scope to adequately
address the unique risks children face. According to researchers, these initiatives have often
been reactive rather than proactive, prompted by public outrage, governmental pressure, or
high-profile incidents rather than a foundational commitment to child protection (Redmiles et
al., 2019). For instance, in 2017, a slew of problematic videos marketed specifically to children
was found on YouTube, oftentimes directly on the YouTube Kids app. These videos
presented popular children’s characters with adult themes, sometimes in a sexually explicit
fashion. In response, YouTube overhauled its kids’ app and put new moderation policies in
place (Binder, 2019). Similarly, in 2018, the Cambridge Analytica scandal revealed that
Facebook had allowed the data of millions of users, including minors, to be harvested without
their consent (Brown, 2020). This led to a massive public outcry and governmental
investigations. In response, Facebook implemented stricter data privacy measures and
increased transparency about data usage. However, these changes were made only after the
scandal became public, highlighting a reactive approach.
Although reactive, the measures taken by social media companies nonetheless represent
progress. Yet further criticisms exist, as many studies have criticized the
inconsistency in enforcement across platforms. According to Sas & Mühlberg (2024), these
tools can be easily bypassed by tech-savvy children, and age verification remains insufficiently
robust to prevent underage users from creating accounts. This vulnerability stems from several
factors, including the reliance on self-reported ages during the registration process. Many
platforms simply ask users to input their birth dates without implementing any rigorous checks
to verify the accuracy of this information. As a result, children who are determined to access
platforms despite age restrictions can easily lie about their age, creating accounts that they
should not legally be allowed to hold. Moreover, the technological literacy of younger users
often outpaces the measures put in place to protect them. Many children possess a sophisticated
understanding of digital tools and can navigate around barriers designed to limit their access to
certain platforms. For instance, they might employ VPNs or other anonymizing tools to
disguise their online activity, making it challenging for platforms to monitor and enforce age
restrictions effectively. This gap not only highlights the limitations of current verification
methods but also raises ethical questions about the responsibilities of social media companies
in ensuring that their platforms are safe for young users. Additionally, the implications of this
lack of robust age verification extend beyond simple compliance with regulations. When
underage users gain access to platforms, they become susceptible to various risks, including
exposure to inappropriate content, online predators, and cyberbullying (Karadimce &
Bukalevska, 2023). Studies indicate that younger users are often ill-equipped to handle such
interactions due to their developmental stage and lack of experience with the complexities of
online communication. Consequently, the failure to adequately verify user ages not only
undermines the efficacy of existing protective measures but also places a significant burden on
children, parents, and society. Considering these challenges, there is an urgent need for social
media platforms to adopt more sophisticated age verification technologies.
Reporting and support systems are other integral tools used by social media platforms to
empower users, particularly children, to act when encountering harmful or inappropriate
content. These systems typically allow users to flag content that violates a platform's terms of
service, such as cyberbullying, hate speech, or child exploitation material, which can then be
reviewed and acted upon by moderators. One of the key strengths of reporting systems is their
ability to give users a direct role in moderating content, particularly in spaces where algorithmic
monitoring may fail to detect harmful behavior (Shen et al., 2021). This user-generated
reporting can be crucial in identifying subtle forms of abuse, such as cyberstalking or
grooming, which automated tools may struggle to recognize. For example, platforms like
Facebook, Instagram, and TikTok have invested heavily in creating accessible reporting
mechanisms where users can flag inappropriate behaviour or content, leading to swift
takedowns or interventions (PEN America, 2024). These systems are often accompanied by
educational resources designed to inform children about online safety and how to use reporting
tools effectively. However, despite their potential, reporting systems face significant
challenges. Firstly, the delay in response between the time harmful content is reported and
when it is removed has been found to be problematic.
As Schneider & Rizoiu (2023) note, children might be exposed to damaging
material for extended periods before the platform acts. Additionally, the effectiveness of these
systems is often undermined by a lack of human moderation resources. Social media companies
frequently rely on automated moderation tools or outsource moderation to contractors, leading
to uneven application of policies or missed instances of abuse, owing to false positives or false
negatives (Arora et al., 2023). Moreover, the psychological burden placed on children
to report abuse is another limitation. Research has shown that children are often reluctant to
report harmful behavior due to fears of retaliation, embarrassment, or simply not recognizing
that they are experiencing abuse. This creates an additional layer of complexity, as even the
most advanced reporting systems are rendered ineffective if children do not feel comfortable
using them. Furthermore, the anonymity that many platforms afford their users can allow
perpetrators to continue their harmful behaviors unchecked, even after being reported
(Kavanagh et al., 2020).
End-to-end encryption (E2EE), another widely adopted tool, is regarded as a robust security measure
that protects users’ data from interception or unauthorized access by third parties. For children,
this encryption can ensure that personal conversations, images, and other forms of
communication remain private. Platforms such as WhatsApp and Signal have adopted end-to-
end encryption as a standard feature, ensuring that even the platform itself cannot access the
contents of user messages (Duan & Grimmelmann, 2024). E2EE is crucial in safeguarding
against the increasing prevalence of data breaches, hacking, and surveillance, which could
expose children’s private information to predators or malicious actors. By ensuring that
communications remain encrypted, platforms can protect children from data exploitation and
unauthorized access to sensitive information (Song, 2020). However, end-to-end encryption is
not without controversy, particularly when it comes to balancing child safety and privacy. Law
enforcement agencies have raised concerns that E2EE can shield harmful activities, such as
grooming, child exploitation, or the distribution of illegal content, from detection. While
encryption prevents third-party interception, it also limits the ability of platform moderators or
law enforcement to monitor communications for illegal activities. This has led to a debate about
whether platforms should create backdoor access for authorities to monitor encrypted content
in cases where child safety is at risk, or whether such measures would undermine the very
privacy that encryption is designed to protect. This has spurred the development of alternative
technologies, such as client-side scanning, which allows platforms to scan content before it is
encrypted (Manthiramoorthy & Khan, 2024). However, these solutions raise additional
concerns about privacy violations, as they essentially involve monitoring users' devices—a
practice that companies or governments could easily abuse. In essence, while end-to-end
encryption is vital for protecting children’s privacy online, its implementation has been found
to pose significant challenges in the realm of child safety, particularly when harmful activities
are involved. Hence it has been advocated that social media platforms must find a delicate
balance between protecting privacy and ensuring that encryption does not inadvertently create
a haven for online predators.
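To make the underlying principle concrete, the minimal sketch below uses the open-source PyNaCl library to show that only the intended recipient's private key can decrypt a message, while any relaying server sees nothing but ciphertext. This is an illustrative sketch under simplified assumptions; it is not how WhatsApp or Signal are actually implemented, since those services use the more elaborate Signal protocol with key ratcheting and forward secrecy.

```python
# Minimal end-to-end encryption sketch with PyNaCl (pip install pynacl).
# Illustrative only: production messengers use the Signal protocol, not this scheme.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
child_key = PrivateKey.generate()
parent_key = PrivateKey.generate()

# The child encrypts a message to the parent's public key.
sending_box = Box(child_key, parent_key.public_key)
ciphertext = sending_box.encrypt(b"I got home safely")

# A relaying server only ever sees opaque ciphertext bytes.
print(ciphertext.hex()[:32], "...")

# Only the parent's private key (with the child's public key) can decrypt it.
receiving_box = Box(parent_key, child_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'I got home safely'
```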
In addition to the flaws and limitations of the various social media tools and policies, social
media platform policies have been criticized for their inconsistent enforcement. While
platforms often develop stringent guidelines on paper, enforcing these policies fairly across
millions or even billions of users has proven problematic. Research highlights that many social
media companies rely heavily on automated systems for content moderation, which are far
from perfect despite advances in AI and machine learning. These systems may fail to detect
harmful content or conversations accurately, particularly when contextual analysis is required.
This can result in either overreach (false positives), where harmless content is flagged, or
underreach (false negatives), where harmful content is missed. In the context of safeguarding
children, the consequences of such failures can be significant, with children potentially being
exposed to exploitation or harmful behaviors. This creates a dilemma where platforms must
balance the need for effective moderation with the risk of censoring safe content, a challenge
that has yet to be adequately addressed. Moreover, many platforms have been criticized for
their data-collection practices concerning children (Taylor & Pagliari, 2018).
The data collection practices of social media platforms concerning children present a
multifaceted challenge that raises significant ethical concerns. While regulations such as the
COPPA are designed to protect children by limiting data collection and requiring parental
consent for users under 13, these measures often fall short in their effectiveness (Reyes et.al.,
2018). Critics have pointed out that many companies exploit loopholes within these
regulations, allowing them to circumvent the spirit of the law while technically remaining
compliant (Lupton & Williamson, 2017). For instance, some platforms may use vague language
in their privacy policies, creating ambiguity that makes it difficult for parents to fully
understand what data is being collected and how it is used. Moreover, children may not possess
the cognitive skills necessary to grasp complex privacy policies, making them especially
vulnerable to manipulation. This lack of transparency leads to a significant power imbalance
between large corporations and young users, who are often unaware of how much their personal
information is harvested and monetized. Additionally, the sheer volume of data collected can
be staggering. Social media platforms often gather not only basic information such as age and location
but also more sensitive data, including behavioral patterns, preferences, and even biometric data in
some cases. This comprehensive profiling allows companies to target children with
personalized advertisements, potentially exploiting their impressionability and naiveté. Critics
argue that this not only invades children's privacy but also subjects them to marketing practices
that they are ill-equipped to navigate. The ethical implications of these data collection practices
extend to the question of accountability. The consequences can be dire when data breaches
occur, or children’s information is misused. Children may become targets for online predators
or face cyberbullying based on the information shared. However, avenues for redress are often
limited. Parents may struggle to obtain clear answers from companies regarding how their
children's data was handled or what protections were in place (Johnson, 2024). This lack of
transparency can create a sense of frustration and helplessness for parents, who may feel that
they have little recourse in the face of potential data misuse or breaches. When a child’s data is
compromised—whether through unauthorized access, data breaches, or even misuse of
targeted advertising—parents often find themselves navigating a labyrinth of corporate policies
not designed with user-friendly communication in mind.
Moreover, when breaches occur, affected families often lack clear guidance on the next steps,
and social media companies may not provide timely notifications or effective remediation
options. This can leave parents uncertain about whether their child’s data is still at risk and
what measures they can take to mitigate it. The absence of a robust framework for addressing
data privacy issues further underscores the need for stronger regulations that compel companies
to prioritize transparency and accountability in their data handling practices. In addition, legal
avenues for redress can be limited. Lawsuits against large tech companies can be prohibitively
expensive and complex, often requiring specialized legal knowledge that many families may
not possess. Even when legal action is pursued, outcomes can be uncertain, and the burden of
proof may fall on parents to demonstrate that their child’s rights have been violated (Katsh &
Rabinovich-Einy, 2017). This asymmetry of power—where large corporations wield
significant influence and resources—often leaves families feeling powerless and inadequately
supported in their efforts to protect their children's digital privacy (Lindau, 2022). Ultimately,
parents' challenges in obtaining clear answers and effective remedies highlight a significant
gap in the accountability mechanisms for social media companies. This situation underscores
the urgent need for comprehensive reforms in data protection laws and corporate practices to
empower parents and safeguard children’s rights in an increasingly digital world. This creates
a situation where social media companies are held accountable primarily by market forces
rather than through stringent regulatory oversight, raising concerns about their commitment to
safeguarding vulnerable users.
Furthermore, this situation is exacerbated by the global nature of social media. Many platforms
operate internationally, navigating a complex web of differing regulations. While COPPA
serves as a model in the U.S., other countries may lack comparable protections, allowing
companies to exploit the weakest regulatory environments (Land, 2013). Moreover, the
fragmentation of regulatory standards leads to confusion for companies, parents, and children.
Users accessing social media platforms from different countries may be subject to varying
terms of service and privacy protections, often without clear communication about these
differences. This lack of clarity can result in significant gaps in understanding among parents
regarding what protections are in place for their children. Inconsistent regulatory landscapes
can also hinder parents’ ability to advocate for their children’s online safety, as they may not
know which laws apply or how to navigate the complexities of international digital policies.
The urgent need for a more unified global approach to online child protection is underscored
by the rising incidence of cyberbullying, exploitation, and data breaches. International
cooperation is essential for establishing robust, harmonized standards prioritizing children's
rights and well-being across borders. This could involve the creation of a global regulatory
framework that sets baseline protections for children, akin to the General Data Protection
Regulation (GDPR) in Europe, which offers strong privacy protections for all users. Such a
framework would enhance children's safety and hold companies accountable for adhering to
these standards, regardless of where they operate. Furthermore, fostering international
collaboration among governments, technology companies, and civil society organizations is
vital for addressing the challenges posed by the global nature of social media (Hoffmann,
2022). This collaborative approach could involve sharing best practices, resources, and
strategies for adequate child protection. Additionally, engaging children and parents in the
conversation can help tailor policies that meet the unique needs of diverse populations,
ensuring that children everywhere are afforded the same level of privacy and safety.
Ultimately, the critical examination of social media platforms’ data collection practices reveals
a troubling dynamic: while companies profit from the data of young users, the safeguards
intended to protect these users remain inadequate. This raises fundamental ethical questions
about the prioritization of profit over safety and the responsibilities of social media companies
in creating a safe online environment for children. Without significant reform in data protection
practices and more robust regulatory frameworks, the risk to children's privacy and safety in
the digital landscape will continue to grow. The obvious needs to be said: social media
companies need to do much more and be made accountable.
GAMIFICATION OF ONLINE SAFETY EDUCATION
The gamification of online safety education has emerged as a promising technological trend
aimed at safeguarding children online. Gamification, recognized as the application of game
design elements in non-game contexts, has been increasingly employed in online safety
education to engage children in learning about online risks and best practices for safe digital
behavior (Schiavo et al., 2024). By making the learning process interactive, fun, and
immersive, gamified approaches enhance children's retention and application of safety
principles in their digital interactions. As stated earlier in this paper, the role of digital literacy
in online safety cannot be overemphasized because, without the initiative and development of
children towards recognizing online threats, technological tools are limited in their application.
Hence, one of the main reasons for the growing popularity of gamification in online safety
education is its ability to engage children.
Traditional online safety programs, often delivered through lectures or text-heavy websites,
tend to struggle to hold the attention of younger users. In contrast, gamified platforms such as
“Interland” by Google’s Be Internet Awesome initiative, offer an interactive and enjoyable way
for children to learn about cyber risks, privacy, and digital citizenship (Hollandsworth et al.,
2017; Rogers-Whitehead, 2019). Research has shown that game-based learning can improve
knowledge retention, as games' interactive nature enables children to actively participate and
apply the lessons in simulated environments. Additionally, gamified platforms often include
rewards, such as points, badges, and progress levels, which motivate children to complete tasks
and engage more deeply with the content. These reward systems provide instant feedback,
reinforcing positive behaviors and discouraging unsafe practices online. As children are
typically more responsive to such immediate incentives, gamification taps into intrinsic and
extrinsic motivations, making online safety education more appealing and relevant to young
users.
A key benefit of gamification is its ability to foster critical thinking and decision-making skills,
essential for navigating the complexities of online spaces. Gamified safety programs often use
scenarios and problem-solving challenges to teach children how to recognize and respond to
online risks. For example, programs like “ThinkUKnow” (Molok et al., 2023) or “NetSmartz”
(Kempen, 2019) present children with simulated interactions where they must make choices
about whether to share personal information, accept friend requests, or click on suspicious
links. Through these experiences, children learn to assess online situations and develop the
confidence to make safer decisions in real-world contexts. Research suggests that such
experiential learning has a more profound impact on children than passive information
consumption. By encountering scenarios that mirror real-life online challenges, children
acquire knowledge and practice applying that knowledge in a risk-free environment (Schiavo
et.al., 2024). This helps bridge the gap between theory and practice, making children more
likely to recall and implement safety principles in their everyday internet use.
While gamification has clear advantages, several challenges limit its efficacy as a tool for
safeguarding children online. One of the primary concerns is the oversimplification of complex
issues. Online safety involves technical knowledge and social, emotional, and legal aspects.
Gamified platforms may reduce these complexities into simplified binaries, such as right and
wrong answers or safe and unsafe decisions. This approach may fail to capture the nuanced
nature of online risks, where situations are not always clear-cut. For instance, children may not
fully understand the subtleties of privacy breaches or the long-term consequences of sharing
personal data, which are often difficult to encapsulate in a game format. Moreover,
gamification is inherently limited by the quality and realism of the scenarios presented.
Simulated environments in online safety games are unlikely to capture the full range of
potential online interactions children may face, particularly as digital environments evolve
rapidly. What constitutes a risk today may change tomorrow as technology and online
behaviours shift. Therefore, gamified education tools may struggle to keep pace with new
threats, such as deepfakes, sophisticated phishing scams, or the emergence of harmful online
subcultures (Ibrahim, 2024).
Another critical issue surrounding gamified online safety education is the risk of over-reliance
on technology as a solution to digital threats. According to Dichev & Dicheva (2017), relying
solely on gamified approaches may lead to an incomplete understanding of the broader context
of online risks, particularly if children are not exposed to complementary educational tools,
such as parental guidance, school-based programs, or community-led initiatives. For instance,
gamified platforms might oversimplify complex issues, presenting them in a way that may not
fully convey the seriousness or nuances of specific online threats. This can result in children
developing a false sense of security, believing they are fully protected simply because they
have completed a game or earned a badge. There is the danger that children may approach
online safety games with a mindset focused on achieving rewards or completing levels rather
than internalizing the lessons. As Ke (2016) stated, this can result in "gaming the system,"
where children may prioritize winning the game over understanding the content. Studies have
found that extrinsic rewards in gamified systems can sometimes undermine intrinsic motivation
to learn, as children focus more on the incentives than the learning outcomes. Therefore, the
challenge lies in balancing engagement with the game against ensuring that the educational content
remains at the forefront of the experience.
Moreover, the dynamic nature of digital threats means new risks constantly emerge. Gamified
tools may not be updated promptly to reflect these changes, potentially leaving
children vulnerable to the latest threats. Therefore, authors such as Le Compte et al. (2015)
identified that it is crucial to integrate gamified education with other forms of learning. Parental
guidance reinforces the lessons learned through gamification, providing real-world context and
personal experiences that a game cannot offer. Also, school-based programs and community
initiatives should be structured to give children a well-rounded education on cybersecurity.
These initiatives can include workshops, seminars, and peer-led discussions encouraging
children to share their experiences and learn from one another. Combining gamified tools with
these complementary educational resources yields a more robust and practical approach
to online safety education, equipping children with the knowledge and skills they need to navigate
the digital world safely and responsibly.
Furthermore, gamified lessons are often designed to offer immediate feedback or
consequences. However, this does not always align with the delayed and less visible nature of
many online risks, such as grooming or identity theft. Children accustomed to immediate
responses within games may struggle to recognize or respond to threats that unfold more
gradually or subtly in real online interactions. In gamified environments, actions typically
result in instant rewards or penalties, creating a clear and direct link between behavior and
outcome. This immediacy can be highly effective in teaching specific skills or concepts quickly.
However, the nature of many online threats is quite different. For example, grooming often
involves a prolonged process where an individual builds trust with a child over time, making
the threat less obvious and more insidious. Similarly, identity theft can occur without
immediate visible signs, as personal information is collected and misused over time. The
discrepancy between the immediate feedback in games and the delayed nature of real-world
online risks can lead to a significant gap in children’s understanding and preparedness. They
might not be equipped to recognize the slow and subtle warning signs of grooming, such as an
adult asking for personal information or trying to isolate them from friends and family.
Likewise, they may not understand the long-term consequences of sharing personal
information online, which can be exploited for identity theft (Faith et al., 2024). To address this
gap, it is essential to complement gamified lessons with educational strategies that emphasize
the gradual and often hidden nature of these threats. This could include scenario-based learning,
where children are presented with realistic situations that unfold over time, helping them to
identify and respond to potential dangers. Additionally, discussions and role-playing activities
can give children a deeper understanding of recognizing and handling these risks in real life.
Integrating immediate feedback from gamified lessons with educational approaches
highlighting the delayed and less visible nature of online risks can create a more comprehensive
and practical online safety education for children.
Another limitation of gamified online safety education tools is their accessibility and
inclusivity. While many gamified platforms are designed to be user-friendly, they may not
always cater to children from diverse backgrounds or with varying levels of digital literacy.
Children in marginalized communities, particularly those with limited access to digital devices
or the internet, may not benefit from these tools, thereby widening the digital divide. Moreover,
children with disabilities, such as visual or cognitive impairments, may find certain game-based
learning tools inaccessible or difficult to navigate (Wulandari et al., 2024). The success of
gamified online safety education depends on including adaptive learning technologies that
accommodate different learning styles and abilities. However, many current gamified tools lack
these adaptive features, which can result in unequal access to critical online safety education.
Thus, while gamification can enhance learning for some children, it risks excluding others,
particularly those already disadvantaged in the digital space.
Finally, while studies have shown that children can improve their understanding of
online risks through gamified approaches, there is little evidence that this knowledge
translates into sustained behavior change over time, leaving the long-term effects of these systems uncertain
(Gjertsen et al., 2017; Wu et al., 2021). Online safety is not only about knowledge acquisition but also about
developing habitual safe behaviors, which may require ongoing education and reinforcement
beyond gamified platforms. The real measure of success for any digital literacy
program is ensuring that the knowledge gained translates into consistent, real-world behavior.
Research by Malone et al. (2021) emphasizes that habitual behavior development requires
continuous learning and reinforcement, which cannot solely rely on one-off lessons or games.
Instead, a more comprehensive approach is needed, integrating real-life examples, parental
involvement, and routine reminders. For example, children may need regular follow-ups or
integration of online safety principles into their daily technology use, such as ongoing
discussions with educators or guardians.

EMERGING TECHNOLOGIES AND SOLUTIONS FOR PROTECTING CHILDREN FROM ONLINE PREDATORS
As the digital landscape continues to evolve, so do the technologies to protect children from
online predators. While these innovations hold significant potential, their effectiveness and
ethical implications warrant scrutiny. Emerging technologies such as blockchain, biometric
authentication, and advanced encryption are increasingly being proposed as solutions for
enhancing online safety, but their implementation and impact remain subjects of debate. The
COVID-19 pandemic further accelerated children's reliance on digital platforms for education
and social interaction, exposing them to new online risks. Consequently, the demand for robust
protective measures has grown, raising critical questions about how technology can safeguard
children and respect their privacy. This evolving challenge calls for a study of emerging
technological trends in protecting children from online predators.
BLOCKCHAIN TECHNOLOGY
The rise of blockchain technology has sparked widespread interest, particularly in its ability to
protect children in the digital environment. Blockchain's decentralized, immutable ledger was initially developed as the foundation for cryptocurrencies such as Bitcoin, but it has since been studied as a means of improving internet privacy, security, and accountability.
Blockchain's primary feature of decentralization ensures that no single entity has overarching
authority over data, which is essential when protecting sensitive information such as children's
data. In contrast to centralized systems, where breaches can expose vast amounts of data,
blockchain technology can distribute data across a wide network, making it harder for
malicious actors to access and exploit. This decentralized nature can be particularly valuable
in online environments where children’s information is at risk from hackers or organizations
with poor data security practices. According to Zyskind & Nathan (2015), blockchain could
serve as a robust alternative to centralized databases, preventing mass data breaches that have
compromised children's safety in the past.
One of the emerging applications of blockchain is its potential use in identity management
systems for children, particularly in preventing identity theft, fraud, and exploitation.
Blockchain-based identity verification systems could ensure that a child’s identity is securely
stored and verified without relying on third-party organizations, which may be vulnerable to
breaches. For example, platforms like uPort and Sovrin are exploring blockchain-based identity
management to give users, including minors, more control over their digital identities. By
encrypting and verifying identities on a blockchain, these systems could prevent unauthorized
access to children’s profiles on social media platforms, educational tools, and other digital
spaces, thus providing an additional layer of security. Additionally, blockchain's immutability ensures that data, once recorded, cannot be altered. This
feature can be applied to protecting digital content involving children, such as educational
materials or personal media. For instance, in cases of content distribution, blockchain
technology could help authenticate the origins and ownership of materials, preventing the
circulation of illicit or harmful content such as child exploitation media. Blockchain’s ability
to timestamp and verify the source of digital materials can deter the spread of such content by
allowing authorities to trace the origin and distribution chain (Tapscott & Tapscott, 2016).
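To make this provenance idea concrete, the sketch below illustrates, in simplified form, how a platform might register a cryptographic fingerprint of a piece of digital content together with an owner and timestamp, and later verify whether a given file matches a registered original. This is an assumed design for illustration only: the in-memory list stands in for a distributed ledger, and the function names (register_content, verify_content) are hypothetical.

```python
import hashlib
import time

# A plain Python list stands in here for an append-only blockchain ledger.
# In a real deployment this would be a distributed ledger, not local memory.
ledger = []

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 hash that uniquely identifies the content."""
    return hashlib.sha256(content).hexdigest()

def register_content(content: bytes, owner: str) -> dict:
    """Record the content's fingerprint, owner, and timestamp on the ledger."""
    entry = {
        "hash": fingerprint(content),
        "owner": owner,
        "timestamp": time.time(),
    }
    ledger.append(entry)
    return entry

def verify_content(content: bytes):
    """Check whether this exact content was previously registered, and by whom."""
    digest = fingerprint(content)
    for entry in ledger:
        if entry["hash"] == digest:
            return entry
    return None

if __name__ == "__main__":
    lesson = b"Approved online-safety lesson, version 1"
    register_content(lesson, owner="VerifiedPublisher")

    # The original material verifies; any altered copy does not match.
    print(verify_content(lesson))
    print(verify_content(b"Tampered copy of the lesson"))
```

Because any change to the content produces a different hash, a verifier can confirm origin and integrity without needing to trust the party presenting the file.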
While the above seems very promising and positive, blockchain technology has limitations
when applied to child online safety. One of the primary challenges is scalability. Blockchain
networks, particularly public blockchains, are notoriously slow compared to centralized
systems, making it challenging to implement widespread systems for monitoring and protecting
children online. As Sanka & Cheung (2021) point out, blockchain's existing scalability
limitations significantly impede its adoption for large-scale applications such as social
networking platforms or online gaming environments with millions of active users.
Implementing blockchain-based safety measures on such large platforms may introduce latency, compromising the real-time protection that child safety requires. As a result, technology developers must work to improve the scalability of blockchain systems before they can support large-scale applications.
Another critical issue is blockchain's permanence, which, while being one of its strengths, can
also be a liability. One of the core features of blockchain technology is its immutability. Once
data is recorded on a blockchain, it cannot be altered or deleted. This ensures the integrity and
trustworthiness of the data, making it highly secure and reliable for various applications, such
as financial transactions and record-keeping. However, this immutability can become a
significant drawback when harmful or illegal content is added to the blockchain. Since the data
cannot be removed, any illicit material, such as child exploitation content, remains permanently
accessible. This poses severe ethical and legal challenges. For example, if illicit material
involving children is encoded into a blockchain, it becomes virtually impossible to remove.
This could lead to long-term exposure and exploitation, as the content remains accessible to
anyone accessing the blockchain. The permanence of such harmful content raises significant
ethical concerns. It challenges the ability of authorities and organizations to protect vulnerable
individuals and prevent the spread of illegal material. Researchers have suggested methods like
“pruning” to address this issue. Pruning involves removing certain parts of the blockchain data
while maintaining the overall integrity of the blockchain. However, these methods are still
experimental and have not been widely adopted. The lack of fully developed and accepted
solutions means that the ethical concerns surrounding using blockchain for safeguarding
purposes remain unresolved. This highlights the need for further research and development to
find effective ways to manage and mitigate the risks associated with blockchain’s permanence.
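The permanence discussed above follows directly from how blockchain records are linked. The short sketch below, a simplified illustration rather than an actual blockchain client, shows that because each block's hash incorporates the previous block's hash, editing or removing any earlier record invalidates every record that follows it, which is precisely why harmful content, once written, is so difficult to expunge.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a new block linked to the hash of the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    """Re-derive every hash; any edited or deleted block breaks the links."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"],
                                        "prev_hash": block["prev_hash"]}):
            return False
    return True

if __name__ == "__main__":
    chain = []
    for record in ["record A", "record B", "record C"]:
        append_block(chain, record)
    print(chain_is_valid(chain))          # True

    chain[1]["data"] = "silently edited"  # attempt to alter history
    print(chain_is_valid(chain))          # False: tampering is detectable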
Furthermore, blockchain's decentralized nature complicates governance and accountability.
Unlike centralized systems, where a specific entity oversees data and content, blockchain’s lack
of a single governing body can make it difficult to regulate and enforce child protection laws
effectively. This poses a challenge for implementing blockchain-based tools in environments
where legal oversight is crucial, such as in the context of protecting children from exploitation.
Finck (2018) noted that decentralized systems often operate across multiple jurisdictions,
further complicating enforcement and leading to potential legal ambiguities in child protection
cases.
Finally, there are concerns related to the ethical use of blockchain regarding data privacy. While
blockchain can provide enhanced security and privacy protections, it can also create risks if
improperly implemented. For instance, blockchain-based systems may store personal data in a form that is difficult to alter or delete, raising concerns about children’s “right to be forgotten” as
stipulated by regulations such as the European Union’s GDPR (Jongerius, 2024). Implementing
blockchain in child safety initiatives would require careful consideration of how data is stored
and the mechanisms for rectifying or removing it when necessary.
Despite the challenges, blockchain technology continues to offer considerable promise for
safeguarding children online, particularly when combined with other emerging technologies
such as AI and machine learning. AI-driven content filtering systems, for example, could be
enhanced by blockchain’s verification and security features, creating a multi-layered approach
to child protection that blends real-time content analysis with the long-term integrity and
security of blockchain-verified data. Researchers are also exploring the use of blockchain to
facilitate more transparent content moderation processes on platforms used by children (Niu et al., 2024). Current content moderation systems are often criticized for their lack of
transparency and accountability, leading to calls for more open and auditable systems.
Blockchain could offer a solution by providing a transparent, decentralized ledger of
moderation decisions, allowing for greater accountability and public trust in content
moderation processes.
BIOMETRIC AUTHENTICATION: ETHICAL AND PRIVACY CONCERNS
Biometric authentication, which involves using unique biological traits such as fingerprints,
facial recognition, voice patterns, or iris scans for identity verification, has emerged as a
promising tool for safeguarding children online. As online environments become increasingly
hazardous for minors due to the rise of cyber threats, such as grooming, exploitation, and
exposure to inappropriate content, biometric authentication presents an innovative way to
ensure that only authorized users can access specific digital spaces. This technology is gaining
attention due to its potential to enhance security and protect children's online presence. For
children, who may not always follow best practices in setting strong passwords or securing
their accounts, biometrics presents a more reliable alternative. Fingerprints, facial recognition,
and other biometric markers cannot be easily stolen or replicated (Debas et al., 2023), providing
a higher level of security against unauthorized access to platforms frequented by children, such
as social media, educational tools, and online games. This makes it harder for predators or
cybercriminals to gain access to children’s profiles or communication channels. For instance,
biometric authentication could be critical in parental control systems. By using a child’s
fingerprint or facial recognition to access specific online activities or content, parents can
ensure that only the authorized child can access specific applications, chats, or websites
(Domebale et al., 2023). This approach is seen as a step forward from PINs or passwords, which
are susceptible to being shared or guessed. As argued by Lupton & Williamson (2017),
integrating biometric security into parental control systems can bolster efforts to restrict
harmful content or interactions while allowing children to engage safely with digital
technologies.
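The gating logic described here can be summarized in a few lines of code. The sketch below is a minimal, assumed design (not a vendor implementation): an app unlocks only when the live sample matches the enrolled child and the app is on that child's allow-list. The matcher, the 0.90 threshold, and the profile data are all illustrative stand-ins for a real biometric SDK and its configuration.

```python
# Minimal sketch of biometric gating in a parental-control layer.
# The matcher below is a toy stand-in for a real biometric SDK.

ENROLLED = {
    "child_a": {"sample": b"enrolled-face-template-A",
                "allowed_apps": {"maths_game", "reading_app"}},
    "child_b": {"sample": b"enrolled-face-template-B",
                "allowed_apps": {"maths_game", "chat_app"}},
}

MATCH_THRESHOLD = 0.90  # illustrative; real systems tune this per modality

def match_score(captured: bytes, enrolled: bytes) -> float:
    """Toy matcher: exact byte match. Real matchers return graded similarity."""
    return 1.0 if captured == enrolled else 0.0

def request_app_access(captured: bytes, claimed_child: str, app: str) -> bool:
    """Unlock an app only for the enrolled child and only if it is allow-listed."""
    profile = ENROLLED.get(claimed_child)
    if profile is None:
        return False
    if match_score(captured, profile["sample"]) < MATCH_THRESHOLD:
        return False  # live capture does not match the enrolled child
    return app in profile["allowed_apps"]

if __name__ == "__main__":
    print(request_app_access(b"enrolled-face-template-A", "child_a", "reading_app"))  # True
    print(request_app_access(b"enrolled-face-template-A", "child_a", "chat_app"))     # False
    print(request_app_access(b"someone-else", "child_b", "chat_app"))                 # False
```

The design choice worth noting is that the allow-list check and the identity check are combined: unlike a shared PIN, access cannot simply be handed to a sibling or guessed by a third party.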
Furthermore, biometrics can be used in educational settings where children increasingly
participate in remote learning or online classroom environments (Hernandez-de-Menendez
et al., 2021). In these settings, biometric authentication can ensure that the registered child is
the one engaging in the learning process, preventing identity fraud or unauthorized use of
educational platforms. The COVID-19 pandemic's shift to online learning has heightened the
need for secure, verifiable access to digital classrooms, making biometrics an attractive
solution. Research by Betrand et al. (2023) highlights how schools adopting biometric
authentication systems have improved attendance and accountability by ensuring that only
authorized students participate in lessons.
However, despite the above benefits, different researchers have highlighted limitations and
issues with biometric authentication for online kid protection. One of the primary objections is
that, unlike passwords, biometric data cannot be quickly updated if compromised (O'Gorman,
2003). Once a fingerprint or facial scan is saved in a database, stealing that information could
have long-term consequences, especially for youngsters less conscious of data security threats.
If a child's biometric data is compromised, they may be vulnerable to identity theft for years.
As Bonneau et al. (2012) point out, while biometrics may provide more robust security than
typical passwords, the irreversible nature of this data makes breaches far more worrying,
especially when it comes to protecting children. Moreover, collecting and storing biometric
data can lead to significant ethical dilemmas. Critics argue that capturing children’s biometric
information at a young age may violate their rights to privacy and data protection (Livingstone
et al., 2019). Children's inability to fully comprehend the long-term implications of sharing
such data adds to the ethical complexities. The European Union's GDPR addresses these
concerns by stipulating strict conditions for processing children's biometric data, such as
parental consent. However, the ethical concern remains that children who cannot give informed
consent are having their biological data stored in ways that may not fully respect their privacy
rights.
In addition, there are concerns related to surveillance. The use of biometric systems (like facial
recognition, fingerprint scanning, etc.) in schools and recreational activities can make
surveillance seem normal to children from a young age. This can lead to a societal shift where
constant monitoring is accepted as a standard part of life. When children grow up in
environments where their every move is tracked, it can affect their sense of personal freedom
and autonomy. They might become accustomed to being watched and not develop a healthy
understanding of privacy. They may become desensitized to privacy invasions and more
accepting of intrusive technologies. This normalization can lead to a generation that is less
aware of privacy rights and more willing to accept invasive technologies without questioning
their implications (Paik et al., 2022). It can also affect how they interact with technology,
potentially making them more vulnerable to privacy breaches and less critical of surveillance
practices. Similarly, constant surveillance can create an atmosphere of control, where children
feel they are constantly being watched. This can undermine trust between children and
authority figures (like teachers and parents), as children might feel their privacy is being
invaded. Over time, children might resist these surveillance measures, feeling that their
personal space and freedom are being encroached upon. This resistance can manifest in various
ways, from behavioral issues to a general mistrust of technology and authority.
Additionally, biometric authentication systems face challenges related to accuracy and bias.
Research shows that biometric systems, especially facial recognition technology, can produce
inaccurate results when dealing with diverse populations, particularly among
children. Younger users can present unique challenges for facial recognition
algorithms. Children’s faces undergo significant changes as they age, making it difficult for
these systems to maintain accuracy over time. This can result in higher error rates, where
children might be incorrectly identified or not recognized. Facial recognition algorithms have
performed poorly with younger users and individuals from racial or ethnic minorities. For
example, studies by the US National Institute of Standards and Technology (NIST) found that
African American and Asian faces experienced significantly higher false positive rates
compared to Caucasian faces (BBC News, 2019). As noted by Buolamwini & Gebru (2018),
facial recognition systems often have higher error rates for non-white individuals, a significant
concern when these systems are used for safeguarding purposes. When used in settings meant
to protect children, such as schools or online platforms, these biases can have serious
consequences; for instance, children from minority backgrounds might be wrongly denied
access to educational content or flagged as security risks. This affects their learning experience
and can lead to feelings of exclusion and discrimination. The presence of bias in biometric
systems can undermine trust in these technologies. If children and their parents perceive these
systems as unfair or discriminatory, they may resist their use, which can hinder the
implementation of potentially beneficial security measures. To mitigate these issues,
developers need to improve the algorithms used in biometric systems. This includes training
these systems on more diverse datasets to ensure they perform well across different
demographic groups. Organizations deploying these technologies must also consider the ethical
implications of their use. This involves being transparent about the limitations of biometric
systems and actively working to address any biases that may exist (Glavin, 2024).
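The kind of disaggregated audit described above can be illustrated with a small example. The sketch below computes a separate false match rate for each demographic group, rather than a single aggregate figure, which is how biases of the sort reported by NIST become visible; the records and group labels are fabricated purely for illustration.

```python
from collections import defaultdict

# Illustrative bias audit: false match rates computed per demographic group.
# The evaluation records below are fabricated for demonstration only.
evaluation = [
    # (group, system_said_match, ground_truth_match)
    ("group_a", True,  False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True,  False), ("group_b", True,  False), ("group_b", True, True),
]

def false_match_rate_by_group(records):
    """FMR per group = wrongly accepted impostor attempts / impostor attempts."""
    impostor = defaultdict(int)
    wrongly_accepted = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                 # an impostor attempt
            impostor[group] += 1
            if predicted:              # the system accepted it anyway
                wrongly_accepted[group] += 1
    return {g: wrongly_accepted[g] / impostor[g] for g in impostor}

if __name__ == "__main__":
    for group, fmr in false_match_rate_by_group(evaluation).items():
        print(f"{group}: false match rate = {fmr:.0%}")
```

A system that reports only its overall error rate can mask exactly the group-level disparities that matter most when the technology is used to gate children's access to services.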
Another technical issue is the high cost of implementing biometric systems. While biometrics
may offer more robust security than passwords, the technology is costly to deploy on a large
scale. Many educational institutions, online platforms, or apps designed for children may lack
the resources to integrate sophisticated biometric authentication systems, leading to uneven
adoption and leaving some children without the benefits of this technology. Moreover, many
countries and regions still lack the necessary infrastructure for widespread biometric
authentication, particularly in underdeveloped areas where children increasingly access the
internet through mobile devices or shared computers (Tanwar et al., 2019).
As biometric authentication technology continues to evolve, there is potential for
advancements that could address many of the concerns raised. For example, researchers are
exploring ways to enhance the security and privacy of biometric systems, such as integrating
blockchain technology to store biometric data more securely. Furthermore, combining
biometric authentication with other technologies, such as AI and machine learning, could
improve the accuracy and reliability of these systems, particularly in identifying and
responding to threats against children (Albalawi et al., 2022). Hence, scholars such as Tanwar
et al. (2019) have advocated that as governments and regulatory bodies develop frameworks
for the ethical use of biometric technologies, it will be crucial to balance the goal of enhancing child safety online against the privacy risks and technical limitations of the technology. The role of parents and
guardians in making informed decisions about using biometrics will also be critical, as they are
ultimately responsible for managing their children’s interactions with online technologies.
AUGMENTED REALITY (AR) AND VIRTUAL REALITY (VR)
Augmented Reality (AR) and Virtual Reality (VR) are immersive technologies that alter or
simulate real-world environments in different ways. AR overlays digital elements, such as
images, sounds, and other sensory inputs, onto the real-world environment. Unlike VR, it does
not replace the physical world but enhances it with virtual elements. VR is a fully immersive
technology that creates an entirely virtual environment, blocking out the physical world. Users
typically wear VR headsets to interact with this 3D simulated environment, which may be
computer-generated or based on real-world spaces (Gandhi & Patel, 2018). AR and VR are
increasingly recognized as transformative technologies with vast potential across various
sectors, including education, entertainment, and social interaction. As these immersive
technologies become more integrated into children's digital lives, they also introduce new
opportunities and concerns regarding online safety. AR and VR offer opportunities and
challenges in safeguarding children in virtual environments, creating new demands for safety
protocols, technological solutions, and policies. One of the significant advantages of AR and
VR is their ability to create controlled environments where children can safely engage with
digital content. Educational AR and VR applications can enhance learning by creating
simulations that enable children to explore complex subjects in a risk-free, controlled
environment. These applications offer an opportunity to teach children about online safety
more engagingly and interactively, which is seen as a crucial advantage compared to traditional
learning methods (Alnajim et al., 2023).
Regarding online safety, AR and VR can offer age-appropriate content in secure environments
(Abu Deeb, 2024). For example, platforms can integrate filtering tools that limit children's
exposure to potentially harmful or inappropriate content, offering parental control mechanisms
within the experience (Julier et al., 2000). A key area of development is the creation of virtual
learning environments where children can explore and engage safely under close supervision.
According to Jin et al. (2024), AR and VR platforms are also beginning to incorporate features
that allow parents or educators to monitor a child's online activity within these virtual worlds,
giving them more control over their interactions and experiences. In addition, VR can simulate
real-world online safety scenarios in a highly interactive manner (Buttussi & Chittaro, 2017).
Training programs within VR environments allow children to recognize risky behaviours, such
as inappropriate advances from strangers, grooming, or scams, all within a controlled
environment where they can learn without exposure to harm (Salazar et al., 2013). This could
enhance children’s ability to recognize and avoid online risks, making AR and VR an active
learning tool for online safety education.
While AR and VR hold potential for child safety, they also introduce new privacy and security
concerns. One of the primary risks associated with AR and VR platforms is the vast amount of
personal data these technologies collect, including biometric data such as facial expressions,
body movements, and spatial orientation. This data, when improperly managed, poses
significant privacy risks, particularly when used in contexts involving children. Unlike
traditional online platforms, where privacy concerns are mostly limited to personal
identification and browsing data, AR and VR environments involve far more intrusive data
collection methods, raising concerns about long-term privacy violations and data misuse. For
instance, AR/VR devices often track eye movements, facial expressions, and body language to
create responsive environments or for user interface purposes. Gaze tracking can analyse where
the user is looking and for how long, creating highly personal profiles about interests or
cognitive state. Voice recognition can be used for commands, capturing vocal patterns and
emotional tone. Similarly, emotional responses can be inferred from these gestures and other
non-verbal cues, which may be exploited for commercial purposes or manipulated in harmful
ways. Moreover, children are especially vulnerable in these immersive environments because
they often cannot recognize and manage their privacy settings (Zakaria et al., 2011). Also, the
immersive nature of AR and VR can blur the line between the real and virtual worlds, making
it more challenging for children to understand the extent of their personal data exposure. AR
and VR environments can make children more susceptible to exploitation due to the personal
nature of the data collected, especially in environments where children are encouraged to
engage deeply with avatars and virtual personas.
In addition to data privacy issues, the integration of AR and VR platforms with social
networking functions has raised concerns about child predators exploiting these immersive
spaces (George, 2024). Virtual worlds and multiplayer VR games provide new avenues for
predators to engage with children. In AR/VR platforms, users often interact through avatars—
digital representations that may look vastly different from their real-world selves. Predators
can easily use these avatars to mask their identities, appearing as peers or non-threatening
characters, thereby creating trust with unsuspecting children. Unlike text-based platforms,
where language and tone may raise red flags, the immersive visual aspect of AR/VR makes it
harder for children to discern the true identity and intentions of the person behind the avatar.
This anonymity allows predators to exploit children without revealing personal details like
appearance or voice, enhancing their ability to deceive. The immersive quality of VR, where
users feel as if they are physically present in the virtual world, can make interactions more
emotionally intense and real to children. Predators can manipulate this immersive experience
to groom children by creating environments that mimic real-life settings, such as classrooms,
playgrounds, or even private spaces where children feel safe.
The real-time interaction in these environments also makes it difficult for children to withdraw
or recognize danger, especially if they are emotionally invested in the virtual relationship. One
of the most critical challenges in safeguarding children within AR and VR spaces is the real-
time nature of interactions (Fiani et al., 2024). Unlike traditional text-based social platforms,
where harmful behavior may be flagged through keyword detection or content moderation
algorithms, real-time audio and visual interactions in VR environments are much harder to
monitor and control. Predatory behavior may not leave behind a clear digital trace, as
inappropriate actions could occur in private virtual spaces or through non-verbal cues like body
movements or gestures, which are difficult to detect through automated systems. The sheer
scale of social interaction in VR spaces compounds the difficulty of moderating these
platforms. Popular multiplayer VR games or virtual worlds can host thousands of users
simultaneously, each engaging in unique, real-time interactions. While platforms employ
community moderation tools—such as reporting systems or automated behavioral
monitoring—these methods often do not detect nuanced or covert grooming behaviors (Hine
et al., 2024). As a result, many inappropriate interactions go unnoticed until significant harm
has occurred. Furthermore, many VR platforms have delayed or inefficient responses to user
reports, leaving children vulnerable for extended periods. A study by Nogel et al. (2021) suggested that virtual reality environments are significantly harder to moderate due to the complexity of user interactions. In these spaces, children may be subjected to verbal abuse, bullying, or exposure to inappropriate content, often without immediate recourse or the ability to flag or report the situation effectively.
To address these challenges, some platforms are beginning to experiment with AI-driven
moderation tools designed specifically for AR and VR environments. These tools analyze user
interactions in real-time and detect potentially harmful behaviors through voice recognition,
gesture analysis, and behavioral patterns. While promising, these solutions are still in their
early stages and face issues related to accuracy, particularly in distinguishing between harmless
interactions and those that may threaten child safety (Chen et al., 2024).
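One way such tools can be structured is to fuse several weak signals into a single risk score that triggers human review. The sketch below is an assumed, hand-simplified illustration of that pattern only: the signal names, weights, and review threshold are invented for demonstration, whereas real systems would rely on trained and calibrated models rather than fixed weights.

```python
from dataclasses import dataclass

# Assumed sketch of multi-signal risk scoring for real-time VR moderation.
# Signal names, weights, and the review threshold are illustrative only.

@dataclass
class InteractionSignals:
    flagged_phrases: int        # hits from speech-to-text keyword/intent models
    private_space_invites: int  # invitations to unmonitored private rooms
    proximity_violations: int   # repeated encroachment on a minor's avatar
    gift_offers: int            # in-game gifts or currency offered to a minor

WEIGHTS = {
    "flagged_phrases": 0.35,
    "private_space_invites": 0.30,
    "proximity_violations": 0.20,
    "gift_offers": 0.15,
}
REVIEW_THRESHOLD = 0.5  # illustrative cut-off for escalation to a human moderator

def risk_score(s: InteractionSignals) -> float:
    """Combine capped, normalized signal counts into a score in [0, 1]."""
    capped = {k: min(getattr(s, k), 3) / 3 for k in WEIGHTS}
    return sum(WEIGHTS[k] * capped[k] for k in WEIGHTS)

def moderate(s: InteractionSignals) -> str:
    return ("escalate_to_human_review"
            if risk_score(s) >= REVIEW_THRESHOLD else "continue_monitoring")

if __name__ == "__main__":
    session = InteractionSignals(flagged_phrases=2, private_space_invites=1,
                                 proximity_violations=0, gift_offers=1)
    print(round(risk_score(session), 2), moderate(session))
```

Even in this toy form, the trade-off discussed above is visible: thresholds set low enough to catch covert grooming will also sweep up innocuous play, which is why human review remains part of the loop.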
Beyond technical challenges, using AR and VR as safety tools for children raises ethical and
psychological concerns. Immersive technologies can profoundly affect children’s
development, and exposing them to virtual worlds too early or for prolonged periods may lead
to adverse psychological outcomes. Critics argue that children, whose cognitive and emotional
development is still underway, may struggle to differentiate between virtual and real-world
experiences, affecting their ability to understand online risks accurately (Kaimara et al., 2022).
There are also ethical questions surrounding the commercialization of AR and VR
environments, where children are often exposed to advertising and other forms of monetization.
Children interacting within these platforms may be subjected to targeted marketing based on
their in-game behaviours and data, a practice that has raised concerns among privacy advocates.
The ethical implications of using immersive technologies to collect and profit from children's
interactions in virtual environments call for stricter regulations and child protection policies in
these digital spaces.
Significant improvements are needed in content moderation, data privacy protections, and
ethical standards to fully harness the potential of AR and VR in safeguarding children online.
One promising area is the development of AI-driven moderation systems that can monitor real-
time interactions more effectively in AR and VR environments. These systems could be further
integrated with biometric data and machine learning algorithms to improve the accuracy of
detecting inappropriate behaviour or content. Additionally, educational initiatives to teach
children about online safety should be integrated into AR and VR experiences, creating an
interactive way for children to learn about online risks while using the platforms (Alnajim et
al., 2023).
Another key recommendation is the implementation of stricter privacy regulations for AR and
VR environments that specifically address protecting children’s data. Current regulations, such
as COPPA, need to be updated to reflect the unique privacy challenges posed by immersive
technologies. As AR and VR platforms continue to grow, greater collaboration between
policymakers, technologists, and child protection advocates will be necessary to ensure that
these technologies are used safely and ethically for children.

CONCLUSION AND SUMMARY OF FINDINGS


The rapid advancement of technology in the digital age has created opportunities and risks for
children. This paper explores current trends and emerging technologies designed to protect children from
online predators, emphasizing the critical role of technological tools like AI, machine learning,
parental control software, and newer innovations such as blockchain and augmented reality
(AR) and virtual reality (VR) systems. These technologies help identify and mitigate online
threats but are not without challenges.
One of the most significant advantages of current technologies, particularly AI and machine
learning, is their ability to proactively detect harmful content and behaviours. These systems
can analyze large volumes of data, identifying patterns of risk—such as predatory grooming,
cyberbullying, and exposure to inappropriate content—before they escalate into more severe
issues. Machine learning algorithms improve over time by learning from data patterns and
adapting to new threats. For instance, social media platforms like Facebook and Instagram use
AI to detect and flag harmful content for review, preventing children from being exposed to
inappropriate material (Gorwa et al., 2020).
Additionally, real-time monitoring and alert systems significantly enhance protective
measures. Tools such as Bark and Qustodio provide parents and guardians with immediate
notifications when potential risks or inappropriate behaviours are detected, enabling prompt
intervention. These systems have proven especially effective in preventing online grooming,
cyber stalking, and the distribution of explicit content. By catching threats early, these
technologies offer a proactive defence layer that balances prevention and responsiveness (Ali
et al., 2020).
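As a toy illustration of the monitor-and-alert pattern these tools follow, the sketch below scans messages against a handful of risk indicators and notifies a guardian when one is found. It is emphatically not how Bark or Qustodio work internally; commercial products use trained classifiers across many signals, and the pattern names here are invented for demonstration.

```python
import re

# Toy illustration of the monitor-and-alert pattern, not a vendor implementation.
RISK_PATTERNS = {
    "requests_secrecy": re.compile(r"\b(don'?t tell|our secret|keep this between us)\b", re.I),
    "requests_images":  re.compile(r"\bsend (me )?a (photo|pic|picture)\b", re.I),
    "moves_platform":   re.compile(r"\blet'?s (chat|talk) on\b", re.I),
}

def scan_message(message: str):
    """Return the names of any risk indicators present in the message."""
    return [name for name, pattern in RISK_PATTERNS.items() if pattern.search(message)]

def notify_guardian(child: str, message: str, indicators) -> None:
    """Stand-in for a push notification or email to the parent or guardian."""
    print(f"ALERT for {child}: indicators {indicators} in message: {message!r}")

def monitor(child: str, messages) -> None:
    for msg in messages:
        hits = scan_message(msg)
        if hits:
            notify_guardian(child, msg, hits)

if __name__ == "__main__":
    monitor("child_a", [
        "Great game last night!",
        "This is our secret, don't tell your mum. Let's chat on another app.",
    ])
```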
Parental control software has also evolved, offering a comprehensive suite of tools that allow
parents to monitor, filter, and control their children's online activities. Platforms like Net Nanny
and Norton Family enable parents to restrict access to specific websites, apps, or content while
setting time limits on device usage. These controls balance protection and independence,
allowing children to explore the internet safely while developing healthy online habits.
Educational tools and resources are crucial in safeguarding children online. Digital literacy
programs, integrated into platforms like “Be Internet Awesome,” empower children, parents,
and educators to understand online risks, recognize harmful behavior, and respond effectively.
By teaching critical thinking skills and safe online navigation, these tools reduce children's
vulnerability to online predators or cyberbullying. Furthermore, gamification—a technique that
uses game-like elements to engage children in learning—has become an increasingly effective
way to teach online safety. Programs like “Interland”, developed by Google, use gamified
approaches to teach children about phishing, password security, and cyberbullying, making
online safety education interactive and engaging.
Emerging technologies such as blockchain and biometric authentication enhance online
privacy and security. Blockchain’s decentralized nature allows for tamper-proof records of
online activities, making it harder for predators or malicious actors to manipulate data or
conceal their actions. Blockchain technology can also support online identity management, helping to ensure that only authorized individuals interact with children in digital spaces. Similarly,
biometric authentication, using unique biological identifiers like fingerprints or facial
recognition, ensures that only authorized users access sensitive information, adding another
layer of security (Griffin, 2016).
Customization is another strength of safeguarding technologies. Tools such as parental control
apps, monitoring systems, and AI-based solutions often include customizable settings tailored
to different age groups, levels of maturity, or specific concerns. This flexibility allows parents
and educators to adjust filters, time limits, and access controls based on the child's
developmental stage, offering a more nuanced and practical approach to online safety.
Despite these advances, several significant challenges remain. One major issue is the
occurrence of false positives and false negatives in AI and machine learning systems. While
designed to detect harmful content, these systems often struggle with nuance. False positives
can flag innocent interactions as dangerous, frustrating users and discouraging healthy online
engagement. Conversely, false negatives—where harmful content goes undetected—leave
children vulnerable to exploitation (Jada & Mayayise, 2023).
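The trade-off between these two error types can be made concrete with a small worked example. In the sketch below, a hypothetical classifier assigns risk scores to items, and raising or lowering the flagging threshold shifts errors between false positives and false negatives; the scores and labels are fabricated purely to show the mechanism.

```python
# Illustrative sketch of the false-positive / false-negative trade-off in an
# automated flagging system. Scores and labels are fabricated for demonstration.
scored_items = [
    # (risk score from a hypothetical classifier, actually harmful?)
    (0.95, True), (0.85, True), (0.70, False), (0.65, True),
    (0.40, False), (0.35, True), (0.20, False), (0.10, False),
]

def error_rates(items, threshold):
    fp = sum(1 for score, harmful in items if score >= threshold and not harmful)
    fn = sum(1 for score, harmful in items if score < threshold and harmful)
    tp = sum(1 for score, harmful in items if score >= threshold and harmful)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return fp, fn, precision, recall

if __name__ == "__main__":
    for threshold in (0.3, 0.6, 0.9):
        fp, fn, precision, recall = error_rates(scored_items, threshold)
        print(f"threshold={threshold}: false positives={fp}, false negatives={fn}, "
              f"precision={precision:.2f}, recall={recall:.2f}")
```

Running the example shows that a low threshold eliminates false negatives at the cost of flagging innocent interactions, while a high threshold does the reverse; no single setting removes both error types, which is why human oversight and context-aware models remain necessary.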
Accessibility and inclusivity also present challenges. Many safeguarding tools require
specialized hardware or software that may not be available to all families, exacerbating the
digital divide and leaving vulnerable populations without adequate protection. Additionally,
many tools are designed for English-speaking users, limiting their effectiveness in multilingual
or culturally diverse settings. Equitable access to these technologies must be prioritized to
ensure that all children benefit from protective measures.
Privacy concerns are another significant issue. Many safeguarding technologies require
constant monitoring, which can lead to intrusive surveillance. This not only impacts children's
sense of autonomy but also raises ethical questions about consent, especially when minors are
involved. Parents and guardians may be unaware of how their children's data is being used,
leading to a lack of trust in these technologies. Moreover, inadequate data protection measures
can expose sensitive information to malicious actors, further jeopardizing children's safety.
In addition, many of these technologies depend heavily on parental supervision. However, not
all parents have the knowledge or resources to use these tools effectively. Studies show that
children often find ways to bypass parental controls, rendering them ineffective. To address
this gap, a more comprehensive approach that involves children in their own online safety
education is necessary.
Technological solutions often target specific threats like cyberbullying or explicit content, but
they frequently overlook issues such as emotional manipulation or social isolation.
Cyberbullying and explicit content are more visible and easier to detect with current
technology, leading to a focus on these areas. However, emotional manipulation and social
isolation are subtler and can be just as harmful, yet they often go unnoticed by existing tools.
As online predation evolves, safeguarding tools must remain adaptable. Predators continuously
find new ways to exploit technology, making it crucial for safeguarding tools to evolve in
response. This adaptability ensures that tools can address emerging threats effectively. The
rapid pace of technological change can render these tools obsolete quickly, requiring
continuous updates that many developers struggle to maintain. Technology advances at such a
fast rate that safeguarding tools can become outdated almost as soon as they are developed.
This necessitates frequent updates to keep up with new threats and vulnerabilities. However,
not all developers have the resources or capacity to provide these continuous updates, leading
to gaps in protection.
While technologies play a crucial role in safeguarding children online, they cannot replace the
need for robust educational programs. Effective safeguarding requires a holistic approach that
integrates technology, education, and community involvement. Digital literacy, critical
thinking, and social-emotional learning are key components that empower children to navigate
online spaces safely (Falloon, 2020).
Finally, the ethical implications of using some of these technologies, such as surveillance tools
and gamified approaches, must be carefully considered. Striking a balance between protecting
children and respecting their privacy is essential to creating an environment of trust and
empowerment rather than fear.
Therefore, while current technologies offer valuable tools in the fight against online threats,
they are not without significant shortcomings. Addressing these challenges requires a holistic
approach that integrates technological solutions, educational programs, and community
support. As online threats continue to evolve, so too must the strategies and technologies
designed to protect young users.

RECOMMENDATIONS
Looking ahead, the ongoing integration of these technological solutions will necessitate
refinement and collaboration among all stakeholders, including policymakers, technology
developers, educators, and parents. Therefore, the following recommendations are proposed:
1. Enhancing AI and Machine Learning Capabilities
While AI and machine learning are at the forefront of technological solutions for detecting
harmful content and behaviours, their effectiveness remains limited by issues such as false
positives and false negatives. It is essential to invest in refining these technologies to better
capture contextual nuances, especially in detecting more subtle forms of online exploitation,
such as grooming or manipulative conversations. Algorithms need to evolve to process not just
isolated words or images but also the intention and underlying context. This can be achieved
by incorporating a more diverse dataset that represents the wide variety of online interactions
and continually updating models to adapt to emerging threats. However, such improvements
must also prioritize privacy and transparency, avoiding overreach that could result in
unwarranted restrictions or privacy violations (Walters & Novak, 2021).
2. Improving Data Privacy and Ethical Use in Biometric Technologies
Biometric authentication holds significant potential in safeguarding children’s online activities
by verifying identity and preventing unauthorized access. However, its application must be
carefully regulated to avoid misuse or privacy breaches. Developers and policymakers should
prioritize establishing stringent guidelines on the collection, storage, and use of biometric data
to ensure it is not exploited for unethical purposes (Gates, 2011). Transparency in how data is
handled, clear opt-in mechanisms, and the ability for users (or their guardians) to control their
data are critical. Further, institutions that use biometric tools should implement regular audits
to ensure compliance with privacy standards and safeguard children’s rights.
3. Expanding Education-Based Solutions
Technological tools alone cannot comprehensively protect children from online predators;
education must play a pivotal role in equipping children with the knowledge and skills to
navigate the digital world safely. Education-based initiatives, including gamified learning
platforms, should be scaled up to cover a wider demographic. Moreover, these initiatives
should not solely target children; they must also involve parents, educators, and caregivers.
Awareness programs should focus on how to identify signs of online grooming, exploitation,
and trafficking, emphasizing collaborative efforts between children and their guardians.
Governments and educational institutions should integrate online safety into school
curriculums, ensuring a sustained focus on teaching digital literacy and critical thinking.
4. Strengthening Monitoring and Alert Systems
Monitoring and alert systems that track children's online behavior and flag potential threats in
real-time are crucial but remain underdeveloped in terms of their accuracy and effectiveness.
One recommendation is the creation of unified platforms where monitoring systems can
collaborate across devices and services, reducing fragmentation and ensuring comprehensive
coverage (Ajish, 2024). It is also critical that these systems offer granular customization,
enabling parents and guardians to adjust settings based on their child’s maturity level and
unique online habits (Altuna et al., 2020). Additionally, monitoring systems should operate
transparently, ensuring that children’s autonomy and privacy are respected while still providing
necessary safeguards (Hollanek, 2023).
5. Developing Cross-Platform Policies and Standards
One of the greatest challenges in safeguarding children online is the decentralized nature of the
Internet, where different platforms operate under varying standards. To overcome this, tech
companies, governments, and international regulatory bodies should collaborate to develop
universal protocols that platforms must adhere to when handling child safety. Such protocols
could include mandatory reporting systems for detecting and removing harmful content, stricter
age verification methods, and enhanced data-sharing practices between law enforcement
agencies and tech platforms for better tracking of offenders. These standards should extend
across borders, given the global nature of online interactions, creating a more consistent and
reliable framework for child safety.
6. Regulating the Internet of Things (IoT) and Smart Devices
The proliferation of IoT and smart devices has introduced new avenues for predators to exploit,
particularly through insecure devices that collect sensitive information or facilitate
unmonitored communication. To mitigate these risks, manufacturers must be held accountable
for embedding robust security measures into their products, particularly those intended for
child users. Governments should enforce regulations that require IoT devices to include built-
in safety features such as end-to-end encryption, secure authentication processes, and parental
control settings (Singh et al., 2024). Additionally, awareness campaigns aimed at parents and
caregivers should emphasize the importance of securing these devices and understanding the
risks they pose.
7. Incorporating Blockchain Technology for Accountability and Transparency
Blockchain technology offers a promising solution for increasing transparency and
accountability in the digital space, particularly in terms of data security and monitoring online
transactions. Implementing blockchain in child-safeguarding tools can create an immutable
record of online interactions, ensuring that malicious activities are traceable and making it more
difficult for predators to cover their tracks. For example, blockchain could be used to
authenticate the age and identity of users without compromising privacy, ensuring that children
are not able to bypass age restrictions or access inappropriate content (Alotaibi, 2019).
However, blockchain solutions must be developed carefully to avoid inadvertently exposing
children to new vulnerabilities or privacy risks.
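The age-verification idea in this recommendation can be sketched as follows: a trusted issuer attests only to a minimal claim (for example, "over 13" or not) and the platform verifies that attestation without ever collecting a birthdate or identity document. The code below is an assumed, simplified design rather than a deployed standard: HMAC with a shared demo key stands in for a real digital signature or verifiable credential, and issuer key distribution, revocation, and any ledger anchoring are out of scope.

```python
import hmac
import hashlib
import json

# Sketch of privacy-preserving age attestation (assumed design, not a standard).
# A trusted issuer attests only "is_over_13", never the birthdate; the platform
# verifies the attestation without seeing personal data. HMAC stands in for a
# real signature scheme; the key below is for demonstration only.
ISSUER_KEY = b"issuer-secret-key-demo-only"

def issue_attestation(user_id: str, is_over_13: bool) -> dict:
    """Issuer side: sign a minimal claim about the user."""
    claim = {"user_id": user_id, "is_over_13": is_over_13}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_attestation(claim: dict) -> bool:
    """Platform side: check the signature; learn nothing beyond the claim itself."""
    unsigned = {"user_id": claim["user_id"], "is_over_13": claim["is_over_13"]}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

if __name__ == "__main__":
    attestation = issue_attestation("anon-user-42", is_over_13=False)
    print(verify_attestation(attestation))   # True: claim is genuine
    attestation["is_over_13"] = True          # the user tries to alter the claim
    print(verify_attestation(attestation))   # False: tampering is detected
```

The design point is data minimization: the platform learns only the single yes/no fact it needs, which is the property a blockchain-anchored credential scheme would also have to preserve.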
8. Advancing Public-Private Partnerships and International Collaboration
The fight against online predators requires more than just technological innovation; it demands
robust collaboration between private tech companies, governments, non-governmental
organizations (NGOs), and international bodies (Minnaar, 2023). Public-private partnerships
should focus on pooling resources and knowledge to create scalable solutions that protect
children globally. Such collaborations could fund research into emerging threats, develop
universally applicable technological solutions, and promote data-sharing between entities to
catch and prosecute offenders across borders. International cooperation is also crucial in
establishing consistent regulations and standards, especially when it comes to cross-border
online crimes.
9. Investing in Research on Emerging Trends
Finally, continuous research into emerging technologies and their potential risks and benefits
is necessary to stay ahead of the evolving tactics used by online predators. Governments,
educational institutions, and tech companies should collaborate to fund long-term studies on
the effectiveness of current safeguarding tools and identify areas for improvement. This
research should also include ethical considerations, particularly regarding how technologies
like AI, IoT, and blockchain may infringe upon the rights of children or be used inappropriately
by bad actors.
Consequently, while significant strides have been made in leveraging technology to protect
children online, there is still room for improvement. AI and machine learning, IoT, biometric
authentication, blockchain, and education-based solutions offer a multi-faceted approach, but
they are not foolproof. These technologies must continue to evolve alongside the threats they
aim to mitigate, and greater collaboration between stakeholders is essential to ensuring a safer
digital environment for children. Ultimately, a combination of technological advancements,
policy development, and education will provide the most comprehensive protection for
children against online predators.
REFERENCES
Abdelmaboud, A., Ahmed, A. I. A., Abaker, M., Eisa, T. A. E., Albasheer, H., Ghorashi, S. A.,
& Karim, F. K. (2022). Blockchain for IoT applications: taxonomy, platforms, recent
advances, challenges and future research directions. Electronics, 11(4), 630.
Abu Deeb, F. (2024). Enhancing Cybersecurity with Extended Reality: A Systematic
Review. Journal of Computer Information Systems, 1-15.
Açar, K. V. (2016). Sexual Extortion of Children in Cyberspace. International Journal of
Cyber Criminology, 10(2).
Adel, A., & Norouzifard, M. (2024). Weaponization of the Growing Cybercrimes inside the
Dark Net: The Question of Detection and Application. Big Data and Cognitive
Computing, 8(8), 91.
Ajish, D. (2024). Streamlining Cybersecurity: Unifying Platforms for Enhanced
Defense. International Journal of Information Technology, Research and Applications, 3(2),
48-57.
Albalawi, S., Alshahrani, L., Albalawi, N., Kilabi, R., & Alhakamy, A. A. (2022). A
comprehensive overview on biometric authentication systems using artificial intelligence
techniques. International Journal of Advanced Computer Science and Applications, 13(4), 1-
11.
Ali, S., Elgharabawy, M., Duchaussoy, Q., Mannan, M., & Youssef, A. (2020, December).
Betrayed by the guardian: Security and privacy risks of parental control solutions.
In Proceedings of the 36th Annual Computer Security Applications Conference (pp. 69-83).
Alnajim, A. M., Habib, S., Islam, M., AlRawashdeh, H. S., & Wasim, M. (2023). Exploring
cybersecurity education and training techniques: a comprehensive review of traditional,
virtual reality, and augmented reality approaches. Symmetry, 15(12), 2175.
Alotaibi, B. (2019). Utilizing blockchain to overcome cyber security concerns in the internet
of things: A review. IEEE Sensors Journal, 19(23), 10953-10971.
Altuna, J., Martínez-de-Morentin, J. I., & Lareki, A. (2020). The impact of becoming a parent
about the perception of Internet risk behaviors. Children and Youth Services Review, 110,
104803.
Amankwah-Amoah, J., Khan, Z., Wood, G., & Knight, G. (2021). COVID-19 and
digitalization: The great acceleration. Journal of business research, 136, 602-611.
Ambagtsheer, F. (2021). Understanding the challenges to investigating and prosecuting organ
trafficking: a comparative analysis of two cases. Trends in Organized Crime, 1-28.
Arora, A., Nakov, P., Hardalov, M., Sarwar, S. M., Nayak, V., Dinkov, Y., ... &
Bailey, J. (2010). Twenty Years Later Taylor Still Has It Right: How the Canadian Human
Rights Act’s Hate Speech Provision Continues to Contribute to Equality. The Supreme Court
of Canada and Social Justice: Commitment, Retrenchment or Retreat, Sheila McIntyre and
Sanda Rodgers, eds., LexisNexis Canada.
Bailey, J. (2017). From “zero tolerance” to “safe and accepting”: Surveillance and equality in
the evolution of Ontario education law and policy. Education Law Journal, 26(2), 147-180.
Bailey, J., Henry, N., & Flynn, A. (2021). Technology-Facilitated Violence and Abuse:
International Perspectives and Experiences. In Emerald Publishing Limited eBooks (pp. 1–
17). https://doi.org/10.1108/978-1-83982-848-520211001
Baker, I., 2022. Blackmail on the Internet: an exploration of the online sexual coercion of
children. Nottingham Trent University (United Kingdom).
Betrand, C. U., Onyema, C. J., Benson-Emenike, M. E., & Kelechi, D. A. (2023).
Authentication system using biometric data for face recognition. International Journal of
Sustainable Development Research, 68-78.
Binder, M. (2019, February 22). YouTube’s pedophilia problem: More than 400 channels
deleted as advertisers flee over child predators. Mashable.
https://mashable.com/article/youtube-wakeup-child-exploitation-explained
Black, P. J., Wollis, M., Woodworth, M., & Hancock, J. T. (2015). A linguistic analysis of
grooming strategies of online child sex offenders: Implications for our understanding of
predatory sexual behavior in an increasingly computer-mediated world. Child abuse &
neglect, 44, 140-149.
Blackwell, C. (2013). Teacher practices with mobile technology integrating tablet computers
into the early childhood classroom. Journal of Education Research, 7(4).
Bonneau, J., Herley, C., Van Oorschot, P. C., & Stajano, F. (2012, May). The quest to replace
passwords: A framework for comparative evaluation of web authentication schemes. In 2012
IEEE symposium on security and privacy (pp. 553-567). IEEE.
Brown, A. J. (2020). “Should I stay or should I leave?”: Exploring (dis) continued Facebook
use after the Cambridge Analytica scandal. Social media+ society, 6(1), 2056305120913884.
Buil-Gil, D., Kemp, S., Kuenzel, S., Coventry, L., Zakhary, S., Tilley, D., & Nicholson, J.
(2023). The digital harms of smart home devices: A systematic literature review. Computers
in Human Behavior, 145, 107770.
Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy
disparities in commercial gender classification. In Conference on fairness, accountability and
transparency (pp. 77-91). PMLR.
Buttussi, F., & Chittaro, L. (2017). Effects of different types of virtual reality display on
presence and learning in a safety training scenario. IEEE transactions on visualization and
computer graphics, 24(2), 1063-1076.
Carlson, B. (2019). Disrupting the master narrative: Indigenous people and tweeting colonial
history. Griffith Review, (64), 224-234.
Carr, J. (2003). Child abuse, child pornography, and the internet. London: NCH.
Cassidy, W., Faucher, C., & Jackson, M. (2013). Cyberbullying among youth: A
comprehensive review of current international research and its implications and application to
policy and practice. School Psychology International, 34(6), 575-612.
Chen, L. W., Chen, T. P., Chen, H. M., & Tsai, M. F. (2019). Crowdsourced children
monitoring and finding with holding up detection based on internet-of-things
technologies. IEEE Sensors Journal, 19(24), 12407-12417.
Chen, X., Gao, W., Chu, Y., & Song, Y. (2024). Enhancing Interaction in Virtual-Real
Architectural Environments: A Comparative Analysis of Generative AI-driven Reality
Approaches. Building and Environment, 112113.
Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest L. Rev., 49,
345.
Colliver, B., Coyle, A., & Silvestri, M. (2019). The ‘online othering’of transgender people in
relation to ‘gender neutral toilets’. Online othering: Exploring digital violence and
discrimination on the web, 215-237.
Connellan, S. (2023, November 16). “Fortnite” players can now report others using voice
recordings. Here’s how. Mashable. https://mashable.com/article/fortnite-report-voice-audio
Craven, S., Brown, S., & Gilchrist, E. (2006). Sexual grooming of children: Review of
literature and theoretical considerations. Journal of sexual aggression, 12(3), 287-299.
Debas, E. A., Alajlan, R. S., & Rahman, M. H. (2023, February). Biometric in cyber security:
A mini review. In 2023 International Conference on Artificial Intelligence in Information and
Communication (ICAIIC) (pp. 570-574). IEEE.
Demmese, F., Yuan, X., & Dicheva, D. (2020, December). Evaluating the effectiveness of
gamification on students’ performance in a cybersecurity course. In Journal of the
Colloquium for Information System Security Education (Vol. 8, No. 1).
Dhinakaran, D., Sankar, S. M., Selvaraj, D., & Raja, S. E. (2024). Privacy-Preserving Data in
IoT-based Cloud Systems: A Comprehensive Survey with AI Integration. arXiv preprint
arXiv:2401.00794.
Dichev, C., & Dicheva, D. (2017). Gamifying education: what is known, what is believed and
what remains uncertain: a critical review. International journal of educational technology in
higher education, 14, 1-36.
Dodel, M., & Mesch, G. (2018). Inequality in digital skills and the adoption of online safety
behaviors. Information, Communication & Society, 21(5), 712-728.
Dombrowski, S. C., Gischlar, K. L., & Durst, T. (2007). Safeguarding young people from
cyber pornography and cyber sexual predation: A major dilemma of the Internet. Child Abuse
Review: Journal of the British Association for the Study and Prevention of Child Abuse and
Neglect, 16(3), 153-170.
Dombrowski, S. C., LeMasney, J. W., Ahia, C. E., & Dickson, S. A. (2004). Protecting
children from online sexual predators: technological, psychoeducational, and legal
considerations. Professional Psychology: Research and Practice, 35(1), 65.
Duan, C., & Grimmelmann, J. (2024). Content moderation on end-to-end encrypted systems:
A legal analysis. Geo. L. Tech. Rev., 8, 1.
ECPAT. (2022, May 11). ECPAT. https://ecpat.org/story/international-women-and-girls-
series-5-how-does-trafficking-affect-women-girls-and-children/
Edwards, G., Christensen, L. S., Rayment-McHugh, S., & Jones, C. (2021). Cyber strategies
used to combat child sexual abuse material. Trends and issues in crime and criminal justice,
(636), 1-16.
Equality Now. (2023, February 8). Ending Online Sexual Exploitation and Abuse of Women
and Girls: A Call for International Standards - Equality Now.
https://equalitynow.org/resource/ending-online-sexual-exploitation-and-abuse-of-women-
and-girls-a-call-for-international-standards/
Erickson, L. B., Wisniewski, P., Xu, H., Carroll, J. M., Rosson, M. B., & Perkins, D. F.
(2016). The boundaries between: Parental involvement in a teen's online world. Journal of
the Association for Information Science and Technology, 67(6), 1384-1403.
Europol. (2020). EXPLOITING ISOLATION: Offenders and victims of online child sexual
abuse during the COVID-19 pandemic.
https://www.europol.europa.eu/sites/default/files/documents/europol_covid_report-
cse_jun2020v.3_0.pdf
Faith, B. F., Long, Z. A., & Hamid, S. (2024, May). Promoting cybersecurity knowledge via
gamification: an innovative intervention design. In 2024 Third International Conference on
Distributed Computing and High Performance Computing (DCHPC) (pp. 1-8). IEEE.
Falloon, G. (2020). From digital literacy to digital competence: the teacher digital
competency (TDC) framework. Educational technology research and development, 68(5),
2449-2472.
Faraz, A., Mounsef, J., Raza, A., & Willis, S. (2022). Child safety and protection in the online
gaming ecosystem. IEEE Access, 10, 115895-115913.
Fiani, C., Bretin, R., Macdonald, S. A., Khamis, M., & McGill, M. (2024, May). "Pikachu
would electrocute people who are misbehaving": Expert, Guardian and Child Perspectives on
Automated Embodied Moderators for Safeguarding Children in Social Virtual Reality. In
Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-23).
Finck, M. (2018). Blockchains and data protection in the European Union. Eur. Data Prot. L.
Rev., 4, 17.
Finkelhor, D., & Hotaling, G. T. (1984). Sexual abuse in the national incidence study of child
abuse and neglect: An appraisal. Child abuse & neglect, 8(1), 23-32.
Finkelhor, D., Walsh, K., Jones, L., Mitchell, K., & Collier, A. (2021). Youth internet safety
education: Aligning programs with the evidence base. Trauma, Violence, & Abuse, 22(5),
1233-1247.
Flynn, A. (2019). Image-based sexual abuse. In Oxford research encyclopedia of criminology
and criminal justice.
Gandhi, R. D., & Patel, D. S. (2018). Virtual reality–opportunities and challenges. Virtual
Reality, 5(01), 2714-2724.
Gates, K. A. (2011). Our biometric future: Facial recognition technology and the culture of
surveillance (Vol. 2). NYU Press.
George, A. S. (2024). Virtual Violence: Legal and Psychological Ramifications of Sexual
Assault in Virtual Reality Environments. Partners Universal International Innovation Journal,
2(1), 96-114.
George, M. J., & Odgers, C. L. (2015). Seven fears and the science of how mobile
technologies may be influencing adolescents in the digital age. Perspectives on psychological
science, 10(6), 832-851.
Gezinski, L. B., & Gonzalez-Pons, K. M. (2024). Sex trafficking and technology: A
systematic review of recruitment and exploitation. Journal of Human Trafficking, 10(3), 497-
511.
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the
hidden decisions that shape social media. Yale University Press.
Gjertsen, E. G. B., Gjære, E. A., Bartnes, M., & Flores, W. R. (2017, February). Gamification
of Information Security Awareness and Training. In ICISSP (pp. 59-70).
Glavin, L. (2024, August 30). Bias in Biometrics: How Organizations Can Launch Remote
Identity Verification Confidently - FIDO Alliance. FIDO Alliance.
https://fidoalliance.org/bias-in-biometrics-how-organizations-can-launch-remote-identity-
verification-confidently/
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical
and political challenges in the automation of platform governance. Big Data & Society, 7(1),
2053951719897945.
Green, A. (2019). Cucks, fags and useful idiots: The othering of dissenting white
masculinities online. Online othering: Exploring digital violence and discrimination on the
web, 65-89.
Green, L., Haddon, L., Livingstone, S., Holloway, D., Jaunzems, K., Stevenson, K. J., &
O'Neill, B. (2019). Parents' failure to plan for their children's digital futures. Media@LSE
Working Paper Series.
Griffin, P. H. (2016). Biometric-based cybersecurity techniques. In Advances in Human
Factors in Cybersecurity: Proceedings of the AHFE 2016 International Conference on Human
Factors in Cybersecurity, July 27-31, 2016, Walt Disney World®, Florida, USA (pp. 43-53).
Springer International Publishing.
Hafner, L., Peifer, T. P., & Hafner, F. S. (2024). Equal accuracy for Andrew and Abubakar—
detecting and mitigating bias in name-ethnicity classification algorithms. AI & society, 39(4),
1605-1629.
Henriquez, M. (2021, December 8). The Top 12 Data Breaches of 2019. Security Magazine.
https://www.securitymagazine.com/articles/91366-the-top-12-data-breaches-of-2019
Henry, N., & Umbach, R. (2024). Sextortion: Prevalence and correlates in 10 countries.
Computers in Human Behavior, 158, 108298.
Henry, N., Flynn, A., & Powell, A. (2018). Policing image-based sexual abuse: Stakeholder
perspectives. Police practice and research, 19(6), 565-581.
Henry, N., Flynn, A., & Powell, A. (2020). Technology-facilitated domestic and sexual
violence: A review. Violence against women, 26(15-16), 1828-1854.
Hernandez-de-Menendez, M., Morales-Menendez, R., Escobar, C. A., & Arinez, J. (2021).
Biometric applications in education. International Journal on Interactive Design and
Manufacturing (IJIDeM), 15, 365-380.
Hine, E., Rezende, I. N., Roberts, H., Wong, D., Taddeo, M., & Floridi, L. (2024). Safety and
privacy in immersive extended reality: An analysis and policy recommendations. Digital
Society, 3(2), 33.
Hodge, R. (2019, December 27). 2019 Data Breach Hall of Shame: These were the biggest
data breaches of the year. CNET. https://www.cnet.com/news/privacy/2019-data-breach-hall-
of-shame-these-were-the-biggest-data-breaches-of-the-year/
Hoffmann, A. (2022). Regulating the Internet in Times of Mass Surveillance: A Universal
Global Space with Universal Human Rights? In Problematising Intelligence Studies (pp.
181-200). Routledge.
Hofmann, F., Wurster, S., Ron, E., & Böhmecke-Schwafert, M. (2017, November). The
immutability concept of blockchains and benefits of early standardization. In 2017 ITU
Kaleidoscope: Challenges for a Data-Driven Society (ITU K) (pp. 1-8). IEEE.
Hoge, E., Bickham, D., & Cantor, J. (2017). Digital media, anxiety, and depression in
children. Pediatrics, 140(Supplement_2), S76-S80.
Hollandsworth, R., Donovan, J., & Welch, M. (2017). Digital citizenship: You can’t go home
again. TechTrends, 61, 524-530.
Hollanek, T. (2023). AI transparency: a matter of reconciling design with critique. AI &
Society, 38(5), 2071-2079.
Hopper, E., & Hidalgo, J. (2006). Invisible chains: Psychological coercion of human
trafficking victims. Intercultural Hum. Rts. L. Rev., 1, 185.
Howard, T. (2019). Sextortion: Psychological effects experienced and seeking help and
reporting among emerging adults (Doctoral dissertation, Walden University).
Ibrahim, A. (2024, February). Guarding the Future of Gaming: The Imperative of
Cybersecurity. In 2024 2nd International Conference on Cyber Resilience (ICCR) (pp. 1-9).
IEEE.
ILO. (2024, August 14). Love146. https://love146.org/learn/
Immigration & Customs Enforcement. (2023, August 22). ICE.
https://www.ice.gov/features/sextortion
INTERPOL. (2023). INTERPOL issues global warning on human trafficking-fueled fraud.
https://www.interpol.int/en/News-and-Events/News/2023/INTERPOL-issues-global-warning-
on-human-trafficking-fueled-fraud
Jada, I., & Mayayise, T. O. (2023). The impact of artificial intelligence on organisational
cyber security: An outcome of a systematic literature review. Data and Information
Management, 100063.
Johnson, A. (2024, July 9). How to Address Children’s Online Safety in the United States.
ITIF. https://itif.org/publications/2024/06/03/how-to-address-childrens-online-safety-in-
united-states/
Jongerius, S. (2024, February 22). GDPR’s Right to be Forgotten in Blockchain: it’s not black
and white. TechGDPR. https://techgdpr.com/blog/gdpr-right-to-be-forgotten-blockchain/
Jordan, J. M. (2024). The Rise of the Algorithms: How YouTube and TikTok Conquered the
World. Penn State Press.
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and
prospects. Science, 349(6245), 255-260.
Kaimara, P., Oikonomou, A., & Deliyannis, I. (2022). Could virtual reality applications pose
real risks to children and adolescents? A systematic review of ethical issues and concerns.
Virtual Reality, 26(2), 697-735.
Karadimce, A., & Bukalevska, M. (2023). Threats Targeting Children on Online Social
Networks. WSEAS Transactions on Advances in Engineering Education, 20,
25–31. https://doi.org/10.37394/232010.2023.20.4
Karayianni, E., Fanti, K. A., Diakidoy, I. A., Hadjicharalambous, M. Z., & Katsimicha, E.
(2017). Prevalence, contexts, and correlates of child sexual abuse in Cyprus. Child Abuse &
Neglect, 66, 41-52.
Karunamurthy, A., Kiruthivasan, R., & Gauthamkrishna, S. (2023). Human-in-the-Loop
Intelligence: Advancing AI-Centric Cybersecurity for the Future. Quing: International Journal
of Multidisciplinary Scientific Research and Development, 2(3), 20-43.
Katsh, M. E., & Rabinovich-Einy, O. (2017). Digital justice: technology and the internet of
disputes. Oxford University Press.
Kavanagh, E., Jones, I., & Sheppard-Marks, L. (2020). Towards typologies of virtual
maltreatment: Sport, digital cultures & dark leisure. In Re-thinking leisure in a digital
age (pp. 75-88). Routledge.
Ke, F. (2016). Designing and integrating purposeful learning in game play: A systematic
review. Educational Technology Research and Development, 64, 219-244.
Kerrigan, N. (2019). Rural racism in the digital age. Online othering: Exploring digital
violence and discrimination on the Web, 259-279.
Kowalski, R. M., Giumetti, G. W., Schroeder, A. N., & Lattanner, M. R. (2014). Bullying in
the digital age: a critical review and meta-analysis of cyberbullying research among youth.
Psychological bulletin, 140(4), 1073.
Land, M. (2013). Toward an international law of the internet. Harv. Int'l LJ, 54, 393.
Lanitis, A. (2010). A survey of the effects of aging on biometric identity verification.
International Journal of Biometrics, 2(1), 34-52.
Le Compte, A., Elizondo, D., & Watson, T. (2015, May). A renewed approach to serious
games for cyber security. In 2015 7th International Conference on Cyber Conflict:
Architectures in Cyberspace (pp. 203-216). IEEE.
Lindau, J. D. (2022). Surveillance and the Vanishing Individual: Power and Privacy in the
Digital Age. Rowman & Littlefield.
Livingstone, S., & Blum-Ross, A. (2020). Parenting for a digital future: How hopes and fears
about technology shape children's lives. Oxford University Press, USA.
Livingstone, S., & Smith, P. K. (2014). Annual research review: Harms experienced by child
users of online and mobile technologies: The nature, prevalence and management of sexual
and aggressive risks in the digital age. Journal of child psychology and psychiatry, 55(6),
635-654.
Livingstone, S., & Stoilova, M. (2021). Using global evidence to benefit children’s online
opportunities and minimise risks. Contemporary Social Science.
Livingstone, S., Ólafsson, K., Helsper, E. J., Lupiáñez-Villanueva, F., Veltri, G. A., &
Folkvord, F. (2017). Maximizing opportunities and minimizing risks for children online: The
role of digital skills in emerging strategies of parental mediation. Journal of
Communication, 67(1), 82-105.
Livingstone, S., Stoilova, M., & Nandagiri, R. (2019). Children's data and privacy online:
growing up in a digital age: an evidence review.
Llansó, E. J. (2020). No amount of “AI” in content moderation will solve filtering’s prior-
restraint problem. Big Data & Society, 7(1), 2053951720920686.
Llansó, E., van Hoboken, J., Leerssen, P., & Harambam, J. (2020). Artificial intelligence,
content moderation, and freedom of expression. Transatlantic Working Group.
Lupton, D., & Williamson, B. (2017). The datafied child: The dataveillance of children and
implications for their rights. New media & society, 19(5), 780-794.
Majaranta, P., & Bulling, A. (2014). Eye tracking and eye-based human-computer interaction.
In Advances in physiological computing (pp. 39-65). London: Springer London.
Malone, M., Wang, Y., James, K., Anderegg, M., Werner, J., & Monrose, F. (2021, March). To
gamify or not? on leaderboard effects, student engagement, and learning outcomes in a
cybersecurity intervention. In Proceedings of the 52nd ACM Technical Symposium on
Computer Science Education (pp. 1135-1141).
Manne, G. A., Sperry, B., & Stout, K. (2022). Who Moderates the Moderators? A Law &
Economics Approach to Holding Online Platforms Accountable Without Destroying the
Internet. Rutgers Computer & Tech. LJ, 49, 26.
Manthiramoorthy, C., & Khan, K. M. S. (2024). Comparing several encrypted cloud storage
platforms. International Journal of Mathematics, Statistics, and Computer Science, 2, 44-62.
Marsoof, A., Luco, A., Tan, H., & Joty, S. (2023). Content-filtering AI systems–limitations,
challenges and regulatory approaches. Information & Communications Technology Law,
32(1), 64-101.
Martin, J., & Alaggia, R. (2013). Sexual abuse images in cyberspace: Expanding the ecology
of the child. Journal of child sexual abuse, 22(4), 398-415.
Marzano, G. (2021). Anti-Cyberbullying Interventions. In Research Anthology on School
Shootings, Peer Victimization, and Solutions for Building Safer Educational Institutions (pp.
468-488). IGI Global.
Mensah, G. B. (2023). Artificial intelligence and ethics: a comprehensive review of bias
mitigation, transparency, and accountability in AI Systems. Preprint, November, 10.
Meurens, N., Notté, E., Wanat, A., & Mariano, L. (2022). Child safety by design that works
against online sexual exploitation of children. Down to Zero Alliance, Netherlands.
Minnaar, A. (2023). An examination of early international and national efforts to combat
online child pornography and child sexual exploitation and abuse material on the
Internet. Child Abuse Research in South Africa, 24(2), 1-26.
Molok, N. N. A., Hakim, N. A. H. A., & Jamaludin, N. S. (2023). SmartParents: Empowering
Parents to Protect Children from Cyber Threats. International Journal on Perceptive and
Cognitive Computing, 9(2), 73-79.
Flannery O'Connor, J., & Moxley, E. (2023, November 14). Our approach to responsible AI innovation.
blog.youtube. https://blog.youtube/inside-youtube/our-approach-to-responsible-ai-innovation/
Muthazhagu, V. H., Surendiran, B., & Arulmurugaselvi, N. (2024, July). Navigating the AI
Landscape: A Comparative Study of Models, Applications, and Emerging Trends. In 2024
International Conference on Signal Processing, Computation, Electronics, Power and
Telecommunication (IConSCEPT) (pp. 1-8). IEEE.
Nahmias, Y., & Perel, M. (2021). The oversight of content moderation by AI: impact
assessments and their limitations. Harv. J. on Legis., 58, 145.
Nguyen, T. (2024, June 25). Who is being targeted most by sextortion on social media? The
answer may surprise you. USA TODAY.
https://www.usatoday.com/story/news/nation/2024/06/25/financial-sextortion-teenage-boys-
social-media-report/74200070007/
Niu, Y. H., Gao, S., Zhang, H. K., & Gong, Y. J. (2024). Enhancing Content Moderation in
Wireless Mobile Networks: A Decentralized Quality Management Approach. Journal of
Information Science & Engineering, 40(4).
, M., Kovács, G., & Wersényi, G. (2021). The Regulation of Digital Reality in Nutshell.
In 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom) (pp.
1-7).
O’Connell, R. (2003). A typology of child cybersexploitation and online grooming practices.
Cyberspace Research Unit, University of Central Lancashire. https://image.guardian.co.uk/sys-
files/Society/documents/2003/07/17/Groomingreport.pdf
O’Malley, R. L. (2023). Short-term and long-term impacts of financial sextortion on victim’s
mental well-being. Journal of interpersonal violence, 38(13-14), 8563-8592.
O'Gorman, L. (2003). Comparing passwords, tokens, and biometrics for user
authentication. Proceedings of the IEEE, 91(12), 2021-2040.
Okeh, A. (2023, February 7). Google to enable default SafeSearch filter for signed-in users.
Punch Newspapers. https://punchng.com/google-to-enable-default-safesearch-filter-for-
signed-in-users/
Ortega-Barón, J., Machimbarrena, J. M., Calvete, E., Orue, I., Pereda, N., & González-
Cabrera, J. (2022). Epidemiology of online sexual solicitation and interaction of minors with
adults: A longitudinal study. Child Abuse & Neglect, 131, 105759.
Paik, S., Mays, K. K., & Katz, J. E. (2022). Invasive yet inevitable? Privacy normalization
trends in biometric technology. Social Media + Society, 8(4), 20563051221129147.
Patchin, J. W. & Hinduja, S. (2024). Cyberbullying Facts. Cyberbullying Research Center.
https://cyberbullying.org/facts
Patchin, J. W., & Hinduja, S. (2015). Measuring cyberbullying: Implications for
research. Aggression and violent behavior, 23, 69-74.
Pavan, E. (2017). Internet intermediaries and online gender-based violence. In Gender,
technology and violence (pp. 62-78). Routledge.
Pea, R. D., Biernacki, P., Bigman, M., Boles, K., Coelho, R., Docherty, V., ... & Vishwanath,
A. (2023). Four surveillance technologies creating challenges for education. Learning:
Designing the Future, 317.
PEN America. (2024, September 25). Shouting into the Void - PEN America.
https://pen.org/report/shouting-into-the-void/
Perasso, G. (2020). Cyberbullying detection through machine learning: Can technology help
to prevent internet bullying? International Journal of Management and Humanities, 4(11),
57-69.
Polak, S., Schiavo, G., & Zancanaro, M. (2022, April). Teachers’ perspective on artificial
intelligence education: An initial investigation. In CHI Conference on Human Factors in
Computing Systems Extended Abstracts (pp. 1-7).
Quayle, E., & Koukopoulos, N. (2019). Deterrence of online child sexual abuse and
exploitation. Policing: A Journal of Policy and Practice, 13(3), 345-362.
Quayle, E., & Newman, E. (2015). The role of sexual images in online and offline sexual
behaviour with minors. Current Psychiatry Reports, 17, 1-6.
Quayle, E., & Taylor, M. (2001). Child seduction and self-representation on the
Internet. CyberPsychology & Behavior, 4(5), 597-608.
Quayle, E., & Taylor, M. (2003). Model of problematic Internet use in people with a sexual
interest in children. CyberPsychology & Behavior, 6(1), 93-106.
Quayle, E. (2016). Global Kids Online. http://www.globalkidsonline.net/sexual-exploitation
Quayle, E., Allegro, S., Hutton, L., Sheath, M., & Lööf, L. (2014). Rapid skill acquisition and
online sexual grooming of children. Computers in Human Behavior, 39, 368-375.
Rebhi, T. (2023). Challenges and Prospects in Enforcing Legal Protection of Children from
Online Sexual Exploitation. Krytyka Prawa, 21.
Redmiles, E. M., Bodford, J., & Blackwell, L. (2019, July). “I just want to feel safe”: A Diary
Study of Safety Perceptions on Social Media. In Proceedings of the International AAAI
Conference on Web and Social Media (Vol. 13, pp. 405-416).
Rogers-Whitehead, C. (2019). Digital citizenship: teaching strategies and practice from the
field. Rowman & Littlefield.
Sanka, A. I., & Cheung, R. C. (2021). A systematic review of blockchain scalability: Issues,
solutions, analysis and future research. Journal of Network and Computer Applications, 195,
103232.
Sarkar, S. (2015). Use of technology in human trafficking networks and sexual exploitation:
A cross-sectional multi-country study. Transnational Social Review, 5(1), 55-68.
Schiavo, G., Roumelioti, E., Deppieri, G., & Marconi, A. (2024, June). Gamification
Strategies for Child Protection: Best Practices for Applying Digital Gamification in Child
Sexual Abuse Prevention. In Proceedings of the 23rd Annual ACM Interaction Design and
Children Conference (pp. 282-289).
Schneider, P. J., & Rizoiu, M. A. (2023). The effectiveness of moderating harmful online
content. Proceedings of the National Academy of Sciences, 120(34), e2307360120.
Seale, J., & Schoenberger, N. (2018). Be internet awesome: A critical analysis of Google's
child-focused internet safety program. Emerging Library & Information Perspectives, 1(1),
34-58.
Sezer, N., & Tunçer, S. (2021). Cyberbullying hurts: the rising threat to youth in the digital
age. In Digital siege (pp. 179-194). Istanbul: Istanbul University Press.
https://doi.org/10.26650/B/SS07, 9.
Shen, H., DeVos, A., Eslami, M., & Holstein, K. (2021). Everyday algorithm auditing:
Understanding the power of everyday users in surfacing harmful algorithmic
behaviors. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-29.
Singh, S., & Nambiar, V. (2024). Role of Artificial Intelligence in the Prevention of Online
Child Sexual Abuse: A Systematic Review of Literature. Journal of Applied Security
Research, 1-42.
Smith, C. (2018). Cyber security, safety, & ethics education (Master's thesis, Utica College).
Song, S. (2020). Keeping Private Messages Private: End-to-End Encryption on Social Media.
In Boston College Intellectual Property and Technology Forum (Vol. 2020, pp. 1-12).
Southern, R., & Harmer, E. (2019). Othering political women: Online misogyny, racism and
ableism towards women in public life. Online othering: Exploring digital violence and
discrimination on the web, 187-210.
Staksrud, E., & Livingstone, S. (2009). Children and online risk: powerless victims or
resourceful participants? Information, Communication & Society, 12(3), 364-387.
Sun, K., Zou, Y., Radesky, J., Brooks, C., & Schaub, F. (2021). Child safety in the smart
home: parents' perceptions, needs, and mitigation strategies. Proceedings of the ACM on
Human-Computer Interaction, 5(CSCW2), 1-41.
Tanwar, S., Tyagi, S., Kumar, N., & Obaidat, M. S. (2019). Ethical, legal, and social
implications of biometric technologies. Biometric-based physical and cybersecurity systems,
535-569.
Tapscott, D., & Tapscott, A. (2016). Blockchain revolution: how the technology behind
bitcoin is changing money, business, and the world. Penguin.
Taylor, E. (2012). The rise of the surveillance school. In Routledge handbook of surveillance
studies (pp. 225-231). Routledge.
Taylor, J., & Pagliari, C. (2018). Mining social media data: How are research sponsors and
researchers addressing the ethical challenges? Research Ethics, 14(2), 1-39.
Tejani, A. S., Ng, Y. S., Xi, Y., & Rayan, J. C. (2024). Understanding and mitigating bias in
imaging artificial intelligence. RadioGraphics, 44(5), e230067.
Todres, J. (2010). Taking prevention seriously: Developing a comprehensive response to
child trafficking and sexual exploitation. Vand. J. Transnat'l L., 43, 1.
Uitts, B. S. (2022). Sex Trafficking of Children Online: Modern Slavery in Cyberspace.
Rowman & Littlefield.
Wang, F. (2024). Breaking the silence: Examining process of cyber sextortion and victims’
coping strategies. International Review of Victimology.
https://doi.org/10.1177/02697580241234331
Wang, G., Zhao, J., Van Kleek, M., & Shadbolt, N. (2021). Protection or punishment?
Relating the design space of parental control apps and perceptions about them to support
parenting for online safety. Proceedings of the ACM on Human-Computer
Interaction, 5(CSCW2), 1-26.
Weru, T., Sevilla, J., Olukuru, J., Mutegi, L., & Mberi, T. (2017, May). Cyber-smart children,
cyber-safe teenagers: Enhancing internet safety for children. In 2017 IST-Africa Week
Conference (IST-Africa) (pp. 1-8). IEEE.
Whittle, H., Hamilton-Giachritsis, C., Beech, A., & Collings, G. (2013). A review of online
grooming: Characteristics and concerns. Aggression and violent behavior, 18(1), 62-70.
Wiggers, K. (2024, January 29). OpenAI partners with Common Sense Media to
collaborate on AI guidelines. TechCrunch. https://techcrunch.com/2024/01/29/openai-
partners-with-common-sense-media-to-collaborate-on-ai-guidelines/
Williams, R., Elliott, I. A., & Beech, A. R. (2013). Identifying sexual grooming themes used
by internet sex offenders. Deviant behavior, 34(2), 135-152.
Winters, G. M., Schaaf, S., Grydehøj, R. F., Allan, C., Lin, A., & Jeglic, E. L. (2022). The
sexual grooming model of child sex trafficking. Victims & Offenders, 17(1), 60-77.
Wittes, B., Poplin, C., Jurecic, Q., & Spera, C. (2016). Sextortion: Cybersecurity, teenagers,
and remote sexual assault. Center for Technology Innovation at Brookings, 11, 1-47.
Wolak, J., & Finkelhor, D. (2016). Sextortion: Findings from a survey of 1,631 victims.
http://www.unh.edu/ccrc/pdf/Sextortion_RPT_FNL_rev0803.pdf
Wortley, R. K., & Smallbone, S. (2006). Child pornography on the internet (pp. 5-2006).
Washington, DC: US Department of Justice, Office of Community Oriented Policing
Services.
Wu, T., Tien, K. Y., Hsu, W. C., & Wen, F. H. (2021). Assessing the effects of gamification on
enhancing information security awareness knowledge. Applied Sciences, 11(19), 9266.
Wulandari, C. E., Firdaus, F. A., & Saifulloh, F. (2024). Promoting Inclusivity Through
Technology: A Literature Review in Educational Settings. Journal of Learning and
Technology, 3(1), 19-28.
Wylde, V., Prakash, E., Hewage, C., & Platts, J. (2023). Ethical challenges in the use of
digital technologies: AI and big data. In Digital Transformation in Policing: The Promise,
Perils and Solutions (pp. 33-58). Cham: Springer International Publishing.
Yaseen, A. (2023). AI-driven threat detection and response: A paradigm shift in
cybersecurity. International Journal of Information and Cybersecurity, 7(12), 25-43.
Ybarra, M. L., Mitchell, K. J., & Korchmaros, J. D. (2011). National trends in exposure to
and experiences of violence on the Internet among children. Pediatrics, 128(6), e1376-e1386.
Yu, Y., Sharma, T., Hu, M., Wang, J., & Wang, Y. (2024). Exploring Parent-Child Perceptions
on Safety in Generative AI: Concerns, Mitigation Strategies, and Design Implications. arXiv
preprint arXiv:2406.10461.
Zakaria, N., Yew, L. K., Alias, N. M. A., & Husain, W. (2011, December). Protecting the
privacy of children in social networking sites with rule-based privacy tools. In 8th
International Conference on High-capacity Optical Networks and Emerging
Technologies (pp. 253-257). IEEE.
Zhu, C. (2024, September 23). Parents are finding new ways to monitor their kids. But some
experts are concerned. CBC. https://www.cbc.ca/radio/thecurrent/parenting-surveillance-
concerns-1.7329667
Zyskind, G., & Nathan, O. (2015, May). Decentralizing privacy: Using blockchain to protect
personal data. In 2015 IEEE security and privacy workshops (pp. 180-184). IEEE.