ssrn-4986870
The rapid proliferation of digital technologies has fundamentally altered how children interact
with the world, offering unprecedented opportunities for learning, socializing, and
entertainment. However, this digital landscape also harbors significant risks, particularly the
threat of online predators. This research paper explores current trends and future directions
in technological solutions designed to protect children from online predators, addressing a
critical gap in the literature regarding the efficacy and ethical implications of these protective
measures.
Key findings reveal that while AI and machine learning offer promising capabilities in
detecting and mitigating online threats, they face limitations such as false positives and
negatives, as well as privacy concerns. Parental control software, while widely adopted, often
struggles to keep pace with tech-savvy children and may inadvertently infringe on children's
autonomy. Emerging technologies like blockchain and biometric authentication present novel
approaches to identity verification and data security but raise ethical questions regarding data
privacy and the long-term implications of collecting biometric data from minors.
The research also highlights the potential of gamification in online safety education, noting its
effectiveness in engaging children but cautioning against oversimplification of complex issues.
AR and VR technologies emerge as double-edged swords, offering immersive educational
experiences while introducing new exploitation vectors. This study underscores the need for a
multi-faceted approach to online child protection, integrating technological solutions with
robust academic programs and policy frameworks. It emphasizes the importance of balancing
security measures with children's rights to privacy and autonomy, calling for greater
collaboration between technologists, policymakers, educators, and child protection advocates.
The paper concludes by proposing future research directions, including developing more
transparent AI systems, exploring blockchain applications in age verification, and creating
ethical guidelines for using immersive technologies in child-centric environments. It advocates
for a proactive stance in anticipating and mitigating emerging online threats, ensuring that
protective measures evolve with technological advancements. As online predators adapt their
methods, the proactive identification and development of advanced protective technologies
remain crucial in mitigating risks and ensuring the well-being of young internet users.
LITERATURE REVIEW
FORMS OF ONLINE PREDATION
Online predation of minors encompasses a range of harmful activities. These include the
production, dissemination, and possession of child sexual abuse materials; online grooming;
'sexting'; 'sextortion'; revenge pornography; commercial sexual exploitation; online
prostitution; live streaming of sexual abuse over voice-over-internet protocols such as
Skype; trafficking; and bullying, among others. Many of these abuses involve sexual imagery
of children, obtained either through direct abuse or by persuading the child to create and share
such images. While these forms of abuse existed before the internet, technological
advancements have profoundly shaped and amplified their manifestation and global reach.
Stakeholders must understand these various forms of online predation, which have been the
focus of considerable research. Research by Whittle et al.
(2013) on grooming as a form of online predation reveals that online predators employ a variety
of deceptive tactics, such as grooming, to build trust with their victims. According to the authors,
grooming involves manipulating a child's emotions to gain their confidence, ultimately leading
to exploitation. This process is complex and multifaceted. Finkelhor & Hotaling (1984) and
Craven et al. (2006) have developed theories on the stages of sexual offending and grooming.
Craven et al. (2006) adapted Finkelhor & Hotaling's (1984) preconditions for sexual offending
to outline a threefold grooming process: grooming the self, grooming the surroundings
(including significant others), and grooming the child. These stages facilitate the predator’s
control over the child and their environment (Whittle et al., 2013). O’Connell (2003) proposed
a five-stage model of online grooming: 1) Friendship forming, 2) Relationship forming, 3) Risk
assessment, 4) Exclusivity, and 5) Sexual stages.
This model is widely used to explain how offenders manipulate children online (Black et al.,
2015). Each stage involves specific strategies to engage and exploit the child. The initial stage
of friendship building often involves casual conversation and questions about the child's life,
establishing a foundation of trust crucial for progressing to offline meetings. In 2016,
Barnardo’s conducted a survey of its sexual exploitation services in the UK, which included
702 children who had received support in the previous six months. Of these, 297 disclosed
being groomed online, with two-thirds having met the perpetrator and being sexually exploited.
The majority of these children were female, aged 14-17, and over half reported involvement
with multiple perpetrators. While this survey is not representative of all online abuse cases, it
highlights the pervasive nature of online grooming and exploitation, underscoring the need to
reconsider the contexts in which abuse occurs due to the omnipresence of technology.
Martin & Alaggia (2013) argue that 'cyberspace' introduces a new dimension to child protection
and should be a key consideration for practitioners and law enforcement addressing digital
media's role in child sexual abuse. The proliferation of the internet since the early 1990s has
transformed the dynamics of grooming. Traditionally, offenders would groom children in
familial, workplace, or care settings. Today, the internet and social media have made it easier
for offenders to access and target youths. Social media platforms allow offenders to select
potential victims based on their online profiles, facilitating more targeted and effective
grooming (Quayle et al., 2014). The relationship-building stage involves making the child feel
unique and offering gifts, a critical aspect of online grooming. This stage helps establish
exclusivity, where the predator creates a sense of special connection with the child, often
leading to offline meetings. Rapport building is facilitated through instant communication tools
and chat rooms, enhancing the predator's ability to form a bond with the child. A distinguishing
feature of grooming is the introduction of sexualized content in communications with the child.
This process normalizes inappropriate behaviour and prepares the child for physical contact.
Sexualization may manifest as flirting, discussions about sexual activity, or enacting sexual
fantasies. The rate at which sexualization occurs varies depending on the offender’s
motivations and strategies. Although the speed of grooming makes it challenging to quantify
the number of children solicited online at any given time, repeated and sustained contact is
crucial for the success of the grooming process.
According to O'Connell (2003), risk assessment is an important part of the grooming process.
This assessment includes considering risks associated with both the chance of detection online
and the potential dangers of meeting the youngster offline. According to Williams et al. (2013),
risk assessment is a continuous process throughout the grooming cycle rather than a single
occurrence. Predators frequently use covert tactics to ensure their safety, such as searching for
information about the child's computer setup or the presence of carers. Although O’Connell
(2003) initially described the stages of grooming as largely sequential, subsequent research has
demonstrated that these stages can be non-sequential, depending on the offender's
characteristics.
Açar's (2016) research explores sextortion as a form of online predation of minors. According
to the author, sextortion is a form of sexual exploitation where perpetrators coerce victims into
providing explicit images or sexual favors by threatening to release existing private material.
Wittes et al. (2016) similarly state that sextortion is a subset of broader cyber-enabled
crimes, such as child exploitation, that typically involves the threat of releasing sensitive or
compromising material unless the victim complies with demands, which can range from
sending more explicit content to engaging in offline sexual encounters. The International
Centre for Missing and Exploited Children (ICMEC) defines sextortion as a form of sexual
exploitation that relies on coercion rather than force (Baker, 2022). This coercion is typically
achieved through blackmail, where offenders threaten to distribute intimate images or videos
unless additional sexual material is provided, or sexual favours are granted.
Researchers have noted this as a growing threat, particularly to children and adolescents in the
digital age. Studies show that perpetrators often use social media platforms, gaming sites, and
instant messaging services to initiate contact with children (Faraz et al., 2022). In some cases,
the initial images may have been shared voluntarily by the child during online interactions, or
they may have been obtained through hacking, impersonation, or other forms of online
deception. Research into sextortion cases highlights the increasing prevalence of this crime
against children. According to a report from the NCMEC, sextortion cases have risen
significantly in recent years, with most victims being minors, particularly teenage girls
(Henry & Umbach, 2024). The same report notes that financial "sextortion" schemes have
increased since 2020, with offenders primarily targeting teenage boys via Instagram and
other social media platforms and threatening to release compromising imagery unless paid.
From 2020 to 2023, the organisation examined more than 15 million reports to the NCMEC's
hotline and found that sextortion cases had increased considerably, with reports of online
enticement growing by 82% between 2021 and 2022, when the hotline received over 80,500
reports. Statistics reviewed between August 2022 and August 2023 show a further increase
over the previous year, averaging 812 reports per week. According to Thorn CEO Julie
Cordua, financial sextortion is a serious and growing menace to children, particularly
adolescent boys (Nguyen, 2024). Unlike traditional forms of sextortion, these offenders use
fear and the threat of publishing intimate photographs to extort victims before they can seek
help. Historically, girls have been the most common targets of juvenile sextortion schemes,
which frequently involved demands for intimate imagery, sexual acts, or a romantic
relationship. With financial sextortion, however, most victims were teenage boys, 90% of
them between the ages of 14 and 17. The psychological and emotional toll
on victims of sextortion is profound, often leading to anxiety, depression, social withdrawal,
and even suicide (O’Malley, 2023). The combination of sexual exploitation, fear of exposure,
and shame can push victims into a state of mental health crisis, where they feel trapped and
powerless to seek help (Wolak & Finkelhor, 2016). As sextortion cases frequently go
unreported due to fear and embarrassment, many children endure this abuse in silence, with
long-term psychological consequences.
Researchers such as Howard (2019) and Wittes et al. (2016) recognised that children are particularly
susceptible to sextortion due to various factors, including their developmental stage, naivety
about online risks, and the desire for social validation. The anonymity provided by the internet
enables offenders to misrepresent themselves and establish trust with their victims by posing
as peers or influential figures (Quayle & Taylor, 2001). Research shows that children are often
groomed over time and lured into sending explicit content before realizing the gravity of the
situation. According to Ortega-Barón et al. (2022), adolescents who are particularly sensitive
to peer pressure and societal expectations are more likely to comply with requests to send
images or videos, unaware of the potential for abuse. Various countries have sought to combat
sextortion through legislative reforms and cybercrime laws aimed at protecting children from
exploitation. For instance, the United States' Protecting Against Child Exploitation Act of 2017
was introduced to address the growing issue of online sexual coercion. International efforts,
including those led by organizations like Interpol and Europol, have also resulted in the
identification and prosecution of perpetrators involved in child sextortion. Despite these efforts,
a significant challenge remains in policing the digital space. The anonymity of the internet and
the ability for offenders to operate across borders complicate law enforcement efforts to track
and apprehend offenders. The lack of a unified global legal framework and the variations in
child protection laws across different jurisdictions further hinder the ability to effectively
combat sextortion on a large scale (Flynn, 2019). Technology companies have been urged to
develop more robust safeguards, including privacy protections, AI-driven content monitoring,
and more secure platforms to reduce children's risk exposure. Organizations such as the
National Cyber Security Alliance should partner with schools and communities to deliver age-
appropriate content on online safety (Smith, 2018). Law enforcement authorities, in partnership
with non-profit organizations and the business sector, have been trying to strengthen victim
reporting systems and provide prompt assistance via helplines and counseling programs
(Immigration & Customs Enforcement, 2023). Despite these attempts, many academics, such
as Henry et al. (2018), believe that better cross-sector coordination, more resources for victim
care, and a holistic approach to combating sextortion are still required (Henry & Umbach,
2024).
Cyberbullying, another prevalent form of online predation, has been defined by Sezer & Tunçer
(2021) as intentional and repeated harm inflicted through digital platforms, and it has emerged
as a significant issue affecting children and adolescents in the digital age. Cyberbullying
involves the use of digital technologies to harass, threaten, or humiliate others. According to
Smith et al. (2008), it encompasses spreading rumors, sending threatening messages, sharing
embarrassing images or videos, and exclusion from online groups. What distinguishes
cyberbullying from traditional bullying is its pervasive nature: it can occur 24/7, reach a large
audience quickly, and often remains permanently accessible online. Children and adolescents
are particularly susceptible to cyberbullying due to their frequent use of digital platforms and
their limited understanding of the risks associated with online interactions. The anonymity
afforded by the internet can embolden perpetrators, as they are less likely to face immediate
consequences for their actions. Moreover, the lack of physical proximity between the bully and
the victim can make it difficult for parents, teachers, and guardians to detect cyberbullying in
real-time.
Numerous studies have sought to quantify the prevalence of cyberbullying among children and
adolescents. In surveys conducted by the Cyberbullying Research Centre over the last 12 years,
about 30% of teens reported having experienced cyberbullying at least once in their lifetime
(Patchin & Hinduja, 2024). Other research indicates that girls are more likely
to be victims of cyberbullying than boys, particularly with respect to appearance-based insults and social
exclusion. However, boys are more likely to engage in direct forms of cyberbullying, such as
physical threats and harassment in online gaming environments. Cyberbullying can have
profound psychological, emotional, and social consequences for children. Research shows that
victims of cyberbullying often experience higher levels of anxiety, depression, low self-esteem,
and suicidal ideation compared to their non-bullied peers. The continuous nature of
cyberbullying—where victims may be subjected to harassment even within the perceived
safety of their homes—intensifies feelings of hopelessness and isolation. Moreover, the public
nature of cyberbullying, where hurtful content can be shared widely and rapidly, exacerbates
the emotional distress felt by victims. In some cases, the fear of social rejection or retaliation
prevents victims from reporting incidents of cyberbullying, leading to prolonged suffering.
Research has shown that these emotional scars can persist long after the bullying stops,
affecting children’s academic performance, social relationships, and mental health into
adulthood (Kowalski et al., 2014).
According to Alleva (2019), social media platforms are the primary arenas for cyberbullying
among children, with sites such as Instagram, TikTok, Snapchat, and Facebook frequently
implicated in research. As these platforms facilitate sharing and interaction among peers, they
are also venues for harmful behaviors like exclusion, public shaming, and harassment. Studies
have found that cyberbullying often takes place on platforms where children can create
anonymous or pseudonymous profiles, allowing perpetrators to engage in bullying without fear
of identification (Cassidy et al., 2013). Platforms that feature "likes," comments, or direct
messages also make it easy for bullies to target victims and for harmful content to gain viral
momentum. The design of these platforms, which encourages constant engagement and the
curation of one’s digital image, has been criticized for creating an environment where bullying
can thrive (George & Odgers, 2015). Addressing cyberbullying through legal and policy
frameworks has been a complex challenge. While many countries have enacted laws against
traditional bullying, the enforcement of cyberbullying laws has lagged due to difficulties in
defining jurisdiction, issues with anonymity, and the cross-border nature of online platforms.
Some regions, such as the European Union and certain U.S. states, have enacted specific laws
that criminalize online harassment, but enforcement is still uneven. Schools are progressively
implementing anti-cyberbullying policies, recognizing that the effects of online bullying extend
into the classroom. Research indicates that schools promoting digital literacy and anti-
bullying education programs see fewer cases of cyberbullying among students (Marzano, 2021).
However, successfully adopting these policies necessitates collaboration among schools,
parents, and law enforcement, as well as a thorough understanding of the cyberbullying
dynamics in each community.
Child trafficking is another heinous crime that exploits vulnerable children for various forms
of labor, sexual exploitation, and abuse. Researchers have defined online child trafficking as
the use of digital platforms, social media, and the dark web to facilitate illegal activities
related to the exploitation of children. According to Sarkar (2015), online child trafficking
refers to the use of the internet and digital technologies by traffickers to recruit, exploit, and
traffic children. According to the United Nations Office on Drugs and Crime (UNODC),
trafficking in children through online platforms can occur for various reasons, including sexual
exploitation, forced labor, and illegal adoption. The International Labour Organization (ILO)
estimates that more than 3 million children are being exploited in sex and labor trafficking,
with an increasing number being lured and exploited through online means (ILO, 2022; Uitts,
2022). The internet allows traffickers to not only groom victims but also distribute child sexual
abuse materials (CSAM), further commodifying the exploitation of children. Research shows
that online child trafficking often begins with grooming, where traffickers establish
relationships with children through online platforms (Winters et al., 2022). Social media, chat
rooms, and online gaming platforms are common venues where traffickers can pose as peers or
trusted adults to build rapport with children. Once trust is established, traffickers can
manipulate, coerce, or threaten children into exploitative situations. Traffickers use
sophisticated methods to avoid detection, often employing encryption technologies, dark web
forums, and anonymous payment systems like cryptocurrencies to conduct illegal activities
(Adel & Norouzifard, 2024). Online classified advertisements, fake job offers, and modeling
scams are commonly used tactics to lure children into trafficking networks. A study by ECPAT
International (2021) highlighted how traffickers leverage digital platforms to recruit children
for both local and cross-border trafficking. Promises of work, education, or romantic
relationships often deceive children. Once recruited, traffickers may exploit them for
pornography, prostitution, or forced labor. The anonymity provided by the internet enables
traffickers to operate across international borders, making the problem difficult to regulate or
control.
Measuring the exact prevalence of online child trafficking has proven difficult due
the hidden nature of the crime and the evolving tactics used by traffickers. However, studies
indicate that the problem is widespread and growing. A report by Europol found that reports of
child exploitation online surged during the COVID-19 pandemic, with an increase in both the
production and dissemination of CSAM (Europol, 2020). The report found that online
platforms are increasingly being used to traffic children, particularly for sexual exploitation. In
addition, data from Interpol and Europol show an increasing number of online cases related to
child trafficking and exploitation across multiple regions, including Europe, North America,
and Southeast Asia (Europol, 2020; INTERPOL, 2023). These cases often involve international
trafficking rings that use online platforms to distribute explicit content and coordinate the
movement of trafficked children.
The psychological and physical impact of online child trafficking on victims has been found to
be profound and long-lasting. Children trafficked for sexual exploitation endure repeated
abuse, coercion, and violence; as a result, they experience
significant mental health issues, including depression, anxiety, PTSD, and suicidal ideation
(Hopper & Hidalgo, 2006). Trafficked children also suffer from social isolation, trust issues,
and difficulties forming healthy relationships in the future. Moreover, the digital nature of
online trafficking means that explicit content involving trafficked children can be shared and
circulated indefinitely, causing ongoing harm to the victims long after their immediate
exploitation ends (Raines, 2022). This "digital permanence" exacerbates the trauma and
humiliation experienced by victims, who may fear re-victimization every time images or videos
resurface online.
There has been growing recognition of the need for a multi-faceted approach to combat online
child trafficking, involving prevention, intervention, and collaboration between various
stakeholders. For instance, Todres (2010) advocates that prevention efforts should focus on
educating children, parents, and communities about the risks of online exploitation and
grooming. Schools and organizations have developed digital literacy programs to help children
recognize and avoid online threats. Similarly, international organizations like ECPAT
International and UNICEF have advocated for stronger laws and policies to protect children
from online trafficking and exploitation (Rebhi, 2023). Williams (2013) also noted
that many countries have passed legislation aimed at criminalizing online child
trafficking and imposing stricter regulations on internet service providers (ISPs) and social
media platforms. However, as the author noted, the enforcement of these laws remains
inconsistent, and traffickers often exploit legal loopholes and jurisdictional issues to evade
prosecution. Another critical aspect of intervention is providing support and rehabilitation for
child victims. Specialized trauma-informed care, including counselling and mental health
services, is essential for helping victims recover from the emotional and psychological damage
caused by trafficking. Long-term support, such as access to education, healthcare, and social
services, is required to reintegrate trafficked children into society. Furthermore, Williams
(2013) and Gezinski & Gonzalez-Pons (2024) argued that future studies on online child
trafficking should focus on understanding traffickers' evolving techniques, particularly
in light of technological advancements. Research into the intersections of human trafficking,
technology, and international law is critical for building more effective policies and
interventions. Furthermore, more data on the frequency and trends of online child trafficking
are needed, particularly in areas where internet access is expanding rapidly. Comparative
research between nations with diverse legal regimes could provide insights into the most
effective measures for preventing and prosecuting online trafficking crimes (Ambagtsheer,
2021).
The literature reviewed reveals that online predators utilize sophisticated psychological
manipulation techniques, including flattery, gift-giving, and threats, to isolate their victims
from protective social networks and coerce them into exploitative situations. Wortley (2013)
critiques the notion that the Internet merely serves as a platform for offending, proposing
instead that it fundamentally engenders these crimes. He argues that relying solely on
traditional tertiary prevention strategies—such as arrest and rehabilitation—fails to address the
environmental and situational factors that underpin these offenses. Wortley (2013) also
highlights that while most individuals are not inherently attracted to children, situational factors
can induce exploitative behavior. To effectively combat abuse and exploitation of children,
Wortley (2013) advocates for a paradigm shift towards addressing environmental cues that
facilitate offending, rather than solely focusing on offender rehabilitation. Public health
models, which emphasize altering environmental conditions to prevent crime, offer a more
comprehensive approach. This approach acknowledges the role of prosecution and treatment
but prioritizes the removal of hazards to reduce risk.
Generally, the review of the literature reveals a predominance of quantitative and
qualitative research on online child sexual exploitation (e.g., Karayianni et al., 2017). The
studies also include recommendations for responding to technology-facilitated violence and
abuse (TFVA), encompassing legal, technological, and educational approaches and support
initiatives for victims. Various studies
focus on specific forms of TFVA, such as image-based sexual abuse (e.g., Citron & Franks,
2014), online hate speech (e.g., Bailey, 2010; Citron, 2014), online harassment and trolling
(e.g., Bailey, 2017; Pavan, 2017), etc. Each study contributes to a broader understanding of
how to address and mitigate the impacts of TFVA and online predation of minors.
CURRENT TECHNOLOGICAL SOLUTIONS
Researchers have identified various technological solutions aimed at mitigating the risks posed
by online predators, ranging from parental control software to advanced AI-driven systems
(Mwijage & Ghosh, 2024). According to their developers and to researchers, these tools are
designed to protect children from harmful content and predatory behavior, but their
effectiveness remains a subject of ongoing debate.
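To make the detection mechanics behind such AI-driven systems concrete, the sketch below implements a minimal naive Bayes text classifier of the kind that underlies many automated risk-flagging tools. It is purely illustrative: the training messages, the labels, and the "risky"/"benign" categories are hypothetical placeholders, and production systems rely on far larger datasets, richer features, and more sophisticated models.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase whitespace tokenization -- real systems use richer features."""
    return text.lower().split()

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    word_counts, label_counts = {}, Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(tokenize(text))
    vocab = set().union(*word_counts.values())
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label with the highest log-probability, using Laplace smoothing."""
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, doc_count in label_counts.items():
        score = math.log(doc_count / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical labelled examples -- NOT real data.
examples = [
    ("keep this a secret from your parents", "risky"),
    ("you can trust me dont tell anyone", "risky"),
    ("send me a photo our secret", "risky"),
    ("did you finish the homework", "benign"),
    ("want to play the game later", "benign"),
    ("see you at school tomorrow", "benign"),
]
model = train(examples)
print(classify("this is our secret dont tell your parents", *model))  # prints "risky"
```

The sketch also illustrates why the literature reports false positives and negatives: a classifier trained on limited examples will mislabel messages whose wording falls outside its training distribution, which is precisely the limitation noted above for AI-based detection.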
PARENTAL CONTROL SOFTWARE
As children increasingly rely on the internet for education, social interaction, and
entertainment, the risks associated with online exposure, such as cyberbullying, exposure to
inappropriate content, and online predation, have also risen. Parental control software has
emerged as a technological tool designed to safeguard children from these threats by enabling
parents to monitor, filter, and limit their children's online activities. Tools such as Net Nanny,
Qustodio, and Norton Family offer features, including content filtering, screen time
management, and location tracking. A large body of evidence indicates that these tools are
primarily intended to block access to hazardous or inappropriate content, limit screen time, and
track online interactions in order to detect potentially risky behavior. Livingstone et al. (2017)
contend that these methods can effectively reduce children's exposure to hazardous online
information. According to a 2019 study by Green et al., many parents consider these
technologies to be vital in today's highly digitalized environment, where children face a wide
range of online risks. Parental control software is frequently viewed as a prophylactic strategy
that allows parents to maintain control over their children's digital behaviors and intervene
when necessary. However, while such software can block explicit content and limit harmful
exposure, it is not foolproof, and its actual effectiveness is a
subject of ongoing debate. Several studies suggest that parental control software can help
reduce the risk of children encountering harmful online content. For example, Dombrowski et
al. (2007) concluded that children whose parents used filtering and monitoring software were
less likely to access sexually explicit content or be exposed to online grooming. Moreover,
these tools can provide peace of mind to parents who struggle to keep up with the evolving
nature of online platforms and the increasing amount of time children spend online (Helsper et
al., 2024).
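As a simplified illustration of the filtering mechanism these tools rely on, the sketch below implements a toy blocklist filter. The domain and keyword lists are invented placeholders; commercial products such as Net Nanny or Qustodio ship large, professionally curated databases and combine blocklists with machine-learned content scoring rather than short hard-coded rules.

```python
from urllib.parse import urlparse

# Hypothetical blocklists -- real products use large, curated databases.
BLOCKED_DOMAINS = {"blocked-example.com"}
BLOCKED_KEYWORDS = {
    "violence": {"gore"},
    "adult": {"explicit"},
}

def check_request(url, page_text=""):
    """Return (allowed, reason) for a simplified allow/deny decision."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:
        return False, f"domain '{host}' is blocklisted"
    words = set(page_text.lower().split())
    for category, keywords in BLOCKED_KEYWORDS.items():
        hits = words & keywords
        if hits:
            return False, f"{category} keywords matched: {sorted(hits)}"
    return True, "allowed"

print(check_request("https://blocked-example.com/page"))            # blocked by domain
print(check_request("https://news.example.org", "daily headlines"))  # allowed
```

Even this toy version makes the limitation discussed in the literature visible: exact word matching misses obfuscated spellings, images, and new domains (false negatives), while over-broad keyword lists block legitimate pages (false positives).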
However, the effectiveness of parental control software in safeguarding children is not
universally supported. Research has shown that children often find ways to circumvent these
controls, especially older children and teenagers who are more digitally savvy (Sun et al.,
2021). Children may use VPNs, proxy servers, or alternate devices to bypass restrictions, thus
rendering the software ineffective. Yu et al. (2024) argue that while parental control software
can be helpful for younger children, it is less successful in addressing the online behavior of
adolescents, who are more likely to find loopholes or view restrictions as a challenge. Another
significant criticism of parental control software, stated by researchers, is its potential to
infringe on children's privacy and autonomy (Erickson et al., 2016). Surveillance features, such
as monitoring browsing history, recording keystrokes, and tracking location, have sparked
ethical concerns about the balance between protection and invasion of privacy (Livingstone et
al., 2019). Critics argue that constant surveillance can undermine trust between parents and
children, potentially leading to secrecy, rebellion, or strained family relationships. Moreover,
the literature suggests that overly restrictive use of parental control tools can stifle children's
ability to explore and learn independently in the digital world. A study by Livingstone et al.
(2019) found that while filtering and monitoring can protect children from harmful content, it
can also limit their opportunities to develop critical digital skills and literacy. Children who are
overly shielded by parental controls may be less prepared to navigate the online world
independently and may lack the resilience needed to deal with online risks. The tension
between protecting children and allowing them the freedom to explore the internet has been a
central theme in much of the research on parental control software. Scholars such as
Livingstone & Blum-Ross (2020) argue that when used excessively or without dialogue,
parental control tools can undermine children's agency and sense of responsibility. Instead, they
suggest a more balanced approach that incorporates both technological safeguards and open
communication between parents and children.
Studies have also called into question the reliance on parental control software as a stand-alone
solution for online safety. For instance, Ali et al. (2020) suggest that the effectiveness of
these tools can be significantly enhanced when combined with active parental mediation and
digital literacy education. This line of criticism also notes that teaching
children how to navigate the internet safely, recognize online risks, and practice responsible
digital behavior is crucial in preparing them for independent internet use. Parental control
software can be helpful in providing immediate protection, but it does little to promote long-
term digital literacy or critical thinking about online content. As Finkelhor et al. (2021)
emphasize, digital parenting should go beyond setting up software to include regular
conversations about online experiences, risks, and responsibilities. Parents who engage in
active mediation are more likely to raise digitally resilient children capable of handling online
challenges, such as cyberbullying or exposure to harmful content. Hence, much literature
argues that digital literacy programs within schools and communities should complement
parental efforts to safeguard children. Wang et al. (2021) corroborated this, noting that
children who participated in digital safety education programs were better
equipped to identify online risks and employ self-regulation strategies, regardless of whether
parental control software was in place. Therefore, education's role at home and in formal
settings is crucial in fostering safer online environments for children.
AI-DRIVEN CONTENT FILTERING
AI-driven content filtering is a technology that utilizes AI to identify, monitor, and block
harmful content in real time (Marsoof et al., 2023). This tool is used across platforms such as
social media, search engines, and educational websites to prevent children from accessing
inappropriate materials, including violence, explicit sexual content, and other forms of online
harm. AI-driven content filtering uses machine learning algorithms, natural language
processing (NLP), and computer vision technologies to analyze and classify content based on
its risk to children (Muthazhagu et al., 2024). Unlike traditional rule-based filtering systems,
AI filtering models adapt and learn from vast amounts of data, improving their ability to detect
harmful content in various forms, including text, images, videos, and live streams. This
dynamic adaptability allows AI tools to keep pace with the constant evolution of online threats.
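To make the contrast with rule-based filtering concrete, the following sketch shows a minimal learned text classifier of the kind the literature describes: instead of matching a fixed keyword list, it estimates word-level odds from labeled examples. The training phrases, labels, and scoring scheme are illustrative inventions, not a production system, which would learn from millions of moderated examples and far richer features.

```python
# Minimal sketch of a learned classifier for harmful-content detection.
# Training data and scoring are toy illustrations, not a real model.
from collections import Counter
import math

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"harmful": Counter(), "safe": Counter()}
    totals = Counter()
    for text, label in examples:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def score(text, counts, totals):
    """Return the log-odds that the text is harmful (Laplace-smoothed)."""
    vocab = set(counts["harmful"]) | set(counts["safe"])
    log_odds = 0.0
    for w in text.lower().split():
        p_harm = (counts["harmful"][w] + 1) / (totals["harmful"] + len(vocab))
        p_safe = (counts["safe"][w] + 1) / (totals["safe"] + len(vocab))
        log_odds += math.log(p_harm / p_safe)
    return log_odds

# Toy labeled examples -- invented placeholders for illustration.
examples = [
    ("send me your photo dont tell your parents", "harmful"),
    ("keep this a secret between us", "harmful"),
    ("what games do you play at school", "safe"),
    ("homework help for math class", "safe"),
]
counts, totals = train(examples)
print(score("this is our secret dont tell anyone", counts, totals) > 0)
```

Unlike a static blocklist, the scores here shift as new labeled data arrives, which is the adaptability the passage above attributes to AI-driven filters.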
The literature on AI-driven content filtering shows promising results regarding its ability to
shield children from a wide range of online harms. Jordan (2024) argues that AI systems are
particularly effective in large-scale platforms like YouTube, Facebook, and TikTok, where
human moderation alone would be inadequate due to the sheer volume of content generated.
AI tools can process vast quantities of user-generated material in real time, identifying potential
threats before they reach children. This level of automation allows for faster and more
comprehensive coverage than manual monitoring. A study by Singh and Nambiar (2024)
highlights that AI-driven filtering systems can significantly reduce children's exposure to
explicit content, especially on platforms frequented by younger users. For example, Google's
SafeSearch (Okeh, 2023) and YouTube's Restricted Mode (Moxley, 2023) rely heavily on AI
to block inappropriate search results or videos, reducing the likelihood of children accidentally
encountering harmful materials.
Despite these successes, the reviewed literature also points to several limitations of AI-driven
filtering in practice. One issue is over-blocking, where legitimate content is mistakenly flagged
and filtered out. For instance, Marsoof et al. (2023) point out that educational content
related to sexual health or social justice movements may be censored, as AI algorithms
sometimes struggle to interpret nuanced contexts. This can limit children's access to valuable
information, particularly in school settings or research environments. In addition, Udupa et al.
(2023) note that AI systems cannot account for all cultural and linguistic nuances. AI
filters are often trained on datasets that reflect dominant cultural norms and languages, which
can lead to bias or inaccuracies in detecting harmful content in non-Western contexts. The
reviewed literature increasingly recognizes that AI-driven content filtering, while effective,
cannot operate in isolation. Many scholars advocate for a hybrid approach that combines AI
tools with human moderators to ensure better accuracy and context sensitivity. AI systems are
highly efficient at identifying large volumes of problematic content quickly, but they often
require human input to assess edge cases or content that requires nuanced judgment, such as
satire, irony, or artistic expression. Gorwa et al. (2020) argue that the integration of human
oversight is essential in mitigating issues like over-blocking and false positives. Human
moderators can review flagged content and make informed decisions about whether it
genuinely poses a threat to children's safety. This collaborative approach leverages the strengths
of AI—speed, scale, and pattern recognition—and human moderators' capacity for
understanding complex social and cultural contexts. Nevertheless, commentators such as
Manne et al. (2022) and Gillespie (2018) note that reliance on human moderators presents
its own challenges. Given the overwhelming amount of content generated daily, human moderation
can be costly and resource-intensive, making it difficult for smaller platforms or schools to
implement this approach effectively. Furthermore, human moderators are susceptible to
psychological harm from being exposed to disturbing content, a problem exacerbated by the
sheer volume of material that requires review.
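The hybrid workflow described above can be sketched as a simple confidence-band routing rule: the model acts alone only on clear-cut cases and escalates the ambiguous middle band to a human. The thresholds and action names below are hypothetical illustrations, not values from any cited system.

```python
# Hypothetical sketch of hybrid AI + human moderation routing.
# Threshold values are illustrative, not from a deployed system.
def route(confidence_harmful, block_above=0.95, allow_below=0.20):
    """Map a model's confidence in [0, 1] to a moderation action."""
    if confidence_harmful >= block_above:
        return "auto_block"      # clear-cut harm: act immediately at scale
    if confidence_harmful <= allow_below:
        return "auto_allow"      # clearly benign: no review needed
    return "human_review"        # ambiguous band: satire, irony, or
                                 # educational context needs human judgment

for c in (0.99, 0.60, 0.05):
    print(c, route(c))
```

Widening the middle band reduces over-blocking and false positives at the cost of more human workload, which is exactly the trade-off Gorwa et al. (2020) and Gillespie (2018) describe.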
EDUCATION-BASED SOLUTIONS
While technological solutions like monitoring systems and AI-driven content filtering provide
some level of protection, education-based solutions have gained attention as a critical tool for
safeguarding children online. Education-based solutions focus on equipping children with the
knowledge, skills, and behaviors needed to protect themselves from online risks. A broad body
of literature suggests that education-based solutions are vital to online safety strategies.
According to authors such as Weru et al. (2017), these solutions foster critical thinking,
self-regulation, and the capacity for independent decision-making—skills that children need
to navigate the complexities of the digital world. Staksrud and Livingstone (2009) found that
children who participated in digital literacy programs were more likely to avoid risky behaviours, such as
sharing personal information or engaging in conversations with strangers online. Furthermore,
these children were better equipped to recognize online threats and respond appropriately, such
as reporting suspicious behavior or seeking help from a trusted adult. Unlike passive forms of
protection, such as filtering or blocking content, these approaches emphasize proactive
engagement. Researchers identify digital literacy as a cornerstone of education-based
solutions. According to Livingstone et al. (2017), digital literacy goes beyond teaching
children how to use digital devices; it includes understanding online privacy, identifying
credible sources, and recognizing inappropriate content or interactions. Key components
include digital literacy programs, online safety curricula, and awareness-raising initiatives that
teach children how to recognize threats, avoid risky behaviors, and seek help when needed.
While education-based solutions focus on behavioural change, technology has been recognized
to play a critical role in delivering and enhancing these programs. Interactive platforms, e-
learning modules, and gamified learning experiences can engage children in a way that
traditional education methods might not. Tools like Google's Be Internet Awesome (Seale &
Schoenberger, 2018) and Common Sense Media (Wiggers, 2024) use digital games and
interactive modules to teach children how to manage online risks effectively. These tools
provide real-life simulations where children can practice safe behaviors in a controlled
environment, helping them develop the necessary skills to apply in actual online interactions.
Polak et al. (2022) indicated that education-based solutions delivered through technology are
more likely to capture children's attention, leading to better retention of key safety concepts.
Children are more engaged when lessons are interactive, relevant, and integrated with the
technology they already use daily (Blackwell, 2013). Additionally, the use of AI-driven
educational platforms can tailor content to different age groups and learning styles, making
these solutions more accessible and effective for diverse audiences.
Despite their potential, education-based solutions face several challenges. One significant issue
identified by researchers is accessibility and equity. Not all children have access to high-quality
digital literacy programs, particularly in low-income or underserved communities. According
to Dodel and Mesch (2018), the digital divide exacerbates this issue, as children without regular
internet access or technological resources are less likely to benefit from online safety education.
As a result, these children may be more vulnerable to online risks compared to their more
digitally literate peers. Additionally, the rapid pace of technological change makes it difficult
for education-based solutions to stay current. As new platforms, apps, and online behaviours
emerge, digital literacy programs must continuously evolve to address these developments.
Livingstone & Stoilova (2019) emphasize that education-based solutions need to be flexible
and regularly updated to remain relevant to the latest online trends and risks. However,
developing and distributing up-to-date educational content can be resource-intensive, and
many schools or educational programs may lack the necessary funding or expertise to keep
pace with these changes. Another challenge is the variability in program quality and content.
Falloon (2020) notes that not all digital literacy programs are created equal, and there is no
universal standard for what constitutes adequate online safety education. Some programs may
focus primarily on basic internet usage, neglecting more nuanced topics such as online privacy,
ethical online behaviour, or recognizing sophisticated forms of manipulation, such as
online grooming. This variability in content and quality can lead to gaps in children's
understanding of online risks.
The literature also describes parents' role in supporting education-based solutions as crucial.
According to commentators, parents can reinforce the lessons learned through digital literacy
programs by discussing online safety with their children, monitoring their internet usage, and
setting clear guidelines for acceptable online behaviour. However, research by Altuna et al.
(2020) shows that many parents feel ill-equipped to guide their children in navigating the
digital world, either due to a lack of technical knowledge or because they underestimate the
risks their children face online. This gap in parental involvement underscores the importance
of providing digital literacy education for both children and parents. Parents should be educated
on how to recognize online risks, how to talk to their children about internet safety, and how to
use available tools, such as parental controls, to create a safer digital environment at home
(Livingstone & Blum-Ross, 2020). Programs that incorporate parental education alongside
children's digital literacy training tend to have better outcomes in terms of overall online safety.
GAPS IN THE LITERATURE
Despite the growing body of research on safeguarding children online, this research identifies
a marked scarcity of studies specifically focused on technological solutions designed
to protect minors. Existing research often emphasizes legal aspects, policy frameworks, or
behavioural interventions, while technological solutions receive comparatively less attention.
This lack of focus is significant because understanding the efficacy of these technologies is
crucial for ensuring they are genuinely effective in safeguarding children from online threats.
Moreover, the few studies that do examine technological solutions frequently
fall short in several areas. For instance, there is a gap in understanding the long-term
effectiveness of these technologies, as many studies focus on their immediate impacts rather
than their sustainability over time. Additionally, as online predators continuously adapt their
tactics, the existing research does not sufficiently address how well these technologies evolve
to counter new and emerging threats.
Furthermore, there is limited exploration of the impact of advanced technologies such as
blockchain and biometric authentication on child safety. While these technologies hold promise
for enhancing security, their specific applications and effectiveness in the context of child
protection remain underexplored. Blockchain, for instance, could offer novel ways to verify
identities and ensure data integrity, but its potential benefits and limitations in this field have
not been thoroughly investigated (Alotaibi, 2019). Similarly, biometric authentication could
provide more secure access controls, yet its implications for child safety are still unclear.
Moreover, there is a lack of comprehensive studies on the global applicability of technological
solutions. Research often concentrates on developed regions with well-established
technological infrastructures and regulatory frameworks. However, studies that address how
these solutions can be adapted or scaled for use in developing regions, where access to
technology and the enforcement of regulatory measures may be inconsistent, are needed.
Understanding these differences is crucial for creating effective, universally applicable
solutions that can protect children regardless of their geographic location.
Addressing these gaps is essential for developing a more nuanced understanding of how
technological solutions can enhance child safety online. Future research should focus on
evaluating the long-term effectiveness of existing technologies, exploring emerging
advancements, and considering the global applicability of protective measures to ensure a
comprehensive approach to safeguarding children in the digital age.
RECOMMENDATIONS
Looking ahead, the ongoing integration of these technological solutions will necessitate
refinement and collaboration among all stakeholders, including policymakers, technology
developers, educators, and parents. Therefore, the following recommendations are proposed:
1. Enhancing AI and Machine Learning Capabilities
While AI and machine learning are at the forefront of technological solutions for detecting
harmful content and behaviours, their effectiveness remains limited by issues such as false
positives and false negatives. It is essential to invest in refining these technologies to better
capture contextual nuances, especially in detecting more subtle forms of online exploitation,
such as grooming or manipulative conversations. Algorithms need to evolve to process not just
isolated words or images but also the intention and underlying context. This can be achieved
by incorporating a more diverse dataset that represents the wide variety of online interactions
and continually updating models to adapt to emerging threats. However, such improvements
must also prioritize privacy and transparency, avoiding overreach that could result in
unwarranted restrictions or privacy violations (Walters & Novak, 2021).
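One way to illustrate the context-aware detection this recommendation envisions is a sliding-window score over a conversation, so that a pattern of individually innocuous messages can still surface, rather than judging each message in isolation. The signal words, weights, and window size below are invented for illustration only.

```python
# Illustrative sketch of context-aware risk scoring across a conversation.
# Signal words and weights are invented placeholders, not a trained model.
from collections import deque

RISK_SIGNALS = {"secret": 0.3, "photo": 0.3, "meet": 0.3,
                "alone": 0.2, "parents": 0.2, "old": 0.2}

def conversation_risk(messages, window=5):
    """Return the peak summed risk over any `window` consecutive messages."""
    recent = deque(maxlen=window)
    peak = 0.0
    for msg in messages:
        words = set(msg.lower().split())
        recent.append(sum(w for sig, w in RISK_SIGNALS.items() if sig in words))
        peak = max(peak, sum(recent))
    return peak

chat = ["how old are you", "do your parents check your phone",
        "this is our secret ok", "can you send a photo",
        "lets meet when you are alone"]
print(round(conversation_risk(chat), 2))
```

No single message here would trip a per-message filter, but the accumulated window score captures the escalating pattern, which is the kind of "intention and underlying context" the recommendation asks algorithms to process.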
2. Improving Data Privacy and Ethical Use in Biometric Technologies
Biometric authentication holds significant potential in safeguarding children’s online activities
by verifying identity and preventing unauthorized access. However, its application must be
carefully regulated to avoid misuse or privacy breaches. Developers and policymakers should
prioritize establishing stringent guidelines on the collection, storage, and use of biometric data
to ensure it is not exploited for unethical purposes (Gates, 2011). Transparency in how data is
handled, clear opt-in mechanisms, and the ability for users (or their guardians) to control their
data are critical. Further, institutions that use biometric tools should implement regular audits
to ensure compliance with privacy standards and safeguard children’s rights.
3. Expanding Education-Based Solutions
Technological tools alone cannot comprehensively protect children from online predators;
education must play a pivotal role in equipping children with the knowledge and skills to
navigate the digital world safely. Education-based initiatives, including gamified learning
platforms, should be scaled up to cover a wider demographic. Moreover, these initiatives
should not solely target children; they must also involve parents, educators, and caregivers.
Awareness programs should focus on how to identify signs of online grooming, exploitation,
and trafficking, emphasizing collaborative efforts between children and their guardians.
Governments and educational institutions should integrate online safety into school
curricula, ensuring a sustained focus on teaching digital literacy and critical thinking.
4. Strengthening Monitoring and Alert Systems
Monitoring and alert systems that track children's online behavior and flag potential threats in
real time are crucial but remain underdeveloped in terms of their accuracy and effectiveness.
One recommendation is the creation of unified platforms where monitoring systems can
collaborate across devices and services, reducing fragmentation and ensuring comprehensive
coverage (Ajish, 2024). It is also critical that these systems offer granular customization,
enabling parents and guardians to adjust settings based on their child’s maturity level and
unique online habits (Altuna et al., 2020). Additionally, monitoring systems should operate
transparently, ensuring that children’s autonomy and privacy are respected while still providing
necessary safeguards (Hollanek, 2023).
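A minimal sketch of the granular customization described here, with invented profile names and threshold values, might look like the following: per-child monitoring profiles keyed to maturity level, with transparency to the child built in.

```python
# Hypothetical per-child monitoring profiles; field names and preset
# values are illustrative assumptions, not from any cited product.
from dataclasses import dataclass

@dataclass
class MonitoringProfile:
    alert_threshold: float   # risk score that triggers a guardian alert
    log_browsing: bool       # whether browsing history is recorded at all
    notify_child: bool       # transparency: child can see what is monitored

PRESETS = {
    "young_child": MonitoringProfile(alert_threshold=0.3,
                                     log_browsing=True, notify_child=True),
    "teen":        MonitoringProfile(alert_threshold=0.7,
                                     log_browsing=False, notify_child=True),
}

def should_alert(profile, risk_score):
    """Alert the guardian only when risk exceeds the profile's threshold."""
    return risk_score >= profile.alert_threshold

print(should_alert(PRESETS["young_child"], 0.5))  # stricter for younger users
print(should_alert(PRESETS["teen"], 0.5))         # more autonomy for teens
```

The `notify_child` flag reflects the transparency requirement above: monitoring that the child knows about is less likely to erode trust than covert surveillance.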
5. Developing Cross-Platform Policies and Standards
One of the greatest challenges in safeguarding children online is the decentralized nature of the
Internet, where different platforms operate under varying standards. To overcome this, tech
companies, governments, and international regulatory bodies should collaborate to develop
universal protocols that platforms must adhere to when handling child safety. Such protocols
could include mandatory reporting systems for detecting and removing harmful content, stricter
age verification methods, and enhanced data-sharing practices between law enforcement
agencies and tech platforms for better tracking of offenders. These standards should extend
across borders, given the global nature of online interactions, creating a more consistent and
reliable framework for child safety.
6. Regulating the Internet of Things (IoT) and Smart Devices
The proliferation of IoT and smart devices has introduced new avenues for predators to exploit,
particularly through insecure devices that collect sensitive information or facilitate
unmonitored communication. To mitigate these risks, manufacturers must be held accountable
for embedding robust security measures into their products, particularly those intended for
child users. Governments should enforce regulations that require IoT devices to include built-
in safety features such as end-to-end encryption, secure authentication processes, and parental
control settings (Singh et al., 2024). Additionally, awareness campaigns aimed at parents and
caregivers should emphasize the importance of securing these devices and understanding the
risks they pose.
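As an illustration of the "secure authentication processes" recommended here, the following sketch shows an HMAC challenge-response exchange between a hub and a child-facing device, using only standard-library primitives; the device proves possession of a pre-shared key without ever transmitting it. Key provisioning and storage are simplified assumptions.

```python
# Sketch of HMAC challenge-response authentication for an IoT device.
# Key provisioning is simplified; a real device would protect the key
# in secure hardware and rotate credentials.
import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)   # pre-shared at provisioning time

def device_respond(challenge, key):
    """Device proves key possession without sending the key itself."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def hub_verify(challenge, response, key):
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = secrets.token_bytes(16)    # fresh nonce defeats replay attacks
response = device_respond(challenge, DEVICE_KEY)
print(hub_verify(challenge, response, DEVICE_KEY))        # legitimate device
print(hub_verify(challenge, "forged-response", DEVICE_KEY))  # impostor
```

Because each challenge is a fresh random nonce, a captured response cannot be replayed later, one of the basic properties a regulation on device authentication could mandate.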
7. Incorporating Blockchain Technology for Accountability and Transparency
Blockchain technology offers a promising solution for increasing transparency and
accountability in the digital space, particularly in terms of data security and monitoring online
transactions. Implementing blockchain in child-safeguarding tools can create an immutable
record of online interactions, ensuring that malicious activities are traceable and making it more
difficult for predators to cover their tracks. For example, blockchain could be used to
authenticate the age and identity of users without compromising privacy, ensuring that children
are not able to bypass age restrictions or access inappropriate content (Alotaibi, 2019).
However, blockchain solutions must be developed carefully to avoid inadvertently exposing
children to new vulnerabilities or privacy risks.
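The tamper-evidence property underlying this proposal can be illustrated with a minimal hash-chained log: each entry commits to the hash of the one before it, so altering any past record breaks the chain. A real blockchain deployment would add distributed consensus and privacy protections; the event strings below are placeholders.

```python
# Minimal hash-chained audit log illustrating tamper evidence only.
# A production system would add consensus, access control, and privacy.
import hashlib
import json

def append_entry(chain, event):
    """Append an event whose hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev": entry["prev"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "age_check_passed:user123")   # placeholder event names
append_entry(log, "report_filed:case42")
print(verify_chain(log))            # chain intact
log[0]["event"] = "tampered"
print(verify_chain(log))            # alteration is now detectable
```

This is the sense in which blockchain-style records make it "more difficult for predators to cover their tracks": erasing or rewriting an interaction invalidates every later entry.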
8. Advancing Public-Private Partnerships and International Collaboration
The fight against online predators requires more than just technological innovation; it demands
robust collaboration between private tech companies, governments, non-governmental
organizations (NGOs), and international bodies (Minnaar, 2023). Public-private partnerships
should focus on pooling resources and knowledge to create scalable solutions that protect
children globally. Such collaborations could fund research into emerging threats, develop
universally applicable technological solutions, and promote data-sharing between entities to
catch and prosecute offenders across borders. International cooperation is also crucial in
establishing consistent regulations and standards, especially when it comes to cross-border
online crimes.
9. Investing in Research on Emerging Trends
Finally, continuous research into emerging technologies and their potential risks and benefits
is necessary to stay ahead of the evolving tactics used by online predators. Governments,
educational institutions, and tech companies should collaborate to fund long-term studies on
the effectiveness of current safeguarding tools and identify areas for improvement. This
research should also include ethical considerations, particularly regarding how technologies
like AI, IoT, and blockchain may infringe upon the rights of children or be used inappropriately
by bad actors.
Consequently, while significant strides have been made in leveraging technology to protect
children online, there is still room for improvement. AI and machine learning, IoT, biometric
authentication, blockchain, and education-based solutions offer a multi-faceted approach, but
they are not foolproof. These technologies must continue to evolve alongside the threats they
aim to mitigate, and greater collaboration between stakeholders is essential to ensuring a safer
digital environment for children. Ultimately, a combination of technological advancements,
policy development, and education will provide the most comprehensive protection for
children against online predators.
REFERENCES
Abdelmaboud, A., Ahmed, A. I. A., Abaker, M., Eisa, T. A. E., Albasheer, H., Ghorashi, S. A.,
& Karim, F. K. (2022). Blockchain for IoT applications: taxonomy, platforms, recent
advances, challenges and future research directions. Electronics, 11(4), 630.
Abu Deeb, F. (2024). Enhancing Cybersecurity with Extended Reality: A Systematic
Review. Journal of Computer Information Systems, 1-15.
Açar, K. V. (2016). Sexual Extortion of Children in Cyberspace. International Journal of
Cyber Criminology, 10(2).
Adel, A., & Norouzifard, M. (2024). Weaponization of the Growing Cybercrimes inside the
Dark Net: The Question of Detection and Application. Big Data and Cognitive
Computing, 8(8), 91.
Ajish, D. (2024). Streamlining Cybersecurity: Unifying Platforms for Enhanced
Defense. International Journal of Information Technology, Research and Applications, 3(2),
48-57.
Albalawi, S., Alshahrani, L., Albalawi, N., Kilabi, R., & Alhakamy, A. A. (2022). A
comprehensive overview on biometric authentication systems using artificial intelligence
techniques. International Journal of Advanced Computer Science and Applications, 13(4), 1-
11.
Ali, S., Elgharabawy, M., Duchaussoy, Q., Mannan, M., & Youssef, A. (2020, December).
Betrayed by the guardian: Security and privacy risks of parental control solutions.
In Proceedings of the 36th Annual Computer Security Applications Conference (pp. 69-83).
Alnajim, A. M., Habib, S., Islam, M., AlRawashdeh, H. S., & Wasim, M. (2023). Exploring
cybersecurity education and training techniques: a comprehensive review of traditional,
virtual reality, and augmented reality approaches. Symmetry, 15(12), 2175.
Alotaibi, B. (2019). Utilizing blockchain to overcome cyber security concerns in the internet
of things: A review. IEEE Sensors Journal, 19(23), 10953-10971.
Altuna, J., Martínez-de-Morentin, J. I., & Lareki, A. (2020). The impact of becoming a parent
about the perception of Internet risk behaviors. Children and Youth Services Review, 110,
104803.
Amankwah-Amoah, J., Khan, Z., Wood, G., & Knight, G. (2021). COVID-19 and
digitalization: The great acceleration. Journal of business research, 136, 602-611.
Ambagtsheer, F. (2021). Understanding the challenges to investigating and prosecuting organ
trafficking: a comparative analysis of two cases. Trends in Organized Crime, 1-28.
Arora, A., Nakov, P., Hardalov, M., Sarwar, S. M., Nayak, V., Dinkov, Y., ... &
Bailey, J. (2010). Twenty Years Later Taylor Still Has It Right: How the Canadian Human
Rights Act’s Hate Speech Provision Continues to Contribute to Equality. The Supreme Court
of Canada and Social Justice: Commitment, Retrenchment or Retreat, Sheila McIntyre and
Sanda Rodgers, eds., LexisNexis Canada.
Bailey, J. (2017). From “zero tolerance” to “safe and accepting”: Surveillance and equality in
the evolution of Ontario education law and policy. Education Law Journal, 26(2), 147-180.
Bailey, J., Henry, N., & Flynn, A. (2021). Technology-Facilitated Violence and Abuse:
International Perspectives and Experiences. In Emerald Publishing Limited eBooks (pp. 1–
17). https://doi.org/10.1108/978-1-83982-848-520211001
Baker, I. (2022). Blackmail on the Internet: An exploration of the online sexual coercion of
children. Nottingham Trent University (United Kingdom).
Betrand, C. U., Onyema, C. J., Benson-Emenike, M. E., & Kelechi, D. A. (2023).
Authentication system using biometric data for face recognition. International Journal of
Sustainable Development Research, 68-78.
Binder, M. (2019, February 22). YouTube’s pedophilia problem: More than 400 channels
deleted as advertisers flee over child predators. Mashable.
https://mashable.com/article/youtube-wakeup-child-exploitation-explained
Black, P. J., Wollis, M., Woodworth, M., & Hancock, J. T. (2015). A linguistic analysis of
grooming strategies of online child sex offenders: Implications for our understanding of
predatory sexual behavior in an increasingly computer-mediated world. Child abuse &
neglect, 44, 140-149.
Blackwell, C. (2013). Teacher practices with mobile technology integrating tablet computers
into the early childhood classroom. Journal of Education Research, 7(4).
Bonneau, J., Herley, C., Van Oorschot, P. C., & Stajano, F. (2012, May). The quest to replace
passwords: A framework for comparative evaluation of web authentication schemes. In 2012
IEEE symposium on security and privacy (pp. 553-567). IEEE.
Brown, A. J. (2020). “Should I stay or should I leave?”: Exploring (dis) continued Facebook
use after the Cambridge Analytica scandal. Social media+ society, 6(1), 2056305120913884.
Buil-Gil, D., Kemp, S., Kuenzel, S., Coventry, L., Zakhary, S., Tilley, D., & Nicholson, J.
(2023). The digital harms of smart home devices: A systematic literature review. Computers
in Human Behavior, 145, 107770.
Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy
disparities in commercial gender classification. In Conference on fairness, accountability and
transparency (pp. 77-91). PMLR.
Buttussi, F., & Chittaro, L. (2017). Effects of different types of virtual reality display on
presence and learning in a safety training scenario. IEEE transactions on visualization and
computer graphics, 24(2), 1063-1076.
Carlson, B. (2019). Disrupting the master narrative: Indigenous people and tweeting colonial
history. Griffith Review, (64), 224-234.
Carr, J. (2003). Child abuse, child pornography, and the internet. London: NCH.
Cassidy, W., Faucher, C., & Jackson, M. (2013). Cyberbullying among youth: A
comprehensive review of current international research and its implications and application to
policy and practice. School Psychology International, 34(6), 575-612.
Chen, L. W., Chen, T. P., Chen, H. M., & Tsai, M. F. (2019). Crowdsourced children
monitoring and finding with holding up detection based on internet-of-things
technologies. IEEE Sensors Journal, 19(24), 12407-12417.
Chen, X., Gao, W., Chu, Y., & Song, Y. (2024). Enhancing Interaction in Virtual-Real
Architectural Environments: A Comparative Analysis of Generative AI-driven Reality
Approaches. Building and Environment, 112113.
Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest L. Rev., 49,
345.
Colliver, B., Coyle, A., & Silvestri, M. (2019). The ‘online othering’of transgender people in
relation to ‘gender neutral toilets’. Online othering: Exploring digital violence and
discrimination on the web, 215-237.
Connellan, S. (2023, November 16). “Fortnite” players can now report others using voice
recordings. Here’s how. Mashable. https://mashable.com/article/fortnite-report-voice-audio
Craven, S., Brown, S., & Gilchrist, E. (2006). Sexual grooming of children: Review of
literature and theoretical considerations. Journal of sexual aggression, 12(3), 287-299.
Debas, E. A., Alajlan, R. S., & Rahman, M. H. (2023, February). Biometric in cyber security:
A mini review. In 2023 International Conference on Artificial Intelligence in Information and
Communication (ICAIIC) (pp. 570-574). IEEE.
Demmese, F., Yuan, X., & Dicheva, D. (2020, December). Evaluating the effectiveness of
gamification on students’ performance in a cybersecurity course. In Journal of the
Colloquium for Information System Security Education (Vol. 8, No. 1).
Dhinakaran, D., Sankar, S. M., Selvaraj, D., & Raja, S. E. (2024). Privacy-Preserving Data in
IoT-based Cloud Systems: A Comprehensive Survey with AI Integration. arXiv preprint
arXiv:2401.00794.
Dichev, C., & Dicheva, D. (2017). Gamifying education: what is known, what is believed and
what remains uncertain: a critical review. International journal of educational technology in
higher education, 14, 1-36.
Dodel, M., & Mesch, G. (2018). Inequality in digital skills and the adoption of online safety
behaviors. Information, Communication & Society, 21(5), 712-728.
Dombrowski, S. C., Gischlar, K. L., & Durst, T. (2007). Safeguarding young people from
cyber pornography and cyber sexual predation: A major dilemma of the Internet. Child Abuse
Review: Journal of the British Association for the Study and Prevention of Child Abuse and
Neglect, 16(3), 153-170.
Dombrowski, S. C., LeMasney, J. W., Ahia, C. E., & Dickson, S. A. (2004). Protecting
children from online sexual predators: technological, psychoeducational, and legal
considerations. Professional Psychology: Research and Practice, 35(1), 65.
Duan, C., & Grimmelmann, J. (2024). Content moderation on end-to-end encrypted systems:
A legal analysis. Geo. L. Tech. Rev., 8, 1.
ECPAT. (2022, May 11). International women and girls series 5: How does trafficking affect women, girls and children? https://ecpat.org/story/international-women-and-girls-series-5-how-does-trafficking-affect-women-girls-and-children/
Edwards, G., Christensen, L. S., Rayment-McHugh, S., & Jones, C. (2021). Cyber strategies
used to combat child sexual abuse material. Trends and issues in crime and criminal justice,
(636), 1-16.
Equality Now. (2023, February 8). Ending Online Sexual Exploitation and Abuse of Women
and Girls: A Call for International Standards - Equality Now.
https://equalitynow.org/resource/ending-online-sexual-exploitation-and-abuse-of-women-
and-girls-a-call-for-international-standards/
Erickson, L. B., Wisniewski, P., Xu, H., Carroll, J. M., Rosson, M. B., & Perkins, D. F.
(2016). The boundaries between: Parental involvement in a teen's online world. Journal of
the Association for Information Science and Technology, 67(6), 1384-1403.
Europol. (2020). EXPLOITING ISOLATION: Offenders and victims of online child sexual
abuse during the COVID-19 pandemic.
https://www.europol.europa.eu/sites/default/files/documents/europol_covid_report-
cse_jun2020v.3_0.pdf
Faith, B. F., Long, Z. A., & Hamid, S. (2024, May). Promoting cybersecurity knowledge via
gamification: an innovative intervention design. In 2024 Third International Conference on
Distributed Computing and High Performance Computing (DCHPC) (pp. 1-8). IEEE.
Falloon, G. (2020). From digital literacy to digital competence: the teacher digital
competency (TDC) framework. Educational technology research and development, 68(5),
2449-2472.
Faraz, A., Mounsef, J., Raza, A., & Willis, S. (2022). Child safety and protection in the online
gaming ecosystem. IEEE Access, 10, 115895-115913.
Fiani, C., Bretin, R., Macdonald, S. A., Khamis, M., & McGill, M. (2024, May). "Pikachu
would electrocute people who are misbehaving": Expert, guardian and child perspectives on
automated embodied moderators for safeguarding children in social virtual reality. In
Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-23).
Finck, M. (2018). Blockchains and data protection in the European Union. Eur. Data Prot. L.
Rev., 4, 17.
Finkelhor, D., & Hotaling, G. T. (1984). Sexual abuse in the national incidence study of child
abuse and neglect: An appraisal. Child abuse & neglect, 8(1), 23-32.
Finkelhor, D., Walsh, K., Jones, L., Mitchell, K., & Collier, A. (2021). Youth internet safety
education: Aligning programs with the evidence base. Trauma, Violence, & Abuse, 22(5),
1233-1247.
Flynn, A. (2019). Image-based sexual abuse. In Oxford research encyclopedia of criminology
and criminal justice.
Gandhi, R. D., & Patel, D. S. (2018). Virtual reality–opportunities and challenges. Virtual
Reality, 5(01), 2714-2724.
Gates, K. A. (2011). Our biometric future: Facial recognition technology and the culture of
surveillance (Vol. 2). NYU Press.
George, A. S. (2024). Virtual Violence: Legal and Psychological Ramifications of Sexual
Assault in Virtual Reality Environments. Partners Universal International Innovation Journal,
2(1), 96-114.
George, M. J., & Odgers, C. L. (2015). Seven fears and the science of how mobile
technologies may be influencing adolescents in the digital age. Perspectives on psychological
science, 10(6), 832-851.
Gezinski, L. B., & Gonzalez-Pons, K. M. (2024). Sex trafficking and technology: A
systematic review of recruitment and exploitation. Journal of Human Trafficking, 10(3), 497-
511.
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the
hidden decisions that shape social media. Yale University Press.
Gjertsen, E. G. B., Gjære, E. A., Bartnes, M., & Flores, W. R. (2017, February). Gamification
of Information Security Awareness and Training. In ICISSP (pp. 59-70).
Glavin, L. (2024, August 30). Bias in Biometrics: How Organizations Can Launch Remote
Identity Verification Confidently - FIDO Alliance. FIDO Alliance.
https://fidoalliance.org/bias-in-biometrics-how-organizations-can-launch-remote-identity-
verification-confidently/
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical
and political challenges in the automation of platform governance. Big Data & Society, 7(1),
2053951719897945.
Green, A. (2019). Cucks, fags and useful idiots: The othering of dissenting white
masculinities online. Online othering: Exploring digital violence and discrimination on the
web, 65-89.
Green, L., Haddon, L., Livingstone, S., Holloway, D., Jaunzems, K., Stevenson, K. J., &
O'Neill, B. (2019). Parents' failure to plan for their children's digital futures. Media@ LSE
Working Paper Series.
Griffin, P. H. (2016). Biometric-based cybersecurity techniques. In Advances in Human
Factors in Cybersecurity: Proceedings of the AHFE 2016 International Conference on Human
Factors in Cybersecurity, July 27-31, 2016, Walt Disney World®, Florida, USA (pp. 43-53).
Springer International Publishing.
Hafner, L., Peifer, T. P., & Hafner, F. S. (2024). Equal accuracy for Andrew and Abubakar—
detecting and mitigating bias in name-ethnicity classification algorithms. AI & society, 39(4),
1605-1629.
Henriquez, M. (2021, December 8). The top 12 data breaches of 2019. Security Magazine.
https://www.securitymagazine.com/articles/91366-the-top-12-data-breaches-of-2019
Henry, N., & Umbach, R. (2024). Sextortion: Prevalence and correlates in 10 countries.
Computers in Human Behavior, 158, 108298.
Henry, N., Flynn, A., & Powell, A. (2018). Policing image-based sexual abuse: Stakeholder
perspectives. Police practice and research, 19(6), 565-581.
Henry, N., Flynn, A., & Powell, A. (2020). Technology-facilitated domestic and sexual
violence: A review. Violence against women, 26(15-16), 1828-1854.
Hernandez-de-Menendez, M., Morales-Menendez, R., Escobar, C. A., & Arinez, J. (2021).
Biometric applications in education. International Journal on Interactive Design and
Manufacturing (IJIDeM), 15, 365-380.
Hine, E., Rezende, I. N., Roberts, H., Wong, D., Taddeo, M., & Floridi, L. (2024). Safety and
privacy in immersive extended reality: An analysis and policy recommendations. Digital
Society, 3(2), 33.
Hodge, R. (2019, December 27). 2019 Data Breach Hall of Shame: These were the biggest
data breaches of the year. CNET. https://www.cnet.com/news/privacy/2019-data-breach-hall-
of-shame-these-were-the-biggest-data-breaches-of-the-year/
Hoffmann, A. (2022). Regulating the Internet in Times of Mass Surveillance: A Universal
Global Space with Universal Human Rights?. In Problematising Intelligence Studies (pp.
181-200). Routledge.
Hofmann, F., Wurster, S., Ron, E., & Böhmecke-Schwafert, M. (2017, November). The
immutability concept of blockchains and benefits of early standardization. In 2017 ITU
Kaleidoscope: Challenges for a Data-Driven Society (ITU K) (pp. 1-8). IEEE.
Hoge, E., Bickham, D., & Cantor, J. (2017). Digital media, anxiety, and depression in
children. Pediatrics, 140(Supplement_2), S76-S80.
Hollandsworth, R., Donovan, J., & Welch, M. (2017). Digital citizenship: You can’t go home
again. TechTrends, 61, 524-530.
Hollanek, T. (2023). AI transparency: a matter of reconciling design with critique. Ai &
Society, 38(5), 2071-2079.
Hopper, E., & Hidalgo, J. (2006). Invisible chains: Psychological coercion of human
trafficking victims. Intercultural Hum. Rts. L. Rev., 1, 185.
Howard, T. (2019). Sextortion: Psychological effects experienced and seeking help and
reporting among emerging adults (Doctoral dissertation, Walden University).
Ibrahim, A. (2024, February). Guarding the Future of Gaming: The Imperative of
Cybersecurity. In 2024 2nd International Conference on Cyber Resilience (ICCR) (pp. 1-9).
IEEE.
ILO. (2024, August 14). Love146. https://love146.org/learn/
Immigration & Customs Enforcement. (2023, August 22). Sextortion. ICE.
https://www.ice.gov/features/sextortion
INTERPOL. (2023). INTERPOL issues global warning on human trafficking-fueled fraud.
https://www.interpol.int/en/News-and-Events/News/2023/INTERPOL-issues-global-warning-
on-human-trafficking-fueled-fraud
Jada, I., & Mayayise, T. O. (2023). The impact of artificial intelligence on organisational
cyber security: An outcome of a systematic literature review. Data and Information
Management, 100063.
Johnson, A. (2024, July 9). How to Address Children’s Online Safety in the United States.
ITIF. https://itif.org/publications/2024/06/03/how-to-address-childrens-online-safety-in-
united-states/
Jongerius, S. (2024, February 22). GDPR’s Right to be Forgotten in Blockchain: it’s not black
and white. TechGDPR. https://techgdpr.com/blog/gdpr-right-to-be-forgotten-blockchain/
Jordan, J. M. (2024). The Rise of the Algorithms: How YouTube and TikTok Conquered the
World. Penn State Press.
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and
prospects. Science, 349(6245), 255-260.
Kaimara, P., Oikonomou, A., & Deliyannis, I. (2022). Could virtual reality applications pose
real risks to children and adolescents? A systematic review of ethical issues and concerns.
Virtual Reality, 26(2), 697-735.
Karadimce, A., & Bukalevska, M. (2023). Threats Targeting Children on Online Social
Networks. WSEAS TRANSACTIONS ON ADVANCES in ENGINEERING EDUCATION, 20,
25–31. https://doi.org/10.37394/232010.2023.20.4
Karayianni, E., Fanti, K. A., Diakidoy, I. A., Hadjicharalambous, M. Z., & Katsimicha, E.
(2017). Prevalence, contexts, and correlates of child sexual abuse in Cyprus. Child Abuse &
Neglect, 66, 41-52.
Karunamurthy, A., Kiruthivasan, R., & Gauthamkrishna, S. (2023). Human-in-the-Loop
Intelligence: Advancing AI-Centric Cybersecurity for the Future. Quing: International Journal
of Multidisciplinary Scientific Research and Development, 2(3), 20-43.
Katsh, M. E., & Rabinovich-Einy, O. (2017). Digital justice: technology and the internet of
disputes. Oxford University Press.
Kavanagh, E., Jones, I., & Sheppard-Marks, L. (2020). Towards typologies of virtual
maltreatment: Sport, digital cultures & dark leisure. In Re-thinking leisure in a digital
age (pp. 75-88). Routledge.
Ke, F. (2016). Designing and integrating purposeful learning in game play: A systematic
review. Educational Technology Research and Development, 64, 219-244.
Kerrigan, N. (2019). Rural racism in the digital age. Online othering: Exploring digital
violence and discrimination on the Web, 259-279.
Kowalski, R. M., Giumetti, G. W., Schroeder, A. N., & Lattanner, M. R. (2014). Bullying in
the digital age: a critical review and meta-analysis of cyberbullying research among youth.
Psychological bulletin, 140(4), 1073.
Land, M. (2013). Toward an international law of the internet. Harv. Int'l LJ, 54, 393.
Lanitis, A. (2010). A survey of the effects of aging on biometric identity verification.
International Journal of Biometrics, 2(1), 34-52.
Le Compte, A., Elizondo, D., & Watson, T. (2015, May). A renewed approach to serious
games for cyber security. In 2015 7th International Conference on Cyber Conflict:
Architectures in Cyberspace (pp. 203-216). IEEE.
Lindau, J. D. (2022). Surveillance and the Vanishing Individual: Power and Privacy in the
Digital Age. Rowman & Littlefield.
Livingstone, S., & Blum-Ross, A. (2020). Parenting for a digital future: How hopes and fears
about technology shape children's lives. Oxford University Press, USA.
Livingstone, S., & Smith, P. K. (2014). Annual research review: Harms experienced by child
users of online and mobile technologies: The nature, prevalence and management of sexual
and aggressive risks in the digital age. Journal of child psychology and psychiatry, 55(6),
635-654.
Livingstone, S., & Stoilova, M. (2021). Using global evidence to benefit children’s online
opportunities and minimise risks. Contemporary Social Science.
Livingstone, S., Ólafsson, K., Helsper, E. J., Lupiáñez-Villanueva, F., Veltri, G. A., &
Folkvord, F. (2017). Maximizing opportunities and minimizing risks for children online: The
role of digital skills in emerging strategies of parental mediation. Journal of
Communication, 67(1), 82-105.
Livingstone, S., Stoilova, M., & Nandagiri, R. (2019). Children's data and privacy online:
growing up in a digital age: an evidence review.
Llansó, E. J. (2020). No amount of “AI” in content moderation will solve filtering’s prior-
restraint problem. Big Data & Society, 7(1), 2053951720920686.
Llansó, E., van Hoboken, J., Leerssen, P., & Harambam, J. (2020). Artificial intelligence,
content moderation, and freedom of expression.
Lupton, D., & Williamson, B. (2017). The datafied child: The dataveillance of children and
implications for their rights. New media & society, 19(5), 780-794.
Majaranta, P., & Bulling, A. (2014). Eye tracking and eye-based human-computer interaction.
In Advances in physiological computing (pp. 39-65). London: Springer London.
Malone, M., Wang, Y., James, K., Anderegg, M., Werner, J., & Monrose, F. (2021, March). To
gamify or not? On leaderboard effects, student engagement, and learning outcomes in a
cybersecurity intervention. In Proceedings of the 52nd ACM Technical Symposium on
Computer Science Education (pp. 1135-1141).
Manne, G. A., Sperry, B., & Stout, K. (2022). Who Moderates the Moderators? A Law &
Economics Approach to Holding Online Platforms Accountable Without Destroying the
Internet. Rutgers Computer & Tech. LJ, 49, 26.
Manthiramoorthy, C., & Khan, K. M. S. (2024). Comparing several encrypted cloud storage
platforms. International Journal of Mathematics, Statistics, and Computer Science, 2, 44-62.
Marsoof, A., Luco, A., Tan, H., & Joty, S. (2023). Content-filtering AI systems–limitations,
challenges and regulatory approaches. Information & Communications Technology Law,
32(1), 64-101.
Martin, J., & Alaggia, R. (2013). Sexual abuse images in cyberspace: Expanding the ecology
of the child. Journal of child sexual abuse, 22(4), 398-415.
Marzano, G. (2021). Anti-Cyberbullying Interventions. In Research Anthology on School
Shootings, Peer Victimization, and Solutions for Building Safer Educational Institutions (pp.
468-488). IGI Global.
Mensah, G. B. (2023). Artificial intelligence and ethics: a comprehensive review of bias
mitigation, transparency, and accountability in AI Systems. Preprint, November, 10.
Meurens, N., Notté, E., Wanat, A., & Mariano, L. (2022). Child safety by design that works
against online sexual exploitation of children. Down to Zero Alliance, Netherlands.
Minnaar, A. (2023). An examination of early international and national efforts to combat
online child pornography and child sexual exploitation and abuse material on the
Internet. Child Abuse Research in South Africa, 24(2), 1-26.
Molok, N. N. A., Hakim, N. A. H. A., & Jamaludin, N. S. (2023). SmartParents: Empowering
Parents to Protect Children from Cyber Threats. International Journal on Perceptive and
Cognitive Computing, 9(2), 73-79.
Flannery O'Connor, J., & Moxley, E. (2023, November 14). Our approach to responsible AI
innovation. YouTube Official Blog. https://blog.youtube/inside-youtube/our-approach-to-responsible-ai-innovation/
Muthazhagu, V. H., Surendiran, B., & Arulmurugaselvi, N. (2024, July). Navigating the AI
Landscape: A Comparative Study of Models, Applications, and Emerging Trends. In 2024
International Conference on Signal Processing, Computation, Electronics, Power and
Telecommunication (IConSCEPT) (pp. 1-8). IEEE.
Nahmias, Y., & Perel, M. (2021). The oversight of content moderation by AI: impact
assessments and their limitations. Harv. J. on Legis., 58, 145.
Nguyen, T. (2024, June 25). Who is being targeted most by sextortion on social media? The
answer may surprise you. USA TODAY.
https://www.usatoday.com/story/news/nation/2024/06/25/financial-sextortion-teenage-boys-
social-media-report/74200070007/
Niu, Y. H., Gao, S., Zhang, H. K., & Gong, Y. J. (2024). Enhancing Content Moderation in
Wireless Mobile Networks: A Decentralized Quality Management Approach. Journal of
Information Science & Engineering, 40(4).
, M., Kovács, G., & Wersényi, G. (2021). The Regulation of Digital Reality in Nutshell.
In 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom) (pp.
1-7).
O’Connell, R. (2003). A typology of child cybersexploitation and online grooming practices.
Cyberspace Research Unit, University of Central Lancashire. https://image.guardian.co.uk/sys-
files/Society/documents/2003/07/17/Groomingreport.pdf
O’Malley, R. L. (2023). Short-term and long-term impacts of financial sextortion on victim’s
mental well-being. Journal of interpersonal violence, 38(13-14), 8563-8592.
O'Gorman, L. (2003). Comparing passwords, tokens, and biometrics for user
authentication. Proceedings of the IEEE, 91(12), 2021-2040.
Okeh, A. (2023, February 7). Google to enable default SafeSearch filter for signed-in users.
Punch Newspapers. https://punchng.com/google-to-enable-default-safesearch-filter-for-
signed-in-users/
Ortega-Barón, J., Machimbarrena, J. M., Calvete, E., Orue, I., Pereda, N., & González-
Cabrera, J. (2022). Epidemiology of online sexual solicitation and interaction of minors with
adults: A longitudinal study. Child Abuse & Neglect, 131, 105759.
Paik, S., Mays, K. K., & Katz, J. E. (2022). Invasive yet inevitable? Privacy normalization
trends in biometric technology. Social Media+ Society, 8(4), 20563051221129147.
Patchin, J. W. & Hinduja, S. (2024). Cyberbullying Facts. Cyberbullying Research Center.
https://cyberbullying.org/facts
Patchin, J. W., & Hinduja, S. (2015). Measuring cyberbullying: Implications for
research. Aggression and violent behavior, 23, 69-74.
Pavan, E. (2017). Internet intermediaries and online gender-based violence. In Gender,
technology and violence (pp. 62-78). Routledge.
Pea, R. D., Biernacki, P., Bigman, M., Boles, K., Coelho, R., Docherty, V., ... & Vishwanath,
A. (2023). Four surveillance technologies creating challenges for education. Learning:
Designing the Future, 317.
PEN America. (2024, September 25). Shouting into the Void - PEN America.
https://pen.org/report/shouting-into-the-void/
Perasso, G. (2020). Cyberbullying detection through machine learning: Can technology help
to prevent internet bullying?. International Journal of Management and Humanities, 4(11),
57-69.
Polak, S., Schiavo, G., & Zancanaro, M. (2022, April). Teachers’ perspective on artificial
intelligence education: An initial investigation. In CHI Conference on Human Factors in
Computing Systems Extended Abstracts (pp. 1-7).
Quayle, E., & Koukopoulos, N. (2019). Deterrence of online child sexual abuse and
exploitation. Policing: A Journal of Policy and Practice, 13(3), 345-362.
Quayle, E., & Newman, E. (2015). The role of sexual images in online and offline sexual
behaviour with minors. Current Psychiatry Reports, 17, 1-6.
Quayle, E., & Taylor, M. (2001). Child seduction and self-representation on the
Internet. CyberPsychology & Behavior, 4(5), 597-608.
Quayle, E., & Taylor, M. (2003). Model of problematic Internet use in people with a sexual
interest in children. CyberPsychology & Behavior, 6(1), 93-106.
Quayle, E. (2016). Global Kids Online. http://www.globalkidsonline.net/sexual-exploitation
Quayle, E., Allegro, S., Hutton, L., Sheath, M., & Lööf, L. (2014). Rapid skill acquisition and
online sexual grooming of children. Computers in Human Behavior, 39, 368-375.
Rebhi, T. (2023). Challenges and Prospects in Enforcing Legal Protection of Children from
Online Sexual Exploitation. Krytyka Prawa, 21.
Redmiles, E. M., Bodford, J., & Blackwell, L. (2019, July). “I just want to feel safe”: A Diary
Study of Safety Perceptions on Social Media. In Proceedings of the International AAAI
Conference on Web and Social Media (Vol. 13, pp. 405-416).
Rogers-Whitehead, C. (2019). Digital citizenship: teaching strategies and practice from the
field. Rowman & Littlefield.
Sanka, A. I., & Cheung, R. C. (2021). A systematic review of blockchain scalability: Issues,
solutions, analysis and future research. Journal of Network and Computer Applications, 195,
103232.
Sarkar, S. (2015). Use of technology in human trafficking networks and sexual exploitation:
A cross-sectional multi-country study. Transnational Social Review, 5(1), 55-68.
Schiavo, G., Roumelioti, E., Deppieri, G., & Marconi, A. (2024, June). Gamification
Strategies for Child Protection: Best Practices for Applying Digital Gamification in Child
Sexual Abuse Prevention. In Proceedings of the 23rd Annual ACM Interaction Design and
Children Conference (pp. 282-289).
Schneider, P. J., & Rizoiu, M. A. (2023). The effectiveness of moderating harmful online
content. Proceedings of the National Academy of Sciences, 120(34), e2307360120.
Seale, J., & Schoenberger, N. (2018). Be internet awesome: A critical analysis of google's
child-focused internet safety program. Emerging Library & Information Perspectives, 1(1),
34-58.
Sezer, N., & Tunçer, S. (2021). Cyberbullying hurts: The rising threat to youth in the digital
age. In Digital siege (pp. 179-194). Istanbul: Istanbul University Press.
https://doi.org/10.26650/B/SS07, 9.
Shen, H., DeVos, A., Eslami, M., & Holstein, K. (2021). Everyday algorithm auditing:
Understanding the power of everyday users in surfacing harmful algorithmic
behaviors. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-29.
Singh, S., & Nambiar, V. (2024). Role of Artificial Intelligence in the Prevention of Online
Child Sexual Abuse: A Systematic Review of Literature. Journal of Applied Security
Research, 1-42.
Smith, C. (2018). Cyber security, safety, & ethics education (Master's thesis, Utica College).
Song, S. (2020). Keeping Private Messages Private: End-to-End Encryption on Social Media.
In Boston College Intellectual Property and Technology Forum (Vol. 2020, pp. 1-12).
Southern, R., & Harmer, E. (2019). Othering political women: Online misogyny, racism and
ableism towards women in public life. Online othering: Exploring digital violence and
discrimination on the web, 187-210.
Staksrud, E., & Livingstone, S. (2009). Children and online risk: powerless victims or
resourceful participants?. Information, Communication & Society, 12(3), 364-387.
Sun, K., Zou, Y., Radesky, J., Brooks, C., & Schaub, F. (2021). Child safety in the smart
home: parents' perceptions, needs, and mitigation strategies. Proceedings of the ACM on
Human-Computer Interaction, 5(CSCW2), 1-41.
Tanwar, S., Tyagi, S., Kumar, N., & Obaidat, M. S. (2019). Ethical, legal, and social
implications of biometric technologies. Biometric-based physical and cybersecurity systems,
535-569.
Tapscott, D., & Tapscott, A. (2016). Blockchain revolution: how the technology behind
bitcoin is changing money, business, and the world. Penguin.
Taylor, E. (2012). The rise of the surveillance school. In Routledge handbook of surveillance
studies (pp. 225-231). Routledge.
Taylor, J., & Pagliari, C. (2018). Mining social media data: How are research sponsors and
researchers addressing the ethical challenges?. Research Ethics, 14(2), 1-39.
Tejani, A. S., Ng, Y. S., Xi, Y., & Rayan, J. C. (2024). Understanding and mitigating bias in
imaging artificial intelligence. RadioGraphics, 44(5), e230067.
Todres, J. (2010). Taking prevention seriously: Developing a comprehensive response to
child trafficking and sexual exploitation. Vand. J. Transnat'l L., 43, 1.
Uitts, B. S. (2022). Sex Trafficking of Children Online: Modern Slavery in Cyberspace.
Rowman & Littlefield.
Wang, F. (2024). Breaking the silence: Examining process of cyber sextortion and victims’
coping strategies. International Review of Victimology.
https://doi.org/10.1177/02697580241234331
Wang, G., Zhao, J., Van Kleek, M., & Shadbolt, N. (2021). Protection or punishment?
relating the design space of parental control apps and perceptions about them to support
parenting for online safety. Proceedings of the ACM on Human-Computer
Interaction, 5(CSCW2), 1-26.
Weru, T., Sevilla, J., Olukuru, J., Mutegi, L., & Mberi, T. (2017, May). Cyber-smart children,
cyber-safe teenagers: Enhancing internet safety for children. In 2017 IST-Africa Week
Conference (IST-Africa) (pp. 1-8). IEEE.
Whittle, H., Hamilton-Giachritsis, C., Beech, A., & Collings, G. (2013). A review of online
grooming: Characteristics and concerns. Aggression and violent behavior, 18(1), 62-70.
Wiggers, K. (2024, January 29). OpenAI partners with Common Sense Media to
collaborate on AI guidelines. TechCrunch. https://techcrunch.com/2024/01/29/openai-
partners-with-common-sense-media-to-collaborate-on-ai-guidelines/
Williams, R., Elliott, I. A., & Beech, A. R. (2013). Identifying sexual grooming themes used
by internet sex offenders. Deviant behavior, 34(2), 135-152.
Winters, G. M., Schaaf, S., Grydehøj, R. F., Allan, C., Lin, A., & Jeglic, E. L. (2022). The
sexual grooming model of child sex trafficking. Victims & Offenders, 17(1), 60-77.
Wittes, B., Poplin, C., Jurecic, Q., & Spera, C. (2016). Sextortion: Cybersecurity, teenagers,
and remote sexual assault. Center for Technology Innovation at Brookings, 11, 1-47.
Wolak, J., & Finkelhor, D. (2016). Sextortion: Findings from a survey of 1,631 victims.
http://www.unh.edu/ccrc/pdf/Sextortion_RPT_FNL_rev0803.pdf
Wortley, R. K., & Smallbone, S. (2006). Child pornography on the internet. Washington, DC:
US Department of Justice, Office of Community Oriented Policing Services.
Wu, T., Tien, K. Y., Hsu, W. C., & Wen, F. H. (2021). Assessing the effects of gamification on
enhancing information security awareness knowledge. Applied Sciences, 11(19), 9266.
Wulandari, C. E., Firdaus, F. A., & Saifulloh, F. (2024). Promoting Inclusivity Through
Technology: A Literature Review in Educational Settings. Journal of Learning and
Technology, 3(1), 19-28.
Wylde, V., Prakash, E., Hewage, C., & Platts, J. (2023). Ethical challenges in the use of
digital technologies: AI and big data. In Digital Transformation in Policing: The Promise,
Perils and Solutions (pp. 33-58). Cham: Springer International Publishing.
Yaseen, A. (2023). AI-driven threat detection and response: A paradigm shift in
cybersecurity. International Journal of Information and Cybersecurity, 7(12), 25-43.
Ybarra, M. L., Mitchell, K. J., & Korchmaros, J. D. (2011). National trends in exposure to
and experiences of violence on the Internet among children. Pediatrics, 128(6), e1376-e1386.
Yu, Y., Sharma, T., Hu, M., Wang, J., & Wang, Y. (2024). Exploring Parent-Child Perceptions
on Safety in Generative AI: Concerns, Mitigation Strategies, and Design Implications. arXiv
preprint arXiv:2406.10461.
Zakaria, N., Yew, L. K., Alias, N. M. A., & Husain, W. (2011, December). Protecting the
privacy of children in social networking sites with rule-based privacy tools. In 8th
International Conference on High-capacity Optical Networks and Emerging
Technologies (pp. 253-257). IEEE.
Zhu, C. (2024, September 23). Parents are finding new ways to monitor their kids. But some
experts are concerned. CBC. https://www.cbc.ca/radio/thecurrent/parenting-surveillance-
concerns-1.7329667
Zyskind, G., & Nathan, O. (2015, May). Decentralizing privacy: Using blockchain to protect
personal data. In 2015 IEEE security and privacy workshops (pp. 180-184). IEEE.