Review

Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, applications, challenges and future research directions

Keywords: AI TRiSM framework; AI TRiSM Orchestration; AI TRiSM Adaptive ModelOps; Deepfake Technology and Adversarial attacks

Abstract: Artificial Intelligence (AI) has become pervasive, enabling transformative advancements in various industries including the smart city, smart healthcare, smart manufacturing, the smart virtual world and the Metaverse. However, concerns related to risk, trust, and security are emerging with the increasing reliance on AI systems. One of the most beneficial and original solutions for ensuring the reliability and trustworthiness of AI systems is the AI Trust, Risk and Security Management (AI TRiSM) framework. Despite being comparatively new to the market, the framework has already demonstrated its effectiveness in various products and AI models. It has successfully contributed to fostering innovation, building trust, and creating value for businesses and society. Due to the lack of systematic investigations into AI TRiSM, we carried out a comprehensive and detailed review to bridge the existing knowledge gaps and provide a better understanding of the framework from both theoretical and technical standpoints. This paper explores various applications of the AI TRiSM framework across different domains, including finance, healthcare, and the Metaverse. Furthermore, the paper discusses the obstacles related to implementing the AI TRiSM framework, including adversarial attacks, the constantly changing landscape of threats, ensuring regulatory compliance, addressing skill gaps, and acquiring expertise in the field. Finally, it explores the future directions of AI TRiSM, emphasizing the importance of continual adaptation and collaboration among stakeholders to address emerging risks and promote ethical use and enhanced overall security of AI systems.

Corresponding authors: adibhabbal@karabuk.edu.tr (A. Habbal), khaliifcade@siu.edu.so (M.K. Ali), abuzaraida@it.misuratau.edu.ly (M.A. Abuzaraida).
DOI: https://doi.org/10.1016/j.eswa.2023.122442
Received 16 August 2023; Received in revised form 24 October 2023; Accepted 2 November 2023.
1. Introduction

The progress in technology has brought about the existence of Artificial Intelligence educators or, more broadly, automated teaching systems. AI has become a powerful influence that is altering different aspects of society, the economy, and technology (Kim, 2022). Continuous advancements in computing capabilities have led to substantial growth in AI technologies, spanning from machine learning to natural language processing and voice recognition (Sohn and Kwon, 2020; Abuzaraida et al., 2021), and AI-driven solutions are gaining more importance and exerting a greater impact on individuals' lives and our societies (Glikson and Woolley, 2020). These days, AI-based applications are dominant in practically every aspect of daily life, from product recommendation engines (Kim et al., 2022) and smart cities (Nikitas, 2020) to education (Chen et al., 2020) and autonomous cars (Ma, 2020). Moreover, its extensive adoption in crucial sectors like healthcare (Yu et al., 2018), finance (Sanz and Zhu, 2021), transportation (Bharadiya, 2023), and communications (Lv et al., 2020) has driven progress that offers exceptional advantages.

The widespread adoption of AI technologies nevertheless raises significant concerns related to trust, risk, and security. On the one hand, trust in AI involves the assurance that users and involved parties have in the dependability, honesty, and ethical application of AI systems. On the other hand, AI risks encompass the possible negative outcomes and uncertainties linked to AI algorithms and systems, including biases, unintended results, violations of data privacy, and the possibility of harm caused by AI. To address these problems, AI TRiSM is a suitable framework: it offers solutions that provide a structured approach to evaluating the trustworthiness of AI systems by assessing their transparency, explainability, and accountability.

AI TRiSM is a comprehensive framework designed to handle the challenges related to Artificial Intelligence (AI) systems, enabling fairness, governance, efficacy, reliability and privacy. The AI TRiSM framework is designed to assist organizations in developing a systematic approach to managing the risks associated with AI, including data privacy, security-related risks and ethical concerns.
The idea of AI TRiSM is a relatively recent concept that has gained significant attention as humans increasingly depend on AI-powered systems for complex tasks in today's society. A recent Gartner® report discussed the concept of the AI TRiSM framework and highlighted the importance of organizations implementing a strategy that combines specialized tools to fully leverage the benefits it offers (Groombridge, 2022). Furthermore, AI TRiSM encompasses multiple elements such as transparency, responsibility, fairness, reliability, and ethical considerations, among others. By embracing the AI TRiSM framework, organizations can acquire a deeper understanding of the processes involved in designing, developing, and deploying AI models. AI TRiSM is expected to enable effective monitoring and mitigation of risks while ensuring the dependability and trustworthiness of AI systems (Groombridge, 2022).

Current frameworks might not possess the thorough mechanisms essential to establish and maintain trust in AI systems, properly evaluate the risks linked with AI implementations, and efficiently handle security within the rapidly changing AI environment (Mahbooba, 2021). Moreover, the intricate and constantly changing characteristics of AI applications, coupled with the evolving landscape of potential threats, present distinct challenges that require customized strategies. Addressing these problems is crucial for advancing AI's trustworthy deployment, minimizing risks, and fortifying security, thereby fostering a more reliable and secure AI ecosystem for broader applications (Elahi, 2021). The challenging question is: "How can we manage the trust, risk and security of AI-based solutions using a comprehensive framework design based on transparency, responsibility, fairness, reliability and ethical considerations?" In addressing this inquiry, researchers have focused on understanding AI TRiSM in order to justify the reliability, transparency, explainability, accountability and trustworthiness of AI-based solutions.

In this review, we conduct a thorough examination of the academic literature by delving into the realm of AI Trust, Risk, and Security Management within AI systems. The review stands apart from previous research in four distinct manners. In particular, this study aims to: 1) evaluate and synthesize existing frameworks and methodologies intended to cultivate trust, mitigate risks, and enhance security in AI applications; 2) reveal and clarify five challenges and constraints encountered when implementing AI TRiSM frameworks efficiently; 3) concentrate on AI trust, risk, and security management and some areas where it may be applicable; and 4) discuss potential advancements in the field of the AI TRiSM framework.

We contribute to the literature on AI TRiSM within AI systems in four ways. Firstly, the review offers a consolidated and critical evaluation of existing frameworks, shedding light on their strengths and weaknesses in fostering trust, managing risks, and ensuring security in AI applications. Secondly, by analyzing real-world applications of AI TRiSM, the review provides insights into practical implementations and their impact on diverse sectors. Thirdly, the identification and delineation of prevalent challenges offers a foundational understanding necessary for devising effective strategies to overcome obstacles in the field. Lastly, we propose future research directions aiming to guide and inspire scholars, researchers, and practitioners in addressing emerging challenges and advancing AI TRiSM to meet the evolving demands of the technological landscape.

Our study extensively explores several crucial aspects of AI TRiSM, including frameworks, applications, challenges, and future directions. The paper is structured as follows: the initial sections provide an introduction to AI and AI TRiSM by balancing the current levels of trust, risk and security in AI and their consequences. Subsequently, we present how existing frameworks for AI Trust, Risk, and Security operate; AI TRiSM framework strategies and applications in the development of AI-based systems are then explored. The challenges and future directions of AI TRiSM are deliberated in the last section. Finally, the research conclusions of this paper are drawn.

2. Balancing AI trust, risk and security

Trust is identified as a key element for successful AI integration, with studies emphasizing the need for transparency, explainability, fairness, and accountability to establish trust among users and stakeholders (Lamsal, 2001). Additionally, risk and security considerations are paramount, as AI systems may introduce new vulnerabilities and threats. Ensuring the ethical and responsible deployment of well-trusted AI systems therefore necessitates the crucial task of balancing AI trust, risk, and security. It involves considering various influences to establish and maintain trust, mitigate possible risks, and safeguard the security of AI systems. There is a high level of expectation for AI solutions to effectively tackle existing challenges. However, the increasing visibility of AI solutions that fail to deliver the promised advancements poses a risk of undermining user confidence in AI systems (Roski, 2021). Here are some key ideas to deliberate:

2.1. Conceptualizing AI trust management

By definition, AI is distinct from natural intelligence, being a created and machine-based form of intelligence. In order to enhance decision-making within intricate and unpredictable systems, AI technology has been integrated, holding the promise to revolutionize medical approaches. AI models are engineered by humans to perform intricate tasks and process information akin to our own cognitive processes. The effects of initial AI systems are currently evident, presenting both challenges and prospects, and establishing the groundwork for the seamless integration of future AI advancements into social and economic sectors (Kumar et al., 2023; Bedemariam and Wessel, 2023).

Over the past few years, the research community has shown significant interest in the domain of trust and reputation management (Zhang, 2020). Widely accepted, interdisciplinary definitions describe trust as a psychological state encompassing a willingness to embrace vulnerability, driven by optimistic beliefs about the intentions or actions of another entity. Trusting convictions entail a trustor's view that a trustee possesses qualities that will be advantageous to the trustor (Cabiddu, 2022). Trust plays a critical role in the acceptance and adoption of AI technology, as individuals are more persuaded to use and depend on AI systems when they perceive them as reliable. Given the intricate algorithms employed by AI systems, privacy concerns arise for both individuals and organizations. Trust in AI involves a range of elements at the levels of society, organizations, and individuals that can enhance or diminish faith in AI technologies. Bias, discrimination, and privacy invasion are among the key factors concerning trust in AI.

Bias and Discrimination: The rise of AI techniques like machine learning is intensifying the issue of digital discrimination, with an increasing number of decisions being delegated to these systems (Ferrer, 2021). Proponents of AI solutions argue that predictive algorithms can analyze data in a more precise and impartial manner than humans by foreseeing intricate individual behaviors. They also suggest that the outcomes produced by predictive algorithms can reduce inherent human biases and offer additional advantages, such as uniformity, impartiality, and objectivity in assessing information related to an offender. On the flip side, significant concerns emerge regarding the court's utilization of biased and discriminatory data, including demographic and socio-economic factors, as inputs to existing predictive AI systems. Since these AI systems are then liable to generate biased and discriminatory results, they have a negative impact on the rights of individuals, principles of adjudication, and overall judicial integrity. This can lead to undesirable effects such as algorithmic bias, racial discrimination, and a surge in incarceration rates due to an elevated dependence on these predictive algorithms within a specific criminal justice system, rendering such automated risk-assessment systems deeply concerning from constitutional, technical, and ethical standpoints (Malek, 2022).
Moreover, in 2017, Amazon discontinued its AI-driven candidate assessment recruitment tool due to evidence indicating gender bias. The tool exhibited prejudiced behavior by giving lower ratings to resumes of female candidates, revealing a bias stemming from an insufficient representation of women in the training data used to develop the model (Mujtaba and Mahapatra, 2019). Lastly, the occurrence of biased results, whether by accident or due to systemic issues, fosters doubt and worry, impeding broad trust in AI's capability to act fairly and impartially. It is essential to tackle and reduce these adverse impacts to build confidence and guarantee the responsible and impartial use of AI across different sectors.
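Gaps like these can often be surfaced with simple group-level audits before a model ships. The following minimal sketch, using fabricated screening decisions and a toy binary protected attribute (all names and data are hypothetical, not drawn from the studies cited above), computes the demographic parity gap, one common fairness metric; a production audit would add further metrics such as equalized odds and statistical significance tests.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in favourable-outcome rates between two groups.

    y_pred : binary model decisions (1 = favourable outcome)
    group  : binary protected attribute (e.g. 0/1 encoded gender)
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # selection rate for group 0
    rate_b = y_pred[group == 1].mean()  # selection rate for group 1
    return abs(rate_a - rate_b)

# Toy screening decisions: a large gap flags potential disparate impact.
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
gender    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(f"demographic parity gap: {demographic_parity_gap(decisions, gender):.2f}")
```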
Privacy Invasion: As AI advances, it gains the capability to draw conclusions based on intricate data patterns that may elude human perception. AI systems typically depend on extensive data for effective training and functioning, which can pose a risk to privacy if sensitive data is mishandled or used inappropriately. Consequently, individuals may not even realize that their personal data is being used to form decisions that will affect them. Creating human trust in AI-based systems holds enormous importance in various domains, such as critical medical care decisions that can be a matter of life and death, as well as significant financial and transportation choices (Asan et al., 2020; Nicodeme, 2020). As AI systems increasingly take on these crucial decision-making roles, it becomes imperative for developers to design systems that embody transparency, reliability, and accountability. Regarding health-related data in particular, preserving an individual's privacy is a fundamental ethical value, given its direct link to personal well-being and identity. Safeguarding patients' confidentiality is crucial to prevent any unauthorized or secondary utilization of their data. Failing to meet patients' privacy requirements could result in psychological and reputational damage to them. The potential for data breaches heightens concerns about AI models that share personal health data. The worry is that AI processes could reidentify supposedly anonymized data, amplifying fears of privacy infringement and data breaches (Esmaeilzadeh, 2020).

Moreover, effective communication with users regarding the inner workings and limitations of AI systems is also vital to foster trust. Additionally, the implementation of regulations and standards plays a significant role in building trust by ensuring responsible and ethical development and usage of AI systems. Conversely, when comparing trust in interpersonal connections, where both parties are human, to relationships involving technology or machines, trust can be placed either in the technology itself or in the organization providing it. Trust in the technology and trust in the provider are interdependent, meaning that an increase in trust in one component will also lead to an increase in trust in the other (Wazan, 2017).

2.2. Conceptualizing AI risk management

AI risk management involves identifying possible threats and risks associated with AI systems. It encompasses examining the competences, constraints, and possible failure modes of AI technologies. In order to detect vulnerabilities, techniques like adversarial testing (Park, 2022) and data verification (Benmoussa, 2022) may also be employed. AI is proving to have both positive and negative implications, posing challenges for organizations and individuals alike. Many are currently struggling with the issues tied to AI systems, which give rise to unintended risks. These risks, at times, can result in severe consequences, including bias (Nelson, 2019), privacy violations (Zhu, 2020), discrimination (Van Bekkum and Borgesius, 2023), accidents (Hadj-Mabrouk, 2019), and political manipulation systems (Peters, 2022). Here are some considerations when conceptualizing risks in AI.

Society Manipulation: One unintended risk of recent AI advancements is the manipulation of social dynamics (King, 2020). Social media has become omnipresent in modern society, serving a multitude of functions, including entertainment, spreading information, engaging in political discourse, and promoting businesses (Marr, 2018). The utilization of social media has brought to the forefront significant societal, cultural, and moral concerns, encompassing topics like privacy, cyberbullying, filter bubbles, and the propagation of misinformation. Specifically, allegations have been made against social media platforms for employing algorithms to control the content that users encounter in their feeds, with the aim of advancing specific political or commercial agendas. Personalized search algorithms strive to enhance the search process for users by presenting results that align with their interests and requirements, aiming for a more efficient experience. Nevertheless, these algorithms have faced criticism for their potential to uphold current biases, restrict information diversity, and foster filter bubbles. A significant hurdle associated with personalized search algorithms is their dependence on extensive user data to operate efficiently. This data enables the creation of comprehensive user profiles, encompassing interests, preferences, and behavioral trends. While this can enhance the relevance of search outcomes, it also opens the door to user targeting for advertising and potential manipulation.

Deepfake Technology: Another possible concern is the utilization of deepfake technology, a form of AI employed to produce convincing counterfeit visuals, videos, and audio clips that give the impression of authenticity. Deepfake technology operates by employing machine learning algorithms to assess and modify various data, including images, videos, and audio recordings, to generate fresh, artificial content (Ienca, 2023; Westerlund, 2019). Deepfakes are predominantly directed towards social media platforms, where it is effortless for rumors, misinformation, and conspiracies to proliferate, given that individuals often follow popular opinions. Simultaneously, a persistent 'infopocalypse' is causing individuals to believe in the reliability of information only if it originates from their social circles, encompassing family, close friends, or acquaintances, and aligns with their preexisting beliefs. In reality, a considerable number of individuals are receptive to content that validates their established perspectives, even when they harbor suspicions about its authenticity (Westerlund, 2019). A significant risk was highlighted by an incident in India during April 2018. In this instance, a video quickly circulated on WhatsApp, a widely used mobile instant messaging platform. The video, appearing to be from a surveillance camera, depicted two individuals on a motorcycle purportedly kidnapping a young child before swiftly escaping. This video, falsely portraying a kidnapping, triggered extensive bewilderment and fear, leading to an eight-week period of mob violence that tragically claimed the lives of at least nine innocent individuals (Vaccari and Chadwick, 2020).

Lethal Autonomous Weapons Systems (LAWS): In this context, the term 'autonomous' pertains to any result generated by a machine or software without human involvement. LAWS are a distinctive category of weapon systems that employ sensor arrays and computer algorithms to detect and attack a target without direct human intervention in the system's operation. The delegation of decision-making to automated weapons inevitably raises various concerns, including accountability, appropriateness, potential unintended escalation due to imminent accidents, ethical quandaries, and additional aspects (Pedron and da Cruz, 2020). Employing LAWS could pose significant risks. In addition to getting ready for a future featuring super-intelligent machines (Wogu, 2018), it is important to acknowledge that AI programmed to do something dangerous, as in autonomous weapons (Surber, 2018), can already present risks. In essence, LAWS have the potential to alter the way humans exert authority over the deployment of force and its aftermath. Moreover, humans might lose the ability to foresee which individuals or entities could become the focus of an assault, or even to elucidate the rationale behind a specific target selection made by a LAWS (Marr, 2018; de Ágreda, 2020). It is essential to recognize that AI risk management is multi-layered and evolves across different domains. As AI technology develops and becomes more widespread, it is imperative to proactively identify and address possible risks to guarantee the secure and accountable development and utilization of AI systems.
2.3. Conceptualizing AI security management

With the rising integration of AI technology across different fields, safeguarding the security of AI systems, together with the sensitive information they manage, becomes critically important. AI security management involves the adoption of practices and measures aimed at protecting AI systems and the data they process from unauthorized access, breaches, and malicious activities. It encompasses key aspects such as threat identification (Kumar and Kumar, 2023), access control (Song, 2020), security awareness and training (Solomon, 2022), and privacy (Schiliro et al., 2020). Analyzing possible attack vectors such as data breaches, unauthorized access, adversarial attacks, and insider threats is one way to spot potential risks and vulnerabilities that could jeopardize the security of AI systems. Ensuring a secure and responsible deployment of AI involves effectively managing the progress of AI technology while proactively addressing the adverse social, organizational, and individual implications. Here are some considerations when conceptualizing security in AI.

Malicious Use of AI: The remarkable technological advancements in AI have garnered acclaim, presenting enhanced possibilities across various aspects of our daily lives. Despite that, AI and machine learning (ML) are transforming the risk landscape concerning security for individuals, organizations, and nations. The malicious utilization of AI has the potential to endanger digital security (Schneier, 2015), physical security (Weingart, 1987), and political security (Brundage et al., 2018; Dhiman and Toshniwal, 2022). International law enforcement entities grapple with a variety of risks linked to the malevolent utilization of AI. Interpol and the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI) have raised concerns about "political assaults," notably employing deepfakes, and physical assaults carried out by criminals, such as utilizing combat drones integrated with facial recognition algorithms. AI can be employed to perpetrate a crime directly or to subvert another AI system by tampering with its data (Bazarkina and Pashentsev, 2020). Addressing and mitigating the potential harms stemming from the malicious use of AI is therefore a critical concern in the development and deployment of AI technologies.

Insufficient Security Measures: The progress of technology has been significant across various sectors (Khan, 2020). The impact of AI on cybersecurity has been dual-sided, with positive and negative aspects. AI-driven automation through machine learning algorithms has effectively thwarted attackers from employing traditional attack methods on systems. Demonstrating that machine learning algorithms can outperform humans in delivering security, AI integration in cybersecurity is instrumental in error prevention (Ansari, 2022). However, guaranteeing the durability and strength of AI models against evolving adversarial attacks is an ongoing worry. Malicious entities can take advantage of weaknesses in AI algorithms to alter results, potentially resulting in tangible real-life impacts. Additionally, it is vital to prioritize safeguarding privacy and handling data responsibly, particularly given AI's significant data needs. Balancing the extraction of valuable insights with privacy maintenance is a delicate task. Finally, the swift progress of AI technology surpasses existing regulatory frameworks, highlighting the need for flexible and adjustable policies to ensure AI systems comply with ethical and legal guidelines (Di Vaio, 2020).

Furthermore, guaranteeing adherence to privacy regulations and upholding user data privacy constitutes a vital component of AI security management. Incorporating privacy-by-design principles and employing anonymization techniques can aid in safeguarding sensitive personal information. Equally important is the promotion of security alertness among administrators, developers, and users of AI systems through training programs and educational initiatives. This serves to cultivate a security-conscious culture and minimize the potential risks associated with human errors and social engineering attacks. Moreover, Table 1 illustrates the balancing of AI trust, risk, and security with respect to threat types and damages, capturing the relationship between these items in the context of probable threats and the resulting damages. By visualizing the relationship between threat types and damages, the table depicts the importance of understanding and addressing possible threats and the resulting damages in the context of AI trust, risk, and security. It highlights the need for a comprehensive and proactive method to protect AI systems and their users from harm and to maintain the desired balance between these critical elements.

Table 1
The balancing of AI trust, risk, and security with respect to threat types and damages.

AI Trust Management
1. Bias and Discrimination. Threat vector: dissemination of misleading information and biased narratives to shape negative perceptions of AI's capabilities and intentions. Damages: destruction of public trust, hindrance to AI adoption, and impeding societal progress by fostering fear, skepticism, and reluctance towards leveraging AI systems.
2. Privacy Invasion. Threat vector: adversarial attacks utilizing manipulated training data to deceive AI systems. Damages: erosion of user trust, compromised sensitive data, and potential for discriminatory or harmful decision-making.

AI Risk Management
1. Society Manipulation. Threat vector: synchronized AI-driven misinformation campaigns aimed at distorting public perceptions and influencing social, political, or economic outcomes. Damages: dispersion of misleading content, fostering social division, and creating an environment susceptible to misinformation through AI-driven manipulation.
2. Deepfake Technology. Threat vector: fabrication of realistic audiovisual content depicting AI systems making harmful decisions, perpetuating mistrust in AI's reliability. Damages: damaged reputations and undermined public trust through deceptive content that is difficult to distinguish from reality, discrediting the credibility of AI systems.
3. Lethal Autonomous Weapons Systems (LAWS). Threat vector: humans might lose the ability to foresee targets; cyberattacks targeting the communication, control, or decision-making mechanisms of LAWS. Damages: misuse and loss of human oversight and ethical norms, raising significant concerns about the uncontrolled use of AI in warfare.

AI Security Management
1. Malicious Use of AI. Threat vector: data theft or unauthorized access exploiting vulnerabilities in AI systems. Damages: breach of sensitive data, compromised system integrity, and potential AI model poisoning, resulting in security breaches and loss of trust in AI-powered technologies.
2. Insufficient Security Measures. Threat vector: exploitation of weak authentication, encryption, or access control in AI systems. Damages: unauthorized access to sensitive information and potential misuse of AI systems, leading to compromised privacy and loss of trust in AI technologies.

3. AI TRiSM framework

The widespread acceptance and effective integration of AI into different facets of society heavily depend on establishing trust in it. Given its diverse and multifaceted characteristics, it is essential to take into account a multitude of factors when assessing this trust, which is currently absent in existing static models. In healthcare environments, certain concerns pertain to enhancing trust, improving the transparency of AI-driven systems, and minimizing bias in medical use cases.
A key lesson drawn from these critical and delicate sectors is the significance of incorporating human involvement within AI processes. This can be observed when AI defers a classification decision to a human when uncertain about a specific case. The human-in-the-loop strategy in the medical field shows potential in alleviating bias, enhancing transparency, and building trust in AI-powered systems (Lukyanenko et al., 2022; Rehman, 2022). On the other hand, AI methods for risk management are increasingly extending into novel domains encompassing the examination of extensive document repositories, the automation of repetitive tasks, and the identification of money laundering, which necessitates the analysis of sizable datasets. As AI systems continually advance, it becomes difficult for risk management frameworks to keep pace with the latest developments and potential risks. Leveraging individual data for evaluating risks is governed by the General Data Protection Regulation (GDPR) (Kingston, 2017; Mitrou, 2018). Companies employing AI must adhere to this additional responsibility and strategize on aligning with GDPR guidelines. However, if a small or medium-sized enterprise (SME) attempts to create an in-house AI risk assessment system, it would be highly inefficient and too costly for the resources of a single SME to manage (Žigienė et al., 2019). Additionally, numerous ongoing initiatives and suggestions are dedicated to addressing AI security frameworks, aiming to mitigate security risks that lead to a loss of control over AI application behavior and decision-making due to security challenges. These frameworks can act as a guide to enhance the safety and security features of AI while encouraging its well-organized and safe progression (Jing, 2021).

The existing frameworks for AI Trust, Risk, and Security operate independently and separately, lacking cohesion and alignment (Chauhan and Gullapalli, 2021; Wickramasinghe et al., 2020; Sobb et al., 2020). These isolated frameworks often do not effectively cooperate or synchronize their actions, resulting in a fragmented strategy for managing AI. The absence of seamless integration and coordination among these essential dimensions creates areas where trust may be established without a complete understanding of the associated risks and security consequences. The AI TRiSM framework seeks to bridge this divide by presenting a unified approach that brings together AI trust, risk evaluation, and security protocols. It amalgamates key components from individual frameworks, promoting improved collaboration and synergy in these critical aspects of AI governance and administration. This unified approach ensures a comprehensive strategy to tackle challenges related to AI while promoting a stronger and more dependable AI system.

The AI TRiSM framework emphasizes the importance of trust, risk, and security considerations throughout the entire life cycle of AI systems, encompassing the design, development, deployment, and operation stages. The framework provides a comprehensive approach to managing the risks associated with AI systems. Furthermore, it can assist businesses in formulating and implementing AI strategies that align with their objectives and values. Its guidelines cover the operational aspects, safety considerations, and potential ramifications related to AI TRiSM. Their purpose is to offer explicit direction to developers and the AI community on how to effectively deploy secure and innovative AI-based platforms.

Both AI model developers and users need to be reminded of the necessity to integrate safeguards into their frameworks and models. This step is crucial for instilling trust in AI systems, mitigating associated risks, and upholding their security. A comprehensive AI TRiSM framework is indispensable for achieving these objectives. The current frameworks that organizations should adopt include solutions, approaches, and measures that enable model explanation and interpretability (De, 2020), ensure smooth model operations, protect privacy, and enhance resistance to adversarial attacks. These measures are aimed at benefiting both the enterprise and its customers. Fig. 1 illustrates the four core principles of AI TRiSM: Model Monitoring, ModelOps, AI Application Security, and Model Privacy (Groombridge, 2022). Together, these pillars establish a comprehensive framework for responsible, trusted and secure AI implementation. Here is an overview of the components of the current AI TRiSM framework:

Fig. 1. The four AI TRiSM pillars to deliver managed trust, risk and security for AI systems.
3.1. Model monitoring

One of the significant challenges faced by AI models today is the lack of trust among users, primarily stemming from issues related to transparency and ethics. Many individuals feel uncomfortable interacting with machines instead of the human counterparts to which they are accustomed. This concern becomes more pronounced when the decision-making process within the "black box" of AI models is complex or inscrutable, leaving users without explanations or reassurance. It is crucial to enable the integration of AI into real-world applications in a manner that prioritizes fairness, accountability, and transparency, utilizing governance frameworks. The absence of transparency in AI systems can pose accountability challenges and make it difficult to assign responsibility for their actions. Therefore, it is vital to address these concerns to ensure that AI systems are transparent and accountable (Smith, 2021). While the application of AI in healthcare holds great promise for transformative advancements, it also introduces ethical dilemmas that necessitate careful consideration and resolution. In order to mitigate the risks associated with relinquishing control over AI systems, as well as potential issues like human deskilling and widespread surveillance, it is essential to conduct thorough ethical analyses (Gerke et al., 2020; Taddeo, 2019).

However, implementing model monitoring and explainability ensures that AI models are functioning properly and do not introduce biases. This contributes to an understanding of how AI models operate and reach defensible conclusions, promoting transparency and building trust in the AI system. Fig. 2 illustrates what interpretability within AI TRiSM model monitoring operations aims to achieve and how it provides transparency, trust and information to the user.
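To make the "black box" discussion concrete, the sketch below probes a stand-in model with permutation importance, a model-agnostic explanation technique that shuffles one feature at a time and measures the resulting drop in held-out score, revealing which inputs actually drive the model's decisions. The dataset and model are synthetic placeholders, and the scikit-learn dependency is an assumption rather than tooling prescribed by AI TRiSM.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for a deployed "black box": any fitted estimator works here.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out score, a model-agnostic view of what drives the output.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```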
3.2. AI ModelOps

Despite the potential demonstrated by AI in various application areas, its integration into enterprises is still at a nascent stage. One possible explanation for this is the lack of suitable tools and methodologies to facilitate the complete development lifecycle of AI solutions. This encompasses crucial tasks like data preparation, model design and training, application development, quality assurance, deployment, monitoring, feedback, and ensuring reproducibility and auditability throughout the process. It is evident that a more structured and efficient approach is required for the development and management of the lifecycle of AI applications. An essential aspect of ModelOps involves employing a domain-specific language that prioritizes core elements in AI solutions. These encompass datasets, model specifications, trained models, applications, monitoring events, and the algorithms and platforms utilized for data processing, model training, and application deployment (Hummer et al., 2019). An essential part of an AI TRiSM framework is the ModelOps procedure, the operationalization of AI models, which involves managing the lifecycle and governance of all AI models as well as the responsibility of managing the foundational infrastructure and environment, such as cloud resources, to ensure the optimal performance of the models. Meanwhile, Fig. 3 illustrates the comprehensive ModelOps procedure, encompassing the key stages of model design, deployment, operations, and monitoring. At the beginning, the model design phase comprises careful requirement collection, prioritization of data availability, and data preprocessing techniques to ensure optimal model performance and accuracy. Following design, the deployment phase entails selecting appropriate platforms and infrastructure to seamlessly integrate the model into the desired environment. During operations, the model is actively utilized in real-world scenarios, necessitating continuous performance monitoring, error handling, and fine-tuning to maintain optimal functionality.

Fig. 3. Visual delineation of how these interconnected properties contribute to an effective ModelOps lifecycle.
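As a rough illustration of these lifecycle stages, the minimal sketch below tracks a model version through design, a gated promotion to deployment, and an auditable event log. Every name here (ModelRecord, deploy, the accuracy gate) is hypothetical, not an API from the ModelOps literature cited above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """Registry entry tying a model version to its lifecycle evidence."""
    name: str
    version: str
    stage: str = "design"           # design -> deployed -> operations
    metrics: dict = field(default_factory=dict)
    events: list = field(default_factory=list)

    def log(self, event: str):
        self.events.append((datetime.now(timezone.utc).isoformat(), event))

registry: dict[str, ModelRecord] = {}

def deploy(record: ModelRecord, accuracy: float, threshold: float = 0.9):
    """Gate promotion on a validation metric, then record the transition."""
    if accuracy < threshold:
        record.log(f"deployment blocked: accuracy {accuracy:.2f} < {threshold}")
        return False
    record.stage, record.metrics["accuracy"] = "deployed", accuracy
    record.log("promoted to deployment")
    return True

rec = registry.setdefault("churn", ModelRecord("churn", "1.0.0"))
deploy(rec, accuracy=0.93)
print(rec.stage, rec.events)
```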
3.3. AI security application

AI security applications employ sophisticated machine learning algorithms and methodologies to promptly identify and address weaknesses, unauthorized access, and harmful actions. These applications have the capability to observe network patterns, assess user actions, and pinpoint irregularities that could signal a breach in security (Jain et al., 2020; Gopalan et al., 2020). The utilization of AI technology requires huge amounts of data, and ensuring the protection of that data is of paramount importance. Within the context of AI TRiSM, data security holds particular significance in heavily regulated sectors such as healthcare (Norori, 2021) and finance (Giudici and Raffinetti, 2023). Furthermore, within AI TRiSM, data protection frameworks like synthetic data and differential privacy (Meden, 2023), as well as protocols such as Fully Homomorphic Encryption (FHE) (Kadykov et al., 2021) and Secure Multi-Party Computation (SMPC), are applied; these are essential for protecting the great quantities of data required by AI technology, ensuring trust and security, and mitigating risks in AI systems.
Sensitive or confidential information can be hidden, while maintaining the general statistical properties of the original data, by employing synthetic data for data protection. This encourages data analysis and sharing without running the danger of disclosing private information.
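One elementary way to realize this idea, assuming for simplicity that the sensitive records are roughly Gaussian, is to fit only aggregate statistics and sample fresh records from them. Practical synthetic-data generators use richer models (copulas, GANs, or differentially private mechanisms), but the principle is the same; the records below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are real, sensitive numeric records (e.g. age, income).
real = rng.multivariate_normal([40, 52_000], [[90, 8_000], [8_000, 4e6]], size=1000)

# Fit only aggregate statistics, then sample fresh rows from them: released
# records are synthetic, yet keep the original means and correlations.
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real mean:     ", np.round(real.mean(axis=0), 1))
print("synthetic mean:", np.round(synthetic.mean(axis=0), 1))
```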
The implementation of SMPC for data privacy maintains the confidentiality of sensitive data by avoiding plaintext sharing among parties and using cryptographic techniques to protect the data. It thus becomes challenging for individuals with malicious intent to manipulate or intercept data. For example, delivering inference services for a trained tree-based model presents numerous obstacles: in practical scenarios, the computational or communication capacity of the model owner might be restricted and inadequate for managing extensive queries, while entrusting the tree-based model directly to a high-performance server, even though it is a fundamental digital asset for enterprises or institutions, would greatly jeopardize its privacy and interests (Zhao, 2023). The SMPC approach ensures that data accuracy is preserved while privacy is maintained.
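A core building block of many SMPC protocols is additive secret sharing, shown in the minimal sketch below: a value is split into random shares that are individually meaningless, sums are computed share-by-share, and only the final result is reconstructed. The two-hospital scenario and all names are hypothetical; real protocols additionally handle share distribution, multiplication, and malicious parties.

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is done mod P

def share(value: int, n_parties: int = 3):
    """Split a secret into n random additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Two hospitals jointly compute a total without revealing their own counts:
# each share in isolation is a uniformly random number and leaks nothing.
shares_a, shares_b = share(120), share(355)
sum_shares = [(a + b) % P for a, b in zip(shares_a, shares_b)]  # done per party
print("joint total:", sum(sum_shares) % P)  # 475, reconstructed only at the end
```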
FHE allows data to be processed without ever being exposed in plaintext, providing a high level of data protection. FHE can be used for various applications, including data analysis and machine learning, where sensitive data needs to be processed while maintaining its confidentiality (Tonyali, 2018). It allows multiple parties to perform computations jointly without sharing their private data. The AI security application pillar ensures the protection and security of models from cyber threats, enabling organizations to utilize the TRiSM framework to establish security protocols and implement measures that safeguard against unauthorized access or tampering.

3.4. Model privacy

Data protection is vital for upholding ethical practices and ensuring responsible and trustworthy deployments of AI. The AI TRiSM framework aids organizations in establishing guidelines and protocols for the ethical collection, storage, and utilization of data, which is particularly critical in fields like healthcare, where various AI models are employed to process sensitive patient information.

Moreover, Table 2 showcases a comparison of key parameters of trust, risk and security in AI systems, namely challenges and potential improvements before and after the implementation of the AI TRiSM framework. The table focuses on key aspects like model monitoring, AI ModelOps, AI security application and model privacy.

Table 2
AI systems challenges and potential improvements before and after the implementation of the AI TRiSM framework.

Model Monitoring
- Challenge: limited transparency and uncertain model behavior (Vollmer et al., 2018). Potential improvement: improved explainability through model introspection and validation against expected behavior.
- Challenge: lack of accountability for AI system behavior. Potential improvement: clear governance frameworks for increased accountability.
- Challenge: potential biases and discriminatory outcomes (Smith, 2021). Potential improvement: fairness-aware algorithms and bias detection/mitigation techniques.
- Challenge: limited security measures for model monitoring (Tan, 2021). Potential improvement: enhanced security protocols and measures for model monitoring.
4. Automation of trust, risk and security management

Trust, risk, and security management are critical characteristics of an operational organization, as they comprise preserving sensitive information, protecting assets, and confirming compliance with regulations and industry standards. Traditionally, these responsibilities have been handled manually by expert teams, which can be time-consuming, error-prone, and problematic to scale. With the progression of technology, automation has become a valuable tool in addressing the challenges of AI-based systems. The following sections elaborate on how AI TRiSM can automate and manage the trust, risk and security of AI systems.
extensive data, quickly detecting any irregularities or deviations from
4.1. Automation of trust management anticipated behavior. This immediate monitoring enables prompt re-
actions and mitigation of security risks, reducing potential harm.
Recent research has revealed that AI-based systems are susceptible to Moreover, automation simplifies evaluating trust by utilizing advanced
unreliability, untrustworthiness (González-Gonzalo, 2022) and the trust models and measurements. It has the capability to monitor per-
presence of bias in AI systems can undermine trust among individuals formance, dependability, and compliance with predetermined ethical
and organizational clients, leading to discriminatory outcomes, eco- and regulatory criteria (Zhang, 2019; Jacobsson et al., 2016). This
nomic disturbances, and the potential for misuse by malicious entities. comprehensive and precise trust assessment offers organizations a pro-
This is primarily due to the fact that the presence of bias within an AI active strategy, enabling improved decision-making and strategic fore-
system is influenced by the bias found in the training data it relies on. sight. Ultimately, this approach cultivates increased trust and
Consequently, if the training data itself contains bias, the resulting AI acceptance of AI technologies both within the organization and across
system will also exhibit bias. This can lead to discriminatory decisions the broader community. Furthermore, automation trust, risk, and se-
that unfairly affect individuals based on factors like race, gender, or curity in AI TRiSM considerably lessens the workload on human re-
socioeconomic status. Additionally, AI technology has the capability to sources by automating repetitive and time-intensive security and risk
generate realistic counterfeit images and videos, which can be exploited management responsibilities. This enables security specialists and risk
to spread misinformation or manipulate public opinion. Furthermore, AI managers to concentrate on more intricate and strategic aspects of se-
can be utilized to create highly sophisticated phishing attacks, deceiving curity, like devising novel strategies to combat emerging threats and
individuals into divulging sensitive information or clicking on malicious enhancing the system’s overall resilience (Chen, 2021). Ultimately, this
links (Desolda, 2023). reassignment of tasks results in a more effective and productive security
To establish trust, it is crucial to automate the management of trust and risk management procedure, enhancing the organization’s overall
within AI trust and risk framework, ensuring that AI systems are gov- security stance and facilitating quicker and well-informed decisions in a
erned in a manner that prioritizes transparency, accountability, and swiftly evolving technological environment.
fairness. This necessitates designing and implementing AI systems in a However, it is important to note that when AI systems does not
way that mitigates bias and discrimination while adhering to ethical provide responsibility and transparency, it can lead to reduced adoption
principles and standards. Moreover, AI systems must be reliable, resil- and increased distrust among users. Fig. 4 illustrates the features pro-
ient, and capable of operating securely and safely. Managing trust in- vided by AI TRiSM framework to deliver trust, security and transparency
volves ensuring that these requirements are fulfilled to instill confidence in AI systems.
and trust in AI technology (Kong, 2023).
In order to avoid bias and discrimination through the automated 5. AI TRiSM applications
management of trust in AI TRiSM, it is important to train AI systems on
datasets that are diverse and representative. This means incorporating The following secion will provide five illustrative scenarios that
data from various sources and populations to prevent biases stemming highlight the effectiveness and possibilities offered by AI TRiSM. These
from underrepresented or skewed data. Additionally, involving a range instances demonstrate how organizations leverage AI TRiSM to foster
of stakeholders, including domain experts and ethicists, in the design innovation, enhance results, and generate value for both businesses and
and development of AI systems can help identify and address potential society.
biases and ensure protection against discrimination. Use case 1: Fair, Financial Transparent, and Accountable AI Models.
An example of AI TRiSM in action is demonstrated by the Danish
4.2. Automation of risk and security management Business Authority (DBA). The DBA aligns its ethical principles with
concrete actions, conducts fairness tests to validate model predictions,
Developing and deploying AI systems without taking ethical con- and establishes a robust monitoring structure. They have successfully
siderations into account can give rise to a range of potential risks and implemented and managed 16 AI models that oversee financial trans-
consequences. Some of the most significant risks are observed judicial actions valued in billions of euros. This strategy not only assisted DBA in
evaluation systems (Pechegin, 2022), facial autonomous vehicles sys- ensuring the morality of its AI models, but it also aided in increasing
tems (Kumar, 2022) and recognition systems (Pantic and Rothkrantz, customer and stakeholder trust.
2000). For instance, facial recognition systems often exhibit higher error Use case 2: cause-and-effect relationships interpretable AI models
rates for individuals with darker skin tones or who are female. Similarly, generator.
hiring algorithms have been found to demonstrate discriminatory Abzu, a Danish startup, has developed an AI product that constructs
behavior against specific groups based on factors like age, ethnicity, or mathematically interpretable models for identifying cause-and-effect
gender. Moreover, Automation enhances the efficiency of risk assess- relationships Abzu (Sebastian and Peter, 2022). Their clients leverage
ment, allowing organizations to precisely measure the risks linked to AI these models to validate outcomes with efficiency, resulting in the
implementations. Through the utilization of AI algorithms, organiza- successful development of effective drugs for breast cancer treatment.
tions can pinpoint potential aspects of worry, including privacy con- Abzu’s product has the capability to analyze vast volumes of data and
cerns, model biases, or software vulnerabilities, and categorize them uncover patterns and connections that may not be readily discernible to
according to their seriousness and likely consequences. Additionally, humans. This enables their clients to make well-informed decisions and
incorporating automation into the management of risks and security advance the development of enhanced treatments for patients. The
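A common automation primitive behind such continuous risk monitoring is anomaly detection over operational telemetry. The sketch below, using fabricated latency and error-rate measurements and assuming a scikit-learn dependency, trains an Isolation Forest on baseline behavior and flags live observations that deviate from it, the kind of signal that could feed an automated alert or rollback.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline telemetry from a healthy AI service (latency ms, error rate).
normal = np.column_stack([rng.normal(120, 10, 500), rng.normal(0.01, 0.003, 500)])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score live observations; -1 flags behaviour deviating from the baseline,
# which can trigger an automated alert instead of waiting for manual review.
live = np.array([[118, 0.012],   # looks normal
                 [310, 0.250]])  # drifting or under attack
print(detector.predict(live))    # e.g. [ 1 -1]
```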
4.3. Features of automated trust, risk and security management

The automation of overseeing trust, risk, and security in AI TRiSM presents numerous features, transforming how organizations address safety and dependability. Firstly, automation boosts effectiveness and accuracy in assessing the reliability of, and the possible risks linked to, AI systems. Sophisticated algorithms can consistently observe and scrutinize extensive data, quickly detecting any irregularities or deviations from anticipated behavior. This immediate monitoring enables prompt reactions and mitigation of security risks, reducing potential harm. Moreover, automation simplifies the evaluation of trust by utilizing advanced trust models and measurements. It has the capability to monitor performance, dependability, and compliance with predetermined ethical and regulatory criteria (Zhang, 2019; Jacobsson et al., 2016). This comprehensive and precise trust assessment offers organizations a proactive strategy, enabling improved decision-making and strategic foresight. Ultimately, this approach cultivates increased trust and acceptance of AI technologies both within the organization and across the broader community. Furthermore, automating trust, risk, and security in AI TRiSM considerably lessens the workload on human resources by automating repetitive and time-intensive security and risk management responsibilities. This enables security specialists and risk managers to concentrate on more intricate and strategic aspects of security, like devising novel strategies to combat emerging threats and enhancing the system's overall resilience (Chen, 2021). Ultimately, this reassignment of tasks results in a more effective and productive security and risk management procedure, enhancing the organization's overall security stance and facilitating quicker and well-informed decisions in a swiftly evolving technological environment.

However, it is important to note that when AI systems do not provide responsibility and transparency, the result can be reduced adoption and increased distrust among users. Fig. 4 illustrates the features provided by the AI TRiSM framework to deliver trust, security and transparency in AI systems.

5. AI TRiSM applications

The following section provides five illustrative scenarios that highlight the effectiveness and possibilities offered by AI TRiSM. These instances demonstrate how organizations leverage AI TRiSM to foster innovation, enhance results, and generate value for both businesses and society.

Use case 1: Fair, transparent, and accountable financial AI models.
An example of AI TRiSM in action is demonstrated by the Danish Business Authority (DBA). The DBA aligns its ethical principles with concrete actions, conducts fairness tests to validate model predictions, and establishes a robust monitoring structure. It has successfully implemented and managed 16 AI models that oversee financial transactions valued at billions of euros. This strategy not only assisted the DBA in ensuring the morality of its AI models, but also aided in increasing customer and stakeholder trust.

Use case 2: An interpretable AI model generator for cause-and-effect relationships.
Abzu, a Danish startup, has developed an AI product that constructs mathematically interpretable models for identifying cause-and-effect relationships (Sebastian and Peter, 2022). Their clients leverage these models to validate outcomes with efficiency, resulting in the successful development of effective drugs for breast cancer treatment. Abzu's product has the capability to analyze vast volumes of data and uncover patterns and connections that may not be readily discernible to humans. This enables their clients to make well-informed decisions and advance the development of enhanced treatments for patients.
The transparent models produced by Abzu's AI product play a role in fostering trust between patients and healthcare providers. These models offer a transparent view of the AI's decision-making process, providing a clear comprehension of how it reaches its conclusions.

Case 3. Sharing medical data in the context of smart healthcare

AI models in smart healthcare assist professionals in a multitude of responsibilities, ranging from managing administrative tasks and documenting clinical information to reaching out to patients. Additionally, they offer specialized assistance in areas like analyzing medical images, automating medical devices, and monitoring patient health (Bohr and Memarzadeh, 2020). AI TRiSM may help safeguard patient data by implementing robust security measures, access controls, and encryption techniques. It ensures that sensitive health information is stored, transmitted, and processed securely, reducing the risk of data breaches and unauthorized access.

Moreover, AI TRiSM can involve validating and auditing AI models used in healthcare applications. It ensures that the AI algorithms are accurate, reliable, and compliant with regulations and standards. This validation process minimizes the risk of biased or incorrect predictions, particularly in critical areas like diagnostics or treatment planning.

Case 4. Secure smart cities framework

AI applications play a significant role in transforming smart cities by optimizing resource allocation, enhancing efficiency, and improving the quality of life for citizens. The concept of a smart city stands as a highly favored implementation, embodying a wide array of pervasive services collaborating and orchestrating their efforts to enhance the living standards and overall life experience of urban inhabitants (Habbal et al., 2019). As cities become more interconnected and reliant on AI technologies, ensuring the trustworthiness, mitigating the risks, and ensuring the security of these systems become paramount. Smart cities produce substantial volumes of data originating from diverse channels, including sensors, cameras, and interactions among citizens. AI systems must be designed to handle this data securely and protect individual privacy (Veselov, 2021; Kamruzzaman et al., 2022).

To ensure the privacy and safeguarding of data, methods like data anonymization, encryption, and access controls can be utilized through the AI TRiSM framework. By addressing privacy concerns, managing risks, and implementing robust security measures, cities can build trust among citizens and stakeholders, fostering a safe and secure environment for their residents; the AI TRiSM framework is thus fundamental in ensuring the successful deployment and operation of AI systems applied in smart cities.

Case 5. AI TRiSM in the Metaverse: Securing the Virtual Frontier

The use of AI in the metaverse has the potential to impact and improve various features of virtual space, like the virtual economy and marketplace, education (Cheng, 2022; Huynh-The, 2023), and content generation (Lin, 2023). AI can assist in generating vast amounts of virtual content within the metaverse. Through methods such as generative adversarial networks (GANs) and procedural generation, AI algorithms have the ability to independently produce virtual assets such as landscapes, buildings, objects, and other elements within the metaverse. This capability aids in populating the metaverse with diverse and realistic content, minimizing the need for manual content creation (Yang, 2022; Goodfellow, 2020).
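As a toy illustration of the procedural-generation side (GAN-based asset generation requires far more machinery), the sketch below layers random grids at several resolutions, a simple value-noise scheme assumed here rather than taken from the works cited above, to produce a terrain heightmap of the kind a virtual world might use for landscapes.

```python
import numpy as np

def heightmap(size=64, octaves=4, seed=0):
    """Procedural terrain: sum upsampled random grids at falling amplitudes."""
    rng = np.random.default_rng(seed)
    terrain = np.zeros((size, size))
    for o in range(octaves):
        cells = 2 ** (o + 1)                       # coarse-to-fine detail
        grid = rng.random((cells, cells))
        # Nearest-neighbour upsample of the coarse grid to the full map.
        reps = size // cells
        terrain += np.kron(grid, np.ones((reps, reps))) * 0.5 ** o
    return terrain / terrain.max()

tile = heightmap()
print(tile.shape, round(float(tile.mean()), 2))  # one 64x64 terrain tile
```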
As the metaverse expands and becomes more complex, ensuring its security and trustworthiness becomes paramount. AI TRiSM in the metaverse refers to the application of the trust, risk, and security management framework in the context of AI within the virtual space. Through the integration of the AI TRiSM framework into the metaverse, developers and platform providers have the ability to establish a secure, trustworthy, and ethically sound virtual environment. It helps to protect user data, mitigate risks, and build confidence in the AI systems that power the metaverse experience.

6. Challenges of AI TRiSM

In spite of the current existence of the AI TRiSM framework and its importance, there are challenges that must be addressed before it is widely implemented in the near future. The reason behind this is that the AI TRiSM framework itself requires reliability and significant advancement in AI systems. In the following sections, we examine a number of the obstacles that must be overcome to ensure successful implementation, along with some potential paths for future directions. These challenges encompass numerous aspects such as the ever-changing threat landscape, adversarial attacks, compliance with regulations and the skilled expertise gap.

6.1. Monitoring AI models

Monitoring AI models is a crucial aspect of ensuring trust, risk management and security in their deployment, and it presents several challenges that organizations and developers need to address effectively.
These include, but are not limited to, data drift: AI models depend on the training data they receive, and if the characteristics of the incoming data change over time, it can result in a phenomenon known as data drift. Continuous monitoring and timely model updates are necessary to detect and resolve data drift effectively (Zhao, 2021).
monitoring in AI models are susceptible to biases, which have the po- and knowledge to effectively manage and address the challenges asso-
tential to result in unjust or discriminatory results. To ensure fair ciated with AI TRiSM. In order to maximize outcomes, it is advisable to
treatment of individuals, it is crucial to meticulously choose suitable form a dedicated team, or alternatively, a task force if necessary. It is of
fairness metrics and continuously evaluate the model’s performance utmost importance to ensure that each AI project has adequate repre-
(Liang, 2022). To tackle these challenges, it requires a combination of sentation from the business side. This ensures that the team possesses the
methods, including ongoing data collection, monitoring pipelines, ver- necessary expertise and perspectives to effectively drive the project to-
sioning of models, automated alerts, and establishing feedback loops wards success, so the scarcity of individuals with these combined skills
involving human experts (Asan et al., 2020; Liyanage, 2023). Applying can present challenges in establishing effective AI TRiSM practices.
robust monitoring practices ensures the trustworthiness, risk mitigation,
and security of AI models throughout their lifecycle. 6.5. Rapidly evolving threat landscape
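As a concrete illustration of drift monitoring, the sketch below applies a two-sample Kolmogorov–Smirnov test to compare a feature's training distribution with recent production data and raises an alert when they diverge. It is a minimal example under simplifying assumptions (a single numeric feature and a fixed significance threshold), not a prescribed AI TRiSM component; a fairness check, such as comparing positive-prediction rates across demographic groups, could be scheduled in the same pipeline.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution seen at training
    live_feature = rng.normal(loc=0.6, scale=1.0, size=1000)   # shifted production data

    # Two-sample KS test: a small p-value suggests the two samples differ.
    stat, p_value = ks_2samp(train_feature, live_feature)
    if p_value < 0.01:
        print(f"Data drift suspected (KS={stat:.3f}, p={p_value:.2e}); trigger review or retraining.")
    else:
        print("No significant drift detected.")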
6.2. Regulatory compliance

The connection between AI TRiSM and existing controlling bodies, governments, and regulations such as the GDPR (General Data Protection Regulation) (van Mil and Quintais, 2022), which carries significant weight in both policy spheres and the corporate segment, is another important aspect to consider. Navigating regulatory landscapes and meeting the necessary requirements can be a complex task for organizations as they strive to adhere to applicable regulations and compliance frameworks. These include data protection laws such as the GDPR, industry-specific regulations in sectors such as healthcare or finance, and ethical guidelines, and it is crucial for organizations to ensure that their AI systems comply with them. AI TRiSM serves as a framework designed to handle the risks linked to AI systems and to facilitate governance, reliability, fairness, efficacy, and privacy. Nevertheless, establishing AI model governance and trustworthiness remains a complex task due to the lack of agreement on the appropriate definition of controllers.
6.3. Adversarial attacks

Adversarial attacks often aim to exploit weaknesses in AI models, and ensuring the robustness of models against such attacks is a significant challenge. Adversarial examples can be crafted to mislead a model's predictions or to bypass security measures, potentially leading to false positives or false negatives in risk detection. Another challenge is data poisoning: adversaries can manipulate training data or inject malicious samples into the dataset used to train AI models, causing the model to learn incorrect patterns or make biased decisions and compromising the integrity and accuracy of the AI TRiSM system. The tactics used by adversaries are continuously changing and growing more sophisticated; they may employ advanced algorithms, use gradient-based optimization methods, or explore novel attack strategies, so keeping up with the evolving landscape of adversarial attacks requires continuous research, monitoring, and adaptation. Addressing these challenges calls for a multi-faceted approach, including robust model training, ongoing monitoring for adversarial behavior, defense mechanisms such as anomaly detection and input validation, and regular vulnerability assessments and penetration testing (Karim, 2022). Organizations must continually update and improve their AI TRiSM systems to stay ahead of adversarial threats and to ensure the security and trustworthiness of their AI deployments.
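To illustrate the gradient-based attacks mentioned above, the following sketch crafts an adversarial input with the fast gradient sign method (FGSM) against a toy logistic-regression model. The weights and input are fabricated for illustration; real attacks target deployed models, and defenses such as adversarial training feed inputs like x_adv back into the training set.

    import numpy as np

    # Toy logistic-regression "model": p(y=1|x) = sigmoid(w.x + b); weights are made up.
    w = np.array([1.5, -2.0, 0.5])
    b = 0.1
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.2, -0.4, 1.0])  # benign input, confidently classified as y=1
    y = 1.0

    # For cross-entropy loss, the gradient with respect to the input is (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w

    # FGSM: one step of size epsilon along the sign of the gradient to increase the loss.
    epsilon = 0.3
    x_adv = x + epsilon * np.sign(grad_x)

    print(f"original score:    {sigmoid(w @ x + b):.3f}")    # ~0.85
    print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")  # drops toward 0.5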
6.4. Skill gap and expertise

Recent studies demonstrate a significant surge in the need for AI expertise: the demand for roles requiring AI skills grew tenfold between 2010 and 2019 and quadrupled as a share of the overall job market, with a substantial portion concentrated in the Information Technology, Professional Services, Finance, and Manufacturing sectors (Alekseeva, 2021). For the successful implementation of AI TRiSM, it is essential to form a collaborative cross-functional team of experts from disciplines such as legal, security, compliance, computer science, and data analytics, who collectively contribute the skills and knowledge needed to manage the challenges associated with AI TRiSM. To maximize outcomes, it is advisable to form a dedicated team or, if necessary, a task force, and it is of utmost importance that each AI project has adequate representation from the business side so that the team possesses the expertise and perspectives needed to drive the project to success. The scarcity of individuals with these combined skills can therefore make it difficult to establish effective AI TRiSM practices.

6.5. Rapidly evolving threat landscape

The domain of AI TRiSM operates within a dynamic and ever-changing environment of potential threats. New vulnerabilities, attack methods, and privacy concerns emerge constantly, posing ongoing challenges for organizations, and staying well informed about the latest security practices while addressing emerging risks becomes a continuous task. To tackle these challenges, organizations must adopt a proactive and adaptable approach to cybersecurity. This entails staying updated on the most recent threats and vulnerabilities, implementing strong security measures, regularly assessing risks, educating employees about security best practices, and fostering a culture of cybersecurity awareness and attentiveness (Jang-Jaccard and Nepal, 2014). Collaboration between organizations, information sharing, and investment in research and development are also vital to countering the ever-evolving threat landscape.

7. Transformation features

In addition to the obstacles discussed above, there are numerous transformation features and characteristics that will make the AI TRiSM framework very attractive in the future. These include providing enterprise risk management and security orchestration in AI systems, offering a comprehensive unified approach for the responsible and sustainable development and deployment of AI systems, adaptive strategies to address challenges, market coalition, and guaranteed adherence to regulations and ethical concerns. In the following sections we discuss in detail the AI TRiSM market directions that are expected to progress in the future.

7.1. Eliminating fragmentation

In September 2023, the European Commission claimed that the EU is well placed to promote artificial intelligence that is focused on humanity, sustainability, security, inclusivity, and trustworthiness. The report anticipates the emergence of new generations of integrated features over time, ultimately leading to the availability of comprehensive AI governance frameworks (Francisco and Linnér, 2023).

By tackling a range of challenges and concerns surrounding trust, risk, and security in the realm of AI, AI TRiSM plays a crucial role in mitigating the fragmentation of AI systems. AI providers typically do not offer a comprehensive set of features to effectively and consistently manage trust, risk, and security in AI; consequently, users are compelled to select from a variety of suppliers specializing in different categories of AI TRiSM solutions to meet their specific requirements.

7.2. Market coalition

AI TRiSM also brings the potential to enhance and coordinate AI systems by incorporating the alerts and remediation processes of ModelOps.
These aspects will be seamlessly integrated into existing enterprise risk management and security orchestration systems, offering a comprehensive and unified approach. To accomplish this, ModelOps platform administration will incorporate the enterprise's use of third-party models, and alerts for adversarial attacks together with the corresponding corrective measures will be integrated into current security orchestration systems (Sethi, 2021; Zhang, 2022). The integration of AI TRiSM into core offerings and operations will occur through a collaborative coalition of platform-neutral vendors in the ModelOps market. These vendors aim to provide organizations with transparent, fair, and secure solutions for operationalizing models; by taking this approach, organizations can confidently implement AI models while efficiently mitigating risks and guaranteeing adherence to regulations and ethical concerns. As a result, the number of vendors exclusively dedicated to ModelOps is expected to decrease as integrated platforms absorb these capabilities. These integrated platforms will coexist with advanced solutions that enhance composite AI and generative AI functionalities, and the methods used for safeguarding data outside of AI applications will continue to evolve to address data-protection concerns pertaining specifically to AI model data.
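A rough sketch of such alert-and-remediation wiring is shown below. It is purely illustrative: the event names, severity levels, and remediation text are hypothetical, and the publish step is stubbed to standard output rather than any vendor's actual security-orchestration API.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ModelAlert:
        model_id: str
        event: str        # e.g. "adversarial_input_suspected" (hypothetical event name)
        severity: str     # "low" | "medium" | "high"
        remediation: str  # suggested corrective measure for the orchestration playbook

    def publish_to_orchestrator(alert: ModelAlert) -> None:
        """Hand the alert to the security-orchestration layer (stubbed as stdout here)."""
        print(json.dumps(asdict(alert)))

    publish_to_orchestrator(ModelAlert(
        model_id="credit-risk-v7",
        event="adversarial_input_suspected",
        severity="high",
        remediation="quarantine the input batch and roll back to the last validated model",
    ))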
7.3. AI TRiSM adaptivity

According to recent studies, organizations that prioritize AI transparency, trust, and security in their operations are projected to see a substantial 50 % improvement in the adoption of their AI models, the attainment of business objectives, and user acceptance by 2026. The predictions also indicate that by 2028, AI-driven machines will account for approximately 20 % of the global workforce and contribute 40 % of overall economic productivity (Groombridge, 2022).

In broad terms, the AI TRiSM framework will support a system of checks and promote a high level of transparency in documentation. A strong documentation structure will be implemented, with a particular focus on AI training data, to ensure trustworthiness and to assist technical audits should issues arise. The documentation system will include automated functionality able to identify incomplete, missing, or abnormal records, and it must remain reliable and user-friendly to effectively support AI TRiSM and enable the adoption of AI technology. Lastly, by enabling non-technical users to understand the data-gathering process and the decision-making system, businesses can enhance AI accountability and transparency.
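The automated record checks described above can be as simple as a validation pass over the training-data documentation. The sketch below is illustrative only; the required fields are assumptions standing in for whatever documentation schema an organization adopts.

    REQUIRED_FIELDS = {"dataset_name", "source", "collection_date", "license", "row_count"}

    def audit_record(record: dict) -> list:
        """Return a list of documentation problems found in one dataset record."""
        problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
        if record.get("row_count", 1) <= 0:
            problems.append("abnormal value: row_count must be positive")
        return problems

    doc = {"dataset_name": "city-sensors-2023", "source": "municipal IoT feed", "row_count": 0}
    for issue in audit_record(doc):
        print(issue)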
8. Discussions and future research directions

Our review highlights a comprehensive structure that integrates trust, risk assessment, and security management within the AI landscape, promoting a holistic strategy to tackle the growing concerns related to AI implementation. We categorized the AI TRiSM framework into four fundamental pillars that can foster trust among its base of AI developers and users while leveraging forthcoming advancements in AI technologies and addressing the unique security challenges posed by AI, such as adversarial attacks, model explainability, and privacy preservation. The framework is intended to provide a structured approach, offering guidance and strategies to establish trust, mitigate risks, and enhance security in AI applications. Moreover, our research demonstrates real-world applications of the reviewed frameworks and methodologies in diverse domains, showcasing innovative applications of AI TRiSM in areas such as healthcare, finance, smart cities, and the metaverse, and highlighting the trust enhancement, risk mitigation, and privacy protection that AI TRiSM offers.

To advance knowledge and technology in this field, numerous potential directions for future research should be examined. In the realm of AI TRiSM, there are abundant upcoming opportunities that demand attention and are anticipated to influence its trajectory, including the Internet of Things (IoT), Quantum-Safe AI TRiSM, and Federated Learning and Edge Computing for security, among other domains. Below are a few specific aspects that require exploration.
8.1. IoT security and privacy

The Internet of Things (IoT) comprises devices that produce, analyze, and share extensive quantities of security-, safety-, and privacy-sensitive data, making them attractive to diverse cyber threats (Dorri, 2017; Askar, 2023). AI TRiSM is set to have a crucial impact on enhancing IoT security. As the IoT network grows to involve ever more interconnected devices and systems, prioritizing trust and security becomes crucial. The AI TRiSM framework enhances reliability by using machine learning algorithms to recognize typical patterns in device behavior and promptly identify deviations that could indicate potential security risks. This proactive strategy facilitates timely risk evaluation and management, contributing to a stronger and more adaptable IoT infrastructure. Moreover, AI has the potential to automate incident-response protocols, guaranteeing swift and effective action in the event of a security breach. By promoting a trustworthy environment through inclusive security measures, AI TRiSM aims to inspire trust among users, manufacturers, and stakeholders within the IoT ecosystem. This multi-faceted strategy, which merges AI and security oversight, is critical to building a durable and dependable IoT environment, imperative for the smooth assimilation and realization of the potential advantages offered by IoT technology.
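A minimal sketch of the behavioral anomaly detection described above trains an Isolation Forest on per-device traffic features and flags deviations. The feature choice, traffic statistics, and contamination rate are assumptions made for illustration.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # One row per device reading: [packets per minute, mean payload size in bytes]
    normal_traffic = rng.normal([100.0, 512.0], [10.0, 50.0], size=(500, 2))
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

    readings = np.array([[102.0, 500.0],   # typical behavior
                         [950.0, 60.0]])   # burst of tiny packets: possible exfiltration
    for reading, label in zip(readings, detector.predict(readings)):
        status = "anomalous" if label == -1 else "normal"
        print(reading, "->", status)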
8.2. Quantum-Safe AI TRiSM

Quantum computing presents fundamentally distinct approaches to computational challenges, allowing superior problem-solving efficiency compared to conventional classical computation (Rieffel and Polak, 2000; Gyongyosi and Imre, 2019). AI TRiSM in the realm of quantum computing addresses the changing landscape of AI merging with quantum advances. Progress in quantum computing poses a notable risk to conventional cryptographic systems, making existing security measures potentially ineffective. AI TRiSM is dedicated to securing and fostering trust in quantum computing by preemptively managing the associated risks. In the Quantum-Safe context, AI TRiSM prioritizes evaluating risks and implementing strategies to minimize them, taking into account the weaknesses that may arise from quantum computing. By promoting a culture of ongoing vigilance, adjustment, and effective response, Quantum-Safe AI TRiSM safeguards the resilience and reliability of quantum technologies amid the changing threat environment.
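One practical near-term step in that risk management is a cryptographic inventory: scanning system configurations for algorithms known to be breakable by a large-scale quantum computer and mapping them to post-quantum replacements. The sketch below is a simplified illustration; the configuration records and service names are hypothetical, while the algorithm mappings follow the NIST post-quantum selections (ML-KEM/Kyber for key establishment, ML-DSA/Dilithium for signatures).

    # Schemes breakable by Shor's algorithm on a large-scale quantum computer.
    QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}
    MIGRATION = {"RSA-2048": "ML-KEM (Kyber)",
                 "ECDSA-P256": "ML-DSA (Dilithium)",
                 "DH-2048": "ML-KEM (Kyber)"}

    inventory = [  # hypothetical services and their current cryptography
        {"service": "model-registry", "key_exchange": "DH-2048", "signature": "ECDSA-P256"},
        {"service": "audit-log", "key_exchange": "ML-KEM (Kyber)", "signature": "ML-DSA (Dilithium)"},
    ]

    for svc in inventory:
        for role in ("key_exchange", "signature"):
            algo = svc[role]
            if algo in QUANTUM_VULNERABLE:
                print(f"{svc['service']}: {role} uses {algo}; migrate to {MIGRATION[algo]}")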
8.3. AI TRiSM in Federated Learning

AI TRiSM is pivotal in the realm of Federated Learning, a decentralized approach to machine learning in which models are trained locally on edge devices or servers and only aggregated updates are shared with a central server. AI TRiSM frameworks will play a crucial role in establishing trust by incorporating strong security measures such as encryption, secure aggregation protocols, and access controls. Safeguarding the privacy and confidentiality of sensitive data is a cornerstone for building trust among participants, fostering broader acceptance of Federated Learning across diverse sectors such as healthcare, finance, and smart devices. Nonetheless, despite its advantages (Zhang, 2021; Fedorchenko et al., 2022), Federated Learning brings certain risks. Effectively handling these risks within the AI TRiSM framework entails recognizing the threats, weaknesses, and consequences connected to the federated learning setup. Moreover, AI TRiSM should incorporate systems for overseeing and examining
federated learning procedures to guarantee adherence to regulatory guidelines and industry benchmarks. This will fortify the overall security readiness and reliability of the federated learning environment.
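To ground the aggregation step, the sketch below shows a toy version of federated averaging with pairwise masking: each client perturbs its update with a shared mask so the server never sees an individual raw update, yet the masks cancel in the sum. This is a stand-in for a real secure-aggregation protocol, shown for two clients only.

    import numpy as np

    rng = np.random.default_rng(7)
    update_a = np.array([0.10, -0.20, 0.05])   # client A's local model delta
    update_b = np.array([-0.04, 0.12, 0.30])   # client B's local model delta

    # Clients A and B agree on a shared random mask; A adds it and B subtracts it,
    # so the server receives only masked updates, but the masks cancel in the sum.
    mask = rng.normal(size=3)
    masked_a = update_a + mask
    masked_b = update_b - mask

    aggregate = (masked_a + masked_b) / 2      # equals plain FedAvg of the two updates
    print("aggregated update:", aggregate)
    assert np.allclose(aggregate, (update_a + update_b) / 2)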
8.4. Multidisciplinary research workforce

Realizing AI TRiSM at scale will also depend on a multidisciplinary research workforce able to tackle the technical, legal, and organizational challenges associated with AI TRiSM. Finally, there is a need for public education and awareness initiatives aimed at enlightening users about the advantages, potential risks, and best practices linked to AI, enabling them to make informed decisions and to participate in shaping AI policies and regulations.
References

Ma, Y., et al. (2020). Artificial intelligence applications in the development of autonomous vehicles: A survey. IEEE/CAA Journal of Automatica Sinica, 7(2), 315–329.
Yu, K.-H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719–731.
Sanz, J. L., & Zhu, Y. (2021). Toward scalable artificial intelligence in finance. In 2021 IEEE International Conference on Services Computing (SCC). IEEE.
Bharadiya, J. (2023). Artificial intelligence in transportation systems: A critical review. American Journal of Computing and Engineering, 6(1), 34–45.
Lv, Z., Lou, R., & Singh, A. K. (2020). AI empowered communication systems for intelligent transportation systems. IEEE Transactions on Intelligent Transportation Systems, 22(7), 4579–4587.
Mahbooba, B., et al. (2021). Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model. Complexity, 2021, 1–11.
Elahi, H., et al. (2021). On the characterization and risk assessment of AI-powered mobile cloud applications. Computer Standards & Interfaces, 78, Article 103538.
Lamsal, P. (2001). Understanding trust and security. Department of Computer Science, University of Helsinki, Finland.
Roski, J., et al. (2021). Enhancing trust in AI through industry self-governance. Journal of the American Medical Informatics Association, 28(7), 1582–1590.
Kumar, P., Chauhan, S., & Awasthi, L. K. (2023). Artificial intelligence in healthcare: Review, ethics, trust challenges & future research directions. Engineering Applications of Artificial Intelligence, 120, Article 105894.
Bedemariam, R., & Wessel, J. L. (2023). The roles of outcome and race on applicant reactions to AI systems. Computers in Human Behavior, 148, Article 107869.
Zhang, C., et al. (2020). AIT: An AI-enabled trust management system for vehicular networks using blockchain technology. IEEE Internet of Things Journal, 8(5), 3157–3169.
Cabiddu, F., et al. (2022). Why do users trust algorithms? A review and conceptualization of initial trust and trust over time. European Management Journal, 40(5), 685–706.
Ferrer, X., et al. (2021). Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2), 72–80.
Malek, M. A. (2022). Criminal courts' artificial intelligence: The way it reinforces bias and discrimination. AI and Ethics, 2(1), 233–245.
Mujtaba, D. F., & Mahapatra, N. R. (2019). Ethical considerations in AI-based recruitment. In 2019 IEEE International Symposium on Technology and Society (ISTAS). IEEE.
Asan, O., Bayrak, A. E., & Choudhury, A. (2020). Artificial intelligence and human trust in healthcare: Focus on clinicians. Journal of Medical Internet Research, 22(6), e15154.
Nicodeme, C. (2020). Build confidence and acceptance of AI-based decision support systems: Explainable and liable AI. In 2020 13th International Conference on Human System Interaction (HSI). IEEE.
Esmaeilzadeh, P. (2020). Use of AI-based tools for healthcare purposes: A survey study from consumers' perspectives. BMC Medical Informatics and Decision Making, 20(1), 1–19.
Wazan, A. S., et al. (2017). Trust management for public key infrastructures: Implementing the X.509 trust broker. Security and Communication Networks, 2017.
Park, C., et al. (2022). An enhanced AI-based network intrusion detection system using generative adversarial networks. IEEE Internet of Things Journal, 10(3), 2330–2345.
Benmoussa, A., et al. (2022). Interest flooding attacks in Named Data Networking: Survey of existing solutions, open issues, requirements, and future directions. ACM Computing Surveys, 55(7), 1–37.
Nelson, G. S. (2019). Bias in artificial intelligence. North Carolina Medical Journal, 80(4), 220–222.
Zhu, T., et al. (2020). More than privacy: Applying differential privacy in key areas of artificial intelligence. IEEE Transactions on Knowledge and Data Engineering, 34(6), 2824–2843.
Van Bekkum, M., & Borgesius, F. Z. (2023). Using sensitive data to prevent discrimination by artificial intelligence: Does the GDPR need a new exception? Computer Law & Security Review, 48, Article 105770.
Hadj-Mabrouk, H. (2019). Contribution of artificial intelligence to risk assessment of railway accidents. Urban Rail Transit, 5(2), 104–122.
Peters, U. (2022). Algorithmic political bias in artificial intelligence systems. Philosophy & Technology, 35(2), 25.
King, T. C., et al. (2020). Artificial intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions. Science and Engineering Ethics, 26, 89–120.
Marr, B. (2018). Is Artificial Intelligence dangerous? 6 AI risks everyone should know about. Forbes.
Ienca, M. (2023). On Artificial Intelligence and manipulation. Topoi, 1–10.
Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11).
Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 2056305120903408.
Pedron, S. M., & da Cruz, J. d. A. (2020). The future of wars: Artificial intelligence (AI) and lethal autonomous weapon systems (LAWS). International Journal of Security Studies, 2(1), 2.
Wogu, I., et al. (2018). Super-intelligent machine operations in twenty-first-century manufacturing industries: A boost or doom to political and human development? Towards Extensible and Adaptable Methods in Computing, 209–224.
Surber, R. (2018). Artificial intelligence: Autonomous technology (AT), lethal autonomous weapons systems (LAWS) and peace time threats. ICT4Peace Foundation and the Zurich Hub for Ethics and Technology (ZHET), 1, 21.
de Ágreda, Á. G. (2020). Ethics of autonomous weapons systems and its applicability to any AI systems. Telecommunications Policy, 44(6), Article 101953.
Kumar, D., & Kumar, K. P. (2023). Artificial intelligence based cyber security threats identification in financial institutions using machine learning approach. In 2023 2nd International Conference for Innovation in Technology (INOCON). IEEE.
Song, F., et al. (2020). Smart collaborative balancing for dependable network components in cyber-physical systems. IEEE Transactions on Industrial Informatics, 17(10), 6916–6924.
Solomon, A., et al. (2022). Contextual security awareness: A context-based approach for assessing the security awareness of users. Knowledge-Based Systems, 246, Article 108709.
Schiliro, F., Moustafa, N., & Beheshti, A. (2020). Cognitive privacy: AI-enabled privacy using EEG signals in the internet of things. In 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (DependSys). IEEE.
Schneier, B. (2015). Secrets and lies: Digital security in a networked world. John Wiley & Sons.
Weingart, S. H. (1987). Physical security for the μABYSS system. In 1987 IEEE Symposium on Security and Privacy. IEEE.
Brundage, M., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.
Dhiman, A., & Toshniwal, D. (2022). AI-based Twitter framework for assessing the involvement of government schemes in electoral campaigns. Expert Systems with Applications, 203, Article 117338.
Bazarkina, D., & Pashentsev, E. (2020). Malicious use of artificial intelligence. Russia in Global Affairs, 18(4), 154–177.
Khan, M. K. (2020). Technological advancements and 2020 (pp. 1–2). Springer.
Ansari, M. F., et al. (2022). The impact and limitations of artificial intelligence in cybersecurity: A literature review. International Journal of Advanced Research in Computer and Communication Engineering.
Di Vaio, A., et al. (2020). Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. Journal of Business Research, 121, 283–314.
Lukyanenko, R., Maass, W., & Storey, V. C. (2022). Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities. Electronic Markets, 32(4), 1993–2020.
Rehman, A., et al. (2022). CTMF: Context-aware trust management framework for internet of vehicles. IEEE Access, 10, 73685–73701.
Kingston, J. (2017). Using artificial intelligence to support compliance with the general data protection regulation. Artificial Intelligence and Law, 25(4), 429–443.
Mitrou, L. (2018). Data protection, artificial intelligence and cognitive services: Is the General Data Protection Regulation (GDPR) 'artificial intelligence-proof'?
Žigienė, G., Rybakovas, E., & Alzbutas, R. (2019). Artificial intelligence based commercial risk management framework for SMEs. Sustainability, 11(16), 4501.
Jing, H., et al. (2021). An artificial intelligence security framework. Journal of Physics: Conference Series. IOP Publishing.
Chauhan, C., & Gullapalli, R. R. (2021). Ethics of AI in pathology: Current paradigms and emerging issues. The American Journal of Pathology, 191(10), 1673–1683.
Wickramasinghe, C. S., et al. (2020). Trustworthy AI development guidelines for human system interaction. In 2020 13th International Conference on Human System Interaction (HSI). IEEE.
Sobb, T., Turnbull, B., & Moustafa, N. (2020). Supply chain 4.0: A survey of cyber security challenges, solutions and future directions. Electronics, 9(11), 1864.
De, T., et al. (2020). Explainable AI: A hybrid approach to generate human-interpretable explanation for deep learning prediction. Procedia Computer Science, 168, 40–48.
Smith, H. (2021). Clinical AI: Opacity, accountability, responsibility and liability. AI & Society, 36(2), 535–545.
Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial intelligence in healthcare (pp. 295–336). Elsevier.
Taddeo, M. (2019). Three ethical challenges of applications of artificial intelligence in cybersecurity. Minds and Machines, 29, 187–191.
Hummer, W., et al. (2019). ModelOps: Cloud-based lifecycle management for reliable and trusted AI. In 2019 IEEE International Conference on Cloud Engineering (IC2E). IEEE.
Jain, H., et al. (2020). Weapon detection using artificial intelligence and deep learning for security applications. In 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). IEEE.
Gopalan, S. S., Raza, A., & Almobaideen, W. (2021). IoT security in healthcare using AI: A survey. In 2020 International Conference on Communications, Signal Processing, and their Applications (ICCSPA). IEEE.
Norori, N., et al. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns, 2(10), Article 100347.
Giudici, P., & Raffinetti, E. (2023). SAFE artificial intelligence in finance. Finance Research Letters, Article 104088.
Meden, B., et al. (2023). Face deidentification with controllable privacy protection. Image and Vision Computing, Article 104678.
Kadykov, V., Levina, A., & Voznesensky, A. (2021). Homomorphic encryption within lattice-based encryption system. Procedia Computer Science, 186, 309–315.
Zhao, J., et al. (2023). Efficient and privacy-preserving tree-based inference via additive homomorphic encryption. Information Sciences, 650, Article 119480.
Tonyali, S., et al. (2018). Privacy-preserving protocols for secure and reliable data aggregation in IoT-enabled smart metering systems. Future Generation Computer Systems, 78, 547–557.
Hassan, M. U., Rehmani, M. H., & Chen, J. (2019). Differential privacy techniques for cyber physical systems: A survey. IEEE Communications Surveys & Tutorials, 22(1), 746–789.
Habbal, A., et al. (2017). Assessing experimental private cloud using web of system performance model. International Journal of Grid and High Performance Computing (IJGHPC), 9(2), 21–35.
Vollmer, S., et al. (2018). Machine learning and AI research for patient benefit: 20 critical questions on transparency, replicability, ethics and effectiveness. arXiv preprint arXiv:1812.10404.
Tan, L., et al. (2021). Secure and resilient artificial intelligence of things: A HoneyNet approach for threat detection and situational awareness. IEEE Consumer Electronics Magazine, 11(3), 69–78.
Eluwole, O. T., & Akande, S. (2022). Artificial intelligence in finance: Possibilities and threats. In 2022 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT). IEEE.
González-Gonzalo, C., et al. (2022). Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Progress in Retinal and Eye Research, 90, Article 101034.
Desolda, G., et al. (2023). Explanations in warning dialogs to help users defend against phishing attacks. International Journal of Human-Computer Studies, 176, Article 103056.
Kong, H., et al. (2023). The impact of trust in AI on career sustainability: The role of employee–AI collaboration and protean career orientation. Journal of Vocational Behavior, Article 103928.
Pechegin, D. (2022). Judicial evaluation of data from artificial intelligence systems and other innovative technologies in transport. Transportation Research Procedia, 63, 86–91.
Kumar, G., et al. (2022). Investigation and analysis of implementation challenges for autonomous vehicles in developing countries using hybrid structural modeling. Technological Forecasting and Social Change, 185, Article 122080.
Pantic, M., & Rothkrantz, L. J. M. (2000). Automatic analysis of facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 1424–1445.
Zhang, T., et al. (2019). The roles of initial trust and perceived risk in public's acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies, 98, 207–220.
Jacobsson, A., Boldt, M., & Carlsson, B. (2016). A risk analysis of a smart home automation system. Future Generation Computer Systems, 56, 719–733.
Chen, Y., et al. (2021). Trust calibration of automated security IT artifacts: A multi-domain study of phishing-website detection tools. Information & Management, 58(1), Article 103394.
Sebastian, A. M., & Peter, D. (2022). Artificial intelligence in cancer research: Trends, challenges and future directions. Life, 12(12), 1991.
Bohr, A., & Memarzadeh, K. (2020). The rise of artificial intelligence in healthcare applications. In Artificial intelligence in healthcare (pp. 25–60). Elsevier.
Habbal, A., Goudar, S. I., & Hassan, S. (2019). A context-aware radio access technology selection mechanism in 5G mobile network for smart city applications. Journal of Network and Computer Applications, 135, 97–107.
Veselov, G., et al. (2021). Applications of artificial intelligence in evolution of smart cities and societies. Informatica, 45(5).
Kamruzzaman, M., Alrashdi, I., & Alqazzaz, A. (2022). New opportunities, challenges, and applications of edge-AI for connected healthcare in internet of medical things for smart cities. Journal of Healthcare Engineering.
Cheng, S., et al. (2022). Roadmap toward the metaverse: An AI perspective. The Innovation, 3(5).
Huynh-The, T., et al. (2023). Artificial intelligence for the metaverse: A survey. Engineering Applications of Artificial Intelligence, 117, Article 105581.
Lin, Y., et al. (2023). Blockchain-aided secure semantic communication for AI-generated content in metaverse. IEEE Open Journal of the Computer Society, 4, 72–83.
Yang, Q., et al. (2022). Fusing blockchain and AI with metaverse: A survey. IEEE Open Journal of the Computer Society, 3, 122–136.
Goodfellow, I., et al. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139–144.
Zhao, Z., et al. (2021). Challenges and opportunities of AI-enabled monitoring, diagnosis & prognosis: A review. Chinese Journal of Mechanical Engineering, 34(1), 1–29.
Liang, W., et al. (2022). Advances, challenges and opportunities in creating data for trustworthy AI. Nature Machine Intelligence, 4(8), 669–677.
Liyanage, M., et al. (2023). Open RAN security: Challenges and opportunities. Journal of Network and Computer Applications, 214, Article 103621.
van Mil, J., & Quintais, J. P. (2022). A matter of (joint) control? Virtual assistants and the general data protection regulation. Computer Law & Security Review, 45, Article 105689.
Karim, S. M., et al. (2022). Architecture, protocols, and security in IoV: Taxonomy, analysis, challenges, and solutions. Security and Communication Networks.
Alekseeva, L., et al. (2021). The demand for AI skills in the labor market. Labour Economics, 71, Article 102002.
Jang-Jaccard, J., & Nepal, S. (2014). A survey of emerging threats in cybersecurity. Journal of Computer and System Sciences, 80(5), 973–993.
Francisco, M., & Linnér, B.-O. (2023). AI and the governance of sustainable development: An idea analysis of the European Union, the United Nations, and the World Economic Forum. Environmental Science & Policy, 150, Article 103590.
Sethi, K., et al. (2021). Attention based multi-agent intrusion detection systems using reinforcement learning. Journal of Information Security and Applications, 61, Article 102923.
Zhang, D., et al. (2022). Orchestrating artificial intelligence for urban sustainability. Government Information Quarterly, 39(4), Article 101720.
Dorri, A., et al. (2017). Blockchain for IoT security and privacy: The case study of a smart home. In 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE.
Askar, N., et al. (2023). Forwarding strategies for Named Data Networking based IoT: Requirements, taxonomy, and open research challenges. IEEE Access.
Rieffel, E., & Polak, W. (2000). An introduction to quantum computing for non-physicists. ACM Computing Surveys (CSUR), 32(3), 300–335.
Groombridge, D. (2022). Gartner top 10 strategic technology trends for 2023. Gartner. https://www.gartner.com/en/articles/gartner-top-10-strategic-technology-trends-for-2023
Gyongyosi, L., & Imre, S. (2019). A survey on quantum computing technology. Computer Science Review, 31, 51–71.
Zhang, C., et al. (2021). A survey on federated learning. Knowledge-Based Systems, 216, Article 106775.
Fedorchenko, E., Novikova, E., & Shulepov, A. (2022). Comparative review of the intrusion detection systems based on federated learning: Advantages and open challenges. Algorithms, 15(7), 247.