Study Guide - AIGP 03262024
(source)
● Understand what it means that an AI system is a socio-technical system.
○ A socio-technical system is a type of system in which both social and technical elements are intertwined with each other. AI systems are considered socio-technical because they are not just technical tools but also have a social impact on the people who use them and are affected by them. (source)
● Understand the need for cross-disciplinary collaboration (ensure UX, anthropology, sociology, linguistics experts are involved and valued).
○ “allows complex problems to be solved that a single discipline alone could not
handle.” (source) (reference)
● Knowledge of the OECD framework for the classification of AI systems.
○ https://oecd.ai/en/classification
○ The Framework is used to:
■ Promote a common understanding of AI: Identify features of AI
systems that matter most, to help governments and others tailor policies
to specific AI applications and help identify or develop metrics to assess
more subjective criteria (such as well-being impact).
■ Inform registries or inventories: Help describe systems and their basic
characteristics in inventories or registries of algorithms or automated
decision systems.
■ Support sector-specific frameworks: Provide the basis for more detailed application- or domain-specific catalogs of criteria in sectors such as healthcare or finance.
■ Support risk assessment: Provide the basis for related work to develop
a risk assessment framework to help with de-risking and mitigation and to
develop a common framework for reporting about AI incidents that
facilitates global consistency and interoperability in incident reporting.
■ Support risk management: Help inform related work on mitigation,
compliance and enforcement along the AI system lifecycle, including as it
pertains to corporate governance.
○ The framework classifies AI systems and applications along the following dimensions (a worked example follows the list):
■ People & Planet - This considers the potential of applied AI systems to promote human-centric, trustworthy AI that benefits people and planet. In
each context, it identifies individuals and groups that interact with or are
affected by an applied AI system. Core characteristics include users and
impacted stakeholders, as well as the application’s optionality and how it
impacts human rights, the environment, well-being, society and the world
of work.
■ Economic Context - This describes the economic and sectoral
environment in which an applied AI system is implemented. It usually
pertains to an applied AI application rather than to a generic AI system,
and describes the type of organization and functional area for which an AI
system is developed. Characteristics include the sector in which the
system is deployed (e.g. healthcare, finance, manufacturing), its business
function and model; its critical (or non-critical) nature; its deployment,
impact and scale, and its technological maturity.
■ Data & Input - This describes the data and/or expert input with which an
AI model builds a representation of the environment. Characteristics
include the provenance of data and inputs, machine and/or human
collection method, data structure and format, and data properties. Data &
Input characteristics can pertain to data used to train an AI system (“in the
lab”) and data used in production (“in the field”).
■ AI Model - This is a computational representation of all or part of the
external environment of an AI system – encompassing, for example,
processes, objects, ideas, people and/or interactions that take place in
that environment. Core characteristics include technical type, how the
model is built (using expert knowledge, machine learning or both) and
how the model is used (for what objectives and using what performance
measures).
■ Task & Output - This refers to the tasks the system performs, e.g. personalisation, recognition, forecasting or goal-driven optimisation; its outputs; and the resulting action(s) that influence the overall context. Characteristics of this dimension include system task(s); action autonomy; systems that combine tasks and actions, like autonomous vehicles; core application areas, like computer vision; and evaluation methods.
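To make the dimensions concrete, here is a sketch of how a hypothetical applied AI system might be described along the five dimensions; the field names and values are illustrative assumptions, not an official OECD schema.

```python
# Hypothetical classification of an imagined credit-scoring system along the
# five OECD dimensions. Field names and values are illustrative only.
oecd_classification = {
    "system": "retail credit-scoring model (hypothetical)",
    "people_and_planet": {
        "users": "bank loan officers",
        "impacted_stakeholders": "loan applicants",
        "optionality": "no human alternative offered",
        "human_rights_impact": ["non-discrimination", "privacy"],
    },
    "economic_context": {
        "sector": "finance",
        "business_function": "credit risk assessment",
        "criticality": "critical",
        "maturity": "deployed at scale",
    },
    "data_and_input": {
        "provenance": "internal transaction history, credit bureau data",
        "collection": "machine-collected",
        "lab_vs_field": "trained on historical data, scored on live data",
    },
    "ai_model": {
        "technical_type": "gradient-boosted trees",
        "built_with": "machine learning",
        "objective": "estimate probability of default",
    },
    "task_and_output": {
        "task": "forecasting / goal-driven optimization",
        "output": "risk score",
        "action_autonomy": "recommendation reviewed by a human",
    },
}
```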
● Understand the use cases and benefits of AI (recognition, event detection, forecasting, personalization, interaction support, goal-driven optimization, recommendation).
○ “By producing fresh content, generative AI is used to augment but not replace
the work of writers, graphic designers, artists, and musicians. It is particularly
useful in the business realm in areas like product descriptions, variations to
existing designs, or helping commercial artists explore different concepts. Among
its most common use cases, generative AI can:
■ Text: Generate credible text on various topics. It can compose business
letters, provide rough drafts of articles, and compose annual reports.
■ Images: Output realistic images from text prompts, create new scenes,
and simulate a new painting.
■ Video: Compile video content from text automatically and put together
short videos using existing images.
■ Music: Compile new musical content by analyzing a music catalog and
rendering a new composition.
■ Product design: Can be fed inputs from previous versions of a product
and produce several possible changes that can be considered in a new
version.
■ Personalization: Personalize experiences for users such as product
recommendations, tailored experiences, and new material that closely
matches their preferences.” (source)
○ “Predictive AI is finding innumerable use cases across a wide range of
industries. If managers knew the future, they would always take appropriate
steps to capitalize on how things were going to turn out. Anything that improves
the likelihood of knowing the future has high value in business. Predictive AI use
cases include financial forecasting, fraud detection, healthcare, and marketing.
Predictive AI can:
■ Financial services: Enhances financial forecasts. By pulling data from a
wider data set and correlating financial information with other
forward-looking business data, forecasting accuracy can be greatly
improved.
■ Fraud detection: Spot potential fraud by sensing anomalous behavior. In
banking and e-commerce, there might be an unusual device, location, or
request that doesn’t fit with the normal behavior of a specific user. A login
from a suspicious IP address, for example, is an obvious red flag.
■ Healthcare: Find use cases such as predicting disease outbreaks,
identifying higher-risk patients, and spotting the most successful
treatments.
■ Marketing: More closely define the most appropriate channels and
messages to use in marketing. It can provide marketing strategists with
the data they need to write impactful campaigns and thereby bring about
greater success.” (source)
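The anomaly-sensing approach to fraud detection quoted above can be sketched with an off-the-shelf outlier detector; scikit-learn's IsolationForest and the toy "login" features below are illustrative choices, not tools named by the source.

```python
# Illustrative sketch: flag behavior that departs from a user's normal pattern.
# The features (login hour, distance from usual location) are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_logins = np.column_stack([
    rng.normal(13, 2, size=500),   # usual login hour of day
    rng.normal(5, 2, size=500),    # km from usual location
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

suspicious = np.array([[3.0, 4200.0]])  # 3 a.m. login from far away
print(detector.predict(suspicious))     # [-1] flags an anomaly; [1] is normal
```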
○ Weak (narrow) AI vs. strong (general) AI:
■ Weak AI: Programmed for a fixed function; it doesn’t have any consciousness or awareness of its own.
■ Strong AI: Has the ability to learn, think and perform new activities like humans; it possesses creativity, common sense and logic like humans.
(source)
○ Supervised - Works with pre-labeled data. Classification and regression are the most common types of supervised learning algorithms; a code sketch follows this list.
■ “Classification algorithms decide the category of an entity, object or
event as represented in the data. The simplest classification algorithms
answer binary questions such as yes/no, sales/not-sales or cat/not-cat.
More complicated algorithms lump things into multiple categories like cat,
dog or mouse. Popular classification algorithms include decision
trees, logistic regression, random forest and support vector
machines.” (source)
■ “Regression algorithms identify relationships within multiple variables represented in a data set. This approach is useful when
analyzing how a specific variable such as product sales correlates with
changing variables like price, temperature, day of week or shelf location.
Popular regression algorithms include linear regression,
multivariate regression, decision tree and least absolute shrinkage
and selection operator (lasso) regression.” (source)
○ Unsupervised - Automates the process of finding patterns in a dataset. Clustering and dimensionality reduction are two common unsupervised learning algorithm types.
■ “Clustering algorithms help group similar sets of data together based on
various criteria. Practitioners can segment data into different groups to
identify patterns within each group.” (source)
■ “Dimension reduction algorithms explore ways to compact multiple variables efficiently for a specific problem.” (source)
○ Semi-supervised - Characterizes processes that use unsupervised learning
algorithms to automatically generate labels for data that can be consumed by
supervised techniques.
○ Reinforcement - Used to improve models after they’ve been deployed. The most
common reinforcement learning algorithms use various neural networks. (source)
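A minimal sketch of the supervised and unsupervised families above, using scikit-learn (one of the libraries this guide names later); the toy data and specific model choices are illustrative assumptions.

```python
# Minimal sketch: supervised (classification, regression) vs. unsupervised
# (clustering, dimension reduction) learning with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Supervised classification: the data comes pre-labeled (X with labels y).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("classification accuracy:", clf.score(X, y))

# Supervised regression: relate a target variable to the input variables.
target = X @ np.array([1.5, -2.0, 0.3, 0.0, 0.7]) + 4.0
reg = LinearRegression().fit(X, target)
print("regression R^2:", reg.score(X, target))

# Unsupervised clustering: group similar rows without using any labels.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))

# Unsupervised dimension reduction: compact 5 features into 2.
X_2d = PCA(n_components=2).fit_transform(X)
print("reduced shape:", X_2d.shape)
```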
● Understand deep learning, generative AI, multi-modal models, transformer models, and the major providers.
○ Deep learning
■ “Deep learning is a type of machine learning and artificial intelligence (AI) that imitates the way humans gain certain types of knowledge. Deep learning models can be taught to perform classification tasks and recognize patterns in photos, text, audio and other various data. It is also used to automate tasks that would normally need human intelligence, such as describing images or transcribing audio files.” (Source)
■ Enables a computer to learn by example.
■ Can be used for digital assistants, fraud detection, and facial recognition
■ Is able to create accurate predictive models from large amounts of
unlabeled, unstructured data
(source)
○ Generative AI
■ Generates content, such as images and text. A newer form of AI than predictive AI, which recognizes patterns across time. Generative AI is a type of deep learning that generates content resembling existing data.
○ Multi-modal models
■ A subset of deep learning that deals with the fusion and analysis of data
from multiple modalities, such as text, images, video, audio, and sensor
data. Combines the strengths of different modalities to create a more
complete representation of the data, leading to better performance on
various machine learning tasks. (source)
■ Multimodal learning presents two primary benefits (source):
● Multiple sensors observing the same data can make more robust
predictions, because detecting changes in it may only be possible
when both modalities are present.
● The fusion of multiple sensors can facilitate the capture of
complementary information or trends that may not be captured by
individual modalities
■ In general, multimodal architectures consist of three parts, sketched in code after this list (source):
● Unimodal encoders encode individual modalities. Usually, one for
each input modality.
● A fusion network that combines the features extracted from each
input modality, during the encoding phase.
● A classifier that accepts the fused data and makes predictions.
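A minimal PyTorch sketch of this three-part architecture; the modality dimensions, concatenation-based fusion, and layer sizes are illustrative assumptions rather than a prescribed design.

```python
# Minimal sketch: unimodal encoders -> fusion network -> classifier.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, text_dim=300, image_dim=512, hidden=128, n_classes=3):
        super().__init__()
        # 1) Unimodal encoders: one per input modality.
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        # 2) Fusion network: combines the features from each encoder.
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        # 3) Classifier: accepts the fused features and makes predictions.
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, text_feats, image_feats):
        fused = self.fusion(torch.cat(
            [self.text_encoder(text_feats), self.image_encoder(image_feats)],
            dim=-1))
        return self.classifier(fused)

model = MultimodalClassifier()
logits = model(torch.randn(4, 300), torch.randn(4, 512))  # batch of 4 examples
print(logits.shape)  # torch.Size([4, 3])
```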
○ Transformer models
■ A type of deep learning model, used for Natural Language Processing
(NLP). These models can translate text and speech in near-real-time.
Transformer models work by processing input data, which can be
sequences of tokens or other structured data, through a series of layers
that contain self-attention mechanisms and feedforward neural networks.
Transformer models are trained using supervised learning, where they
learn to minimize a loss function that quantifies the difference between
the model's predictions and the ground truth for the given task. (source)
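A minimal numpy sketch of the self-attention mechanism referenced above (scaled dot-product attention); in a full transformer layer this is followed by the feedforward network the quote mentions, and the projection weights are learned rather than random.

```python
# Minimal sketch of scaled dot-product self-attention over a token sequence.
# Dimensions and the random projection weights are illustrative assumptions.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # each token scores every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                               # weighted mix of value vectors

rng = np.random.default_rng(0)
d_model = 8
X = rng.normal(size=(5, d_model))                    # 5 tokens
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one contextualized vector per token
```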
● Understand natural language processing: text as input and output.
○ “the input is a block of text (either written text or text converted from speech), and the output is some desired characteristic of the text, such as its purpose (search query, command, review, etc.) and meaning or tone (positive, negative, angry, biased).” (source)
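A minimal sketch of text in, characteristic out, assuming the Hugging Face transformers library and its default sentiment-analysis model (an illustrative tool choice, not one named by the source):

```python
# Text as input, a characteristic of the text (tone) as output.
# Assumes `pip install transformers` and a default sentiment model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The onboarding process was quick and painless."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```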
● Understand the difference between robotics and robotic process automation (RPA).
○ “The term 'robotics' specifically relates to machines that can see, sense, actuate
and, with varying degrees of autonomy, make decisions.” (source)
○ “RPA is a software robot that mimics human actions, whereas artificial intelligence is the simulation of human intelligence using computer software.” (source)
Understand the AI technology stack
● “The AI stack is a structural framework comprising interdependent layers, each serving
a critical function to ensure the system’s efficiency and effectiveness. Unlike a monolithic
architecture, where each component is tightly coupled and entangled, the AI stack’s
layered approach allows for modularity, scalability, and easy troubleshooting. This architecture comprises critical components such as data ingestion, data storage, data
processing, machine learning algorithms, APIs, and user interfaces. These layers
act as the foundational pillars that support the intricate web of algorithms, data pipelines,
and application interfaces in a typical AI system. Let’s understand these layers in depth.”
(source)
● “The Generative AI tech stack comprises infrastructure, ML models (e.g., GANs,
transformers), programming languages, and deployment tools. It's structured in three
layers—Applications, Model, and Infrastructure—guiding tech choices for efficient
development, cost reduction, and tailored outputs.” (source)
● Platforms and applications.
○ “A well-architected AI tech stack fundamentally comprises multifaceted
application frameworks that offer an optimized programming paradigm, readily
adaptable to emerging technological evolutions. Such frameworks, including
LangChain, Fixie, Microsoft’s Semantic Kernel, and Vertex AI by Google Cloud,
equip engineers to build applications equipped for autonomous content creation,
semantic comprehension for natural language search queries, and task
execution through intelligent agents.”
● Model types.
○ Foundation Models (FMs) essentially serve as the cognitive layer that enables complex decision-making and logical reasoning. They are large-scale, pre-trained models.
○ Closed-Source Foundation Models - Obscure or protect the AI models, the provenance of training data, and/or the underlying code. They tend to be faster and can be used via various cloud services. (source)
○ Open-Source Foundation Models - Openly share the AI models, the provenance of training data and the underlying code. While not as fast, they enable greater scrutiny of the underlying code, models and data, which often results in improved explainability and security. (source)
● Compute infrastructure: software and hardware (servers and chips).
○ Software
■ “[Y]our AI infrastructure will need a software stack that includes machine learning libraries and frameworks (like TensorFlow, PyTorch, or Scikit-learn), a programming language (like Python), and possibly a distributed computing platform (like Apache Spark or Hadoop). You'll also need tools for data preparation and cleaning, as well as for monitoring and managing your AI workloads.” (source)
○ Hardware
■ “Machine learning and AI tasks are often computationally intensive and
may require specialized hardware such as GPUs or TPUs. These
resources can be in-house, but increasingly, organizations leverage
cloud-based resources which can be scaled up or down as needed,
providing flexibility and cost-effectiveness.” (source)
■ “Accelerator chips optimized for model training and inference workloads”
(source)
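A small sketch of how the software layer targets the hardware layer: the same PyTorch code runs on a CPU or on a GPU accelerator when one is available; the model and tensor sizes are arbitrary.

```python
# The software stack detects and uses the available accelerator hardware.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(1024, 10).to(device)        # place weights on the accelerator if present
batch = torch.randn(32, 1024, device=device)  # keep the data on the same device
with torch.no_grad():
    out = model(batch)
print(f"ran on {device}: output {tuple(out.shape)}")
```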
Understand the history of AI and the evolution of data science
● 1956 Dartmouth summer research project on AI (source)
○ Birth of AI as a field of research
○ The conference was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
● Summers, winters and key milestones.
○ “The peaks, or AI summers, see innovation and investment. The troughs, or AI
winters, experience reduced interest and funding.” (source)
○ 1956–1974: THE GOLDEN YEARS
■ During the Golden Years of AI, the programs – including computers
solving algebra word problems and learning to speak English – seem
"astonishing" to most people.
○ 1974–1980: 20TH CENTURY AI WINTER
■ The first AI winter occurs as the capabilities of AI programs remain
limited, mostly due to the lack of computing power at the time. They can
still only handle trivial versions of the problems they were supposed to
solve.
○ 1987–1993: A RENEWED INTEREST
■ The business community's fascination and expectations of AI, particularly
expert systems, rise. But they are quickly confronted by the reality of their
limitations.
● Understand how the current environment is fueled by exponential growth in computing infrastructure and tech megatrends (cloud, mobile, social, IoT, PETs, blockchain, computer vision, AR/VR, metaverse).
○ Increasing computing and storage capacities (source)
○ Enormous growth in the amount of data available for learning and analysis. (source)
○ The development of learning machines based on artificial neural networks. (source)
Domain 2: Understanding AI Impacts and Responsible AI Principles
Understand the core risks and harms posed by AI systems
● Understand the potential harms to an individual (civil rights, economic opportunity, safety).
○ “An inaccurate system will implicate people for crimes they did not commit. And it
will shift the burden onto defendants to show they are not who the system says
they are.” (Source)
○ “Face recognition uniquely impacts civil liberties. The accumulation of
identifiable photographs threatens important free speech and freedom of
association rights under the First Amendment, especially because such data can
be captured without individuals’ knowledge.”(Source)
○ “The collection and retention of face recognition data poses special security risks. All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts. Face recognition poses additional risks because, unlike a social security number or driver’s license number, we can’t change our faces. Law enforcement must do more to explain why it needs to collect so much sensitive biometric and biographic data, why it needs to maintain it for so long, and how it will safeguard it from breaches.” (Source)
○ “Our biometrics are unique to each of us, can’t be changed, and often are easily accessible. Face recognition, though, takes the risks inherent in other biometrics to a new level because it is much more difficult to prevent the collection of an image of your face. We expose our faces to public view every time we go outside, and many of us share images of our faces online with almost no restrictions on who may access them. Face recognition therefore allows for covert, remote, and mass capture and identification of images. The photos that may end up in a database could include not just a person’s face but also how she is dressed and possibly whom she is with.” (Source)
○ “Government surveillance like this can have a real chilling effect on Americans’ willingness to engage in public debate and to associate with others whose values, religion, or political views may be considered different from their own. For example, researchers have long studied the “spiral of silence” — the significant chilling effect on an individual’s willingness to publicly disclose political views when they believe their views differ from the majority.” (Source)
● Understand the potential harms to a group (discrimination towards sub-groups).
○ “Face recognition disproportionately impacts people of color. Face recognition misidentifies African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. Due to years of well-documented, racially biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African Americans, Latinos, and immigrants. These two facts mean people of color will likely shoulder significantly more of the burden of face recognition systems’ inaccuracies than whites.” (Source)
○ “Due to years of well-documented racially-biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African Americans, Latinos, and immigrants. These two facts mean people of color will likely shoulder exponentially more of the burden of face recognition inaccuracies than whites.”
○ “If job seekers’ faces are matched mistakenly to mug shots in the criminal
database, they could be denied employment through no fault of their own. Even if
job seekers are properly matched to a criminal mug shot, minority job seekers will
be disproportionately impacted due to the notorious unreliability of FBI records as
a whole”
● Understand the potential harms to society (democratic process, public trust in governmental institutions, educational access, jobs redistribution).
○ Emily Bender claims the real risks and harms are more “about concentration of
power in the hands of people, about reproducing systems of oppression, about
damage to the information ecosystem, and about damage to the natural
ecosystem (through profligate use of energy resources).”
● Understand the potential harms to a company or institution (reputational, cultural, economic, acceleration risks). (source)
○ Sensitive Data Exposure: Handling large datasets can lead to unintended
exposure of confidential information, including customer and business data,
posing risks of identity theft, financial fraud, and loss of public trust. Protection of
privacy and data security have been prioritized and will be subject to closer
scrutiny as a result of the recent executive order.
○ Cybersecurity Vulnerabilities: Integration of AI with entities’ institutional
platforms can create entry points for hackers, risking not just data theft but also
potential disruption of operations, particularly supply chains.
○ Data Control Concerns: Relying on external AI solutions can lead to issues with
data control and governance, and can potentially expose companies to additional
risks if vendors do not meet ESG or cybersecurity standards.
○ Opaque Decision Processes: The complexity of AI algorithms, especially in
deep learning, often results in a lack of transparency and explainability, making it
difficult for stakeholders to understand how decisions are made. This “black box”
nature of AI can hinder accountability and trust in AI-driven ESG initiatives.
○ Accountability Challenges: In cases where AI-driven decisions lead to adverse
ESG outcomes, it can be difficult to attribute responsibility, complicating legal and
ethical accountability.
○ Compliance Complexity: AI systems utilized in an effort to enhance ESG performance may not account for or keep up with the rapidly expanding number of
ESG-related laws, regulations and standards developing across different regions,
increasing the risk of inadvertent non-compliance.
○ Legal Uncertainties: Rapidly evolving AI technologies can outpace existing
legal frameworks, creating uncertainties about liability for collection, maintenance
and use of data, intellectual property rights, and other legal issues.
● Understand the potential harms to an ecosystem (natural resources, environment, supply chain). (source)
○ High Energy Consumption: The computation-intensive nature of training and
running AI, particularly large models, can lead to high energy consumption and
significant carbon footprints, potentially contradicting environmental sustainability
efforts.
○ Life Cycle Impact of AI Hardware: The production, operation, and disposal of
the hardware necessary for AI (e.g., servers, data centers) contribute to
environmental concerns such as electronic waste and resource depletion.
Understand the similarities and differences among existing and emerging ethical guidance on AI
● Understand how the ethical guidance is rooted in Fair Information Practices (FIPPs),
European Court of Human Rights, and Organization for Economic Cooperation and
Development principles.
○ Fair Information Practices (FIPPs) - A collection of widely accepted principles that agencies use when evaluating information systems, processes, programs, and activities that affect individual privacy. The FIPPs are not requirements; rather, they are principles that should be applied by each agency according to the agency’s particular mission and privacy program requirements. (source)
■ Access and Amendment - Agencies should provide individuals with
appropriate access to PII and appropriate opportunity to correct or amend
PII.
■ Accountability - Agencies should be accountable for complying with
these principles and applicable privacy requirements, and should
appropriately monitor, audit, and document compliance. Agencies should
also clearly define the roles and responsibilities with respect to PII for all
employees and contractors, and should provide appropriate training to all
employees and contractors who have access to PII.
■ Authority - Agencies should only create, collect, use, process, store,
maintain, disseminate, or disclose PII if they have authority to do so, and
should identify this authority in the appropriate notice.
■ Minimization - Agencies should only create, collect, use, process, store,
maintain, disseminate, or disclose PII that is directly relevant and
necessary to accomplish a legally authorized purpose, and should only
maintain PII for as long as is necessary to accomplish the purpose.
■ Quality and Integrity - Agencies should create, collect, use, process,
store, maintain, disseminate, or disclose PII with such accuracy,
relevance, timeliness, and completeness as is reasonably necessary to
ensure fairness to the individual.
■ Individual Participation - Agencies should involve the individual in the process of using PII and, to the extent practicable, seek individual consent for the creation, collection, use, processing, storage,
maintenance, dissemination, or disclosure of PII. Agencies should also
establish procedures to receive and address individuals’ privacy-related
complaints and inquiries.
■ Purpose Specification and Use Limitation - Agencies should provide
notice of the specific purpose for which PII is collected and should only
use, process, store, maintain, disseminate, or disclose PII for a purpose
that is explained in the notice and is compatible with the purpose for
which the PII was collected, or that is otherwise legally authorized.
■ Security - Agencies should establish administrative, technical, and
physical safeguards to protect PII commensurate with the risk and
magnitude of the harm that would result from its unauthorized access,
use, modification, loss, destruction, dissemination, or disclosure.
■ Transparency - Agencies should be transparent about information
policies and practices with respect to PII, and should provide clear and
accessible notice regarding creation, collection, use, processing, storage,
maintenance, dissemination, and disclosure of PII.
○ European Court of Human Rights (source)
■ Rules on individual or State applications alleging violations of the civil and
political rights set out in the European Convention on Human Rights.
○ Organization for Economic Cooperation and Development principles (source)
■ Ensuring the basis of an effective corporate governance framework
● The corporate governance framework should promote transparent
and efficient markets, be consistent with the rule of law and clearly
articulate the division of responsibilities among different
supervisory, regulatory and enforcement authorities.
■ The rights and equitable treatment of shareholders and key
ownership functions
● ‘The corporate governance framework should protect and facilitate
the exercise of shareholders’ rights and ensure the equitable
treatment of all shareholders, including minority and foreign
shareholders. All shareholders should have the opportunity to
obtain effective redress for violation of their rights.’
● Basic shareholder rights should include the right to:
○ Secure methods of ownership registration;
○ Convey or transfer shares;
○ Obtain relevant and material information on the corporation
on a timely and regular basis;
○ Participate and vote in general shareholder meetings;
○ Elect and remove members of the board; and
○ Share in the profits of the corporation.
■ Institutional investors, stock markets, and other intermediaries
● ‘The corporate governance framework should provide sound
incentives throughout the investment chain and provide for stock
markets to function in a way that contributes to good corporate
governance.’
○ All shareholders of the same series of a class should be
treated equally
○ Insider trading and abusive self-dealing should be
prohibited
○ Members of the board and key executives should be
required to disclose to the board whether they, directly,
indirectly or on behalf of third parties, have a material
interest in any transaction or matter directly affecting the
corporation.
■ The role of stakeholders in corporate governance
● The corporate governance framework should recognize the rights
of stakeholders established by law or through mutual agreements
and encourage active co-operation between corporations and
stakeholders in creating wealth, jobs, and the sustainability of
financially sound enterprises.
■ Disclosure and transparency
● The corporate governance framework should ensure that timely
and accurate disclosure is made on all material matters regarding
the corporation, including the financial situation, performance,
ownership, and governance of the company.
■ The responsibilities of the board
● The corporate governance framework should ensure the strategic
guidance of the company, the effective monitoring of management
by the board, and the board’s accountability to the company and
the shareholders.
● OECD AI Principles (source)
○ Promotes AI that is innovative and trustworthy and that respects human rights
and democratic values. Value-based principles
■ Inclusive growth, sustainable development and well-being
● Stakeholders should proactively engage in responsible
stewardship of trustworthy AI in pursuit of beneficial outcomes for
people and the planet, such as augmenting human capabilities
and enhancing creativity, advancing inclusion of underrepresented
populations, reducing economic, social, gender and other
inequalities, and protecting natural environments, thus invigorating
inclusive growth, sustainable development and well-being.
■ Human-centred values and fairness
● AI actors should respect the rule of law, human rights and
democratic values, throughout the AI system lifecycle. These
include freedom, dignity and autonomy, privacy and data
protection, non-discrimination and equality, diversity, fairness,
social justice, and internationally recognised labor rights.
● To this end, AI actors should implement mechanisms and
safeguards, such as capacity for human determination, that are
appropriate to the context and consistent with the state of art.
■ Transparency and explainability
● AI Actors should commit to transparency and responsible
disclosure regarding AI systems. To this end, they should provide
meaningful information, appropriate to the context, and consistent
with the state of art:
○ to foster a general understanding of AI systems,
○ to make stakeholders aware of their interactions with AI
systems, including in the workplace,
○ to enable those affected by an AI system to understand the
outcome, and,
○ to enable those adversely affected by an AI system to
challenge its outcome based on plain and
easy-to-understand information on the factors, and the
logic that served as the basis for the prediction,
recommendation or decision.
■ Robustness, security and safety
● AI systems should be robust, secure and safe throughout their
entire lifecycle so that, in conditions of normal use, foreseeable
use or misuse, or other adverse conditions, they function
appropriately and do not pose unreasonable safety risk.
● To this end, AI actors should ensure traceability, including in
relation to datasets, processes and decisions made during the AI
system lifecycle, to enable analysis of the AI system’s outcomes
and responses to inquiry, appropriate to the context and consistent
with the state of art.
● AI actors should, based on their roles, the context, and their ability
to act, apply a systematic risk management approach to each
phase of the AI system lifecycle on a continuous basis to address
risks related to AI systems, including privacy, digital security,
safety and bias.
■ Accountability
● AI actors should be accountable for the proper functioning of AI
systems and for the respect of the above principles, based on their
roles, the context, and consistent with the state of art.
● White House Office of Science and Technology Policy Blueprint for an AI Bill of Rights (source)
○ 5 principles:
■ Safe and Effective Systems
● You should be protected from unsafe or ineffective systems.
■ Algorithmic Discrimination Protections
● You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
■ Data Privacy
● You should be protected from abusive data practices via built-in
protections and you should have agency over how data about you
is used.
■ Notice and Explanation
● You should know that an automated system is being used and
understand how and why it contributes to outcomes that impact
you.
■ Human Alternatives, Considerations, and Fallback
● You should be able to opt out, where appropriate, and have
access to a person who can quickly consider and remedy
problems you encounter.
○ From Principles to Practice—a handbook for anyone seeking to incorporate
these protections into policy and practice
● High-Level Expert Group on AI (source)
○ The European Commission appointed a group of experts to provide advice on its artificial intelligence strategy.
○ Deliverable 1: Ethics Guidelines for Trustworthy AI
■ The document puts forward a human-centric approach to AI and lists 7 key requirements that AI systems should meet in order to be trustworthy.
○ Deliverable 2: Policy and Investment Recommendations for Trustworthy AI
■ Building on its first deliverable, the group put forward 33
recommendations to guide trustworthy AI towards sustainability, growth,
competitiveness, and inclusion. At the same time, the recommendations
will empower, benefit and protect European citizens.
○ Deliverable 3: The final Assessment List for Trustworthy AI (ALTAI)
■ A practical tool that translates the Ethics Guidelines into an accessible
and dynamic self-assessment checklist. The checklist can be used by
developers and deployers of AI who want to implement the key
requirements. This new list is available as a prototype web-based tool and
in PDF format.
○ Deliverable 4: Sectoral Considerations on the Policy and Investment Recommendations
■ The document explores the possible implementation of the
recommendations, previously published by the group, in three specific
areas of application: Public Sector, Healthcare and Manufacturing & the Internet of Things.
● UNESCO Principles (source)
○ “provide a basis to make AI systems work for the good of humanity,
individuals, societies and the environment and ecosystems, and to prevent harm.
It also aims at stimulating the peaceful use of AI systems.”
○ “In addition to the existing ethical frameworks regarding AI around the world, this Recommendation aims to bring a globally accepted normative instrument that focuses not only on the articulation of values and principles, but also on their practical realization, via concrete policy recommendations, with a strong emphasis on inclusion issues of gender equality and protection of the environment and ecosystems.”
○ “protect, promote and respect human rights and fundamental freedoms,
human dignity and equality, including gender equality; to safeguard the interests
of present and future generations; to preserve the environment, biodiversity
and ecosystems; and to respect cultural diversity in all stages of the AI
system life cycle”
● Asilomar AI Principles
○ 23 principles divided into 3 categories developed at a conference sponsored by
the Future of Life Institute (nonprofit) (source)
● The Institute of Electrical and Electronics Engineers (IEEE) Initiative on Ethics of Autonomous and Intelligent Systems (reference)
○ “To ensure every stakeholder involved in the design and development of
autonomous and intelligent systems is educated, trained, and empowered to
prioritize ethical considerations so that these technologies are advanced for the
benefit of humanity.” (source)
○ Eight general principles: human rights and well-being, transparency, accountability, effectiveness, competence and “awareness of misuse,” in addition to “data agency,” giving individuals control over their data
○ Ethics-by-design approach
● CNIL AI Action Plan (source) (reference)
○ Understanding the functioning of AI systems and their impacts on people:
Through its new artificial intelligence service, the CNIL will focus on addressing
key data protection issues relevant to the design and operation of AI applications.
These issues include the protection of publicly available data on the web against
scraping, the protection of data transmitted by users of AI systems, and
consequences for the rights of individuals to their data with respect to data
collected for training AI applications and the outputs produced by AI systems,
among other issues.
○ Guiding the development of AI that respects personal data: To support
organizations innovating in the field of AI and to prepare for the potential passage
of the EU AI Act, the CNIL will publish guidance and best practices on several AI
topics, including a guide on rules applicable to the sharing and re-use of data as
well as recommendations for the design of generative AI systems.
○ Supporting innovative players in the AI ecosystem in France and Europe:
The CNIL will support actors in the AI ecosystem to innovate in a way that
ensures protection of French and European fundamental rights and freedoms. In
particular, the CNIL will soon open a new call for projects for participation in its
2023 regulatory sandbox and plans to engage in increased dialogue with
research teams, R&D centers and organizations developing AI systems.
○ Auditing and controlling AI systems: The CNIL plans to develop a tool to audit
AI systems and will continue to investigate complaints lodged with its office
related to AI, including generative AI.
Domain 3: Understanding How Current Laws Apply to AI Systems
Understand the existing laws that interact with AI use
● Know the laws that address unfair and deceptive practices.
○ Federal Trade Commission (FTC) Act (US) (Wheeler-Lea Act of 1938)
○ EU Directive on unfair commercial practices from 2005 (source)
○ The Children's Online Privacy Protection Act (COPPA), which governs the
collection of information about minors
○ The Gramm-Leach-Bliley Act (GLBA), which governs personal information
collected by banks and financial institutions
○ Telemarketing Sales Rule (TSR), Telephone Consumer Protection Act of
1991, and the Do-Not-Call Registry
○ Junk Fax Protection Act of 2005 (JFPA)
○ Controlling the Assault of Non-Solicited Pornography and Marketing Act of
2003 (CAN-SPAM) and the Wireless Domain Registry
○ Telecommunications Act of 1996 and Customer Proprietary Network
Information (CPNI)
○ Cable Communications Policy Act of 1984
○ Video Privacy Protection Act of 1988 (VPPA) and Video Privacy Protection Act Amendments of 2012
○ Driver's Privacy Protection Act (DPPA)
● Know relevant non-discrimination laws (credit, employment, insurance, housing, etc.).
○ “The FCRA regulates "consumer reporting agencies" and people who use the
reports generated by consumer reporting agencies. Crucially, a generative AI
service potentially could meet the definition of "consumer reporting agency" if the
service regularly produces reports about individuals' "character, general
reputation, personal characteristics, or mode of living" and these reports are used
for employment purposes.” (Source)
○ Confidentiality of Substance Use Disorder Patient Records Rule
■ Prohibits patient information from being used to initiate criminal charges
or as a predicate to conduct a criminal investigation of the patient
○ The Fair Credit Reporting Act (FCRA), which regulates the collection and use of
credit information
○ Fair and Accurate Credit Transactions Act of 2003 (FACTA)
■ contains protections against identity theft, “red flags” rules
○ Privacy Protection Act of 1980 (PPA)
■ The PPA requires law enforcement to obtain a subpoena in order to
obtain First Amendment-protected materials
○ Title VII of the Civil Rights Act of 1964 (“CRA”) prohibits employment
discrimination on the basis of race, color, religion, sex, or national origin
○ Title I of the Americans With Disabilities Act (“ADA”) prohibits employment
discrimination against “qualified” individuals with disabilities
○ Genetic Information Nondiscrimination Act of 2008 (GINA)
○ Illinois Artificial Intelligence Video Interview Act – Requires that any employer relying on AI technology to analyze a screening interview must provide information to candidates and obtain consent; must also report demographic data to the state to analyze bias
○ Maryland HB 1202 – Prohibits the use of facial recognition technology in the hiring process without consent of the applicant
○ NYC Regulation – A bias audit must be conducted on any use of automated employment decision tools; notice must be provided to applicants and an alternative selection process must be provided
○ The Wiretap Act
● Know relevant product safety laws. Know relevant IP law.
○ Consumer Product Safety Act (CPSA), enacted in 1972 for the purposes of protecting
consumers against the risk of injury due to consumer products, enabling
consumers to evaluate product safety, establishing consistent safety standards,
and promoting research into the causes and prevention of injuries and deaths
associated with unsafe products. (source)
○ The General Product Safety Regulation (GPSR) requires that all consumer
products on the EU markets are safe and it establishes specific obligations for
businesses to ensure it. It applies to non-food products and to all sales channels.
(source)
○ “U.S. Patent and Trademark Office (USPTO), U.S. Copyright Office, and courts have yet to fully establish clear guidelines concerning AI-created content or inventions. However, they generally recognize rights only for human authors and inventors. In Europe, the European Patent Office (EPO) and European Union Intellectual Property Office (EUIPO) have similar stances, though discussions are ongoing about potential changes.” (source)
● Understand the basic requirements of the EU Digital Services Act (transparency of recommender systems). (source)
○ The DSA imposes obligations on all information society services that offer an
intermediary service to recipients who are located or established in the EU,
regardless of whether that intermediary service provider is incorporated or
located within the EU.
○ Transparency obligations: Advertising, user profiling, and recommender
systems
■ Under Article 26, providers of online platforms must supply users with
information relating to any online advertisements on its platform so that
the recipients of the services can clearly identify that such information
constitutes an advertisement. Providers of online platforms are prohibited
from presenting targeted advertisements based on profiling using either
the personal data of minors or special category data (as defined in the
GDPR).
■ Article 27 requires providers of online platforms that use recommendation
systems to set out in their T&Cs the main parameters they use for such
systems, including any available options for recipients to modify or
influence them. Under Article 38, VLOPs and VLOSEs must provide at
least one option (not based on profiling) for users to modify the
parameters used.
● Know relevant privacy laws concerning the use of data.
○ The Federal (US) Privacy Act of 1974 and the E-Government Act of 2002 require
agencies to address the privacy implications of any system that collects
identifiable information on the public
○ The Health Insurance Portability and Accountability Act (HIPAA), which governs the
collection of health information
○ Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH), which increased penalties under HIPAA and provided greater access rights to individuals.
○ The Family Educational Rights and Privacy Act (FERPA), which protects the
privacy of student education records
○ Protection of Pupil Rights Amendment of 1978 (PPRA), which prevents the sale
of student information for commercial purposes
○ CCPA/CPRA, Virginia Consumer Data Protection Act (VCDPA), CPA, CTDPA,
Montana’s Consumer Data Privacy Act, Delaware Personal Data Privacy Act,
Utah Consumer Privacy Act (UCPA), Oregon Consumer Privacy Act (OCPA),
Iowa’s Consumer Data Protection Act (ICDPA), New Jersey Data Privacy Act
(NJDPA), Indiana Consumer Data Protection Act, Tennessee Information
Protection Act, Texas Data Privacy and Security Act (TDPSA)
(Source)
● Understand procedures for testing innovative AI and exemptions for research.
○ An exception within the AI Act permits processing special categories of personal data to detect and correct bias; it applies to providers of AI systems.
■ “the AI Act's exception applies to developers and entities outsourcing the
development of AI systems, for non-private use. The exception does not
seem to apply to organizations renting a fully developed AI system as a
service, for example.” (source)
● Understand transparency requirements, i.e., the registration database.
○ High-risk AI systems must be registered in an EU-wide public database.
○ Obligation to warn people that they are interacting with an AI system.