Ethical Issues - Print Formatted
By the end of this lecture series, you will be able to:
• Discuss practical implementation of ethical principles in AI and HCI, focusing on case studies and real-world applications
• Apply principles, guidelines, and strategies proposed by the HCI community for AI-infused systems
• Justify the importance of ethics in HCI, including ethical principles and their application in difficult situations
• Describe practical implementation of ethical principles in AI and HCI
• Apply guidelines for effective and ethical human-AI interaction
• Identify what dark patterns are, including their intention to deceive or manipulate users
• Differentiate between limbic and surveillance capitalism
Importance of Ethics
3/28/24
• Ethical principles are tools used to think through difficult situations.
• Three useful ethical principles:
  • An act is ethical if all of society benefits from the act.
  • An act is ethical if people are treated as an end and not as a means to an end.
  • An act is ethical if it is fair to all parties involved.
• Computer ethics is the morally acceptable use of computers (i.e., using computers appropriately).

• Computers are involved to some extent in almost every aspect of our lives.
• They often perform life-critical tasks.
• Computer science is not regulated to the extent of medicine, air travel, or construction zoning.
• We need to carefully consider the issues of ethics.
• Standards or guidelines are important in this industry because technology changes are outstripping the legal system's ability to keep up.
• My work as a computing professional affects people's lives, both now and into the future.
• I bear moral and ethical responsibilities to society.
• I pledge to practice my profession with the highest level of integrity and competence.
• I always use my skills for the public good.
• I shall be honest about my limitations, continuously seeking to improve my skills through life-long learning.
• I shall engage only in honorable and upstanding endeavors.

• According to the Association for Computing Machinery (ACM) code, a computing professional:
  • Contributes to society and human well-being
  • Avoids harm to others
  • Is honest and trustworthy
  • Is fair and takes action not to discriminate
  • Honors property rights, including copyrights and patents
  • Gives proper credit when using the intellectual property of others
  • Respects other individuals' rights to privacy
  • Honors confidentiality
• Strive to achieve high quality in both the processes and products of professional work.
• Maintain high standards of professional competence, conduct, and ethical practice.
• Know and respect existing rules pertaining to professional work.
• Accept and provide appropriate professional review.
• Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks.

• Perform work only in areas of competence.
• Foster public awareness and understanding of computing, related technologies, and their consequences.
• Access computing and communication resources only when authorized or when compelled by the public good.
• Design and implement systems that are robustly and usably secure.
• Ensure that the public good is the central concern during all professional computing work.
• Articulate, encourage acceptance of, and evaluate fulfillment of social responsibilities by members of the organization or group.
• Manage personnel and resources to enhance the quality of working life.

• Articulate, apply, and support policies and processes that reflect the principles of the Code.
• Create opportunities for members of the organization or group to grow as professionals.
• Use care when modifying or retiring systems.
• Recognize and take special care of systems that become integrated into the infrastructure of society.
Defining AI
Reference
• Ryan, M. and Stahl, B.C. (2021), "Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications", Journal of Information, Communication and Ethics in Society, Vol. 19 No. 1, pp. 61-86. https://doi.org/10.1108/JICES-12-2019-0138
Transparency
• Inclusion: Attention should be given to under-represented and vulnerable groups to prevent AI from becoming a tool for exclusion within society.
• Equality: AI should promote the equality of individuals in respect to their rights, dignity, and freedom to flourish.
• Non-bias: Organizations should invest in ways to identify, address, and mitigate unfair biases.
• Equity: AI should aim to empower and benefit individuals and provide equal opportunities while distributing the rewards from its use in a fair and equitable manner.
• Non-discrimination: AI should be designed for universal usage and not discriminate against people, or groups of people, based on gender, race, culture, religion, or age.
• Remedy: When AI holds the possibility of creating harm, there need to be preemptive steps in place to trace these issues and deal with them in a prompt and responsible manner.
• Redress: People affected by a harmful and/or unjust event resulting from using AI should be addressed in an appropriate and timely manner.
• Challenge: There should be clear policies to protect conscientious objectors, allow employees to voice their concerns, and let whistle-blowers feel protected when it is in the interest of public safety.
• Access and Distribution: Organizations should ensure that their technologies are fair and accessible among a diversity of user groups.
Non-Maleficence
• Non-maleficence: AI should be designed with the intent of not causing foreseeable harm to human beings.
• Security: AI should be robust, secure, and safe throughout its life cycle, posing no unreasonable safety risks.
• Safety: Organizations need to ensure AI does not infringe on human rights and assess public safety risks.
• Harm: Organizations should encourage "algorithmic accountability". They need to assess and document the objectives and impact of AI in the development stage.
• Protection: Developers should implement protection mechanisms and safeguards.
• Precaution: Those who develop AI must have the necessary skills to understand how it functions and its potential impacts, and security precautions must be well documented.
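The "algorithmic accountability" bullet above can be made concrete with a decision audit trail: record the inputs, output, and model version for every automated decision so it can be reviewed later. The sketch below is a hypothetical Python illustration; the `AuditLog` class, its field names, and the credit-scoring example are assumptions, not part of the ACM code or the Ryan and Stahl guidelines.

```python
import json
import time

class AuditLog:
    """Minimal decision audit trail: records inputs, output, and model
    version for each automated decision so it can be reviewed later."""

    def __init__(self):
        self.entries = []

    def record(self, model_version, inputs, decision, rationale):
        self.entries.append({
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,
        })

    def export(self):
        # JSON export makes the trail easy to hand to a reviewer.
        return json.dumps(self.entries, indent=2)

log = AuditLog()
log.record("credit-model-v3", {"income": 52000, "age": 41},
           decision="approve", rationale="score 0.87 above 0.75 threshold")
```

Keeping the rationale alongside the decision is what turns a log into an accountability record: a reviewer can later check whether the stated reason was acceptable.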
Responsibility
• Liability: For legal reasons, there needs to be a distinction between the designer and the organization to be able to attribute liability.
• Acting with Integrity: Organizations need to ensure that their data meets quality and integrity standards at every stage.

Privacy
• Privacy: Using de-identification, anomaly detection, and effective cybersecurity, organizations should ensure the security of their databases, storage, and AI systems. Current data protection laws should be followed.
• Personal or Private Information: Development and use of AI should ensure strong adherence to privacy and data protection standards.
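De-identification, mentioned in the Privacy bullets above, can in its simplest form mean replacing direct identifiers with salted hashes before records are stored. This is a minimal, hypothetical sketch (the field names and salt handling are assumptions; real systems need stronger protections against re-identification, such as k-anonymity):

```python
import hashlib

def deidentify(record, id_fields, salt):
    """Return a copy of `record` with identifying fields replaced by
    salted SHA-256 hashes; other fields pass through unchanged."""
    clean = {}
    for key, value in record.items():
        if key in id_fields:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            clean[key] = digest[:12]  # truncated pseudonym
        else:
            clean[key] = value
    return clean

patient = {"name": "Alice Smith", "email": "a@example.com", "age": 34}
safe = deidentify(patient, id_fields={"name", "email"}, salt="s3cret")
# safe["age"] is preserved; name and email become pseudonyms.
```

Because the same identifier always maps to the same pseudonym, records about one person can still be linked for analysis without exposing who that person is, provided the salt is kept secret.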
Beneficence
• Benefits: AI should be designed to benefit humans.
• Beneficence: Organizations should use AI to address global issues.
• Well-Being: Organizations should ensure their AI is fit-for-purpose and does not prohibit individual development and access to primary goods.
• Peace: Organizations should aim to avoid an arms race in lethal autonomous weapons.
• Social Good: AI should improve opportunities for society, help cultivate a healthy AI industry ecosystem, and avoid causing conflict to non-users.
• Common Good: AI should support the common good and the service of people. Steps should be taken to ensure that humanity is protected from potentially harmful impacts resulting from it.
Solidarity
Sustainability
• Energy: The use of AI should be respectful of energy efficiency, mitigate greenhouse gas emissions, and protect biodiversity.
• Resources (Energy): AI should be created in a way that ensures effective energy and resource consumption, promotes resource efficiency, use of renewable materials, and reduction of use of scarce materials with minimal waste.

Dignity
• Dignity: AI should be developed and used in a way that respects, serves, and protects humans' physical and mental integrity, personal and cultural sense of identity, and satisfaction of their essential needs.
• Synthesize over 20 years of learning in AI design into generally applicable guidelines.
• Codify over 150 AI-related design recommendations into a set of 18 guidelines.
• Systematically validate and refine the guidelines through iteration and testing.
• Provide a resource for designers working with AI and facilitate future research in human-AI interaction.

1. Make clear what the system can do. Help the user understand what the AI system is capable of doing.
2. Make clear how well the system can do what it can do. Help the user understand how often the AI system may make mistakes.
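Guidelines 1 and 2 can be honored directly in an interface by stating capabilities and expected error rates alongside every AI output. A hypothetical Python sketch (the function name, wording, and accuracy figure are illustrative, not from the paper):

```python
def present_prediction(label, confidence, validation_accuracy):
    """Show an AI prediction together with an honest statement of
    how often the system is wrong (Guidelines 1 and 2)."""
    lines = [
        f"Suggested label: {label} (confidence {confidence:.0%})",
        f"Note: on held-out data this model is correct about "
        f"{validation_accuracy:.0%} of the time.",
        "You can accept, edit, or dismiss this suggestion.",
    ]
    return "\n".join(lines)

print(present_prediction("Invoice", 0.82, 0.91))
```

The point is that the disclosure travels with the output itself, rather than being buried in documentation the user never sees.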
3. Time services based on context. Time when to act or interrupt based on the user's current task and environment.
4. Show contextually relevant information. Display information relevant to the user's current task and environment.
5. Match relevant social norms. Ensure the experience is delivered in a way that users would expect, given their social and cultural context.
6. Mitigate social biases. Ensure the AI system's language and behaviors don't reinforce undesirable and unfair stereotypes and biases.
7. Support efficient invocation. Make it easy to invoke or request the AI system's services when needed.
8. Support efficient dismissal. Make it easy to dismiss or ignore undesired AI system services.
9. Support efficient correction. Make it easy to edit, refine, or recover when the AI system is wrong.
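Guidelines 7 through 9 amount to making AI assistance cheap to accept, ignore, and fix. A hypothetical sketch of a suggestion object that supports all three outcomes with one call each (the class and method names are invented for illustration):

```python
class Suggestion:
    """An AI suggestion the user can accept, dismiss (Guideline 8),
    or correct in place (Guideline 9) with a single call each."""

    def __init__(self, text):
        self.text = text
        self.state = "pending"

    def accept(self):
        self.state = "accepted"
        return self.text

    def dismiss(self):
        # Dismissal must be as cheap as acceptance (Guideline 8).
        self.state = "dismissed"
        return None

    def correct(self, edited_text):
        # Keep the user's edit; the state change can be logged so the
        # system learns which suggestions needed fixing.
        self.text = edited_text
        self.state = "corrected"
        return self.text

s = Suggestion("Send reply: 'Sounds good!'")
assert s.correct("Sounds great, see you then.") == "Sounds great, see you then."
assert s.state == "corrected"
```

Tracking the final state of each suggestion also gives designers a direct measure of how often the system's services were actually wanted.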
10. Scope services when in doubt. Engage in disambiguation or gracefully degrade the AI system's services when uncertain about a user's goals.
11. Make clear why the system did what it did. Enable the user to access an explanation of why the AI system behaved as it did.
12. Remember recent interactions. Maintain short-term memory and allow the user to make efficient references to that memory.
13. Learn from user behavior. Personalize the user's experience by learning from their actions over time.
14. Update and adapt cautiously. Limit disruptive changes when updating and adapting the AI system's behaviors.
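Guideline 12's short-term memory can be sketched as a small bounded buffer of recent turns that the user can refer back to instead of repeating themselves. A minimal illustration, assuming a fixed window size (the class and method names are invented):

```python
from collections import deque

class RecentInteractions:
    """Bounded short-term memory of recent user requests (Guideline 12)."""

    def __init__(self, max_turns=5):
        self.turns = deque(maxlen=max_turns)

    def add(self, request, response):
        self.turns.append((request, response))

    def resolve(self, reference):
        # Let the user refer back to an earlier request by keyword;
        # only the most recent turns are searchable, newest first.
        for request, response in reversed(self.turns):
            if reference.lower() in request.lower():
                return response
        return None

memory = RecentInteractions(max_turns=3)
memory.add("translate 'hello' to French", "bonjour")
memory.add("weather in Oslo", "3°C, cloudy")
assert memory.resolve("weather") == "3°C, cloudy"
```

The bounded `deque` keeps the memory genuinely short-term: old turns fall off automatically, which also limits how much interaction history is retained.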
Reference
• http://teevan.org/publications/papers/chi19-guidelines.pdf
Surveillance Capitalism
• A market-driven process where the commodity for sale is personal data, and the capture and production of this data rely on mass surveillance of the Internet.
• Coined by Shoshana Zuboff, it originated from the monetization strategies of companies like Google and Facebook.
• Practiced by tech firms like Google, Facebook, Amazon, and others that collect and analyze user data.
• Tracking online behavior: cookies, location tracking, interaction with IoT devices.
• Creating personalized advertisements, predicting consumer behavior, selling data to third parties.
• Designed to Impact Consumer Behavior: How constant data collection can subtly influence consumer choices.
• Tools for Privacy Protection: Introduce various tools and methods consumers can use to protect their privacy online, such as VPNs, ad blockers, and privacy-focused browsers.
• Public Awareness Campaigns: Stress the need for educational campaigns to inform the public about the risks of surveillance capitalism and how to safeguard their digital footprint.
• Consumer Advocacy: Encourage active participation in advocacy for stronger data protection laws and corporate accountability.
• Advocating for Stronger Laws: Discussion about the need for more comprehensive laws and regulations that govern data privacy and corporate responsibility in data handling.
• Global Standards for Data Privacy: The necessity of establishing universal data privacy standards that transcend national boundaries, catering to the global nature of the internet.
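The ad blockers mentioned above work, at their core, by matching request URLs against a blocklist of known tracker domains. A toy Python sketch of that core rule (the domains listed are placeholders, not a real blocklist):

```python
from urllib.parse import urlparse

# Hypothetical tracker domains for illustration only.
BLOCKLIST = {"tracker.example", "ads.example", "pixel.example"}

def is_blocked(url):
    """Block a request if its host is, or is a subdomain of, a
    blocklisted domain -- the basic rule ad blockers apply."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

assert is_blocked("https://ads.example/banner.js")
assert is_blocked("https://eu.tracker.example/p.gif")
assert not is_blocked("https://news.example/article")
```

Real blockers such as browser extensions add pattern syntax, allowlists, and cosmetic filtering on top, but the domain-matching rule above is the foundation.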
Limbic Capitalism
• Limbic capitalism refers to the exploitation of the limbic system (the part of the brain dealing with motivation, emotions, and memories) by capitalistic ventures, particularly in digital technology.
• In HCI, this concept becomes crucial as designs often target emotional engagement to increase user interaction and dependency.
• Data Privacy: How user data is collected, used, and shared in HCI systems; the dilemma of balancing personalization and privacy.
• User Addiction: Design strategies that potentially lead to addictive behaviors, creating habit-forming products.
• Mental Health Impact: The responsibility to consider the psychological effects of products, including anxiety, depression, and social isolation.
• Societal Impact: Understanding the long-term societal impacts of HCI designs, and the duty to act in the best interests of users and society.
Solutions