Ethical Issues - Print Formatted

3/28/24

Ethical Issues in HCI


Dr. Robert Atkinson


Objectives

By the end of this lecture series, you will be able to:
• Describe practical implementation of ethical principles in AI and HCI, focusing on case studies and real-world applications
• Apply principles, guidelines, and strategies proposed by the HCI community for AI-infused systems
• Justify the importance of ethics in HCI, including ethical principles and their application in difficult situations

Objectives Continued

• Discuss practical implementation of ethical principles in AI and HCI
• Apply guidelines for effective and ethical human-AI interaction
• Identify what dark patterns are, including their intention to deceive or manipulate users
• Differentiate between limbic and surveillance capitalism


Importance of Ethics

• Ethics are standards of moral conduct
• Standards of right and wrong behavior
• A gauge of personal and professional integrity
• The basis of trust and cooperation in relationships with others


Ethical Principles

• Ethical principles are tools used to think through difficult situations.
• Three useful ethical principles:
  • An act is ethical if all of society benefits from the act.
  • An act is ethical if people are treated as an end and not as a means to an end.
  • An act is ethical if it is fair to all parties involved.

Ethical Issues in CS

• Computers are involved to some extent in almost every aspect of our lives
• They often perform life-critical tasks
• Computer science is not regulated to the extent of medicine, air travel, or construction zoning
• We need to carefully consider the issues of ethics
• Computer ethics are the morally acceptable use of computers (i.e. using computers appropriately)
• Standards or guidelines are important in this industry because technology changes are outstripping the legal system's ability to keep up


Ethical Issues in CS

• Privacy – responsibility to protect data about individuals
• Accuracy – responsibility of data collectors to authenticate information and ensure its accuracy
• Property – who owns information and software, and how can they be sold and exchanged
• Access – responsibility of data collectors to control access and determine what information a person has the right to obtain about others and how that information can be used

Motivation for "Code of Ethics"

• Historical
  • professional associations use this mechanism to establish status as a profession
  • regulate their membership
  • convince the public that the association deserves to be self-regulated
• Self-regulation (one solution)
  • apply a code of ethics
  • ethics review board
  • deter unethical behavior of members


Ethics for Computer Professionals

Computer professionals:
• Are experts in their field,
• Know customers rely on their knowledge, expertise, and honesty,
• Understand their products (and related risks) affect many people,
• Follow good professional standards and practices,
• Maintain an expected level of competence and are up-to-date on current knowledge and technology,
• Educate the non-computer professional

Ethical Principles for Computer Professionals

• Competence – Professionals keep up with the latest knowledge in their field and perform services only in their area of competence.
• Responsibility – Professionals are loyal to their clients or employers, and they won't disclose confidential information.
• Integrity – Professionals express their opinions based on facts, and they are impartial in their judgments.


Professional Codes of Conduct


Pledge of Computer Professionals

• My work as a Computing Professional affects people's lives, both now and into the future.
• I bear moral and ethical responsibilities to society.
• I pledge to practice my profession with the highest level of integrity and competence.
• I always use my skills for the public good.
• I shall be honest about my limitations, continuously seeking to improve my skills through life-long learning.
• I shall engage only in honorable and upstanding endeavors.

ACM Code of Conduct

• According to the Association for Computing Machinery (ACM) code, a computing professional:
  • Contributes to society and human well-being
  • Avoids harm to others
  • Is honest and trustworthy
  • Is fair and takes action not to discriminate
  • Honors property rights, including copyrights and patents
  • Gives proper credit when using the intellectual property of others
  • Respects other individuals' rights to privacy
  • Honors confidentiality


Professional Responsibilities

• Strive to achieve high quality in both the processes and products of professional work.
• Maintain high standards of professional competence, conduct, and ethical practice.
• Know and respect existing rules pertaining to professional work.
• Accept and provide appropriate professional review.
• Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks.

Professional Responsibilities

• Perform work only in areas of competence.
• Foster public awareness and understanding of computing, related technologies, and their consequences.
• Access computing and communication resources only when authorized or when compelled by the public good.
• Design and implement systems that are robustly and usably secure.


Professional Leadership Principles

• Ensure that the public good is the central concern during all professional computing work.
• Articulate, encourage acceptance of, and evaluate fulfillment of social responsibilities by members of the organization or group.
• Manage personnel and resources to enhance quality of working life.

Professional Leadership Principles

• Articulate, apply, and support policies and processes that reflect the principles of the Code.
• Create opportunities for members of the organization or group to grow as professionals.
• Use care when modifying or retiring systems.
• Recognize and take special care of systems that become integrated into the infrastructure of society.


Importance of Human-AI Interaction


Importance of Human-AI Interaction

• AI advancements enable integration of AI capabilities into user-facing systems.
• AI-infused systems can exhibit unpredictable behaviors, leading to disruption and confusion.
• Inconsistency and unpredictability in AI-infused systems can erode user confidence.
• Traditional usability guidelines may not fully address the challenges of AI-infused systems.
• Human-AI interaction design plays a crucial role in creating intuitive and effective systems.

Challenges in AI-Infused Systems

• Inherent inconsistencies in AI components due to probabilistic behaviors.
• Changes in AI systems over time, impacting user experience.
• Variations in responses based on external factors like lighting or noise conditions.
• Errors and the need for human verification to prevent unwarranted actions.
• Lack of understanding and control leading to user confusion and abandonment.


Existing HCI Guidelines for AI

• The HCI community has proposed principles, guidelines, and strategies for AI-infused systems.
• Commercial conversational agents and their impact on user engagement and usability.
• Reported failures and challenges show ongoing struggles in creating intuitive AI-infused systems.
• Ongoing advances in AI technologies require ongoing studies and vigilance.
• The aim is to help people understand the theoretical and practical implementation of AI ethics and further discussions relating to AI ethics.


Human-AI Interaction: Ethical Principles

Defining AI

• "We define AI as a system's ability to interpret external data correctly, to learn from such data and to use those learnings to achieve specific goals and tasks through flexible adaptation" (Kaplan & Haenlein, 2019, p. 17).
• With further growth we expect that "AI will not only impact our personal lives but also fundamentally transform how firms take decisions and interact with their external stakeholders (e.g. employees, customers)" (Haenlein & Kaplan, 2019, p. 9).


Current Issues in AI Ethics

• As the use of AI expands, we need guidelines to provide help with the ethical issues faced when using AI.
• While some ethics guidelines have been discussed, there are still some issues:
  • There is a lack of integration among AI ethics guidelines, which makes them hard to apply
  • More detail is needed, although referencing general principles like fairness, transparency, or sustainability is a good starting point
  • Guidelines need to expand their targeted audiences beyond policymakers so they can be understood by non-technical users

Overview of Principles


Reference

• Ryan, M. and Stahl, B.C. (2021), "Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications", Journal of Information, Communication and Ethics in Society, Vol. 19 No. 1, pp. 61-86. https://doi.org/10.1108/JICES-12-2019-0138


Transparency

• Transparency: AI developers need to ensure transparency to protect human rights, privacy, dignity, autonomy, and well-being. Intentions, benefits, harms and potential outcomes should be disclosed.
• Explainability: AI must be continuously monitored to ensure it is producing accurate results. AI organizations should be able to explain their algorithms to auditing bodies.


Transparency

• Explicability: AI organizations should 'intelligibly' explain input and output data. Also, a strong degree of traceability is necessary to trace harms back to their cause.
• Understandability: Organizations should understand how their AI works and be able to explain its technical functioning and decisions. AI actions should be comprehensible to humans.

Transparency

• Interpretability: Organizations should be able to understand how decisions are reached, address harms caused by algorithmic opaqueness, and continually carry out algorithmic reviews to assess compliance with regulations.
• Communication: End users should be provided with accurate information in a clear and jargon-free manner to avoid manipulation, deception, or coercion by AI.


Transparency

• Disclosure: AI should be designed to retrieve minimal personal data, or if required, the data should be anonymized and securely processed.
• Showing: Data should be accurate, up-to-date, and fit-for-purpose. The data quality should be transparent and available for periodic assessment.


Justice and Fairness

• Justice: AI practitioners should consider levels of justice and fairness during the design process. Accountability should still lie with the human user.
• Fairness: AI developers should take steps to avoid developing algorithms with historically unfair prejudices.
• Consistency: To prevent harmful actions in the decision-making process, organizations should ensure that accurate and representative sample data is collected, analyzed and used.


Justice and Fairness

• Inclusion: Attention should be given to under-represented and vulnerable groups to prevent AI from becoming a tool for exclusion within society.
• Equality: AI should promote the equality of individuals in respect to their rights, dignity and freedom to flourish.
• Non-bias: Organizations should invest in ways to identify, address and mitigate unfair biases.

Justice and Fairness

• Equity: AI should aim to empower and benefit individuals and provide equal opportunities while distributing the rewards from its use in a fair and equitable manner.
• Non-discrimination: AI should be designed for universal usage and not discriminate against people, or groups of people, based on gender, race, culture, religion, or age.


Justice and Fairness

• Diversity: Organizations should instill an inclusionary working environment, hire teams from a range of backgrounds and disciplines, conduct regular diversity sessions, and incorporate the viewpoints from a wide range of stakeholders to promote diversity.
• Plurality: Developers should consider the range of social and cultural viewpoints within society and should attempt to prevent societal homogenization of behavior and practices.

Justice and Fairness

• Accessibility: Organizations should protect the rights of data subjects, such as the right of information access about them.
• Reversibility: Organizations using AI need to ensure that the autonomy of AI is restricted, and the outcomes are reversible when harm is caused.


Justice and Fairness

• Remedy: When AI holds the possibility of creating harm, there need to be preemptive steps in place to trace these issues and deal with them in a prompt and responsible manner.
• Redress: People affected by a harmful and/or unjust event resulting from using AI should be addressed in an appropriate and timely manner.

Justice and Fairness

• Challenge: There should be clear policies to protect conscientious objectors, allow employees to voice their concerns, and help whistle-blowers feel protected when it is in the public interest and safety.
• Access and Distribution: Organizations should ensure that their technologies are fair and accessible among a diversity of user groups.


Non-Maleficence


Non-Maleficence

• Non-maleficence: AI should be designed with the intent of not causing foreseeable harm to human beings.
• Security: AI should be robust, secure, and safe throughout its life cycle, posing no unreasonable safety risks.
• Safety: Organizations need to ensure AI does not infringe on human rights and assess public safety risks.

Non-Maleficence

• Harm: Organizations should encourage "algorithmic accountability". They need to assess and document the objectives and impact of AI in the development stage.
• Protection: Developers should implement protection mechanisms and safeguards.
• Precaution: Those who develop AI must have the necessary skills to understand how these systems function and their potential impacts, and security precautions must be well documented.


Non-Maleficence

• Prevention: An AI system must be manageable throughout its lifetime, and its control must be made possible.
• Integrity: Attacks against AI should not compromise the bodily and mental integrity of people. AI should "fail gracefully" (e.g. shut down safely or go into safe mode).
• Non-subversion: AI systems should be used to respect and improve the lives of citizens, rather than "subvert the social and civic processes on which the health of society depends".


Responsibility and Privacy

Responsibility

• Responsibility: There should be clear and concise allocation of responsibilities. Primary responsibility lies with developers when harm and errors are caused by the design. If issues arise from use and implementation, the responsibility shifts to the organization.
• Accountability: Personnel need to be aware of issues that can arise. Blame cannot be placed on the tools.


Responsibility

• Liability: For legal reasons, there needs to be a distinction between the designer and the organization to be able to attribute liability.
• Acting with Integrity: Organizations need to ensure that their data meets quality and integrity standards at every stage.

Privacy

• Privacy: Using de-identification, anomaly detection, and effective cybersecurity, organizations should ensure the security of their databases, storage, and AI systems. Current data protection laws should be followed.
• Personal or Private Information: Development and use of AI should ensure strong adherence to privacy and data protection standards.


Beneficence and Solidarity


Beneficence

• Benefits: AI should be designed to benefit humans.
• Beneficence: Organizations should use AI to address global issues.
• Well-Being: Organizations should ensure their AI is fit-for-purpose and does not prohibit individual development and access to primary goods.


Solidarity

• Peace: Organizations should aim to avoid an arms race in lethal autonomous weapons.
• Social Good: AI should improve opportunities for society, help cultivate a healthy AI industry ecosystem, and avoid causing conflict to non-users.
• Common Good: AI should support the common good and the service of people. Steps should be taken to ensure that humanity is protected from potentially harmful impacts resulting from it.


Freedom and Autonomy

• Freedom: Organizations should ensure that the end users' freedoms are not infringed during the use of their AI.
• Autonomy: End users need to be informed, and not deceived or manipulated by AI, and should be allowed to exercise their autonomy.
• Consent: The use of personal data must be clearly articulated and agreed upon before its use.
• Choice: AI should protect users' power to decide about decisions in their lives.


Freedom and Autonomy

• Self-determination: There needs to be a balance between decision-making power freely given by the user to autonomous systems and when this option is taken away or undermined by the system.
• Liberty: AI organizations need to ensure that their AI protects individuals' liberties, as outlined in human rights legislation.
• Empowerment: AI should be used to empower and strengthen human rights rather than curtailing or infringing upon them.


Sustainability and Dignity

Sustainability

• Sustainability: Organizations need to ensure that they are environmentally sustainable and incorporate environmental outcomes within their decision-making.
• Environment (Nature): Developed AI should be used in an environmentally conscious manner.


Sustainability

• Energy: The use of AI should be respectful of energy efficiency, mitigate greenhouse gas emissions, and protect biodiversity.
• Resources (Energy): AI should be created in a way that ensures effective energy and resource consumption, promotes resource efficiency, use of renewable materials, and reduction of use of scarce materials with minimal waste.

Dignity

• Dignity: AI should be developed and used in a way that respects, serves, and protects humans' physical and mental integrity, personal and cultural sense of identity, and satisfaction of their essential needs.


Guidelines for Human-AI Interaction


Overview of Guidelines

• Synthesize over 20 years of learning in AI design into generally applicable guidelines.
• Codify over 150 AI-related design recommendations into a set of 18 guidelines.
• Systematically validate and refine the guidelines through iteration and testing.
• Provide a resource for designers working with AI and facilitate future research in human-AI interaction.

Initial Phase

1. Make clear what the system can do. Help the user understand what the AI system is capable of doing.
2. Make clear how well the system can do what it can do. Help the user understand how often the AI system may make mistakes.
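Guideline 2 can be made concrete in code. The sketch below (all names are hypothetical, not from the guidelines paper) shows one way a UI layer might surface a prediction together with its calibrated confidence, so the user learns how reliable the system is:

```python
# Hypothetical sketch of guidelines 1-2: present AI output together with
# how confident the system is, rather than as an infallible answer.
def present(prediction: str, confidence: float) -> str:
    # With high calibrated confidence, show the prediction plainly;
    # otherwise flag the uncertainty so the user can double-check it.
    if confidence >= 0.9:
        return prediction
    return f"{prediction} (unsure: about {confidence:.0%} confident)"

print(present("Meeting at 3pm", 0.95))  # shown plainly
print(present("Meeting at 3pm", 0.55))  # flagged as uncertain
```

The threshold and wording here are illustrative; the point is that the interface communicates fallibility instead of hiding it.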


During Interaction

3. Time services based on context. Time when to act or interrupt based on the user's current task and environment.
4. Show contextually relevant information. Display information relevant to the user's current task and environment.
5. Match relevant social norms. Ensure the experience is delivered in a way that users would expect, given their social and cultural context.
6. Mitigate social biases. Ensure the AI system's language and behaviors don't reinforce undesirable and unfair stereotypes and biases.

When the System is Wrong

7. Support efficient invocation. Make it easy to invoke or request the AI system's services when needed.
8. Support efficient dismissal. Make it easy to dismiss or ignore undesired AI system services.
9. Support efficient correction. Make it easy to edit, refine, or recover when the AI system is wrong.
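Guidelines 7-9 translate naturally into interaction code. A minimal sketch (names hypothetical) of an AI text suggestion the user can accept, dismiss without side effects, or correct in place:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    original: str   # the user's own text
    proposed: str   # the AI's proposed replacement

    def accept(self) -> str:
        return self.proposed

    def dismiss(self) -> str:
        # Guideline 8: dismissal leaves the user's content untouched.
        return self.original

    def correct(self, edited: str) -> str:
        # Guideline 9: the user refines the AI output instead of retyping.
        self.proposed = edited
        return self.accept()

s = Suggestion(original="teh cat", proposed="the cat")
assert s.dismiss() == "teh cat"            # no damage when ignored
assert s.correct("the cats sat") == "the cats sat"
```

The design choice worth noting: dismissal and correction are first-class operations, not error paths, which is exactly what "the system will sometimes be wrong" implies.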


When the System is Wrong

10. Scope services when in doubt. Engage in disambiguation or gracefully degrade the AI system's services when uncertain about a user's goals.
11. Make clear why the system did what it did. Enable the user to access an explanation of why the AI system behaved as it did.

Over Time

12. Remember recent interactions. Maintain short-term memory and allow the user to make efficient references to that memory.
13. Learn from user behavior. Personalize the user's experience by learning from their actions over time.
14. Update and adapt cautiously. Limit disruptive changes when updating and adapting the AI system's behaviors.
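Guideline 12's short-term memory can be as simple as a bounded queue. The toy sketch below (names hypothetical) keeps only the most recent exchanges so the user can refer back to "the last answer":

```python
from collections import deque

class RecentMemory:
    """Bounded short-term memory of (query, answer) pairs."""
    def __init__(self, size: int = 5):
        self.items = deque(maxlen=size)  # oldest entries are evicted first

    def remember(self, query: str, answer: str) -> None:
        self.items.append((query, answer))

    def last(self):
        # Enables references like "repeat that" or "the previous answer".
        return self.items[-1] if self.items else None

m = RecentMemory(size=2)
m.remember("weather?", "sunny")
m.remember("traffic?", "light")
m.remember("news?", "quiet")   # "weather?" falls out of memory
assert m.last() == ("news?", "quiet")
```

Bounding the memory also serves guideline 14: old context ages out predictably rather than influencing behavior indefinitely.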


Over Time

15. Encourage granular feedback. Enable the user to provide feedback indicating their preferences during regular interaction with the AI system.
16. Convey consequences of user actions. Immediately update or convey how user actions will impact future behaviors of the AI system.
17. Provide global controls. Allow the user to globally customize what the AI system monitors and how it behaves.
18. Notify users about changes. Inform the user when the AI system adds or updates its capabilities.

Summary


Reference

• http://teevan.org/publications/papers/chi19-guidelines.pdf


Dark Patterns

What are Dark Patterns?

• Dark patterns are deliberate design choices made in digital products, websites, or applications that are meant to be deceptive, misleading, or manipulative to the user.
• Such design patterns are created with the intention of coercing or tricking users into taking actions that they would not have taken otherwise.
• Often, these actions are in the best interest of the website or product owner while being detrimental to the user's interests.


Types of Dark Patterns

Responsibility of Persuasive Design

• Persuasive design is a design approach that aims to influence the behavior of users in a particular way.
• It involves leveraging various design elements and principles, such as visual design, copywriting, and user psychology, to encourage users to take specific actions.
• While persuasive design has the potential to empower users by helping them make better decisions, it also raises ethical concerns.


Responsibility of Persuasive Design

• It is important for designers to approach the design process with a commitment to ethical principles and a respect for users' autonomy.

User Autonomy and Business Objectives

• Achieving a balance between user autonomy and business objectives is a critical task for UX designers seeking to create ethical and effective digital products.


Surveillance Capitalism


What is Surveillance Capitalism?

• A market-driven process where the commodity for sale is personal data, and the capture and production of this data relies on mass surveillance of the Internet.
• Coined by Shoshana Zuboff, the term originated from the monetization strategies of companies like Google and Facebook.
• Practiced by tech firms like Google, Facebook, Amazon, and others that collect and analyze user data.

How Does it Work?

• Tracking online behavior: cookies, location tracking, interaction with IoT devices.
• Creating personalized advertisements, predicting consumer behavior, selling data to third parties.
• Designed to impact consumer behavior: constant data collection can subtly influence consumer choices.


Ethical Concerns

• Privacy Concerns: How constant surveillance impacts individual privacy rights.
• Consent and Awareness: The lack of transparency in how data is collected and used.
• Potential Misuse: Risks of data breaches and unauthorized surveillance by governments or corporations.
• Utilitarian Perspective: Does the societal benefit of big data outweigh individual privacy concerns?
• Corporate Responsibility: The ethical duty of companies in handling user data.

Ethical Data Practices

• Anonymization of Data: Emphasize the importance of techniques like data masking and pseudonymization to protect individual identities.
• Transparent Data Policies: Companies should provide clear, understandable privacy policies, informing users about what data is collected, how it's used, and with whom it's shared.
• User Consent Mechanisms: Implementing and respecting robust user consent protocols, including opt-in and opt-out options, for data collection and usage.
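Pseudonymization, one of the techniques named above, can be sketched in a few lines: direct identifiers are replaced with a salted hash so records remain linkable without exposing the raw identity. The salt value and field names below are hypothetical; in practice the salt must be kept secret and stored separately from the data.

```python
import hashlib

SALT = b"keep-this-secret"  # hypothetical; load from a secrets manager

def pseudonymize(identifier: str) -> str:
    # The same identifier always maps to the same token, so records can
    # still be joined, but the email itself never appears in the dataset.
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"user": "alice@example.com", "purchase": "book"}
safe = {**record, "user": pseudonymize(record["user"])}
assert safe["user"] != "alice@example.com"
assert safe["user"] == pseudonymize("alice@example.com")  # still linkable
```

Note that pseudonymized data is still personal data under regimes like the GDPR, since the mapping can be reversed by anyone who holds the salt; full anonymization requires stronger measures.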


Empowering Users

• Tools for Privacy Protection: Introduce various tools and methods consumers can use to protect their privacy online, such as VPNs, ad blockers, and privacy-focused browsers.
• Public Awareness Campaigns: Stress the need for educational campaigns to inform the public about the risks of surveillance capitalism and how to safeguard their digital footprint.
• Consumer Advocacy: Encourage active participation in advocacy for stronger data protection laws and corporate accountability.

Regulatory and Policy Recommendations

• Advocating for Stronger Laws: Discussion about the need for more comprehensive laws and regulations that govern data privacy and corporate responsibility in data handling.
• Global Standards for Data Privacy: The necessity of establishing universal data privacy standards that transcend national boundaries, catering to the global nature of the internet.


Limbic Capitalism


What is Limbic Capitalism?

• Limbic capitalism refers to the exploitation of the limbic system (the part of the brain dealing with motivation, emotions, and memories) by capitalistic ventures, particularly in digital technology.
• In HCI, this concept becomes crucial as designs often target emotional engagement to increase user interaction and dependency.

Key Ethical Considerations

• Data Privacy: How user data is collected, used, and shared in HCI systems; the dilemma of balancing personalization and privacy.
• User Addiction: Design strategies that potentially lead to addictive behaviors, creating habit-forming products.
• Mental Health Impact: The responsibility to consider the psychological effects of products, including anxiety, depression, and social isolation.
• Societal Impact: Understanding the long-term societal impacts of HCI designs; the duty to act in the best interests of users and society.


Solutions

• Design Accountability: The need for HCI professionals to recognize their role in creating ethically sound and psychologically safe interfaces.
• Transparent Algorithms: Making the workings of algorithms visible and understandable to users, promoting transparency and trust.
• User-Centric Ethics: Prioritizing user well-being over engagement metrics and profits.
• Ethical Guidelines: Developing and adhering to a set of ethical guidelines tailored to HCI design, focusing on user well-being and informed consent.
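A "transparent algorithm" in the sense above means returning the decision together with the factors that produced it. A toy sketch (the rule, threshold, and field names are hypothetical, purely for illustration):

```python
def score_loan(income: float, debt: float) -> dict:
    """Toy decision rule that explains itself alongside its verdict."""
    ratio = debt / income
    approved = ratio < 0.4
    return {
        "approved": approved,
        # Exposing the inputs, threshold, and rule lets the user (and an
        # auditor) see why the decision came out the way it did.
        "explanation": {
            "debt_to_income": round(ratio, 2),
            "threshold": 0.4,
            "rule": "approve when debt/income is below the threshold",
        },
    }

result = score_loan(income=50_000, debt=10_000)
assert result["approved"] is True
assert result["explanation"]["debt_to_income"] == 0.2
```

Real systems with learned models need dedicated explanation techniques, but the interface principle is the same: the decision and its reasons travel together.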

