CS158-2 Activity Instructions
Instructions:
Provide 10 legal issues stated in the article “Legal and human rights issues of AI: Gaps, challenges and
vulnerabilities” by Rowena Rodrigues. For each legal issue, provide answers to the questions below.
Lack of Algorithmic Transparency
1. Artificial Intelligence issue and description
The lack of Algorithmic Transparency has come to the forefront of discussions on the legal parameters of Artificial Intelligence. The issue has gained emphasis as AI applications are gradually being integrated into almost everything, which poses a high risk in information-sensitive domains. The risk posed by what an AI does in a given context is greatly elevated by a glaring lack of useful and accessible information for the people affected. The inaccessibility and opacity of the inner workings of a deployed AI system is a volatile subject, as accountability and fairness are inevitably compromised.
2. Why is it an issue?
The lack of Algorithmic Transparency is an issue because the inaccessibility and opacity of information, whether intentional or not, limits what the public knows about an AI algorithm's functionality, parameters, and operating procedure. Obscuring this necessary information not only sows seeds of distrust in the general public but, more importantly, raises legal and ethical problems. If people do not know what went wrong with a transaction, a loan application, or a flight booking beyond a notice that the decision was processed through software, doubt and distrust will hamper the future of AI-related endeavors.
3. How is it currently being addressed?
According to an EU Parliament study, the issue is currently being addressed through four options. The first is raising awareness and knowledge about AI and assigning watchdogs and whistleblowers to keep AI in check. The second is establishing accountability for the use of AI in public-sector decision-making, which coincides with the third, overseeing the application of AI and checking its legal parameters. The final option is a call for global-scale algorithmic governance. Aside from these four are the implementation of algorithmic impact assessments and a transparency standard.
4. Will it likely be resolved soon? Or will it be a long-time problem?
The issue of transparency is a subjective and sensitive subject on which unanimous agreement will not be reached. Although the aforementioned solutions are viable, they are all relatively new strategies that are subject to considerable improvement. Moreover, since the proposed solutions are new to all of us, assessing and evaluating their efficacy in regulating algorithmic transparency will take time to mature, be refined, and be implemented as a standard. Resolving the issue of algorithmic transparency will take time and a step-by-step mindset.
5. In your own opinion, how can the issue be addressed?
The first and foremost action in addressing information transparency issues in AI is to disseminate information on how AI works. Information is crucial in building a society that is aware of the potential benefits and risks of AI, and it would keep the public from developing the distrust and doubt that would greatly affect how AI is developed and integrated into society. The solutions of amending laws or standardizing the evaluation of AI and its algorithmic transparency will only be attainable and viable if the general public knows what these laws and regulations stand for.
Cybersecurity Vulnerabilities
1. Artificial Intelligence issue and description
Cybersecurity vulnerability is an issue that stems from how integrating AI into surveillance or national security systems can open new methods of attack, compromising safety and security from the local level up to the global scale. AI- and network-based integration poses a looming concern, as methods of containing potential cybersecurity problems barely keep pace with the emergence of new methods of attack.
2. Why is it an issue?
Attacks on cybersecurity come in many forms and often go unnoticed, which is the primary reason cybersecurity vulnerability is such a heavy issue. More often than not, cybersecurity problems are intentionally hidden from the general public until after the issue is resolved. This filtering of knowledge is compounded by the many ways cybersecurity failures impact an individual's rights: strategic targeting of political messages, predictive policing algorithms, surveillance of civilians, and many others. These breaches of personal security greatly affect how people see AI and cybersecurity.
3. How is it currently being addressed?
Some of the approaches to addressing cybersecurity vulnerabilities involve developing recovery mechanisms for sensitive data and information, along with involving human analysts in critical decision-making. Risk management programs are also being used and developed to properly evaluate the prospect of utilizing AI in cybersecurity. Software upgrades and updates have always been the straightforward way of keeping networks from being compromised, especially today, when cyber warfare is as frequent and dangerous as conventional warfare.
4. Will it likely be resolved soon? Or will it be a long-time problem?
Resolving cybersecurity vulnerabilities requires both a proactive and a responsive approach to developing cybersecurity policies and regulations, along with designs and implementation methods tailored to reduce the risk of compromise. Although the aforementioned methods are already in place, reality differs from the ideal. Many things have to be taken into consideration in finding a long-term solution to an ever-evolving, adaptive problem. As such, it will take time for the issue to be addressed in a manner that remains adaptable and viable for the years to come.
5. In your own opinion, how can the issue be addressed?
Addressing cybersecurity vulnerability is a herculean task. Instead of trying to solve the issue in a single take, we need to first manage the most obvious but overlooked problems. Solving this issue needs to start locally and then slowly build up to the city, the country, and even the world. Regulating how predictive algorithms work in social media is a viable first step. Through this, we can better understand the nature of cyber-attacks, which can then inform the design of a cyber architecture that is safe for everyone.
Unfairness, Bias, and Discrimination
1. Artificial Intelligence issue and description
The issue of unfairness, bias, and discrimination in Artificial Intelligence is directly caused by the use of algorithms and automated decision-making systems (Hacker, 2018). Specifically, the use of these systems and decision-making algorithms to process and evaluate big data and other sensitive information carries the possibility of discrimination and the infringement of basic human rights.
3. How is it currently being addressed?
The first proposal for addressing the matter is to conduct regular assessments that evaluate the credibility of data and examine whether the data are affected by biased elements. To make the algorithmic and technological advancement of AI less problematic, there is also a proposal to include a human who can intervene as part of the system (Berendt and Preibusch, 2017), along with providing a transparent algorithm so that users know how the system operates and the justification behind a given decision. IEEE P7003 is an Algorithmic Bias Considerations standard in development to address the issue.
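To make the idea of a regular bias assessment concrete, here is a minimal sketch, not from the article, of one common check: demographic parity, which compares the rate of favorable automated decisions across groups. The record layout, the `group` and `approved` keys, and the sample data are illustrative assumptions.

```python
from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="approved"):
    """Compute the favorable-outcome rate per group and the largest gap between groups.

    `records` is a list of dicts describing past automated decisions,
    e.g. {"group": "A", "approved": True}; the keys are hypothetical.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += bool(r[outcome_key])
    rates = {g: favorable[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Example audit over logged decisions: a large gap flags the system for review.
decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates, gap = demographic_parity_gap(decisions)
print(rates, gap)  # rates of roughly 0.67 vs 0.33, a gap of about 0.33
```

A check like this would only be one part of the regular assessments the proposal describes, since equal approval rates alone do not rule out other forms of bias.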
4. Will it likely be resolved soon? Or will it be a long-time problem?
While there are laws and amendments being developed and enforced today to minimize the risk of discriminatory and biased behavior in AI decision-making systems, they still fall short, particularly where such anti-discrimination safeguards fail to extend to areas that are not expressly protected. Human inclusion in the system may also prove detrimental rather than helpful, and a transparent algorithm does not equate to better public understanding either. The nature of this issue calls for a holistic, interdisciplinary approach that is science-backed and ethical, which will take time to spread and be executed properly.
5. In your own opinion, how can the issue be addressed?
Unfairness, bias, and discrimination in AI systems inevitably fall under the responsibility of the developer. As such, there is an emphasized need for accountability among the developers of AI systems and algorithms. A general agreement and consensus should be laid down for all developers to understand the gravity and responsibility of their work. Development should be regulated, but not hampered, in such a way that subsequent work stays within the bounds of technical and ethical viability. Regulating developers and their AI systems would minimize discriminatory tendencies and would also shape how future endeavors take form.
Lack of Contestability
1. Artificial Intelligence issue and description
Lack of contestability is an AI legal issue involving the presently lacking means of challenging AI-based systems and algorithms when these technologies produce unexpected, unfair, or discriminatory results that inhibit individual dignity and fundamental rights (Bayamlıoğlu, 2018).
3. How is it currently being addressed?
According to Almada (2019), as cited in the article, the lack of contestability is being addressed through "contestability by design", a proposal aimed at better protecting individuals against decisions made by automated processing at each stage of an Artificial Intelligence system's lifecycle.
4. Will it likely be resolved soon? Or will it be a long-time problem?
This issue will take a long time to resolve. One cause is the inefficacy of general safeguards: an individual's right to express his or her point of view, to challenge a decision, and to obtain an explanation of that decision is not particularly applicable to automated processing of data. Another reason is that an automated decision is hard to contest without a clear explanation of why it reached a certain conclusion, which requires involving professionals capable of identifying false positives and discriminatory outcomes, something that is neither cost-effective nor efficient.
5. In your own opinion, how can the issue be addressed?
This issue must be addressed at many different levels of governance. However, given that laws regarding these new technologies will take time to be better designed, the temporary remedy is to provide updated and supplementary regulations that do not necessarily act as separate laws but rather augment existing ones to better adapt to incidents involving newer technologies. Doing so empowers individuals, to a certain degree, by giving them a legal means of challenging automated decisions.
Legal Personhood Issues
1. Artificial Intelligence issue and description
Establishing legal personhood for an AI system has been deemed by the EU High-Level Expert Group on AI (AI HLEG) as something that fundamentally distorts the concept of human accountability and responsibility, posing a significant moral hazard should the endeavor be pushed through. Those in favor, meanwhile, see legal personhood as a pragmatic solution whereby an AI can be held accountable and have its moral rights supported. Treating non-biological intelligence as a new legal personality affects future discussions of AI, as relating AI systems to something akin to human personhood will inevitably involve morality and ethics, further complicating the matter.
3. How is it currently being addressed?
According to the paper, there have been no significant measures or breakthroughs in properly addressing the legal personhood of AI at the international, EU, or national level. Although the issue has been brought up for discussion and debate, no agreement, whether international or regional, has been reached on how to address this relatively new development, primarily because of its sensitive and political nature.
4. Will it likely be resolved soon? Or will it be a long-time problem?
Discussions involving societal morals and ethics will linger and persist for as long as society exists, and the same goes, if not more so, for the legal personality of AI. Even discussions about this new kind of entity are divided, reflecting how every person interprets the matter differently. Decisions on this issue will always involve a degree of subjectivity, which inhibits proper deliberation on how to deal with it. The issue will be very difficult to resolve and will require a lot of time.
5. In your own opinion, how can the issue be addressed?
Addressing the legal personhood of AI will inevitably rest at the national level. Nations will have different methods of approaching the matter, and unless a national-level understanding and resolution is reached by virtually all nations, an international agreement will not come to fruition. Emotional and economic appeals will play a big role in different nations' approaches, which is both necessary and troublesome. However superficial and fictitious the issue may appear at face value, each nation needs to come to terms with its own approach before AI personhood can be addressed internationally.
Intellectual Property Issues
3. How is it currently being addressed?
Existing approaches to this issue include laws that protect computer-generated literary, dramatic, musical, or artistic works, such as those of the UK. The creator of an AI design automatically holds accountability for and ownership of the AI, except when the work was commissioned or created within an individual's term of employment, in which case it is owned by the employer or the commissioner of the work. Registered trade marks are also used to treat an AI system as personal property, unless a personal property right is given to the AI system itself.
4. Will it likely be resolved soon? Or will it be a long-time problem?
Although there have been advances in addressing the intellectual property issues of AI, a large number of them have yet to reach an adequate and conclusive resolution. This is especially true today, as AI developments and their generated works, inventions, and breakthroughs become more intuitive, challenging conventional classifications of intellectual property. Further research into, and understanding of, the present state of AI-generated works is needed to achieve a proper consensus on intellectual property issues, which will take time to mature, though not an overly long time frame.
Adverse Effects on Workers
1. Artificial Intelligence issue and description
The issue was depicted in the 2017 IBA Global Employment Institute report as a trend correlating the development and integration of AI into production and work with the workplace situation of traditional human workers. The issue delves into the economic and social consequences of the adverse, rapid inclusion of AI in the workplace, raising moral and ethical dilemmas in addressing it.
3. How is it currently being addressed?
The most prominent approach to addressing the issue is retraining the existing workforce to adapt to the new work environment. Aside from this, the educational system is being eyed for refocusing and modernization at all levels so that people will have the skills the new work landscape requires. As AI continually transforms jobs, there are also movements toward supporting workers whose jobs are about to change or disappear. Social security systems are likewise being updated to better support workers.
4. Will it likely be resolved soon? Or will it be a long-time problem?
Although relatively new, the issue of AI affecting workers has been addressed rather effectively in the past years. Resolutions take many forms but share a general success: effectively helping the human workforce adjust to new innovations and developments. While there is a general consensus that AI will still disrupt the work environment to some degree, the risk is not enough to require critical changes in educational and economic policy measures. Given the adaptive approach of governments, this AI issue will be resolved in a shorter time frame.
5. In your own opinion, how can the issue be addressed?
Existing approaches to addressing the issue are already showing their effectiveness and adaptability. One area with room for improvement is a softer transition to AI-augmented workplaces and production. This is particularly true for less-developed countries, where workers are already at a disadvantage. Even with fallback and support mechanisms in place, a transition that is too rapid and sudden will still disrupt the work environment in these countries, which may result in cascading consequences that prove to be more than a nation can handle.
Privacy and Data Protection Issues
1. Artificial Intelligence issue and description
Privacy and data protection issues highlight possible infringements of individuals' data protection rights. The issue, as described by Wachter and Mittelstadt (2019), concerns algorithmic accountability for the processing of sensitive data, underlining how individuals have little control over the data being gathered and little insight into how their personal data is being used, effectively invading personal privacy or damaging reputations.
2. Why is it an issue?
Privacy and data protection is a critical and sensitive issue because it risks violating individual rights. The implications of the sheer technological capabilities of new AI systems for privacy and informed surveillance highlight the intrusive nature of systems involved in big data processing and the handling of personal information. The potential damage to rights and reputations is the key reason concerns about AI systems are growing, and it drives the demand for transparency and accountability from AI systems and the entities behind them.
3. How is it currently being addressed?
The European Union has already put in place a privacy and data protection law that provides safeguards against privacy infringements, particularly in the form of transparency and information access for any concerned individual. Informed consent is also upheld through notices to users about potential harms relating to the use of a particular AI system or service. There is also a suggestion to use secure multi-party computation (MPC), which allows multiple parties to compute functions together while keeping each party's input private. Anonymization, privacy notices and impact assessments, and privacy by design are further approaches already applied to the issue.
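As an illustration of the MPC suggestion above, here is a minimal sketch, my own rather than the article's, of additive secret sharing, a classic MPC building block: each party splits its private value into random shares so that a joint sum can be computed without any single input being revealed. The three-party setup, the modulus, and the salary figures are illustrative assumptions.

```python
import secrets

MODULUS = 2**61 - 1  # illustrative prime; shares are uniform modulo this value

def share(value, n_parties):
    """Split `value` into n additive shares that sum to it mod MODULUS.
    Any subset of n-1 shares looks uniformly random, revealing nothing alone."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    return shares + [(value - sum(shares)) % MODULUS]

def secure_sum(private_inputs):
    """Each party shares its input with the others; every party sums the
    shares it holds locally, and only these partial sums are combined."""
    n = len(private_inputs)
    all_shares = [share(v, n) for v in private_inputs]
    # Party i holds the i-th share of every input and publishes one partial sum.
    partials = [sum(all_shares[j][i] for j in range(n)) % MODULUS for i in range(n)]
    return sum(partials) % MODULUS

# Three parties jointly compute a total salary without disclosing any one salary.
print(secure_sum([52_000, 61_500, 48_250]))  # -> 161750
```

Real MPC protocols add secure channels and defenses against dishonest participants, but the privacy idea is the same: the computation proceeds over shares, never over the raw personal data.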
4. Will it likely be resolved soon? Or will it be a long-time problem?
There is still a glaring lack of protection against sensitive data-based inferences and the uninformed surveillance of individuals to this day. In addition, the context in which AI functions is changing rapidly, which keeps data protection measures from being enforced at an ideal level of efficacy. Among other concerns, already-implemented measures may fall short because of AI's tendency to conflict with societal values and human rights. Even with existing protection measures, a proper resolution of this issue will take a long time.
5. In your own opinion, how can the issue be addressed?
Privacy and data protection issues should be addressed legally, through proper laws and their subsequent enforcement. We have seen the implications of uninformed surveillance and data use even in social media applications, but there is more to it. Oversight and regulation of potential private data misuse will only be effective if the conventional laws addressing it are replaced by laws designed to tackle the issue. Beyond this, information should be disseminated about an individual's responsibilities when using particular automated systems, such as social media applications, so that the public has some means of protecting itself.
Liability for Damage
1. Artificial Intelligence issue and description
The issue of liability for AI-related incidents revolves around damage done to a person or property. Given that many parties are involved in the development of an AI system or technology, properly establishing who should be held liable becomes complex, if not convoluted. Added to this are several other factors that need to be taken into consideration before liability can be determined.
4. Will it likely be resolved soon? Or will it be a long-time problem?
Although there are efforts to uphold liability for AI-related incidents and to ensure the safety and protection of victims, the specific characteristics of these new technologies will become ever more difficult for conventional laws and regulations to handle. Along with this, the evolving nature of these technologies makes it more difficult to properly allocate liability to the responsible parties. Designing regulations to better address the issue will require a long period of research, design, and implementation, which is why governments' temporary answer has been supplementary rules.
5. In your own opinion, how can the issue be addressed?
Liability for damages is a matter that will be resolved once a proper legal framework for Artificial Intelligence and its subsequent developments is in place, because the nature of such issues lies in the legal sphere. No matter how informed the public is about these new technologies, once an incident happens and there is no applicable law, public awareness is futile. Governments must recognize that, apart from informing the public, proper laws and regulations must be made to adapt to these new technologies and their potential risks.
Lack of Accountability for Harms
1. Artificial Intelligence issue and description
The issue of the lack of accountability for harms revolves around calls for mechanisms that ensure responsibility, in a transparent manner, for the development and deployment of AI-related technologies and systems. It is primarily caused by an unprecedented accountability gap that affects causality, justice, and compensation (Bartlett, 2019).
3. How is it currently being addressed?
Legal accountability for harms involving Artificial Intelligence is projected to take the form of a "right to explanation", alongside the design of transparency safeguards, data protection, and reporting obligations. The main gist of the approach is an emphasis on explanation, in the same way it is required for human-caused incidents and harm.
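As a small illustration of what a "right to explanation" might look like in software, here is a minimal sketch, my own rather than the article's, of an automated decision that records human-readable reasons alongside its outcome. The loan-screening rule, field names, and thresholds are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str
    reasons: list = field(default_factory=list)  # human-readable justification

def screen_loan(income, debt, missed_payments):
    """Hypothetical automated loan screen that justifies every outcome,
    giving an affected individual concrete grounds to contest."""
    reasons = []
    if debt > 0.4 * income:
        reasons.append(f"debt ({debt}) exceeds 40% of income ({income})")
    if missed_payments > 2:
        reasons.append(f"{missed_payments} missed payments exceed the limit of 2")
    if reasons:
        return Decision("denied", reasons)
    return Decision("approved", ["all criteria met"])

d = screen_loan(income=40_000, debt=20_000, missed_payments=1)
print(d.outcome)       # denied
for r in d.reasons:    # each reason is a specific, contestable ground
    print("-", r)
```

Recording reasons at decision time is what would make reporting obligations and later accountability workable, since the explanation exists before any dispute arises.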
5. In your own opinion, how can the issue be addressed?
The lack of accountability for harms cannot be addressed by hastily taking uninformed legal action and creating legal parameters that may do more harm than good. Such is the point raised by the paper: if AI developers are held responsible and accountable without any legal basis, AI development will suffer and the problem will not be solved at its root. Understanding of, and insight into, how AI development works must come first, so that any measure of substance ensures that those who should really be held accountable are actually identified.