STS Reporting


Ethical Dilemmas Faced by Robotics

Robotics is the intersection of science, engineering and technology that produces machines,
called robots, that substitute for (or replicate) human actions. Software robots - basically, just
complicated computer programmes - already make important financial decisions. Whose fault is
it if they make a bad investment?

Isaac Asimov was already thinking about these problems back in the 1940s, when he developed
his famous "three laws of robotics".

He argued that intelligent robots should all be programmed to obey the following three laws:

1. A robot may not injure a human being, or, through inaction, allow a human being
to come to harm

2. A robot must obey the orders given it by human beings except where such orders
would conflict with the First Law
3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law
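The three laws form a strict priority ordering: each law only applies so long as it does not conflict with the laws above it. As an illustration (not something from Asimov or the original text), that ordering can be sketched as a toy rule-checker in Python; the `Action` class and its fields are hypothetical, and reducing "harm" to a boolean is precisely the kind of simplification a real robot could not make.

```python
# Toy sketch of Asimov's three laws as a strict priority ordering.
# The Action class and its fields are hypothetical illustrations;
# real robots cannot evaluate "harm" this cleanly, which is exactly
# the difficulty discussed below.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this action injure a human?
    allows_human_harm: bool  # would inaction here let a human come to harm?
    ordered_by_human: bool   # was this action commanded by a human?
    destroys_self: bool      # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, or allow harm through inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders, unless already rejected by the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect own existence, subject to the two laws above.
    return not action.destroys_self
```

The hard part, of course, is not this ordering but filling in the booleans: deciding what counts as a human, or as harm, is where the real difficulty lies.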

These three laws might seem like a good way to keep robots from harming people. But to a
roboticist they pose more problems than they solve. In fact, programming a real robot to follow
the three laws would itself be very difficult.

For a start, the robot would need to be able to tell humans apart from similar-looking things such
as chimpanzees, statues and humanoid robots.

This may be easy for us humans, but it is a very hard problem for robots, as anyone working in
machine vision will tell you.

Another ethical dilemma faced by robotics is the emotional component. This may seem a little absurd at the moment, but given how quickly technology progresses, it is not completely impossible for robots to develop emotions (Evans, 2007).

So here, the questions become: "What if robots become sentient? Should they be granted robot rights? Should they have their own set of rights to be upheld, respected, and protected by humans?" It is interesting to consider how people would react if the time came when robots could feel pain and pleasure. Would they act differently, or not at all?

Asimov's three laws only address the problem of making robots safe, so even if we could find a
way to program robots to follow them, other problems could arise if robots became sentient. If
robots can feel pain, should they be granted certain rights? If robots develop emotions, as
some experts think they will, should they be allowed to marry humans? Should they be
allowed to own property?
If robots were to feel emotions, society would need to consider their rights as living beings, which could be detrimental to humanity. It is unjust and cruel to deny a living, caring being certain treatments and activities, so robots with emotions and specific desires would become a severe burden on our society. Robots are designed to aid humans, and purely that. If AI had emotions, it would have needs beyond those required for its basic function. If robots were suddenly to need food or fuel, leisure time, or certain amenities, there would be a noticeable cost to society. Furthermore, if the individual desires of AI conflicted with those of humans, the consequences could be severe.

Robots enhanced with personal desires and emotions could seek to destroy humanity if they felt that humanity's existence was harming the earth. Robots are highly logical: they are created and given intelligence through clear, carefully thought-out code that commands highly specific actions. To them, it might therefore seem rational to eradicate human life as the logical way to improve the wellbeing of the earth. From bombing massive swaths of land to pumping carbon dioxide into the atmosphere to causing the sixth mass extinction, human impact on the earth is clearly not positive (NASA). Robots could assess the pros and cons of human existence, and based on humanity's current actions, it is likely that AI would find us to be a plague on the earth that it would try to annihilate.

Highly advanced robots capable of emotion would be an extreme danger to society, which is why I caution scientists to consider where they are headed and not make a mistake as costly as Victor Frankenstein's: we do not want to create beings whose desires conflict with our own, as such conflicts could lead to a negative outcome. While many scientific inventions have furthered the human race, improved robotic thought and emotion would not be one of them; it would be the atom bomb of the future.
