CAS 138T Persuasive Essay
Sarah Dragon
CAS 138T
Dr. Freymiller
11 April 2017
Are we fully in control of the technology we are creating? As researchers get closer and closer to building superintelligent machines, this ever-growing fear continues to trouble some of the most prominent names in the science and technology industry. Artificial Intelligence, or AI, is the development of computers able to perform tasks normally done by people, specifically tasks associated with intelligent behavior. Because most scientists recognize that they cannot stop the advancement of Artificial Intelligence, they want to see it monitored so that it remains beneficial to humanity. The United States should regulate the advancements being made in Artificial Intelligence (AI) within the next couple of decades. AI poses a serious threat to the economy, and to the middle class in particular. Fully autonomous robots and computers are a genuine hazard to humanity and raise moral and ethical issues within the defense industry. AI also threatens to exceed human intelligence, which would negatively affect millions of U.S. citizens.
The economy is one of the largest areas that will be affected by advances in Artificial Intelligence in the decades to come. Unemployment in particular is set to be drastically reshaped by AI in the next couple of decades. A report released in February 2016 by Citibank, in partnership with the University of Oxford, predicted that 47% of U.S. jobs are at risk of automation (Williams-Grut). While a good chunk of that percentage isn't made up of highly skilled jobs, there is a very good chance that those highly skilled jobs, too, will be able to be done by machines in
just a few short years. The rise of driverless cars and trucks is just the beginning of AI's reach.
New AI techniques are being aimed to reinvent everything from manufacturing to healthcare to
Wall Street. The next big thing in financial technology at the moment is "roboadvice", which is
an algorithm that can recommend savings and investment products to someone in the same way a
financial advisor would. If these roboadvisors take off, they could lead to huge upheavals in that highly skilled profession (Metz). Therefore, it's not just blue-collar jobs that AI endangers; it threatens all types of jobs, all the way up to the white-collar jobs that people do not want to lose to AI.
Another major area of the economy AI will affect is the ever-growing inequality gap. One
of the major problems with the United States economy right now is that the inequality gap isn't diminishing; in fact, it's growing. Artificial Intelligence will only add to that gap. AI will allow the upper management of companies to operate with fewer employees, so they will be able to keep more of the wealth for themselves. A White House report released in December 2016 warned, "If labor productivity increases do not translate into wage increases, then the large economic gains brought about by AI could accrue to a select few" (Kharpal). AI will
allow productivity within companies to increase, but the workforce will not reap the benefits of that increase. Consumers will benefit because products will become available more rapidly, but those who work in those industries will lose their jobs because AI is faster, cheaper, and easier than human workers. Inequality between the 1% and the 99% may widen as workforce automation continues: because fewer people would be required to start and maintain a successful company, more of the money will go to, and stay at, the top of major companies and even the upper management of small businesses. The White House report also cautioned that the winner-take-most nature of technology markets "means that only a few may come to dominate markets" (Kharpal). The people of the
United States are already very unhappy with the inequality gap. Although AI may increase
productivity, the United States does not need an even further increase in the inequality gap to divide this country. Avoiding that division is far more important than any gain in productivity.
Completely autonomous robots are a real hazard to the future of humanity. Would you feel comfortable knowing that a computer-controlled drone has the capability to target and kill human beings without human approval? That is what completely autonomous robots are capable of in this digital age. Most weapons in the defense industry are not equipped with this form of
Artificial Intelligence yet, but this is the future of weapons within the defense industry. BAE
Systems is an international technology company that provides some of the world's most advanced, technology-led defense, aerospace, and security solutions. It has developed a drone equipped with AI that can locate, target, and kill human beings at its own will. There is no human behind the controls of this drone; in a sense, it makes decisions on its own (Sreenivasan). What if it gets a target wrong? Who then becomes responsible for this technology? There are a lot of "what ifs" with this type of AI, and it will continue to cause
problems for the United States as our defense industry continues to grow more autonomous.
Autonomous weapons make decisions for themselves, and that is a capability that the
United States simply cannot give to a computer. Artificial Intelligence will continue to evolve,
but the United States needs to understand where to draw the line. This is no longer a question of impeding scientific breakthroughs, but rather a question of how ethical these breakthroughs really are in the ways they are being used. Because technology like this already exists, some major
scientists are fighting further advancements in fear that machines are being given too much
power. Another issue scientists have with fully autonomous weapons is that if a robot messes up,
there is no one to blame. This lack of accountability is disrespectful to our enemies and the rules
of war since this could amount to going to war with a complete disregard for international laws.
Bradley J. Strawser, an assistant professor of philosophy at the Naval Postgraduate School as well as a research associate at the Ethics and Law of Armed Conflict Center at Oxford University, speaks to this issue: "It is tantamount, you might think, to simply pledging beforehand not to prosecute any of your soldiers who break the law. It's that bad. And since that would be unconscionable, so too would using killer robots" (Lin). This is a major conflict of
interest that the United States will face if it continues to integrate autonomous weapons into its
defense industry.
The decision to take a human life needs to be very carefully considered and calculated.
Robots will never be able to truly deliberate on and appreciate the weight of a decision like that, whereas a human can. Some argue that if we can teach a machine to make decisions like a human would, then we can teach a machine to be perfect, and that it would learn. Yet Strawser believes that no matter how complicated a machine becomes, it will never truly be able to act for the right reasons the way a human would. This plays into a branch of AI known as machine learning. According to McKinsey & Company, "Machine learning is based on algorithms that can learn from data without relying on rules-based programming," meaning computers have the ability to learn without being explicitly programmed to do so (Pyle and San Jose). Despite machines being taught to
act like humans without being specifically programmed to do so, experts believe that machines will
never have the moral capabilities that humans do. Strawser compares deploying machines that are
unable to act for good reasons to deploying human soldiers that we know to be psychopaths. He says,
If were comfortable deploying machines that cant act for good reasons, then we should be
Dragon 5
comfortable with deploying soldiers that we know to be psychopathic, even if theyre well-
behaved, (Lin). This idea of AI being used in autonomous weapons is exactly what the United
States should fear and regulate, because machines shouldnt have the capability to take a human
intelligence. There are several ways that AI could exceed human intelligence, beginning with
speed. Our brains' axons carry signals at 75 meters per second or slower, whereas a machine can pass signals along about 4 million times faster. Another is serial depth: the human brain cannot rapidly perform computations that require more than about 100 sequential steps. It relies on massively parallel computation, so more is possible when both parallel and deep serial computations can be performed, as they can on a machine (Muehlhauser). Machine intelligence has its perks, like speed and serial depth, but machines simply can't be taught to feel the way humans do. Machines simply can't act based on a notion of what is morally right or wrong. If machine intelligence surpasses human intelligence, it will cause serious problems for millions of lives.
This future where machines have the capability to surpass humans is not nearly as far off as people think. Ray Kurzweil, an inventor and futurist, believes that in less than 15 years machine intelligence will be on par with human intelligence (Sreenivasan). If the U.S. does not regulate AI, it will continue to advance past the point of human intervention. This could lead to any number of things. If researchers continue to teach machines to think for themselves, the machines actually will learn to think for themselves, and may stop listening to their human counterparts. Stephen Hawking has warned against advanced artificial intelligence because he believes it might overtake and replace humans. Hawking said in
an interview in late 2014, "The development of artificial intelligence could spell the end of the
human race. It would take off on its own, and redesign itself at an ever-increasing rate. Humans,
who are limited by slow biological evolution, couldn't compete, and would be superseded" (Price). Stephen Hawking is not the only world-renowned scientist to call for regulations on
Artificial Intelligence. Other big name scientists that fear the advancement of AI are Elon Musk
and Bill Gates. Bill Gates has previously said, "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern" (Sofge). Bill Gates is a big name in the computer science industry, and
has repeatedly stated that he agrees with Elon Musk when it comes to fearing AI advancements
and superintelligence. All three of these scientists have one thing in common. They do not
believe the U.S. should stop researching artificial intelligence, rather they believe the U.S. should
regulate how far the advancements go to make sure they remain beneficial to humanity.
Some assume that machines can only act on the data humans give them, but this is not the case. Scientists are very close to outfitting AI machines and robots with algorithms that will allow them to make decisions and analyze situations without data to go on. Once you teach a machine to learn on its own, there is no going back. If a machine decides to ignore certain data or criteria in a situation, there is nothing humans can do. As AI safety researcher Roman Yampolskiy explains, even a small "probability of existential risk becomes very impactful once multiplied by all the people it will affect. Nothing could be more important than avoiding the extermination of humanity" (Conn). The fact that machines can learn on their own makes the possibility of them becoming superintelligent that much more frightening. This is why so many people call for regulations to be made. People want AI to continue to make groundbreaking advancements, but only those advancements that remain beneficial to humanity.
According to the New York Times, each robot added to the industrial labor force will cost
as many as six human workers their jobs. Artificial Intelligence is making more advancements
now than ever before, and once we start pressing further into this research there will be no
turning back. AI will evolve very rapidly over the next couple of decades, but just how much can humans risk before it harms millions of Americans? The economy would take a huge hit if the U.S. does not choose to regulate AI before it is too late.
Unemployment would skyrocket and the inequality gap would increase beyond repair, leaving
millions devastated. AI will also continue to push the limits of the defense industry with fully
autonomous weapons. These weapons raise ethical and moral questions about everything the U.S. does to defend itself, because a nation cannot have a computer targeting and executing foreign enemies at its own discretion. There is also the
possibility of superintelligence, a form of AI, surpassing human intelligence. This could cause
major problems for the country because millions would be affected, which is why this fear is a
reality to so many people. The United States needs to regulate advancements being made in
Artificial Intelligence, so researchers can still make great strides in this field while remaining
beneficial to humanity. Bradley J. Strawser makes a great point: "Even if we are fallible decision makers with flawed consciences, it could be that simply grappling with difficult moral decisions is one of the things that makes our lives valuable and meaningful" (Lin).
Works Cited
Conn, Ariel. "Can We Properly Prepare for the Risks of Superintelligent AI?" Future of Life
Dowd, Maureen. "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse." The Hive.
Kharpal, Arjun. "AI could boost productivity but increase wealth inequality, the White House
Lin, Patrick. "Killer Robots: New Reasons to Worry About Ethics." Forbes. Forbes Magazine,
Metz, Cade. "The AI Threat Isn't Skynet. It's the End of the Middle Class." Wired. Conde Nast,
Muehlhauser, Luke. Opposing Viewpoints in Context, link.galegroup.com/apps/doc/EJ3010899218/OVIC?u=down32095&xid=3ebcd866. Accessed 10 Apr. 2017. Originally published in Facing the Intelligence Explosion, 2013.
Price, Rob. "Stephen Hawking: Automation and AI is going to decimate middle class jobs."
Pyle, Dorian, and Cristina San Jose. "An executive's guide to machine learning." McKinsey &
Sofge, Eric. "Bill Gates Fears A.I., But A.I. Researchers Know Better." Popular Science. Popular
Sreenivasan, Hari. "How smart is today's artificial intelligence?" PBS. Public Broadcasting
Williams-Grut, Oscar. "Robots will steal your job: How AI could increase unemployment and
inequality." Business Insider. Business Insider, 15 Feb. 2016. Web. 04 Apr. 2017.