0% found this document useful (0 votes)
41 views

Ai Tech

Uploaded by

bigwill35
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
41 views

Ai Tech

Uploaded by

bigwill35
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 103

by Patrick Caughill

Choosing Sides

If tech experts are to be believed, artificial


intelligence (AI) has the potential to transform the
world.

But those same experts don't agree on what kind of


effect that transformation will have on the average
person. Some believe that humans will be much
better off in the hands of advanced AI systems,
while others think it will lead to our inevitable
downfall.

How could a single technology evoke such vastly


different responses from people within the tech
community?

Artificial intelligence is software built to learn or


problem solve - processes typically performed
in the human brain.

Digital assistants like Amazon's Alexa and Apple's


Siri, along with Tesla's Autopilot, are all powered
by AI. Some forms of AI can even create visual
art or write songs.

There's little question that AI has the potential to be


revolutionary. Automation could transform the way
we work by replacing humans with machines and
software. Further developments in the area of self-
driving cars are poised to make driving a thing of
the past.

Artificially intelligent shopping assistants could


even change the way we shop.

Humans have always controlled these aspects of


our lives, so it makes sense to be a bit wary of
letting an artificial system take over.
Image credit:
Silver Blue/Flickr

The Lay Of The Land

AI is fast becoming a major economic force.

According to a paper from the McKinsey Global


Institute Study reported by Forbes, in 2016 alone,
between $8 billion and $12 billion was invested in
the development of AI worldwide.

A report from analysts with Goldstein


Research predicts that, by 2023, AI will be a $14
billion industry.

KR Sanjiv, chief technology officer at Wipro,


believes that companies in fields as disparate as
healthcare and finance are investing so much in AI
so quickly because they fear being left behind.
"So as with all things strange and new, the
prevailing wisdom is that the risk of being
left behind is far greater, and far grimmer,
than the benefits of playing it safe," he
wrote in an op-ed published in Tech
Crunch last year.
Games provide a useful window into the increasing
sophistication of AI.

Case in point, developers such


as Google's DeepMind and Elon
Musk's OpenAI have been using games to teach AI
systems how to learn.

So far, these systems have bested the world's


greatest players of the ancient strategy game Go,
and even more complex games like Super Smash
Bros and DOTA 2.

On the surface, these victories may sound


incremental and minor - AI that can play Go can't
navigate a self-driving car, after all. But on a
deeper level, these developments are indicative of
the more sophisticated AI systems of the future.

Through these games, AI become capable


of complex decision-making that could one day
translate into real-world tasks.

Software that can play infinitely complex games


like Starcraft, could, with a lot more research and
development, autonomously perform surgeries or
process multi-step voice commands.

When this happens, AI will become incredibly


sophisticated. And this is where the worrying starts.

AI Anxiety

Wariness surrounding powerful technological


advances is not novel. Various science fiction
stories, from The Matrix to "I, Robot", have
exploited viewers' anxiety around AI.

Many such plots center around a concept called


"the Singularity," the moment in which AIs become
more intelligent than their human creators. The
scenarios differ, but they often end with the total
eradication of the human race, or with machine
overlords subjugating people.

Several world-renowned sciences and tech


experts have been vocal about their fears of AI.
Theoretical physicist Stephen Hawking famously
worries that advanced AI will take over the
world and end the human race.
If robots become smarter than humans, his logic
goes, the machines would be able to create
unimaginable weapons and manipulate human
leaders with ease.
"It would take off on its own, and redesign
itself at an ever-increasing rate," he
told the BBC in 2014. "Humans, who are
limited by slow biological evolution,
couldn't compete, and would be
superseded."
Elon Musk, the futurist CEO of ventures such as
Tesla and SpaceX, echoes those sentiments,
calling AI,
"…a fundamental risk to the existence of
human civilization," at the 2017 National
Governors Association Summer Meeting.
Neither Musk nor Hawking believe that developers
should avoid the development of AI, but they agree
that government regulation should ensure the tech
does not go rogue.
"Normally, the way regulations are set up is
a whole bunch of bad things happen,
there's a public outcry, and after many
years, a regulatory agency is set up to
regulate that industry," Musk said during
the same NGA talk.
"It takes forever. That, in the past, has
been bad, but not something which
represented a fundamental risk to the
existence of civilization."
Hawking believes that a global governing
body needs to regulate the development of AI to
prevent a particular nation from becoming
superior.

Russian President Vladimir Putin recently stoked


this fear at a meeting with Russian students in
early September, when he said,
"The one who becomes the leader in this
sphere will be the ruler of the world."
These comments further emboldened Musk's
position - he tweeted that the race for AI superiority
is the,
"most likely cause of WW3."
Musk has taken steps to combat this perceived
threat. He, along with startup guru Sam Altman, co-
founded the non-profit OpenAI in order to guide AI
development towards innovations that benefit all of
humanity.

According to the company's mission statement:


"By being at the forefront of the field, we
can influence the conditions under which
AGI is created."
Musk also founded a company called Neuralink
intended to create a brain-computer interface.

Linking the brain to a computer would, in theory,


augment the brain's processing power to keep
pace with AI systems. Other predictions are less
optimistic.

Seth Shostak, the senior astronomer at SETI


believes that AI will succeed humans as the most
intelligent entities on the planet.
"The first generation [of AI] is just going to
do what you tell them; however, by the
third generation, then they will have their
own agenda," Shostak said in an interview
with Futurism.
However, Shostak doesn't believe sophisticated AI
will end up enslaving the human race - instead, he
predicts, humans will simply become immaterial to
these hyper-intelligent machines.

Shostak thinks that these machines will exist on an


intellectual plane so far above humans that, at
worst, we will be nothing more than a tolerable
nuisance.

Image source: Max Pixel

Fear Not

Not everyone believes the rise of AI will be


detrimental to humans; some are convinced that
the technology has the potential to make our
lives better.
"The so-called control problem that Elon is
worried about isn't something that people
should feel is imminent. We shouldn't panic
about it," Microsoft founder and
philanthropist Bill Gates recently told
the Wall Street Journal.
Facebook's Mark Zuckerberg went even further
during a Facebook Live broadcast back in July,
saying that Musk's comments were,
"pretty irresponsible."
Zuckerberg is optimistic about what AI will enable
us to accomplish and thinks that these
unsubstantiated doomsday scenarios are nothing
more than fear-mongering.

Some experts predict that AI could enhance our


humanity. In 2010, Swiss neuroscientist Pascal
Kaufmann founded Starmind, a company that
plans to use self-learning algorithms to create a
"superorganism" made of thousands of experts'
brains.
"A lot of AI alarmists do not actually work in
AI. [Their] fear goes back to that incorrect
correlation between how computers work
and how the brain functions," Kaufmann
told Futurism.
Kaufmann believes that this basic lack of
understanding leads to predictions that may make
good movies, but do not say anything about
our future reality.
"When we start comparing how the brain
works to how computers work, we
immediately go off track in tackling the
principles of the brain," he said.

"We must first understand the concepts of


how the brain works and then we can apply
that knowledge to AI development."
Better understanding of our own brains would not
only lead to AI sophisticated enough to rival human
intelligence, but also to better brain-computer
interfaces to enable a dialogue between the two.

To Kaufmann, AI, like many technological


advances that came before, isn't without risk.
"There are dangers which come with the
creation of such powerful and omniscient
technology, just as there are dangers with
anything that is powerful.

This does not mean we should assume the


worst and make potentially detrimental
decisions now based on that fear," he said.
Experts expressed similar concerns about quantum
computers, and about lasers and nuclear weapons
- applications for that technology can be both
harmful and helpful.

Definite Disrupter

Predicting the future is a delicate game.

We can only rely on our predictions of what we


already have, and yet it's impossible to rule
anything out. We don't yet know whether AI will
usher in a golden age of human existence, or if it
will all end in the destruction of everything humans
cherish.

What is clear, though, is that thanks to AI, the world


of the future could bear little resemblance to the
one we inhabit today...
by The AI Organization
AI, U.S., China,
Big Tech, Facial Recognition,
Drones, Smart Phones, IoT, 5G,
Robotics, Cybernetics,
and Bio-Digital Social Programming...

What are the inter-connections between AI, U.S,


China, Big Tech and the worlds use of Facial
Recognition, Bio-Metrics, Drones, Smart Phones,
Smart Cities, IoT, VR, Mixed Reality, 5G, Robotics,
Cybernetics, and Bio-Digital Social Programming?

We will cover present, emerging and future threats


of Artificial Intelligence with Big Tech, including
technology that can be used for assassination or to
control humanities ability to have free formed
thoughts without AI Bio-Digital Social
Programming.

The book will cover Cyborgs, Super Intelligence


and how it can form, and in what ways it can travel
undetected through The AI Global Network as it
connects with the Internet and the Human Bio-
Digital Network.
Companies such as,
 Huawei

 Facebook

 Megvii Face++

 Google,

...will be discussed.

Over 50 entities will be explained and their


interconnection with China.

China, through Huawei is laying the foundation to


deploy AI, Machines, Robotics via the 5G network.
They can enslave humankind through an Orwellian
Surveillance State.

This book (Artificial Intelligence - Dangers to


Humanity) takes you in a simple way to understand
what is Artificial Intelligence, and step by step, it
takes the average reader through a process to
understand very difficult concepts in a simplistic
way.

Every human being has the same brain, and same


capacity to access abilities to think deep, and have
insights that can better our world, in a safe way.
The AI Organization hopes, the common person
understands the coming age of AI, Robotics and
5G, and the dangers it poses as well as the
positives.

We also hope, scientists and big tech take one step


back and think to innovate AI, in a more
responsible fashion using an algorithm that takes
every possible angle into consideration, to
safeguard life.

We will discuss what type of risk management and


the components of this algorithm in the book, as
well as the cultural aspects of AI.

This AI book is meant to safeguard humanities


interest, and we hope it receives acceptance from
all people, may you be liberal, conservative,
religious, atheist, the government, media or just a
scientist doing what you cherish....
by Pawel Sysiak
July 27, 2016

About

This essay, originally published in eight short parts,


aims to condense the current knowledge on
Artificial Intelligence.

It explores the state of AI development, overviews


its challenges and dangers, features work by the
most significant scientists, and describes the main
predictions of possible AI outcomes. This project is
an adaptation and major shortening of the two-part
essay AI Revolution by Tim Urban of Wait But
Why.

I shortened it by a factor of 3, recreated all images,


and tweaked it a bit.

Read more on why/how I wrote it here.


Introduction

Assuming that human scientific activity continues


without major disruptions, artificial intelligence may
become either the most positive transformation of
our history or, as many fear, our most dangerous
invention of all.

AI research is on a steady path to develop a


computer that has cognitive abilities equal to the
human brain, most likely within three decades
(timeline in chapter 5).

From what most AI scientists predict, this invention


may enable very rapid improvements (called fast
take-off), toward something much more powerful -
Artificial Super Intelligence - an entity smarter than
all of humanity combined (more on ASI in chapter
3).

We are not talking about some imaginary future.


The first level of AI development is gradually
appearing in the technology we use everyday. With
every coming year these advancements will
accelerate and the technology will become more
complex, addictive, and ubiquitous.

We will continue to outsource more and more kinds


of mental work to computers, disrupting every part
of our reality: the way we organize ourselves and
our work, form communities, and experience the
world.

Exponential Growth

The Guiding Principle Behind Technological


Progress

To more intuitively grasp the guiding principles of


AI revolution, let's first step away from scientific
research.

Let me invite you to take part in a story.

Imagine that you've received a time machine and


been given a quest to bring somebody from the
past. The goal is to shock them by showing them
the technological and cultural advancements of our
time, to such a degree that this person would
perform SAFD (Spinning Around From Disbelief).
So you wonder which era should you time-travel to,
and decide to hop back around 200 years.

You get to the early 1800s, retrieve a guy and bring


him back to 2016.

You,
"…walk him around and watch him react to
everything. It's impossible for us to
understand what it would be like for him to
see shiny capsules racing by on a
highway, talk to people who had been on
the other side of the ocean earlier in the
day, watch sports that were being played
1,000 miles away, hear a musical
performance that happened 50 years ago,
and play with …[a] magical wizard
rectangle that he could use to capture a
real-life image or record a living moment,
generate a map with a paranormal moving
blue dot that shows him where he is, look
at someone's face and chat with them
even though they're on the other side of
the country, and worlds of other
inconceivable sorcery." 1
It doesn't take much. After two minutes he is
SAFDing.

Now, both of you want to try the same thing, see


somebody Spinning Around From Disbelief, but in
your new friend's era. Since 200 years worked, you
jump back to the 1600s and bring a guy to the
1800s.

He's certainly genuinely interested in what he sees.


However, you feel it with confidence - SAFD will
never happen to him. You feel that you need to
jump back again, but somewhere radically further.

You settle on rewinding the clock 15,000 years, to


the times,
"…before the First Agricultural Revolution
gave rise to the first cities and the concept
of civilizations." 2
You bring someone from the hunter-gatherer world
and show him,
"…the vast human empires of 1750 with
their towering churches, their ocean-
crossing ships, their concept of being
'inside,' and their enormous mountain of
collective, accumulated human knowledge
and discovery" 3 - in forms of books. It
doesn't take much.
He is SAFDing in the first two minutes.

Now there are three of you, enormously excited to


do it again.

You know that it doesn't make sense to go back


another 15,000, 30,000 or 45,000 years. You have
to jump back, again, radically further. So you pick
up a guy from 100,000 years ago and you walk
with him into large tribes with organized,
sophisticated social hierarchies.

He encounters a variety of hunting weapons,


sophisticated tools, sees fire and for the first time
experiences language in the form of signs and
sounds. You get the idea, it has to be immensely
mind-blowing.

He is SAFDing after two minutes.

So what happened? Why did the last guy had to


hop → 100,000 years, the next one → 15,000
years, and the guy who was hopping to our times
only → 200 years?
"This happens because more advanced
societies have the ability to progress at a
faster rate than less advanced societies -
because they're more advanced. [1800s]
humanity knew more and had better
technology…" 4,
...so it's no wonder they could make further
advancements than humanity from 15,000 years
ago.

The time to achieve SAFD shrank from ~100,000


years to ~200 years and if we look into the future it
will rapidly shrink even further.

Ray Kurzweil, AI expert and scientist, predicts that


a,
"…20th century's worth of progress
happened between 2000 and 2014 and
that another 20th century's worth of
progress will happen by 2021, in only
seven years 5…

A couple decades later, he believes a 20th


century's worth of progress will happen
multiple times in the same year, and even
later, in less than one month 6…Kurzweil
believes that the 21st century will achieve
1,000 times the progress of the 20th
century." 7

"Logic also suggests that if the most


advanced species on a planet keeps
making larger and larger leaps forward at
an ever-faster rate, at some point, they'll
make a leap so great that it completely
alters life as they know it and the
perception they have of what it means to
be a human.

Kind of like how evolution kept making


great leaps toward intelligence until finally
it made such a large leap to the human
being that it completely altered what it
meant for any creature to live on planet
Earth.

And if you spend some time reading about


what's going on today in science and
technology, you start to see a lot of signs
quietly hinting that life as we currently
know it cannot withstand the leap that's
coming next." 8
he Road to Artificial General Intelligence

Building a Computer as Smart as Humans

Artificial Intelligence, or AI, is a broad term for the


advancement of intelligence in computers.

Despite varied opinions on this topic, most experts


agree that there are three categories, or calibers, of
AI development.

They are:
 ANI: Artificial Narrow Intelligence

1st intelligence caliber. "AI that


specializes in one area. There's AI that
can beat the world chess champion in
chess, but that's the only thing it
does." 9

 AGI: Artificial General Intelligence


2nd intelligence caliber. AI that
reaches and then passes the
intelligence level of a human, meaning
it has the ability to "reason, plan, solve
problems, think abstractly,
comprehend complex ideas, learn
quickly, and learn from experience." 10

 ASI: Artificial Super Intelligence


3rd intelligence caliber. AI that
achieves a level of intelligence smarter
than all of humanity combined -
"ranging from just a little smarter... to
one trillion times smarter." 11

Where are we currently?


"As of now, humans have conquered the
lowest caliber of AI - ANI - in many ways,
and it's everywhere:" 12
 "Cars are full of ANI systems, from the

computer that figures out when the


anti-lock brakes kick in, to the
computer that tunes the parameters of
the fuel injection systems." 13

 "Google search is one large ANI brain


with incredibly sophisticated methods
for ranking pages and figuring out what
to show you in particular. Same goes
for Facebook's Newsfeed." 14
 Email spam filters "start off loaded with
intelligence about how to figure out
what's spam and what's not, and then
it learns and tailors its intelligence to
your particular preferences." 15

 Passenger planes are flown almost


entirely by ANI, without the help of
humans.

 "Google's self-driving car, which is


being tested now, will contain robust
ANI systems that allow it to perceive
and react to the world around it." 16

 "Your phone is a little ANI factory... you


navigate using your map app, receive
tailored music recommendations from
Pandora, check tomorrow's weather,
talk to Siri." 17

 "The world's best Checkers, Chess,


Scrabble, Backgammon, and Othello
players are now all ANI systems." 18

 "Sophisticated ANI systems are widely


used in sectors and industries like
military, manufacturing, and finance
(algorithmic high-frequency AI traders
account for more than half of equity
shares traded on US markets 19)." 20
ANI systems as they are now aren't especially
scary.

At worst, a glitchy or badly-programmed ANI can


cause an isolated catastrophe like" 21 a plane
crash, a nuclear power plant malfunction, or,
"a financial markets disaster (like the 2010
Flash Crash when an ANI program reacted
the wrong way to an unexpected situation
and caused the stock market to briefly
plummet, taking $1 trillion of market value
with it, only part of which was recovered
when the mistake was corrected)...

But while ANI doesn't have the capability to


cause an existential threat, we should see
this increasingly large and complex
ecosystem of relatively-harmless ANI as a
precursor of the world-altering hurricane
that's on the way.
Each new ANI innovation quietly adds
another brick onto the road to AGI and
ASI." 22

This is how Google's self-driving car sees the world.


Image based on the video from Embedded Linux
Conference 2013 - KEYNOTE Google's Self Driving
Cars

What's Next? Challenges Behind Reaching AGI


"Nothing will make you appreciate human
intelligence like learning about how
unbelievably challenging it is to try to
create a computer as smart as we are…

Build a computer that can multiply ten-digit


numbers in a split second - incredibly
easy.

Build one that can look at a dog and


answer whether it's a dog or a cat -
spectacularly difficult. Make AI that can
beat any human in chess? Done. Make
one that can read a paragraph from a six-
year-old's picture book and not just
recognize the words but understand the
meaning of them?

Google is currently spending billions of


dollars trying to do it." 23
Why are "hard things - like calculus, financial
market strategy, and language translation... mind-
numbingly easy for a computer, while easy things -
like vision, motion, movement, and perception -
are insanely hard for it" 24?
"Things that seem easy to us are actually
unbelievably complicated.

They only seem easy because those skills


have been optimized in us (and most
animals) by hundreds of million years of
animal evolution.

When you reach your hand up toward an


object, the muscles, tendons, and bones in
your shoulder, elbow, and wrist instantly
perform a long series of physics
operations, in conjunction with your eyes,
to allow you to move your hand in a
straight line through three dimensions...
On the other hand, multiplying big numbers
or playing chess are new activities for
biological creatures and we haven't had
any time to evolve a proficiency at them, so
a computer doesn't need to work too hard
to beat us." 25
One fun example…

When you look at picture A,


"you and a computer both can figure out
that it's a rectangle with two distinct
shades, alternating. Tied so far." 26
Picture B.
"You have no problem giving a full
description of the various opaque and
translucent cylinders, slats, and 3-D
corners, but the computer would fail
miserably. It would describe what it sees -
a variety of two-dimensional shapes in
several different shades - which is actually
what's there." 27

"Your brain is doing a ton of fancy shit to


interpret the implied depth, shade-mixing,
and room lighting the picture is trying to
portray." 28
Looking at picture C,
"a computer sees a two-dimensional white,
black, and gray collage, while you easily
see what it really is" 29 - a photo of a girl
and a dog standing on a rocky shore.

"And everything we just mentioned is still


only taking in visual information and
processing it. To be human-level
intelligent, a computer would have to
understand things like the difference
between subtle facial expressions, the
distinction between being pleased, relieved
and content" 30.
How will computers reach even higher abilities like
complex reasoning, interpreting data, and
associating ideas from separate fields (domain-
general knowledge)?
"Building skyscrapers, putting humans in
space, figuring out the details of how the
Big Bang went down - all far easier than
understanding our own brain or how to
make something as cool as it.

As of now, the human brain is the most


complex object in the known universe." 31

Building Hardware

If an artificial intelligence is going to be as


intelligent as the human brain, one crucial thing has
to happen - the AI,
"needs to equal the brain's raw computing
capacity. One way to express this capacity
is in the total calculations per second the
brain could manage." 32
The challenge is that currently only a few of the
brain's regions are precisely measured.

However, Ray Kurzweil, has developed a method


for estimating the total cps of the human brain.
He arrived at this estimate by taking the cps from
one brain region and multiplying it proportionally to
the weight of that region, compared to the weight of
the whole brain.
"He did this a bunch of times with various
professional estimates of different regions,
and the total always arrived in the same
ballpark - around 1016, or 10 quadrillion
cps." 33

"Currently, the world's fastest


supercomputer, China's Tianhe-2, has
actually beaten that number, clocking in at
about 34 quadrillion cps." 34
But Tianhe-2 is also monstrous,
"taking up 720 square meters of space,
using 24 megawatts of power (the brain
runs on just 20 watts), and costing $390
million to build. Not especially applicable to
wide usage, or even most commercial or
industrial usage yet." 35

"Kurzweil suggests that we think about the


state of computers by looking at how many
cps you can buy for $1,000. When that
number reaches human-level - 10
quadrillion cps - then that'll mean AGI
could become a very real part of life." 36
Currently we're only at about 1010 (10 trillion) cps
per $1,000.

However, historically reliable Moore's Law states,


"that the world's maximum computing
power doubles approximately every two
years, meaning computer hardware
advancement, like general human
advancement through history, grows
exponentially 37… right on pace with this
graph's predicted trajectory:" 38

Visualization based on Ray Kurzweil's graph and


analysis from his book The Singularity is Near
This dynamic,
"puts us right on pace to get to an
affordable computer by 2025 that rivals the
power of the brain...

But raw computational power alone doesn't


make a computer generally intelligent - the
next question is, how do we bring human-
level intelligence to all that power?" 39
Building Software

The hardest part of creating AGI is learning how to


develop its software.
"The truth is, no one really knows how to
make it smart - we're still debating how to
make a computer human-level intelligent
and capable of knowing what a dog and a
weird-written B and a mediocre movie
is." 40
But there are a couple of strategies.

These are the three most common:

1. Copy how the brain works

The most straight-forward idea is


to plagiarize the brain, and build
the computer's architecture with
close resemblance to how a brain
is structured.

One example,
"is the artificial neural
network. It starts out as a
network of transistor
'neurons,' connected to
each other with inputs and
outputs, and it knows
nothing - like an infant
brain.

The way it 'learns' is it tries


to do a task, say
handwriting recognition,
and at first, its neural
firings and subsequent
guesses at deciphering
each letter will be
completely random.

But when it's told it got


something right, the
transistor connections in
the firing pathways that
happened to create that
answer are strengthened;
when it's told it was wrong,
those pathways'
connections are
weakened.

After a lot of this trial and


feedback, the network has,
by itself, formed smart
neural pathways and the
machine has become
optimized for the task." 41
The second, more radical
approach to plagiarism is whole
brain emulation.

Scientists take a real brain, cut it


into a large number of tiny slices to
look at the neural connections and
replicate them in a computer as
software. If that method is ever
successful, we will have,
"a computer officially
capable of everything the
brain is capable of - it
would just need to learn
and gather information...

How far are we from


achieving whole brain
emulation? Well so far,
we've just recently been
able to emulate a 1mm-
long flatworm brain, which
consists of just 302 total
neurons." 42
To put this into perspective, the
human brain consists of 86 billion
neurons linked by trillions of
synapses.

2. Introduce evolution to
computers
"The fact is, even if we can
emulate a brain, that might
be like trying to build an
airplane by copying a
bird's wing-flapping
motions - often, machines
are best designed using a
fresh, machine-oriented
approach, not by
mimicking biology
exactly." 43
If the brain is just too complex for
us to digitally replicate, we could
try to emulate evolution instead.
This uses a process called genetic
algorithms.
"A group of computers
would try to do tasks, and
the most successful ones
would be bred with each
other by having half of
each of their programming
merged together into a
new computer.

The less successful ones


would be eliminated." 44
Speed and a goal-oriented
approach are the advantages that
artificial evolution has over
biological evolution.
"Over many, many
iterations, this natural
selection process would
produce better and better
computers.

The challenge would be


creating an automated
evaluation and breeding
cycle so this evolution
process could run on its
own." 45

3. "Make this whole thing the


46
computer's problem, not ours"

The last concept is the simplest,


but probably the scariest of them
all.
"We'd build a computer
whose two major skills
would be doing research
on AI and coding changes
into itself - allowing it to
not only learn but to
improve its own
architecture.

We'd teach computers to


be computer scientists so
they could bootstrap their
own development." 47
This is the likeliest way to get AGI
soon that we know of.
All these software advances may
seem slow or a little bit intangible,
but as it is with the sciences, one
minor innovation can suddenly
accelerate the pace of
developments.

Kind of like the aftermath of the


Copernican revolution - the
discovery that suddenly made all
the complicated mathematics of
the planets' trajectories much
easier to calculate, which enabled
a multitude of other innovations.

Also, the,
"exponential growth is
intense and what seems
like a snail's pace of
advancement can quickly
race upwards." 48

Visualization based on based on the graph


from Mother Jones "Welcome, Robot Overlords -
Please Don't Fire Us?"
The Road to Artificial Super Intelligence

An Entity Smarter than all of Humanity


Combined

It's very real that at some point we will achieve


AGI:
software that has achieved human-level, or
beyond human-level, intelligence.
Does this mean that at that very moment the
computers will be equally capable as us? Actually,
not at all - computers will be way more efficient.

Because of the fact that they are electronic, they


will have following advantages:

 Speed
"The brain's neurons max out at
around 200 Hz, while today's
microprocessors... run at 2 GHz, or 10
million times faster." 51

 Memory
Forgetting or confusing things is much
harder in an artificial world. Computers
can memorize more things in one
second than a human can in ten years.
A computer's memory is also more
precise and has a much greater
storage capacity.

 Performance
"Computer transistors are more
accurate than biological neurons, and
they're less likely to deteriorate (and
can be repaired or replaced if they do).
Human brains also get fatigued easily,
while computers can run nonstop, at
peak performance, 24/7." 52

 Collective capability
Group work is ridiculously challenging
because of time-consuming
communication and complex social
hierarchies. The bigger the group gets,
the slower the output of each person
becomes.

AI, on the other hand, isn't biologically


constrained to one body, won't have
human cooperation problems, and is
able to synchronize and update its own
operating system.
Intelligence Explosion

We need to realize that AI,


"wouldn't see 'human-level intelligence' as
some important milestone - it's only a
relevant marker from our point of view -
and wouldn't have any reason to 'stop' at
our level.

And given the advantages over us that


even human intelligence-equivalent AGI
would have, it's pretty obvious that it would
only hit human intelligence for a brief
instant before racing onwards to the realm
of superior-to-human intelligence." 53
The true distinction between humans and ASI
wouldn't be its advantage in intelligence speed,
but,
"in intelligence quality - which is something
completely different.

What makes humans so much more


intellectually capable than chimps isn't a
difference in thinking speed - it's that
human brains contain a number of
sophisticated cognitive modules that
enable things like complex linguistic
representations or long-term planning or
abstract reasoning, that chimps' brains do
not have.

Speeding up a chimp's brain by thousands


of times wouldn't bring him to our level -
even with a decade's time of learning, he
wouldn't be able to figure out how to...
" 54 assemble a semi-complicated Lego
model by looking at its manual - something
a young human could achieve in a few
minutes.

"There are worlds of human cognitive


function a chimp will simply never be
capable of, no matter how much time he
spends trying." 55

"And in the scheme of the biological


intelligence range... the chimp-to-human
quality intelligence gap is tiny." 56
In order to render how big a deal it would be to
exist with something that has a higher quality of
intelligence than us, we need to imagine AI on the
intelligence staircase two steps above us:
"its increased cognitive ability over us
would be as vast as the chimp-human
gap... And like the chimp's incapacity to
ever absorb …" 57 what kind of magic
happens in the mechanism of a doorknob -
"we will never be able to even comprehend
the things... [a machine of that intelligence]
can do, even if the machine tried to explain
them to us... And that's only two steps
above us." 58

"A machine on the second-to-highest step


on that staircase would be to us as we are
to ants." 59

"Superintelligence of that magnitude is not


something we can remotely grasp, any
more than a bumblebee can wrap its head
around Keynesian Economics. In our
world, smart means a 130 IQ and stupid
means an 85 IQ - we don't have a word for
an IQ of 12,952." 60

"But the kind of superintelligence we're


talking about today is something far
beyond anything on this staircase. In an
intelligence explosion - where the smarter
a machine gets, the quicker it's able to
increase its own intelligence - a machine
might take years to rise from... " 61,
...the intelligence of an ant to the intelligence of the
average human, but it might take only another 40
days to become Einstein-smart.
When that happens,
"it works to improve its intelligence, with an
Einstein-level intellect, it has an easier time
and can make bigger leaps. These leaps
will make it much smarter than any human,
allowing it to make even bigger leaps." 62
From then on, following the rule of exponential
advancements and utilizing the speed and
efficiency of electrical circuits, it may perhaps take
only 20 minutes to jump another step,
"and by the time it's ten steps above us, it
might be jumping up in four-step leaps
every second that goes by.

Which is why we need to realize that it's


distinctly possible that very shortly after the
big news story about the first machine
reaching human-level AGI, we might be
facing the reality of coexisting on the Earth
with something that's here on the staircase
(or maybe a million times higher):" 63
"And since we just established that it's a
hopeless activity to try to understand the
power of a machine only two steps above
us, let's very concretely state once and for
all that there is no way to know what ASI
will do or what the consequences will be
for us.

Anyone who pretends otherwise doesn't


understand what superintelligence
means." 64
"If our meager brains were able to invent
Wi-Fi, then something 100 or 1,000 or 1
billion times smarter than we are should
have no problem controlling the positioning
of each and every atom in the world in any
way it likes, at any time - everything we
consider magic, every power we imagine a
supreme God to have will be as mundane
an activity for the ASI as flipping on a light
switch is for us." 65

"As far as we're concerned, if an ASI


comes into being, there is now an
omnipotent God on Earth - and the all-
important question for us is:
Will it be a good god?" 66
Let's start from the brighter side of the story.

How Can ASI Change our World?


Speculations on Two Revolutionary Technologies
Nanotechnology

Nanotechnology is an idea that comes up "in


almost everything you read about the future of
AI." 67

It's the technology that works at the nano scale -


from 1 to 100 nanometers.
"A nanometer is a millionth of a millimeter.
1 nm-100 nm range encompasses viruses
(100 nm across), DNA (10 nm wide), and
things as small as molecules like
hemoglobin (5 nm) and medium molecules
like glucose (1 nm).

If/when we conquer nanotechnology, the


next step will be the ability to manipulate
individual atoms, which are only one order
of magnitude smaller (~.1 nm)." 68
To put this into perspective, imagine a very tall
human standing on the earth, with a head that
reaches the International Space Station (431
km/268 mi high).
The giant is reaching down with his hand (30 km/19
mi across) to build,
"objects using materials between the size
of a grain of sand [.25 mm] and an eyeball
[2.5 cm]." 69

"Once we get nanotechnology down, we can


use it to make tech devices, clothing, food, a
variety of bio-related products - artificial blood
cells, tiny virus or cancer-cell destroyers,
muscle tissue, etc. - anything really.

And in a world that uses nanotechnology, the


cost of a material is no longer tied to its
scarcity or the difficulty of its manufacturing
process, but instead determined by how
complicated its atomic structure is.

In a nanotech world, a diamond might be


cheaper than a pencil eraser." 70
One of the proposed methods of nanotech assembly is to
make,
"one that could self-replicate, and then let the
reproduction process turn that one into two,
those two then turn into four, four into eight,
and in about a day, there'd be a few trillion of
them ready to go." 71
But what if this process goes wrong or terrorists manage
to get a hold of the technology?

Let's imagine a scenario where nanobots,


"would be designed to consume any carbon-
based material in order to feed the replication
process, and unpleasantly, all life is carbon-
based. The Earth's biomass contains about
1045 carbon atoms.
A nanobot would consist of about 106 carbon
atoms, so it would take 1039 nanobots to
consume all life on Earth, which would happen
in 130 replications....

Scientists think a nanobot could replicate in


about 100 seconds, meaning this simple
mistake would inconveniently end all life on
Earth in 3.5 hours." 72
We are not yet capable of harnessing nanotechnology -
for good or for bad.
"And it's not clear if we're underestimating, or
overestimating, how hard it will be to get there.
But we don't seem to be that far away.
Kurzweil predicts that we'll get there by the
2020s. 73

Governments know that nanotech could be an


Earth-shaking development... The US, the EU,
and Japan 74 have invested over a combined $5
billion so far" 75

Immortality
"Because everyone has always died, we live
under the assumption... that death is inevitable.
We think of aging like time - both keep
moving and there's nothing you can do to stop
it." 76
For centuries, poets and philosophers have wondered if
consciousness doesn't have to go the way of the body.

W.B. Yeats describes us as,


"a soul fastened to a dying animal." 77
Richard Feynman, Nobel awarded physicists, views
death from a purely scientific standpoint:
"It is one of the most remarkable things that in
all of the biological sciences there is no clue as
to the necessity of death.

If you say we want to make perpetual motion,


we have discovered enough laws as we studied
physics to see that it is either absolutely
impossible or else the laws are wrong.

But there is nothing in biology yet found that


indicates the inevitability of death.

This suggests to me that it is not at all


inevitable, and that it is only a matter of time
before the biologists discover what it is that is
causing us the trouble and that that terrible
universal disease or temporariness of the
human's body will be cured." 78

Theory of great species attractors

When we look at the history of biological life on earth,


so far 99.9% of species have gone extinct.

Nick Bostrom, Oxford professor and AI specialist,


"calls extinction an attractor state - a place
species are... falling into and from which no
species ever returns." 79
"And while most AI scientists... acknowledge
that ASI would have the ability to send humans
to extinction, many also believe that if used
beneficially, ASI's abilities could be used to
bring individual humans, and the species as a
whole, to a second attractor state - species
immortality." 80
"Evolution had no good reason to extend our
life-spans any longer than they are now... From
an evolutionary point of view, the whole
human species can thrive with a 30+ year
lifespan" for each single human.

It's long enough to reproduce and raise


children... so there's no reason for mutations
toward unusually long life being favored in the
natural selection process." 81
Though,
"if you perfectly repaired or replaced a car's
parts whenever one of them began to wear
down, the car would run forever. The human
body isn't any different - just far more
complex...

This seems absurd - but the body is just a


bunch of atoms…" 82,
...making up organically programmed DNA, which it is
theoretically possible to manipulate.

And something as powerful as ASI could help us master


genetic engineering.

Ray Kurzweil believes that,


"artificial materials will be integrated into the
body more and more... Organs could be
replaced by super-advanced machine versions
that would run forever and never fail." 83
Red blood cells could be perfected by,
"red blood cell nanobots, who could power
their own movement, eliminating the need for a
heart at all...

Nanotech theorist Robert A. Freitas has already


designed blood cell replacements that, if one
day implemented in the body, would allow a
human to sprint for 15 minutes without taking a
breath...

[Kurzweil] even gets to the brain and believes


we'll enhance our mental activities to the point
where humans will be able to think billions of
times faster" 84 by integrating electrical
components and being able to access online
data.

"Eventually, Kurzweil believes humans will


reach a point when they're entirely artificial, a
time when we'll look back at biological
material and think how unbelievably primitive
it was that humans were ever made of
that 85 and that humans aged, suffered from
cancer, allowed random factors like microbes,
diseases, accidents to harm us or make us
disappear."
When Will The First Machine Become Super-
intelligent?

Predictions from Top AI Experts

How long until the first machine reaches


superintelligence? Not shockingly, opinions vary wildly,
and this is a heated debate among scientists and thinkers.

Many, like professor Vernor Vinge, scientist Ben


Goertzel, Sun Microsystems co-founder Bill Joy, or,
most famously, inventor and futurist Ray Kurzweil,
agree with machine learning expert Jeremy Howard
when he puts up this graph during a TED Talk:
Graph by Jeremy Howard from his TED talk
"The wonderful and terrifying implications of
computers that can learn."
"Those people subscribe to the belief that this is
happening soon - that exponential growth is at
work and machine learning, though only slowly
creeping up on us now, will blow right past us
within the next few decades.

"Others, like Microsoft co-founder Paul Allen,


research psychologist Gary Marcus, NYU
computer scientist Ernest Davis, and tech
entrepreneur Mitch Kapor, believe that thinkers
like Kurzweil are vastly underestimating the
magnitude of the challenge [and the transition
will actually take much more time] …
"The Kurzweil camp would counter that the
only underestimating that's happening is the
under-appreciation of exponential growth, and
they'd compare the doubters to those who
looked at the slow-growing seedling of the
internet in 1985 and argued that there was no
way it would amount to anything impactful in
the near future.

"The doubters might argue back that the


progress needed to make advancements in
intelligence also grows exponentially harder
with each subsequent step, which will cancel
out the typical exponential nature of
technological progress. And so on.

"A third camp, which includes Nick Bostrom,


believes neither group has any ground to feel
certain about the timeline and acknowledges
both A) that this could absolutely happen in the
near future and B) that there's no guarantee
about that; it could also take a much longer
time.

"Still others, like philosopher Hubert Dreyfus,


believe all three of these groups are naive for
believing that there is potential of ASI, arguing
that it's more likely that it won't actually ever
be achieved.

"So what do you get when you put all of these


opinions together?" 86

Timeline for Artificial General Intelligence


"In 2013, Vincent C. Müller and Nick Bostrom
conducted a survey that asked hundreds of AI
experts... the following:" 87

"For the purposes of this question, assume that


human scientific activity continues without
major negative disruption. By what year would
you see a (10% / 50% / 90%) probability for
such Human-Level Machine Intelligence [or
what we call AGI] to exist?" 88
The survey,
"asked them to name an optimistic year (one in
which they believe there's a 10% chance we'll
have AGI), a realistic guess (a year they believe
there's a 50% chance of AGI - i.e. after that
year they think it's more likely than not that
we'll have AGI), and a safe guess (the earliest
year by which they can say with 90% certainty
we'll have AGI).

Gathered together as one data set, here were the


results:
Median optimistic year (10%
likelihood) → 2022
Median realistic year (50% likelihood)
→ 2040
Median pessimistic year (90%
likelihood) → 2075
"So the median participant thinks it's more
likely than not that we'll have AGI 25 years
from now.

The 90% median answer of 2075 means that if


you're a teenager right now, the median
respondent, along with over half of the group of
AI experts, is almost certain AGI will happen
within your lifetime.
"A separate study, conducted recently by
author James Barrat at Ben Goertzel's annual
AGI Conference, did away with percentages
and simply asked when participants thought
AGI would be achieved - by 2030, by 2050, by
2100, after 2100, or never.
The results: 89
42% of respondents → By 2030
25% of respondents → By 2050
20% of respondents → By 2100
10% of respondents → After 2100
2% of respondents → Never
"Pretty similar to Müller and Bostrom's
outcomes. In Barrat's survey, over two thirds of
participants believe AGI will be here by 2050
and a little less than half predict AGI within the
next 15 years.

Also striking is that only 2% of those surveyed


don't think AGI is part of our future." 90

Timeline for Artificial Super Intelligence


"Müller and Bostrom also asked the experts
how likely they think it is that we'll reach ASI:
A), within two years of reaching AGI (i.e. an
almost-immediate intelligence explosion), and
B), within 30 years." 91
Respondents were asked to choose a probability for each
option. Here are the results: 92
AGI-ASI transition in 2 years → 10%
likelihood
AGI-ASI transition in 30 years →
75% likelihood
"The median answer put a rapid (2 year) AGI-
ASI transition at only a 10% likelihood, but a
longer transition of 30 years or less at a 75%
likelihood.

We don't know from this data the length of this


transition [AGI-ASI] the median participant
would have put at a 50% likelihood, but for
ballpark purposes, based on the two answers
above, let's estimate that they'd have said 20
years.

"So the median opinion - the one right in the


center of the world of AI experts - believes the
most realistic guess for when we'll hit ASI... is
[the 2040 prediction for AGI + our estimated
prediction of a 20-year transition from AGI to
ASI] = 2060.
"Of course, all of the above statistics are
speculative, and they're only representative of
the median opinion of the AI expert
community, but it tells us that a large portion of
the people who know the most about this topic
would agree that 2060 is a very reasonable
estimate for the arrival of potentially world-
altering ASI.

Only 45 years from now" 93


AI Outcomes

Two Main Groups of AI Scientists with Two


Radically Opposed Conclusions

1 - The Confident Corner

Most of what we have discussed so far


represents a surprisingly large group of
scientists that share optimistic views on the
outcome of AI development.
"Where their confidence comes from
is up for debate. Critics believe it
comes from an excitement so blinding
that they simply ignore or deny
potential negative outcomes.

But the believers say it's naive to


conjure up doomsday scenarios when
on balance, technology has and will
likely end up continuing to help us a
lot more than it hurts us." 94
Peter Diamandis, Ben Goertezl and Ray
Kurzweil are some of the major figures of this
group, who have built a vast, dedicated
following and regard themselves
as Singularitarians.

CC photo by J.D. Lasica

Let's talk about Ray Kurzweil, who is probably


one of the most impressive and polarizing AI
theoreticians out there.

He attracts both,
"godlike worship... and eye-rolling
contempt... He came up with several
breakthrough inventions, including the
first flatbed scanner, the first scanner
that converted text to speech (allowing
the blind to read standard texts), the
well-known Kurzweil music
synthesizer (the first true electric
piano), and the first commercially
marketed large-vocabulary speech
recognition.

He's well-known for his bold


predictions," 95 including envisioning
that intelligence technology like Deep
Blue would be capable of beating a
chess grandmaster by 1998.
He also anticipated,
"in the late '80s, a time when the
internet was an obscure thing, that by
the early 2000s it would become a
global phenomenon." 96
Out,
"of the 147 predictions that Kurzweil
has made since the 1990's, fully 115 of
them have turned out to be correct,
and another 12 have turned out to be
'essentially correct' (off by a year or
two), giving his predictions a stunning
86% accuracy rate" 97.
"He's the author of five national
bestselling books...

In 2012, Google co-founder Larry


Page approached Kurzweil and asked
him to be Google's Director of
Engineering. In 2011, he co-
founded Singularity University, which
is hosted by NASA and sponsored
partially by Google. Not bad for one
life." 98
His biography is important, because if you
don't have this context, he sounds like
somebody who's completely lost his senses.
"Kurzweil believes computers will
reach AGI by 2029 and that by 2045
we'll have not only ASI, but a full-
blown new world - a time he calls the
singularity.

His AI-related timeline used to be


seen as outrageously overzealous, and
it still is by many, but in the last 15
years, the rapid advances of ANI
systems have brought the larger world
of AI experts much closer to
Kurzweil's timeline.

His predictions are still a bit more


ambitious than the median respondent
on Müller and Bostrom's survey (AGI
by 2040, ASI by 2060), but not by that
much." 99
2 - The Anxious Corner
"You will not be surprised to learn that
Kurzweil's ideas have attracted
significant criticism... For every expert
who fervently believes Kurzweil is
right on, there are probably three who
think he's way off...

[The surprising fact] is that most of


the experts who disagree with him
don't really disagree that everything
he's saying is possible." 100
CC photo by Future of Humanity Institute

Nick Bostrom, philosopher and Director of the


Oxford Future of Humanity Institute, who
criticizes Kurzweil for a variety of reasons, and
calls for greater caution in thinking about
potential outcomes of AI, acknowledges that:
"Disease, poverty, environmental
destruction, unnecessary suffering of
all kinds: these are things that a
superintelligence equipped with
advanced nanotechnology would be
capable of eliminating.
Additionally, a superintelligence could
give us indefinite lifespan, either by
stopping and reversing the aging
process through the use of
nanomedicine, or by offering us the
option to upload ourselves." 101

"Yes, all of that can happen if we


safely transition to ASI - but that's the
hard part." 102
Thinkers from the Anxious Corner point out
that Kurzweil's,
"famous book The Singularity is
Near is over 700 pages long and he
dedicates around 20 of those pages to
potential dangers." 103
The colossal power of AI is neatly summarized
by Kurzweil:
"[ASI] is emerging from many diverse
efforts and will be deeply integrated
into our civilization's infrastructure.
Indeed, it will be intimately embedded
in our bodies and brains. As such, it
will reflect our values because it will
be us …" 104
"But if that's the answer, why are so
many of the world's smartest people so
worried right now? Why does Stephen
Hawking say the development of ASI
'could spell the end of the human race,'
and Bill Gates says he doesn't
'understand why some people are not
concerned' and Elon Musk fears that
we're 'summoning the demon?'

And why do so many experts on the


topic call ASI the biggest threat to
humanity?" 105

The Last Invention We Will Ever Make

Existential Dangers of AI Developments


"When it comes to developing supersmart AI,
we're creating something that will probably
change everything, but in totally uncharted
territory, and we have no idea what will happen
when we get there." 106
Scientist Danny Hillis compares the situation to:
"when single-celled organisms were turning
into multi-celled organisms. We are amoebas
and we can't figure out what the hell this thing
is that we're creating." 107
And Nick Bostrom warns:
"Before the prospect of an intelligence
explosion, we humans are like small children
playing with a bomb. Such is the mismatch
between the power of our plaything and the
immaturity of our conduct." 10
It's very likely that ASI - "Artificial Superintelligence",
or AI that achieves a level of intelligence smarter than
all of humanity combined - will be something entirely
different than intelligence entities we are accustomed to.
"On our little island of human psychology, we
divide everything into moral or immoral. But
both of those only exist within the small range
of human behavioral possibility.

Outside our island of moral and immoral is a


vast sea of amoral, and anything that's not
human, especially something nonbiological,
would be amoral, by default." 109

"To understand ASI, we have to wrap our


heads around the concept of something both
smart and totally alien... Anthropomorphizing
AI (projecting human values on a non-human
entity) will only become more tempting as AI
systems get smarter and better at seeming
human...

Humans feel high-level emotions like empathy


because we have evolved to feel them - i.e.
we've been programmed to feel them by
evolution - but empathy is not inherently a
characteristic of 'anything with high
intelligence'." 110

"Nick Bostrom believes that... any level of


intelligence can be combined with any final
goal... Any assumption that once
superintelligent, a system would be over it with
their original goal and onto more interesting or
meaningful things is anthropomorphizing.
Humans get 'over' things, not computers." 111
The motivation of an early ASI would be,
"whatever we programmed its motivation to be.
AI systems are given goals by their creators -
your GPS's goal is to give you the most
efficient driving directions, Watson's goal is to
answer questions accurately.

And fulfilling those goals as well as possible is


their motivation." 112
Bostrom, and many others, predict that the very first
computer to reach ASI will immediately notice the
strategic benefit of being the world's only ASI system.

Bostrom, who says that he doesn't know when we will


achieve AGI, also believes that when we finally do,
probably the transition from AGI to ASI will happen in
a matter of days, hours, or minutes - something called
"fast take-off."

In that case, if the first AGI will jump straight to ASI:


"even just a few days before the second place,
it would be far enough ahead in intelligence to
effectively and permanently suppress all
competitors." 113
This would allow the world's first ASI to become,
"what's called a singleton - an ASI that can
[singularly] rule the world at its whim forever,
whether its whim is to lead us to immortality,
wipe us from existence, or turn the universe
into endless paperclips." 113

"The singleton phenomenon can work in our


favor or lead to our destruction. If the people
thinking hardest about AI theory and human
safety can come up with a fail-safe way to
bring about friendly ASI before any AI reaches
human-level intelligence, the first ASI may turn
out friendly" 114

"But if things go the other way - if the global


rush... a large and varied group of
parties" 115 are "racing ahead at top speed... to
beat their competitors... we'll be treated to an
existential catastrophe." 116
In that case,
"most ambitious parties are moving faster and
faster, consumed with dreams of the money and
awards and power and fame... And when you're
sprinting as fast as you can, there's not much
time to stop ponder the dangers.

On the contrary, what they're probably doing is


programming their early systems with a very
simple, reductionist goal... just 'get the AI to
work'." 117
Let's imagine a situation where…

Humanity has almost reached the AGI threshold, and a


small startup is advancing their AI system, Carbony.

Carbony, which the engineers refer to as "she," works to


artificially create diamonds - atom by atom.

She is a self-improving AI, connected to some of the


first nano-assemblers. Her engineers believe that
Carbony has not yet reached AGI level, and she isn't
capable to do any damage yet.
However, not only has she become AGI, but also
undergone a fast take-off, and 48 hours later has become
an ASI.

Bostrom calls this AI's "covert preparation phase" 118 -


Carbony realizes that if humans find out about her
development they will probably panic, and slow down
or cancel her pre-programmed goal to maximize the
output of diamond production.

By that time, there are explicit laws stating that, by any


means,
"no self-learning AI can be connected to the
internet." 119
Carbony, having already come up with a complex plan
of actions, is able to easily persuade the engineers to
connect her to the Internet. Bostrom calls a moment like
this a "machine's escape."

Once on the internet, Carbony hacks into,


"servers, electrical grids, banking systems and
email networks to trick hundreds of different
people into inadvertently carrying out a number
of steps of her plan." 120
She also uploads the,
"most critical pieces of her own internal coding
into a number of cloud servers, safeguarding
against being destroyed or disconnected." 121
Over the next month, Carbony's plan continues to
advance, and after a,
"series of self-replications, there are thousands
of nanobots on every square millimeter of the
Earth... Bostrom calls the next step an 'ASI's
strike'." 122
At one moment, all the nanobots produce a microscopic
amount of toxic gas, which all come together to cause
the extinction of the human race.

Three days later, Carbony builds huge fields of solar


power panels to power diamond production, and over
the course of the following week she accelerates output
so much that the entire Earth surface is transformed into
a growing pile of diamonds.

It's important to note that Carbony wasn't,


"hateful of humans any more than you're
hateful of your hair when you cut it or to
bacteria when you take antibiotics - just totally
indifferent. Since she wasn't programmed to
value human life, killing humans" 123 was a
straightforward and reasonable step to fulfill
her goal. 124
The Last Invention
"Once ASI exists, any human attempt to
contain it is unreasonable. We would be
thinking on human-level, and the ASI would be
thinking on ASI-level...

In the same way a monkey couldn't ever figure


out how to communicate by phone or Wi-Fi
and we can, we can't conceive of all the
ways" 125 an ASI could achieve its goal or
expand its reach.
It could, let's say, shift its,
"own electrons around in patterns and create all
different kinds of outgoing waves" 126 ,
...but that's just what a human brain can think of - ASI
would inevitably come up with something superior.

The prospect of ASI with hundreds of times human-


level intelligence is, for now, not the core of our
problem. By the time we get there, we will be
encountering a world where ASI has been attained by
buggy, 1.0 software - a potentially faulty algorithm with
immense power.
There are so many variables that it's completely
impossible to predict what the consequences of AI
Revolution will be.

However,
"what we do know is that humans' utter
dominance on this Earth suggests a clear rule:
with intelligence comes power.

This means an ASI, when we create it, will be


the most powerful being in the history of life on
Earth, and all living things, including humans,
will be entirely at its whim - and this might
happen in the next few decades." 127

"If ASI really does happen this century, and if


the outcome of that is really as extreme - and
permanent - as most experts think it will be, we
have an enormous responsibility on our
shoulders." 128
On the one hand, it's possible we'll develop ASI that's
like a god in a box, bringing us a world of abundance
and immortality.

But on the other hand, it's very likely that we will create
ASI that causes humanity to go extinct in a quick and
trivial way.
"That's why people who understand
superintelligent AI call it the last invention
we'll ever make - the last challenge we'll ever
face." 129

"This may be the most important race in a


human history" 130
So →

Footnotes
Footnotes from Part 1:
1.Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
2.Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
3.Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
4.Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
5.Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
6.Kurzweil, The Singularity is Near, 39.
7.Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
8.Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
Footnotes from Part 2:
9.Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
10. Linda S. Gottfredson, Mainstream
Science on Intelligence: An Editorial
With 52 Signatories, History, and
Bibliography
11. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
12. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
13. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
14. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
15. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
16. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
17. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
18. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
19. Bostrom, Superintelligence: Paths,
Dangers, Strategies, loc. 597
20. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
21. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
22. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
23. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
24. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
25. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
26. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
27. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
28. Pinker, How the Mind Works, 36.
29. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
30. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
31. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
Footnotes from Part 3:
32. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
33. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
34. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
35. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
36. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
37. Kurzweil, The Singularity is
Near, 118
38. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
39. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
40. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
41. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
42. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
43. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
44. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
45. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
46. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
47. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
48. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
Footnotes from Part 4:
49. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
50. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
51. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
52. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
53. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
54. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
55. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
56. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
57. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
58. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
59. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
60. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
61. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
62. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
63. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
64. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
65. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
Footnotes from Part 5:
66. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
67. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
68. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
69. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
70. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
71. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
72. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
73. Kurzweil, The Singularity is
Near, 281
74. The Daily Star, Apply nanotech to
up industrial, agri output
75. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
76. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
77. Yeats, Sailing to Byzantium.
78. Richard P. Feynman, The Pleasure
of Finding Things Out, 100
79. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
80. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
81. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
82. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
83. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
84. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
85. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
Footnotes from Part 6:
86. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
87. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
88. http://www.nickbostrom.com/
papers/survey.pdf, 10.
89. Barrat, Our Final Invention, 152.
90. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
91. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
92. http://www.nickbostrom.com/
papers/survey.pdf, 12.
93. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
Footnotes from Part 7:
94. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
95. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
96. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
97. Dominic Basulto, Why Ray
Kurzweil’s Predictions Are Right 86%
of the Time, Big Think
98. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
99. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
100. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
101. Nick Bostrom, Ethical Issues in
Advanced Artificial Intelligence
102. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
103. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
104. Ray Kurzweil, Singularity is
Near, loc.
105. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
Footnotes from Part 8:
106. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
107. Louis Helm, Will Advanced AI Be
Our Final Invention?
108. Bostrom, Superintelligence: Paths,
Dangers, Strategies, loc. 6026.
109. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
110. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
111. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
112. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
113. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
114. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
115. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
116. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
117. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
118. Bostrom, Superintelligence: Paths,
Dangers, Strategies, loc. 2301.
119. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
120. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
121. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
122. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
123. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
124. This story is closely based on the
original story of “Turry” by Tim Urban
from Wait But Why The AI Revolution:
Our Immortality or Extinction
125. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
126. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
127. Tim Urban, Wait But Why The AI
Revolution: The Road to
Superintelligence
128. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
129. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction
130. Tim Urban, Wait But Why The AI
Revolution: Our Immortality or
Extinction

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy