Chapter 1 - AI - Notes
Artificial Intelligence is the branch of computer science concerned with making computers
behave like humans.
1.1 Objectives / Goals of AI
To Create Expert Systems: systems which exhibit intelligent behavior, and which learn,
demonstrate, explain, and advise their users.
Scientific goal: To determine which ideas about knowledge representation, learning, rule
systems, search, and so on, explain various sorts of real intelligence. To understand the
mechanism behind human intelligence.
Engineering goal: To design useful, intelligent artifacts; to solve real-world problems
using AI techniques such as knowledge representation, learning, rule systems, search, and
so on; and to develop concepts and tools for building intelligent agents capable of solving
real-world problems.
We call ourselves Homo sapiens—man the wise (Human being)—because our intelligence is so
important to us. For thousands of years, we have tried to understand how we think; that is, how a
mere handful of matter can perceive, understand, predict, and manipulate a world far larger and
more complicated than itself.
AI is one of the newest fields in science and engineering. Work started in earnest soon after
World War II, and the name itself was coined in 1956. AI currently encompasses a huge variety
of subfields, ranging from the general (learning and perception) to the specific, such as playing
chess, proving mathematical theorems, writing poetry, driving a car on a crowded street, and
diagnosing diseases.
1.2 What Is AI?
John McCarthy, who coined the term Artificial Intelligence in 1956, defines it as "the science
and engineering of making intelligent machines, especially intelligent computer programs." It is
the intelligence of machines, and the branch of computer science that aims to create it.
Intelligence
Intelligence is a property/ability attributed to people, such as to know, to think, to talk, to learn.
Intelligence = Knowledge + ability to perceive, feel, comprehend, process, communicate, judge,
learn.
It is the capability of observing, learning, remembering, and reasoning. AI attempts to develop
intelligent agents.
E.g., speech recognition, image pattern understanding, etc.
Characteristics of Intelligent system
Use vast amounts of knowledge
Learn from experience and adapt to changing environments
Interact with humans using language and speech
Respond in real time
Tolerate errors and ambiguity in communication
1.3 Approaches to AI
AI is the study of how to make computers do things which, at the moment, people do better.
The definitions of AI according to some text books are categorized into four approaches and are
summarized in the table below:
Systems that think like humans:
"The exciting new effort to make computers think ... machines with minds, in the full and
literal sense." (Haugeland, 1985)

Systems that think rationally:
"The study of mental faculties through the use of computer models." (Charniak and
McDermott, 1985) Use of computer models.

Systems that act like humans:
"The art of creating machines that perform functions that require intelligence when
performed by people." (Kurzweil, 1990)

Systems that act rationally:
"Computational intelligence is the study of the design of intelligent agents." (Poole et
al., 1998) Study of the automation of intelligent behavior; study of how to make
computers do things.
THINKING HUMANLY: THE COGNITIVE MODELING APPROACH
If we are going to say that a given program thinks like a human, we must have some way
of determining how humans think. We need to get inside the actual workings of human minds.
Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory
as a computer program.
Through introspection – trying to capture our own thoughts as they go by.
– Instead of making the best possible chess-playing program, you would make one
that plays chess the way people do.
THINKING RATIONALLY: THE LAWS OF THOUGHT APPROACH
The Greek philosopher Aristotle provided the correct argument/thought structures that always
gave correct conclusions given correct premises.
By 1965, programs existed that could, in principle, solve any solvable problem described in
logical notation. The so-called logicist tradition within artificial intelligence hopes to build on
such programs to create intelligent systems. There are two main obstacles to this approach. First,
it is not easy to take informal knowledge and state it in the formal terms required by logical
notation, particularly when the knowledge is less than 100% certain. Second, there is a big
difference between being able to solve a problem "in principle" and doing so in practice.
Although both of these obstacles apply to any attempt to build computational reasoning systems,
they appeared first in the logicist tradition.
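The logicist idea above can be sketched in a few lines. This is a minimal, illustrative example (the facts, rule list, and function name are my own, not from the text): starting from correct premises and applying a sound inference rule (modus ponens), the program derives only correct conclusions.

```python
# A minimal sketch of the laws-of-thought / logicist idea: derive conclusions
# from premises by repeatedly applying modus ponens.

def forward_chain(facts, rules):
    """If 'p' is a known fact and (p -> q) is a rule, conclude 'q'.
    Repeat until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Aristotle-style syllogism: Socrates is a man; all men are mortal.
rules = [("Socrates is a man", "Socrates is mortal")]
facts = forward_chain({"Socrates is a man"}, rules)
print(facts)  # contains "Socrates is mortal"
```

This also hints at the two obstacles mentioned above: the hard part is not the inference loop, but stating informal knowledge as such rules in the first place, and keeping the search tractable when there are many of them.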
ACTING HUMANLY: THE TURING TEST APPROACH
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence.
We note that programming a computer to pass the test provides plenty to work on. The computer
would need to possess the following capabilities:
Natural language processing to enable it to communicate successfully in English;
Knowledge representation to store what it knows or hears;
Automated reasoning to use the stored information to answer questions and to draw new
conclusions;
Machine learning to adapt to new circumstances and to detect and extrapolate patterns.
Turing's test deliberately avoided direct physical interaction between the interrogator and the
computer, because physical simulation of a person is unnecessary for intelligence. To pass the
total Turing Test, the computer will also need:
Computer vision to perceive objects, and
Robotics to manipulate objects and move about.
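The imitation-game protocol itself is simple enough to sketch. The following is an illustrative toy (the function names and the "questions" are invented for this sketch): an interrogator questions two unseen respondents, one human and one machine, and must decide which is which from the transcripts alone.

```python
# A minimal sketch of the Turing test protocol (illustrative only).
import random

def turing_test(interrogate, human, machine, n_questions=5):
    """Run one imitation game. 'human' and 'machine' map a question to an
    answer; 'interrogate' inspects both transcripts and returns the index
    (0 or 1) it believes is the machine. The machine passes if the
    interrogator guesses wrong."""
    respondents = [human, machine]
    random.shuffle(respondents)          # hide which respondent is which
    questions = [f"question {i}" for i in range(n_questions)]
    transcripts = [[r(q) for q in questions] for r in respondents]
    guess = interrogate(transcripts)
    actual = respondents.index(machine)
    return guess != actual               # True means the machine passed

# Toy respondents that answer identically: no interrogator can then do
# better than chance.
passed = turing_test(lambda t: 0, lambda q: "hmm", lambda q: "hmm")
```

Note how the protocol matches the capability list above: the machine only ever sees text, so language, knowledge, reasoning, and learning are tested, while vision and robotics are deliberately left out.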
ACTING RATIONALLY: THE RATIONAL AGENT APPROACH
All the skills needed for the Turing Test also allow rational actions. Thus, we need the
ability to represent knowledge and reason with it because this enables us to reach good decisions
in a wide variety of situations. We need to be able to generate comprehensible sentences in
natural language because saying those sentences helps us get by in a complex society. We need
learning not just for erudition, but because having a better idea of how the world works enables
us to generate more effective strategies for dealing with it. We need visual perception not just
because seeing is fun, but to get a better idea of what an action might achieve.
For these reasons, the study of AI as rational-agent design has at least two advantages. First, it is
more general than the "laws of thought" approach, because correct inference is just one of
several possible mechanisms for achieving rationality. Second, it is more amenable to scientific
development than are approaches based on human behavior or human thought because the
standard of rationality is clearly defined and completely general. Human behavior, on the other
hand, is well-adapted for one specific environment and is the product, in part, of a complicated
and largely unknown evolutionary process that still is far from producing perfection.
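The rational-agent view above can be made concrete with a small sketch. Everything here is illustrative (the agent loop, the thermostat example, and all names are my own, not from the text): at each step the agent perceives, updates its internal state, and chooses the action with the highest expected utility given that state.

```python
# A minimal sketch of a rational agent: perceive, update state, then act so
# as to maximize expected utility.

def rational_agent(percepts, actions, update_state, expected_utility):
    state = None
    chosen = []
    for percept in percepts:
        state = update_state(state, percept)
        # Rationality: pick the action that maximizes expected utility.
        best = max(actions, key=lambda a: expected_utility(state, a))
        chosen.append(best)
    return chosen

# Toy thermostat-style agent: percepts are temperatures; the utility prefers
# "heat" when it is cold (below 20) and "off" otherwise.
percepts = [15, 22, 17]
actions = ["heat", "off"]
update = lambda s, p: p                      # state = latest temperature
eu = lambda s, a: (a == "heat") == (s < 20)  # True (1) if action fits, else 0
print(rational_agent(percepts, actions, update, eu))  # ['heat', 'off', 'heat']
```

The point of the formulation is the one made in the text: the standard of success (maximize expected utility) is defined independently of how humans happen to think, which is what makes the approach general and amenable to analysis.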
1.4 Foundations of AI
In this section, we will see a brief history of the disciplines that contributed ideas, viewpoints,
and techniques to AI.
[Figure: the disciplines contributing to Artificial Intelligence: Philosophy, Mathematics,
Economics, Neuroscience, Psychology, Computer Engineering, Control Theory and Cybernetics,
and Linguistics.]
PHILOSOPHY
☺ Can formal rules be used to draw valid conclusions?
☺ How does the mind arise from a physical brain?
☺ Where does knowledge come from?
☺ How does knowledge lead to action?
MATHEMATICS
☺ What are the formal rules to draw valid conclusions?
☺ What can be computed?
☺ How do we reason with uncertain information?
ECONOMICS
☺ How should we make decisions so as to maximize payoff?
☺ How should we do this when others may not go along?
☺ How should we do this when the payoff may be far in the future?
NEUROSCIENCE
☺ How do brains process information?
Brains and digital computers perform quite different tasks and have different properties. Figure
1.2 shows that there are 1000 times more neurons in the typical human brain than there are gates
in the CPU of a typical high-end computer. Computer chips can execute an instruction in a
nanosecond, whereas neurons are millions of times slower. Brains more than make up for this,
however, because all the neurons and synapses are active simultaneously, whereas most current
computers have only one or at most a few CPUs. Thus, even though a computer is a million
times faster in raw switching speed, the brain ends up being 100,000 times faster at what it does.
                      Computer             Human Brain
Computational units   1 CPU, 10^8 gates    10^11 neurons
Storage units         10^11 bits RAM       10^11 neurons
                      10^12 bits disk      10^14 synapses
Cycle time            10^-9 sec            10^-3 sec
Bandwidth             10^10 bits/sec       10^14 bits/sec
Memory updates/sec    10^9                 10^14
Figure 1.2: A crude comparison of the raw computational resources available to computers and
brains.
PSYCHOLOGY
☺ How do humans and animals think and act?
COMPUTER ENGINEERING
☺ How can we build an efficient computer?
CONTROL THEORY AND CYBERNETICS
☺ How can artifacts operate under their own control?
LINGUISTICS
☺ How does language relate to thought?
1.5 The History of Artificial Intelligence
The gestation of artificial intelligence (1943-1955)
There were a number of early examples of work that can be characterized as AI, but it
was Alan Turing who first articulated a complete vision of AI in his 1950 article "Computing
Machinery and Intelligence." Therein, he introduced the Turing test, machine learning, genetic
algorithms, and reinforcement learning.
The birth of artificial intelligence (1956)
McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him
bring together U.S. researchers interested in automata theory, neural nets, and the study of
intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956.
Perhaps the longest-lasting thing to come out of the workshop was an agreement to adopt
McCarthy's new name for the field: artificial intelligence.
Early enthusiasm, great expectations (1952-1969)
The early years of AI were full of successes, in a limited way. General Problem Solver
(GPS) was a computer program created in 1957 by Herbert Simon and Allen Newell to build a
universal problem-solving machine. The order in which the program considered subgoals and
possible actions was similar to that in which humans approached the same problems. Thus, GPS
was probably the first program to embody the "thinking humanly" approach. At IBM, Nathaniel
Rochester and his colleagues produced some of the first AI programs. Herbert Gelernter (1959)
constructed the Geometry Theorem Prover, which was able to prove theorems that many students
of mathematics would find quite tricky. Lisp was invented by John McCarthy in 1958 while he
was at the Massachusetts Institute of Technology (MIT). In 1963, McCarthy started the AI lab at
Stanford.
A dose of reality (1966-1973)
From the beginning, AI researchers were not shy about making predictions of their
coming successes. The following statement by Herbert Simon in 1957 is often quoted: “It is not
my aim to surprise or shock you-but the simplest way I can summarize is to say that there are
now in the world machines that think, that learn and that create. Moreover, their ability to do
these things is going to increase rapidly until-in a visible future-the range of problems they can
handle will be coextensive with the range to which the human mind has been applied."
Knowledge-based systems: The key to power? (1969-1979)
Dendral was an influential pioneer project in artificial intelligence (AI) of the 1960s, and the
computer software expert system that it produced. Its primary aim was to help organic chemists
in identifying unknown organic molecules, by analyzing their mass spectra and using knowledge
of chemistry. It was developed at Stanford University by Edward Feigenbaum, Bruce Buchanan,
Joshua Lederberg, and Carl Djerassi.
AI becomes an industry (1980-present)
In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build
intelligent computers running Prolog. Overall, the AI industry boomed from a few million
dollars in 1980 to billions of dollars in 1988.
The return of neural networks (1986-present)
Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net
models of memory.
AI becomes a science (1987-present)
In recent years, approaches based on hidden Markov models (HMMs) have come to dominate
the area of speech recognition.
Speech technology and the related field of handwritten character recognition are already making
the transition to widespread industrial and consumer applications.
The Bayesian network formalism was invented to allow efficient representation of, and rigorous
reasoning with, uncertain knowledge.
The emergence of intelligent agents (1995-present)
One of the most important environments for intelligent agents is the Internet.