Artificial Intelligence - Wikipedia
Artificial intelligence
Artificial intelligence was founded as an academic discipline in 1956.[2] The field went through
multiple cycles of optimism[3][4] followed by disappointment and loss of funding,[5][6] but after 2012,
when deep learning surpassed all previous AI techniques,[7] there was a vast increase in funding and
interest.
The various sub-fields of AI research are centered around particular goals and the use of particular
tools. The traditional goals of AI research include reasoning, knowledge representation, planning,
learning, natural language processing, perception, and support for robotics.[a] General intelligence
(the ability to solve an arbitrary problem) is among the field's long-term goals.[8] To solve these
problems, AI researchers have adapted and integrated a wide range of problem-solving techniques,
including search and mathematical optimization, formal logic, artificial neural networks, and
methods based on statistics, probability, and economics.[b] AI also draws upon psychology,
linguistics, philosophy, neuroscience and many other fields.[9]
Goals
The general problem of simulating (or creating) intelligence has been broken down into sub-
problems. These consist of particular traits or capabilities that researchers expect an intelligent
system to display. The traits described below have received the most attention and cover the scope of
AI research.[a]
Reasoning, problem-solving
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when
they solve puzzles or make logical deductions.[10] By the late 1980s and 1990s, methods were
developed for dealing with uncertain or incomplete information, employing concepts from probability
and economics.[11]
https://en.wikipedia.org/wiki/Artificial_intelligence 1/53
8/11/23, 5:07 PM Artificial intelligence - Wikipedia
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": they become exponentially slower as the problems grow larger.[12] Even humans rarely use the step-by-step deduction that early AI research could model.
They solve most of their problems using fast, intuitive judgments.[13] Accurate and efficient reasoning
is an unsolved problem.
Knowledge representation
Knowledge bases need to represent things such as: objects, properties, categories and relations between objects;[21] situations, events, states and time;[22] causes and effects;[23] knowledge about knowledge (what we know about what other people know);[24] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);[25] and many other aspects and domains of knowledge.
Among the most difficult problems in KR are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);[26] the difficulty of knowledge acquisition; and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[13]
Learning
Machine learning is the study of programs that can improve their performance on a given task
automatically.[29] It has been a part of AI from the beginning.[c]
There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and
finds patterns and makes predictions without any other guidance.[32] Supervised learning requires a
human to label the input data first, and comes in two main varieties: classification (where the
program must learn to predict what category the input belongs in) and regression (where the program
must deduce a numeric function based on numeric input).[33] In reinforcement learning the agent is
rewarded for good responses and punished for bad ones. The agent learns to choose responses that
are classified as "good".[34] Transfer learning is when the knowledge gained from one problem is
applied to a new problem.[35] Deep learning uses artificial neural networks for all of these types of
learning.
Natural language processing
Natural language processing (NLP)[37] allows programs to read, write and communicate in human
languages such as English. Specific problems include speech recognition, speech synthesis, machine
translation, information extraction, information retrieval and question answering.[38]
Early work, based on Noam Chomsky's generative grammar, had difficulty with word-sense
disambiguation[d] unless restricted to small domains called "micro-worlds" (due to the common sense
knowledge problem[26]).
Modern deep learning techniques for NLP include word embedding (representing words as vectors that capture which words appear near one another),[39] transformers (which find patterns in text),[40] and others.[41] In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text,[42][43] and by 2023 these models were able to achieve human-level scores on the bar exam, SAT, GRE, and many other real-world tests.[44]
Perception
Robotics
Social intelligence
Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process or simulate human feeling, emotion and mood.[52] For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitates human–computer interaction.
Kismet, a robot with rudimentary social skills[51]

General intelligence
A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.[8]
Tools
AI research uses a wide variety of tools to accomplish the goals above.[b]
AI can solve many problems by intelligently searching through many possible solutions.[55] There are
two very different kinds of search used in AI: state space search and local search.
State space search searches through a tree of possible states to try to find a goal state.[56] For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[57]
Simple exhaustive searches[58] are rarely sufficient for most real-world problems: the search space
(the number of places to search) quickly grows to astronomical numbers. The result is a search that is
too slow or never completes.[12] "Heuristics" or "rules of thumb" can help to prioritize choices that are
more likely to reach a goal.[59]
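The role of a heuristic can be illustrated with a minimal greedy best-first search in Python. This is only a sketch; the grid problem and the Manhattan-distance heuristic are invented for illustration and are not drawn from the cited sources.

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the state whose heuristic
    value h (estimated distance to the goal) is smallest."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            # Reconstruct the path back to the start.
            path = [state]
            while came_from[state] is not None:
                state = came_from[state]
                path.append(state)
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Toy state space: a 4x4 grid where moves go up/down/left/right.
goal = (3, 3)
def neighbors(s):
    x, y = s
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]
manhattan = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
path = best_first_search((0, 0), goal, neighbors, manhattan)
```

Because the heuristic prioritizes states closer to the goal, the search expands only a handful of the sixteen grid cells instead of exhaustively exploring all of them.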
Adversarial search is used for game-playing programs, such as chess or go. It searches through a tree
of possible moves and counter-moves, looking for a winning position.[60]
Local search
Local search uses mathematical optimization to find a numeric solution to a problem. It begins with
some form of a guess and then refines the guess incrementally until no more refinements can be
made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. When each step's direction is estimated from a randomly sampled subset of the data, this process is called stochastic gradient descent.[61]
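A minimal hill-climbing sketch in Python makes the idea concrete (the objective function and step size here are illustrative choices, not from the cited sources):

```python
def hill_climb(f, x, step=0.1, iters=1000):
    """Start from a guess and repeatedly move to a nearby point that
    scores higher, stopping when no refinement improves the score."""
    for _ in range(iters):
        candidates = [x + step, x - step]
        best = max(candidates, key=f)
        if f(best) <= f(x):
            break  # local maximum reached
        x = best
    return x

# Maximize f(x) = -(x - 2)^2, whose single peak is at x = 2.
f = lambda x: -(x - 2.0) ** 2
top = hill_climb(f, x=-4.0)
```

Starting from x = -4.0, each refinement moves the guess 0.1 closer to the peak until neither neighbor scores higher, leaving the guess at (approximately) x = 2.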
Evolutionary computation uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses).[62]
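The mutate/recombine/select loop can be sketched as a tiny genetic algorithm. The "one-max" problem, the population size, and the mutation rate below are all illustrative choices, not from the cited sources:

```python
import random

def evolve(fitness, pop_size=30, genes=8, generations=60):
    """Tiny genetic algorithm: keep the fittest bit-strings (selection),
    splice pairs of survivors together (crossover), and occasionally
    flip a bit (mutation)."""
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)       # crossover point
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # mutation
                i = random.randrange(genes)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

random.seed(0)  # reproducible run
# "One-max" toy problem: fitness is simply the number of 1 bits.
best = evolve(fitness=sum)
```

Because the fittest individuals always survive, the best fitness in the population never decreases from one generation to the next.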
Logical inference (or deduction) is the process of proving a new statement (conclusion) from other
statements that are already known to be true (the premises).[67] A logical knowledge base also handles
queries and assertions as a special case of inference.[68] An inference rule describes what is a valid
step in a proof. The most general inference rule is resolution.[69] Inference can be reduced to
performing a search to find a path that leads from premises to conclusions, where each step is the
application of an inference rule.[70] Inference performed this way is intractable except for short proofs
in restricted domains. No efficient, powerful and general method has been discovered.[71]
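Inference-as-search can be sketched with simple forward chaining over propositional rules. This is a minimal illustration (the rules and fact names are invented), not the resolution method itself:

```python
def forward_chain(facts, rules):
    """Forward chaining: repeatedly apply inference rules of the form
    (premises -> conclusion) until no new statement can be proved."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are already known facts.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["rain"], "wet_ground"),
    (["wet_ground", "freezing"], "icy_ground"),
]
proved = forward_chain({"rain", "freezing"}, rules)
```

Each rule application is one step of the "path from premises to conclusions"; chaining two rules proves "icy_ground" from "rain" and "freezing".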
Fuzzy logic assigns a "degree of truth" between 0 and 1 and handles uncertainty and probabilistic
situations.[72] Non-monotonic logics are designed to handle default reasoning.[25] Other specialized
versions of logic have been developed to describe many complex domains (see knowledge
representation above).
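Degrees of truth compose through fuzzy versions of the logical connectives. The min/max operators below are one common choice (Zadeh's operators); other t-norms exist, and the "hot water" scenario is purely illustrative:

```python
def fuzzy_and(a, b):
    return min(a, b)      # one common choice of conjunction (t-norm)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

# "The water is hot" holds to degree 0.7; "the water is moving" to 0.4.
hot, moving = 0.7, 0.4
# "Hot and not moving" then holds to degree min(0.7, 1 - 0.4) = 0.6.
dangerous = fuzzy_and(hot, fuzzy_not(moving))
```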
Bayesian networks[74] are a very general tool that can be used for many problems, including reasoning (using the Bayesian inference algorithm),[e][76] learning (using the expectation-maximization algorithm),[f][78] planning (using decision networks)[79] and perception (using dynamic Bayesian networks).[80]

Expectation-maximization clustering of Old Faithful eruption data starts from a random guess but then successfully converges on an accurate clustering of the two physically distinct modes of eruption.

Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[80]
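The simplest instance of Bayesian inference is applying Bayes' rule to a two-node network (hypothesis and evidence). The disease-test numbers below are invented for illustration:

```python
def posterior(prior, likelihood_pos, likelihood_neg):
    """Bayes' rule: P(H | e) = P(e | H) P(H) / P(e), where P(e) is
    obtained by summing the joint probability over both hypotheses."""
    p_joint_h = prior * likelihood_pos            # P(H and e)
    p_joint_not_h = (1 - prior) * likelihood_neg  # P(not-H and e)
    return p_joint_h / (p_joint_h + p_joint_not_h)

# A disease with 1% prevalence; the test fires 90% of the time when the
# disease is present and 5% of the time when it is absent.
p = posterior(prior=0.01, likelihood_pos=0.9, likelihood_neg=0.05)
```

Despite the positive test, the posterior is only about 15%, because the 1% prior means most positive results come from the much larger healthy population.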
Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[81] and information value theory.[82] These tools include models such as Markov decision processes,[83] dynamic decision networks,[80] game theory and mechanism design.[84]
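A Markov decision process can be solved with value iteration, sketched below on an invented two-state problem (the states, actions, and rewards are illustrative, not from the cited sources):

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Value iteration: repeatedly back up each state's value from the
    best available action until the values stop changing."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a))
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Two-state toy MDP: "go" moves to state 1; entering state 1 from
# state 0 pays a reward of 1; everything else pays 0.
states, actions = [0, 1], ["stay", "go"]
def transition(s, a):
    return [(1, 1.0)] if a == "go" else [(s, 1.0)]
def reward(s, a, s2):
    return 1.0 if (s2 == 1 and s == 0) else 0.0
V = value_iteration(states, actions, transition, reward)
```

The computed values say state 0 is worth 1 (the agent can collect the reward by choosing "go") while state 1 is worth 0 (no further reward is available).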
The simplest AI applications can be divided into two types: classifiers (e.g. "if shiny then diamond"),
on one hand, and controllers (e.g. "if diamond then pick up"), on the other hand. Classifiers[85] are
functions that use pattern matching to determine the closest match. They can be fine-tuned based on
chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with
a certain predefined class. All the observations combined with their class labels are known as a data
set. When a new observation is received, that observation is classified based on previous
experience.[33]
There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm.[86] The k-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, when kernel methods such as the support vector machine (SVM) displaced it.[87] The naive Bayes classifier is reportedly the "most widely used learner"[88] at Google, due in part to its scalability.[89] Neural networks are also used as classifiers.[90]
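Classification by "closest match" can be sketched with a tiny k-nearest-neighbor classifier. The two-class point data below is invented for illustration:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbor classification: label a new observation with
    the majority class among the k closest labeled examples."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# The data set: each observation is a 2-D point with a class label.
train = [((0, 0), "blue"), ((1, 0), "blue"), ((0, 1), "blue"),
         ((5, 5), "red"), ((6, 5), "red"), ((5, 6), "red")]
label = knn_classify(train, query=(0.4, 0.4))
```

The query point sits inside the "blue" cluster, so all three of its nearest neighbors vote "blue".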
A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm.[92] Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.[93]
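Weight adjustment by gradient descent can be sketched on a single sigmoid neuron learning the OR function. This is the delta rule, the single-layer special case of backpropagation; the learning rate and epoch count are illustrative choices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data for the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 1.0
for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Gradient of squared error through the sigmoid: move each
        # weight a small step in the direction that reduces the error.
        delta = (out - target) * out * (1 - out)
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        b -= lr * delta
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
```

After training, the rounded outputs reproduce the OR truth table, because OR is linearly separable and thus learnable by a single neuron.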
In feedforward neural networks the signal passes in only one direction.[94] Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short-term memory is the most successful network architecture for recurrent networks.[95] Perceptrons[96] use only a single layer of neurons; deep learning[97] uses multiple layers.
Convolutional neural networks strengthen the connection between neurons that are "close" to each other; this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.[98]
Deep learning
In the late 2010s, graphics processing units (GPUs), which were increasingly designed with AI-specific enhancements and used with specialized software such as TensorFlow, replaced the previously dominant central processing units (CPUs) as the means for training large-scale (commercial and academic) machine learning models.[111] Historically, specialized languages such as Lisp and Prolog had been used.
Applications
AI and machine learning technology is used in most of the essential applications of the 2020s,
including: search engines (such as Google Search), targeting online advertisements,[112]
recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic,[113][114]
targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa),[115] autonomous
vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft
Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace) and image
labeling (used by Facebook, Apple's iPhoto and TikTok).
There are also thousands of successful AI applications used to solve specific problems for specific
industries or institutions. In a 2017 survey, one in five companies reported they had incorporated "AI"
in some offerings or processes.[116] A few examples are energy storage,[117] medical diagnosis, military
logistics, applications that predict the result of judicial decisions,[118] foreign policy,[119] or supply
chain management.
AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.[134]
Ethics
Algorithmic bias
Machine learning applications will be biased if they learn from biased data.[135] The developers may
not be aware that the bias exists.[136] For example, on June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people.[137] Google "fixed" this problem by preventing the system from labeling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Microsoft and Amazon.[138]
Bias can be introduced by the way training data is selected and by the way a model is
deployed.[139][135] It can also emerge from correlations: AI is used to classify individuals into groups
and then make predictions assuming that the individual will resemble other members of the group. In
some cases, this assumption may be unfair.[140] An example of this is COMPAS, a commercial
program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist.
ProPublica claims that the COMPAS-assigned recidivism risk level of black defendants is far more
likely to be overestimated than that of white defendants, despite the fact that the program was not
told the races of the defendants.[141]
Health equity issues may also be exacerbated when many-to-many mapping is done without taking steps to ensure equity for populations at risk of bias. At this time, equity-focused tools and regulations are not in place to ensure that such applications represent and serve these populations equitably.[142] Other examples where algorithmic bias can lead to unfair outcomes are when AI is used for credit rating, CV screening, hiring and applications for public housing.[135]
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that, until AI and robotics systems can be demonstrated to be free of bias mistakes, they should be considered unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.[143]
Lack of transparency
Modern machine learning applications cannot explain how they have reached a decision.
AI provides a number of tools that are particularly useful for authoritarian governments: smart
spyware, face recognition and voice recognition allow widespread surveillance; such surveillance
allows machine learning to classify potential enemies of the state and can prevent them from hiding;
recommendation systems can precisely target propaganda and misinformation for maximum effect;
deepfakes aid in producing misinformation; advanced AI can make centralized decision making more
competitive with liberal and decentralized systems such as markets.[144]
Terrorists, criminals and rogue states may use other forms of weaponized AI such as advanced digital
warfare and lethal autonomous weapons. By 2015, over fifty countries were reported to be researching
battlefield robots.[145]
Technological unemployment
From the early days of the development of artificial intelligence there have been arguments, for
example those put forward by Weizenbaum, about whether tasks that can be done by computers
actually should be done by them, given the difference between computers and humans, and between
quantitative calculation and qualitative, value-based judgement.[147]
Economists have frequently highlighted the risks of redundancies from AI, and speculated about
unemployment if there is no adequate social policy for full employment.[148]
In the past, technology has tended to increase rather than reduce total employment, but economists
acknowledge that "we're in uncharted territory" with AI.[149] A survey of economists showed
disagreement about whether the increasing use of robots and AI will cause a substantial increase in
long-term unemployment, but they generally agree that it could be a net benefit if productivity gains
are redistributed.[150] Risk estimates vary; for example, in the 2010s Michael Osborne and Carl
Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD
report classified only 9% of U.S. jobs as "high risk".[k][152] The methodology of speculating about
future employment levels has been criticised as lacking evidential foundation, and for implying that
technology (rather than social policy) creates unemployment (as opposed to redundancies).[148]
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial
intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what
steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[153]
Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase
for care-related professions ranging from personal healthcare to the clergy.[154]
Copyright
Friendly AI refers to machines that have been designed from the beginning to minimize risks and to make
choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly
AI should be a higher research priority: it may require a large investment and it must be completed
before AI becomes an existential risk.[156]
Machines with intelligence have the potential to use their intelligence to make ethical decisions. The
field of machine ethics provides machines with ethical principles and procedures for resolving ethical
dilemmas.[157] The field of machine ethics is also called computational morality,[157] and was founded
at an AAAI symposium in 2005.[158]
Other approaches include Wendell Wallach's "artificial moral agents"[159] and Stuart J. Russell's three
principles for developing provably beneficial machines.[160]
Regulation
A number of countries have adopted national AI strategies, including Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the US and Vietnam. Others were in the process of elaborating their own AI strategies, including
Bangladesh, Malaysia and Tunisia.[165] The Global Partnership on Artificial Intelligence was launched
in June 2020, stating a need for AI to be developed in accordance with human rights and democratic
values, to ensure public confidence and trust in the technology.[165] Henry Kissinger, Eric Schmidt,
and Daniel Huttenlocher published a joint statement in November 2021 calling for a government
commission to regulate AI.[166] In 2023, OpenAI leaders published recommendations for the
governance of superintelligence, which they believe may happen in less than 10 years.[167]
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but
only 35% of Americans, agreed that "products and services using AI have more benefits than
drawbacks".[163] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree,
that AI poses risks to humanity.[168] In a 2023 Fox News poll, 35% of Americans thought it "very
important", and an additional 41% thought it "somewhat important", for the federal government to
regulate AI, versus 13% responding "not very important" and 8% responding "not at all
important".[169][170]
History
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in
antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that
a machine, by shuffling symbols as simple as "0" and "1", could simulate both mathematical deduction
and formal reasoning, which is known as the Church–Turing thesis.[171] This, along with then-new discoveries in cybernetics and information theory, led researchers to consider the possibility of building an "electronic brain".[l][173] The first paper later recognized as "AI" was McCulloch and Pitts' 1943 design for Turing-complete "artificial neurons".[174]
The field of AI research was founded at a workshop at Dartmouth College in 1956.[m][2] The attendees
became the leaders of AI research in the 1960s.[n] They and their students produced programs that
the press described as "astonishing":[o] computers were learning checkers strategies, solving word
problems in algebra, proving logical theorems and speaking English.[p][3]
By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[178]
and laboratories had been established around the world.[179] Herbert Simon predicted, "machines will
be capable, within twenty years, of doing any work a man can do".[180] Marvin Minsky agreed,
writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be
solved".[181]
They had, however, underestimated the difficulty of the problem.[q] Both the U.S. and British
governments cut off exploratory research in response to the criticism of Sir James Lighthill[183] and
ongoing pressure from the US Congress to fund more productive projects. Minsky's and Papert's book
Perceptrons was understood as proving that the artificial neural network approach would never be useful for solving real-world tasks, thus discrediting the approach altogether.[184] The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[5]
In the early 1980s, AI research was revived by the commercial success of expert systems,[185] a form
of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the
market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer
project inspired the U.S. and British governments to restore funding for academic research.[4]
However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into
disrepute, and a second, longer-lasting winter began.[6]
Many researchers began to doubt that the current practices would be able to imitate all the processes
of human cognition, especially perception, robotics, learning and pattern recognition.[186] A number
of researchers began to look into "sub-symbolic" approaches.[187] Robotics researchers, such as
Rodney Brooks, rejected "representation" in general and focussed directly on engineering machines that move and survive.[r] Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[73][192]
But the most important development was the revival of "connectionism", including neural network
research, by Geoffrey Hinton and others.[193] In 1990, Yann LeCun successfully showed that
convolutional neural networks can recognize handwritten digits, the first of many successful
applications of neural networks.[194]
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal
mathematical methods and by finding specific solutions to specific problems. This "narrow" and
"formal" focus allowed researchers to produce verifiable results and collaborate with other fields
(such as statistics, economics and mathematics).[195] By 2000, solutions developed by AI researchers
were being widely used, although in the 1990s they were rarely described as "artificial
intelligence".[196]
Several academic researchers became concerned that AI was no longer pursuing the original goal of
creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of
artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.[8]
Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the
field.[7] For many specific tasks, other methods were abandoned.[s] Deep learning's success was based
on both hardware improvements (faster computers,[198] graphics processing units, cloud
computing[199]) and access to large amounts of data[200] (including curated datasets,[199] such as
ImageNet).
Deep learning's success led to an enormous increase in interest and funding in AI.[t] The amount of
machine learning research (measured by total publications) increased by 50% in the years 2015–
2019,[165] and WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents.[201] According to 'AI Impacts', about $50 billion
annually was invested in "AI" around 2022 in the US alone and about 20% of new US Computer
Science PhD graduates have specialized in "AI";[202] about 800,000 "AI"-related US job openings
existed in 2022.[203]
In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine
learning conferences, publications vastly increased, funding became available, and many researchers
re-focussed their careers on these issues. The alignment problem became a serious field of academic
study.[204]
Philosophy
Alan Turing wrote in 1950, "I propose to consider the question 'Can machines think?'"[205] He advised
changing the question from whether a machine "thinks", to "whether or not it is possible for
machinery to show intelligent behaviour".[205] He devised the Turing test, which measures the ability
of a machine to simulate human conversation.[206] Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people,[u] but "it is usual to have a polite convention that everyone thinks".[207]
Russell and Norvig agree with Turing that AI must be defined in terms of "acting" and not
"thinking".[208] However, they are critical that the test compares machines to people. "Aeronautical
engineering texts," they wrote, "do not define the goal of their field as making 'machines that fly so
exactly like pigeons that they can fool other pigeons.' "[209] AI founder John McCarthy agreed, writing
that "Artificial intelligence is not, by definition, simulation of human intelligence".[210]
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the
world."[211] Another AI founder, Marvin Minsky similarly defines it as "the ability to solve hard
problems".[212] These definitions view intelligence in terms of well-defined problems with well-
defined solutions, where both the difficulty of the problem and the performance of the program are
direct measures of the "intelligence" of the machine—and no other philosophical discussion is
required, or may not even be possible.
Google,[213] a major practitioner in the field of AI, has adopted a similar definition, which treats the ability of systems to synthesize information as the manifestation of intelligence, analogous to the way intelligence is defined in biological systems.
Evaluating approaches to AI
No established unifying theory or paradigm has guided AI research for most of its history.[v] The
unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so
much so that some sources, especially in the business world, use the term "artificial intelligence" to
mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and
narrow (see below). Critics argue that these questions may have to be revisited by future generations
of AI researchers.
Symbolic AI (or "GOFAI")[215] simulated the high-level conscious reasoning that people use when
they solve puzzles, express legal reasoning and do mathematics. They were highly successful at
"intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical
symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of
general intelligent action."[216]
However, the symbolic approach failed on many tasks that humans solve easily, such as learning,
recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level
"intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult.[217]
Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on
unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the
situation, rather than explicit symbolic knowledge.[218] Although his arguments had been ridiculed
and ignored when they were first presented, eventually, AI research came to agree.[w][13]
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes
that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing
research into symbolic AI will still be necessary to attain general intelligence,[220][221] in part because
sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand
why a modern statistical AI program made a particular decision. The emerging field of neuro-
symbolic artificial intelligence attempts to bridge the two approaches.
"Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic,
optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large
number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely
mainly on incremental testing to see if they work. This issue was actively discussed in the 70s and
80s,[222] but eventually was seen as irrelevant. Modern AI has elements of both.
Finding a provably correct or optimal solution is intractable for many important problems.[12] Soft
computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that
are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was
introduced in the late 1980s, and most successful AI programs in the 21st century are examples of soft
computing with neural networks.
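As a concrete (and purely illustrative) instance of a soft computing technique, a toy genetic algorithm can maximize the number of 1-bits in a string: the search is stochastic and approximate rather than provably optimal, exactly the trade-off the paragraph describes. All parameters below are arbitrary choices for the sketch.

```python
import random

# Toy genetic algorithm on the "OneMax" problem: evolve a bit string with
# as many 1s as possible. Selection, crossover, and mutation are deliberately
# the simplest textbook variants; the result is near-optimal, not guaranteed.

def onemax_ga(length=10, pop_size=20, generations=50, mutation=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)       # fitness = number of 1-bits
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):           # bit-flip mutation
                if rng.random() < mutation:
                    child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=sum)

best = onemax_ga()
print(sum(best))  # a high (often maximal) bit count
```

The same skeleton, with a different fitness function, is how genetic algorithms are applied to real optimization problems that tolerate approximate answers.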
AI researchers are divided as to whether to pursue the goals of artificial general intelligence and
superintelligence directly or to solve as many specific problems as possible (narrow AI) in the hope that these
solutions will lead indirectly to the field's long-term goals.[223][224] General intelligence is difficult to
define and difficult to measure, and modern AI has had more verifiable successes by focusing on
specific problems with specific solutions. The experimental sub-field of artificial general intelligence
studies this area exclusively.
It is an open question in the philosophy of mind whether a machine can have a mind, consciousness and
mental states, in the same sense that human beings do. This issue considers the internal experiences
of the machine, rather than its external behavior. Mainstream AI research considers this issue
irrelevant because it does not affect the goals of the field: to build machines that can solve problems
using intelligence. Russell and Norvig add that "[t]he additional project of making a machine
conscious in exactly the way humans are is not one that we are equipped to take on."[225] However,
the question has become central to the philosophy of mind. It is also typically the central question at
issue in artificial intelligence in fiction.
Consciousness
David Chalmers identified two problems in understanding the mind, which he named the "hard" and
"easy" problems of consciousness.[226] The easy problem is understanding how the brain processes
signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it
should feel like anything at all, assuming we are right in thinking that it truly does feel like something
(Dennett's consciousness illusionism says this is an illusion). Human information processing is easy
to explain, but human subjective experience is difficult to explain. For example, it is easy to
imagine a color-blind person who has learned to identify which objects in their field of view are red,
but it is not clear what would be required for the person to know what red looks like.[227]
Computationalism is the position in the philosophy of mind that the human mind is an information
processing system and that thinking is a form of computing. Computationalism argues that the
relationship between mind and body is similar or identical to the relationship between software and
hardware and thus may be a solution to the mind–body problem. This philosophical position was
inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally
proposed by philosophers Jerry Fodor and Hilary Putnam.[228]
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed
computer with the right inputs and outputs would thereby have a mind in exactly the same sense
human beings have minds."[x] Searle counters this assertion with his Chinese room argument, which
attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason
to suppose it also has a mind.[232]
Robot rights
If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel),
and if so it could also suffer; it has been argued that this could entitle it to certain rights.[233] Any
hypothetical robot rights would lie on a spectrum with animal rights and human rights.[234] This issue
has been considered in fiction for centuries,[235] and is now being considered by, for example,
California's Institute for the Future; however, critics argue that the discussion is premature.[236]
Future
A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the
brightest and most gifted human mind.[224]
If research into artificial general intelligence produced sufficiently intelligent software, it might be
able to reprogram and improve itself. The improved software would be even better at improving itself,
leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".
Existential risk
It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This
could, as the physicist Stephen Hawking puts it, "spell the end of the human race".[239] According to
the philosopher Nick Bostrom, for almost any goals that a sufficiently intelligent AI may have, it is
instrumentally incentivized to protect itself from being shut down and to acquire more resources, as
intermediary steps to better achieve these goals. Sentience or emotions are then not required for an
advanced AI to be dangerous. In order to be safe for humanity, a superintelligence would have to be
genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".[240]
The political scientist Charles T. Rubin argued that "any sufficiently advanced benevolence may be
indistinguishable from malevolence" and warned that we should not be confident that intelligent
machines will by default treat us favorably.[241]
Opinions among experts and industry insiders are mixed, with sizable fractions both concerned
and unconcerned about risk from eventual superintelligent AI.[242] Personalities such as Stephen
Hawking, Bill Gates, and Elon Musk have expressed concern about existential risk from AI.[243] In 2023,
AI pioneers including Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, and Sam Altman issued the
joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside
other societal-scale risks such as pandemics and nuclear war"; some others such as Yann LeCun
consider this to be unfounded.[244] Mark Zuckerberg said that AI will "unlock a huge amount of
positive things", including curing diseases and improving the safety of self-driving cars.[245] Some
experts have argued that the risks are too distant in the future to warrant research, or that humans
will be valuable from the perspective of a superintelligent machine.[246] Rodney Brooks, in particular,
said in 2014 that "malevolent" AI is still centuries away.[y]
Transhumanism
Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have
predicted that humans and machines will merge in the future into cyborgs that are more capable and
powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert
Ettinger.[248]
Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first
proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon
by George Dyson in his book of the same name in 1998.[249]
In fiction
Thought-capable artificial beings have appeared as storytelling devices since antiquity,[250] and have
been a persistent theme in science fiction.[251]
A common trope in these works began with Mary Shelley's Frankenstein, where a human creation
becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's
2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship.
Several works use AI to force us to confront the fundamental question of what makes us human,
showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel
Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids
Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human
subjectivity is altered by technology created with artificial intelligence.[255]
See also
AI safety – Research area on making AI safe and beneficial
AI alignment – Conformance to the intended objective
Artificial intelligence in healthcare – Machine-learning algorithms and software in the analysis,
presentation, and comprehension of complex medical and health care data
Artificial intelligence arms race – Arms race for the most advanced AI-related technologies
Artificial intelligence detection software
Behavior selection algorithm – Algorithm that selects actions for intelligent agents
Business process automation – Technology-enabled automation of complex business processes
Case-based reasoning – Process of solving new problems based on the solutions of similar past
problems
Emergent algorithm – Algorithm exhibiting emergent behavior
Female gendering of AI technologies
Glossary of artificial intelligence – List of definitions of terms and concepts commonly used in the
study of artificial intelligence
Operations research – Discipline concerning the application of advanced analytical methods
Robotic process automation – Form of business process automation technology
Synthetic intelligence – Alternate term for or form of artificial intelligence
Universal basic income – Welfare system of unconditional income
Weak artificial intelligence – Form of artificial intelligence
Data sources – The list of data sources for study and research
Explanatory notes
a. This list of intelligent traits is based on the topics covered by the major AI textbooks, including:
Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and
Nilsson (1998)
b. This list of tools is based on the topics covered by the major AI textbooks, including: Russell &
Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson
(1998)
c. Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing
Machinery and Intelligence".[30] In 1956, at the original Dartmouth AI summer conference, Ray
Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive
Inference Machine".[31]
d. See AI winter § Machine translation and the ALPAC report of 1966
e. Compared with symbolic logic, formal Bayesian inference is computationally expensive. For
inference to be tractable, most observations must be conditionally independent of one another.
AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.[75]
f. Expectation-maximization, one of the most popular algorithms in machine learning, allows
clustering in the presence of unknown latent variables.[77]
g. Russell and Norvig suggest the alternative term "computational graphs", that is, an abstract
network (or "graph") where the edges and nodes are assigned numeric values.
h. Some forms of deep neural networks (without a specific learning algorithm) were described by:
Alan Turing (1948);[103] Frank Rosenblatt (1957);[103] Karl Steinbuch and Roger David Joseph
(1961).[104] Deep or recurrent networks that learned (or used gradient descent) were developed
by: Ernst Ising and Wilhelm Lenz (1925);[105] Oliver Selfridge (1959);[104] Alexey Ivakhnenko and
Valentin Lapa (1965);[105] Kaoru Nakano (1977);[106] Shun-Ichi Amari (1972);[106] John Joseph
Hopfield (1982).[106] Backpropagation was independently discovered by: Henry J. Kelley
(1960);[103] Arthur E. Bryson (1962);[103] Stuart Dreyfus (1962);[103] Arthur E. Bryson and Yu-Chi
Ho (1969);[103] Seppo Linnainmaa (1970);[107] Paul Werbos (1974).[103] In fact, backpropagation
and gradient descent are straightforward applications of Gottfried Leibniz's chain rule in calculus
(1676),[108] and are essentially identical (for one layer) to the method of least squares, developed
independently by Johann Carl Friedrich Gauss (1795) and Adrien-Marie Legendre (1805).[109]
There are probably many others, yet to be discovered by historians of science.
i. Geoffrey Hinton said, of his work on neural networks in the 1990s, “our labeled datasets were
thousands of times too small. [And] our computers were millions of times too slow”[110]
j. The Smithsonian reports: "Pluribus has bested poker pros in a series of six-player no-limit Texas
Hold'em games, reaching a milestone in artificial intelligence research. It is the first bot to beat
humans in a complex multiplayer competition."[123]
k. See table 4; 9% is both the OECD average and the US average.[151]
l. "Electronic brain" was the term used by the press around this time.[172]
m. Daniel Crevier wrote, "the conference is generally recognized as the official birthdate of the new
science."[175] Russell and Norvig called the conference "the inception of artificial intelligence."[174]
n. Russell and Norvig wrote "for the next 20 years the field would be dominated by these people and
their students."[176]
o. Russell and Norvig wrote "it was astonishing whenever a computer did anything kind of
smartish".[177]
p. The programs described are Arthur Samuel's checkers program for the IBM 701, Daniel Bobrow's
STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
q. Russell and Norvig write: "in almost all cases, these early systems failed on more difficult
problems"[182]
r. Embodied approaches to AI[188] were championed by Hans Moravec[189] and Rodney Brooks[190]
and went by many names: Nouvelle AI.[190] Developmental robotics,[191]
s. Matteo Wong wrote in The Atlantic: "Whereas for decades, computer-science fields such as
natural-language processing, computer vision, and robotics used extremely different methods,
now they all use a programming method called “deep learning.” As a result, their code and
approaches have become more similar, and their models are easier to integrate into one
another."[197]
t. Jack Clark wrote in Bloomberg: "After a half-decade of quiet breakthroughs in artificial
intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than
ever," and noted that the number of software projects that use machine learning at Google
increased from a "sporadic usage" in 2012 to more than 2,700 projects in 2015.[199]
u. See Problem of other minds
v. Nils Nilsson wrote in 1983: "Simply put, there is wide disagreement in the field about what AI is all
about."[214]
w. Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's
comments. Had he formulated them less aggressively, constructive actions they suggested might
have been taken much earlier."[219]
x. Searle presented this definition of "Strong AI" in 1999.[229] Searle's original formulation was "The
appropriately programmed computer really is a mind, in the sense that computers given the right
programs can be literally said to understand and have other cognitive states."[230] Strong AI is
defined similarly by Russell and Norvig: "Strong AI – the assertion that machines that do so are
actually thinking (as opposed to simulating thinking)."[231]
y. Rodney Brooks writes, "I think it is a mistake to be worrying about us developing malevolent AI
anytime in the next few hundred years. I think the worry stems from a fundamental error in not
distinguishing the difference between the very real recent advances in a particular aspect of AI
and the enormity and complexity of building sentient volitional intelligence."[247]
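Note h's claim that backpropagation is an application of the chain rule and, for a single layer, essentially the method of least squares, can be sketched numerically: gradient descent on squared error for a linear model y ≈ w·x + b converges to the same line as the closed-form least-squares solution. The data below is invented for illustration.

```python
# Toy data, roughly y = 2x + 1 with small noise (values chosen by hand).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 9.0]

# Gradient descent on mean squared error: the gradients dw and db are
# direct applications of the chain rule (the core of backpropagation).
w, b, lr = 0.0, 0.0, 0.02
for _ in range(5000):
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

# Closed-form least squares for comparison.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
w_ls = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b_ls = my - w_ls * mx
print(w, b, w_ls, b_ls)  # the two fits agree to several decimal places
```

With enough iterations and a small enough learning rate, the iterative and closed-form answers coincide, which is the one-layer equivalence the note describes.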
References
1. Google (2016).
2. Dartmouth workshop:
Russell & Norvig (2021, p. 18)
McCorduck (2004, pp. 111–136)
NRC (1999, pp. 200–201)
The proposal:
McCarthy et al. (1955)
3. Successful programs of the 1960s:
McCorduck (2004, pp. 243–252)
Crevier (1993, pp. 52–107)
Moravec (1988, p. 9)
Russell & Norvig (2021, pp. 19–21)
4. Funding initiatives in the early 1980s: Fifth Generation Project (Japan), Alvey (UK), Microelectronics
and Computer Technology Corporation (US), Strategic Computing Initiative (US):
McCorduck (2004, pp. 426–441)
Crevier (1993, pp. 161–162, 197–203, 211, 240)
Russell & Norvig (2021, p. 23)
NRC (1999, pp. 210–211)
Newquist (1994, pp. 235–248)
5. First AI Winter, Lighthill report, Mansfield Amendment
Crevier (1993, pp. 115–117)
Russell & Norvig (2021, pp. 21–22)
NRC (1999, pp. 212–213)
Howe (1994)
Newquist (1994, pp. 189–201)
6. Second AI Winter:
Russell & Norvig (2021, p. 24)
McCorduck (2004, pp. 430–435)
Crevier (1993, pp. 209–210)
NRC (1999, pp. 214–216)
Newquist (1994, pp. 301–318)
7. Deep learning revolution, AlexNet:
Russell & Norvig (2021, p. 26)
McKinsey (2018)
8. Artificial general intelligence:
Russell & Norvig (2021, pp. 32–33, 1020–1021)
Proposal for the modern version:
Pennachin & Goertzel (2007)
Warnings of overspecialization in AI from leading researchers:
Nilsson (1995)
McCarthy (2007)
Beal & Winston (2009)
9. Russell & Norvig (2021, §1.2)
10. Problem solving, puzzle solving, game playing and deduction:
Russell & Norvig (2021, chpt. 3–5)
Russell & Norvig (2021, chpt. 6) (constraint satisfaction)
Poole, Mackworth & Goebel (1998, chpt. 2,3,7,9)
Luger & Stubblefield (2004, chpt. 3,4,6,8)
Nilsson (1998, chpt. 7–12)
25. Default reasoning, Frame problem, default logic, non-monotonic logics, circumscription, closed
world assumption, abduction:
Russell & Norvig (2021, §10.6)
Poole, Mackworth & Goebel (1998, pp. 248–256, 323–335)
Luger & Stubblefield (2004, pp. 335–363)
Nilsson (1998, ~18.3.3)
(Poole et al. place abduction under "default reasoning"; Luger et al. place it under "uncertain
reasoning".)
26. Breadth of commonsense knowledge:
Lenat & Guha (1989, Introduction)
Crevier (1993, pp. 113–114),
Moravec (1988, p. 13),
Russell & Norvig (2021, pp. 241, 385, 982) (qualification problem)
27. Automated planning:
Russell & Norvig (2021, chpt. 11)
28. Automated decision making:
Russell & Norvig (2021, chpt. 16-18)
29. Learning:
Russell & Norvig (2021, chpt. 19–22)
Poole, Mackworth & Goebel (1998, pp. 397–438)
Luger & Stubblefield (2004, pp. 385–542)
Nilsson (1998, chpt. 3.3, 10.3, 17.5, 20)
30. Turing (1950).
31. Solomonoff (1956).
32. Unsupervised learning:
Russell & Norvig (2021, pp. 653) (definition)
Russell & Norvig (2021, pp. 738–740) (cluster analysis)
Russell & Norvig (2021, pp. 846–860) (word embedding)
33. Supervised learning:
Russell & Norvig (2021, §19.2) (Definition)
Russell & Norvig (2021, Chpt. 19-20) (Techniques)
34. Reinforcement learning:
Russell & Norvig (2021, chpt. 22)
Luger & Stubblefield (2004, pp. 442–449)
35. Transfer learning:
Russell & Norvig (2021, pp. 281)
The Economist (2016)
70. Forward chaining, backward chaining, Horn clauses, and logical deduction as search:
Russell & Norvig (2021, §9.3, §9.4)
Poole, Mackworth & Goebel (1998, pp. ~46–52)
Luger & Stubblefield (2004, pp. 62–73)
Nilsson (1998, chpt. 4.2, 7.2)
71. citation in progress
72. Fuzzy logic:
Russell & Norvig (2021, pp. 214, 255, 459)
Scientific American (1999)
73. Stochastic methods for uncertain reasoning:
Russell & Norvig (2021, Chpt. 12-18 and 20),
Poole, Mackworth & Goebel (1998, pp. 345–395),
Luger & Stubblefield (2004, pp. 165–191, 333–381),
Nilsson (1998, chpt. 19)
74. Bayesian networks:
Russell & Norvig (2021, §12.5-12.6, §13.4-13.5, §14.3-14.5, §16.5, §20.2 -20.3),
Poole, Mackworth & Goebel (1998, pp. 361–381),
Luger & Stubblefield (2004, pp. ~182–190, ~363–379),
Nilsson (1998, chpt. 19.3–4)
75. Domingos (2015), chapter 6.
76. Bayesian inference algorithm:
Russell & Norvig (2021, §13.3-13.5),
Poole, Mackworth & Goebel (1998, pp. 361–381),
Luger & Stubblefield (2004, pp. ~363–379),
Nilsson (1998, chpt. 19.4 & 7)
77. Domingos (2015), p. 210.
78. Bayesian learning and the expectation-maximization algorithm:
Russell & Norvig (2021, Chpt. 20),
Poole, Mackworth & Goebel (1998, pp. 424–433),
Nilsson (1998, chpt. 20)
Domingos (2015, p. 210)
79. Bayesian decision theory and Bayesian decision networks:
Russell & Norvig (2021, §16.5)
169. Kasperowicz, Peter (1 May 2023). "Regulate AI? GOP much more skeptical than Dems that
government can do it right: poll" (https://www.foxnews.com/politics/regulate-ai-gop-much-more-sk
eptical-than-dems-that-the-government-can-do-it-right-poll). Fox News. Archived (https://web.archi
ve.org/web/20230619013616/https://www.foxnews.com/politics/regulate-ai-gop-much-more-skepti
cal-than-dems-that-the-government-can-do-it-right-poll) from the original on 19 June 2023.
Retrieved 19 June 2023.
170. "Fox News Poll" (https://static.foxnews.com/foxnews.com/content/uploads/2023/05/Fox_April-21-2
4-2023_Complete_National_Topline_May-1-Release.pdf) (PDF). Fox News. 2023. Archived (http
s://web.archive.org/web/20230512082712/https://static.foxnews.com/foxnews.com/content/upload
s/2023/05/Fox_April-21-24-2023_Complete_National_Topline_May-1-Release.pdf) (PDF) from the
original on 12 May 2023. Retrieved 19 June 2023.
171. Berlinski (2000).
172. "Google books ngram" (https://books.google.com/ngrams/graph?content=electronic+brain&year_s
tart=1930&year_end=2019&corpus=en-2019&smoothing=3).
173. AI's immediate precursors:
McCorduck (2004, pp. 51–107)
Crevier (1993, pp. 27–32)
Russell & Norvig (2021, pp. 8–17)
Moravec (1988, p. 3)
174. Russell & Norvig (2021), p. 17.
175. Crevier (1993), pp. 47–49.
176. Russell & Norvig (2003), p. 17.
177. Russell & Norvig (2003), p. 18.
178. AI heavily funded in the 1960s:
McCorduck (2004, p. 131)
Crevier (1993, pp. 51, 64–65)
NRC (1999, pp. 204–205)
179. Howe (1994).
180. Simon (1965, p. 96) quoted in Crevier (1993, p. 109)
181. Minsky (1967, p. 2) quoted in Crevier (1993, p. 109)
182. Russell & Norvig (2021), p. 21.
183. Lighthill (1973).
184. Russell & Norvig (2021), p. 22.
185. Expert systems:
Russell & Norvig (2021, pp. 23, 292)
Luger & Stubblefield (2004, pp. 227–331)
Nilsson (1998, chpt. 17.4)
McCorduck (2004, pp. 327–335, 434–435)
Crevier (1993, pp. 145–62, 197–203)
Newquist (1994, pp. 155–183)
186. Russell & Norvig (2021), p. 24.
187. Nilsson (1998), p. 7.
188. McCorduck (2004), pp. 454–462.
AI textbooks
The two most widely used textbooks in 2023. (See the Open Syllabus (https://explorer.opensyllabus.o
rg/result/field?id=Computer+Science)).
Russell, Stuart J.; Norvig, Peter. (2021). Artificial Intelligence: A Modern Approach (4th ed.).
Hoboken: Pearson. ISBN 978-0134610993. LCCN 20190474 (https://lccn.loc.gov/20190474).
Rich, Elaine; Knight, Kevin; Nair, Shivashankar B (2010). Artificial Intelligence (3rd ed.). New
Delhi: Tata McGraw Hill India. ISBN 978-0070087705.
These were the four most widely used AI textbooks in 2008:
Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures and Strategies for
Complex Problem Solving (https://archive.org/details/artificialintell0000luge) (5th ed.).
Benjamin/Cummings. ISBN 978-0-8053-4780-7. Archived (https://web.archive.org/web/20200726
Later editions.
Poole, David; Mackworth, Alan (2017). Artificial Intelligence: Foundations of Computational Agents
(http://artint.info/index.html) (2nd ed.). Cambridge University Press. ISBN 978-1-107-19539-4.
Archived (https://web.archive.org/web/20171207013855/http://artint.info/index.html) from the
original on 7 December 2017. Retrieved 6 December 2017.
History of AI
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY:
BasicBooks. ISBN 0-465-02997-3.
McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd.,
ISBN 1-56881-205-1.
Newquist, HP (1994). The Brain Makers: Genius, Ego, And Greed In The Quest For Machines
That Think. New York: Macmillan/SAMS. ISBN 978-0-672-30412-5.
Nilsson, Nils (2009). The Quest for Artificial Intelligence: A History of Ideas and Achievements.
New York: Cambridge University Press. ISBN 978-0-521-12293-1.
Other sources
Schmidhuber, Jürgen (2022). "Annotated History of Modern AI and Deep Learning" (https://peopl
e.idsia.ch/~juergen/).
Chen, Stephen (25 March 2023). "Artificial intelligence, immune to fear or favour, is helping to
make China's foreign policy | South China Morning Post" (https://web.archive.org/web/202303252
24424/https://www.scmp.com/news/china/society/article/2157223/artificial-intelligence-immune-fe
ar-or-favour-helping-make). Archived from the original (https://www.scmp.com/news/china/society/
article/2157223/artificial-intelligence-immune-fear-or-favour-helping-make) on 25 March 2023.
Retrieved 26 March 2023.
Vogels, Emily A. (24 May 2023). "A majority of Americans have heard of ChatGPT, but few have
tried it themselves" (https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans
-have-heard-of-chatgpt-but-few-have-tried-it-themselves/). Pew Research Center. Archived (http
s://web.archive.org/web/20230608181200/https://www.pewresearch.org/short-reads/2023/05/24/a
-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/) from the original
on 8 June 2023. Retrieved 15 June 2023.
Kobielus, James (27 November 2019). "GPUs Continue to Dominate the AI Accelerator Market for
Now" (https://www.informationweek.com/ai-or-machine-learning/gpus-continue-to-dominate-the-ai
08/https://www.theguardian.com/global/2017/mar/14/googles-deepmind-makes-ai-program-that-c
an-learn-like-a-human) from the original on 26 April 2018. Retrieved 26 April 2018.
Heath, Nick (11 December 2020). "What is AI? Everything you need to know about Artificial
Intelligence" (https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial
-intelligence/). ZDNet. Archived (https://web.archive.org/web/20210302205428/https://www.zdnet.
com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence/) from the original
on 2 March 2021. Retrieved 1 March 2021.
Bowling, Michael; Burch, Neil; Johanson, Michael; Tammelin, Oskari (9 January 2015). "Heads-up
limit hold'em poker is solved" (https://www.science.org/doi/10.1126/science.1259433). Science.
347 (6218): 145–149. Bibcode:2015Sci...347..145B (https://ui.adsabs.harvard.edu/abs/2015Sci...
347..145B). doi:10.1126/science.1259433 (https://doi.org/10.1126%2Fscience.1259433).
ISSN 0036-8075 (https://www.worldcat.org/issn/0036-8075). PMID 25574016 (https://pubmed.ncb
i.nlm.nih.gov/25574016). S2CID 3796371 (https://api.semanticscholar.org/CorpusID:3796371).
Archived (https://web.archive.org/web/20220801134446/https://www.science.org/doi/10.1126/scie
nce.1259433) from the original on 1 August 2022. Retrieved 30 June 2022.
Solly, Meilan (15 July 2019). "This Poker-Playing A.I. Knows When to Hold 'Em and When to Fold
'Em" (https://www.smithsonianmag.com/smart-news/poker-playing-ai-knows-when-hold-em-when-
fold-em-180972643/). Smithsonian. Archived (https://web.archive.org/web/20210926070851/http
s://www.smithsonianmag.com/smart-news/poker-playing-ai-knows-when-hold-em-when-fold-em-1
80972643/) from the original on 26 September 2021. Retrieved 1 October 2021.
"Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol" (https://www.bbc.com/new
s/technology-35785875). BBC News. 12 March 2016. Archived (https://web.archive.org/web/2016
0826103910/http://www.bbc.com/news/technology-35785875) from the original on 26 August
2016. Retrieved 1 October 2016.
Rowinski, Dan (15 January 2013). "Virtual Personal Assistants & The Future Of Your Smartphone
[Infographic]" (http://readwrite.com/2013/01/15/virtual-personal-assistants-the-future-of-your-smart
phone-infographic). ReadWrite. Archived (https://web.archive.org/web/20151222083034/http://rea
dwrite.com/2013/01/15/virtual-personal-assistants-the-future-of-your-smartphone-infographic)
from the original on 22 December 2015.
Manyika, James (2022). "Getting AI Right: Introductory Notes on AI & Society" (https://www.amac
ad.org/publication/getting-ai-right-introductory-notes-ai-society). Daedalus. 151 (2): 5–27.
doi:10.1162/daed_e_01897 (https://doi.org/10.1162%2Fdaed_e_01897). S2CID 248377878 (http
s://api.semanticscholar.org/CorpusID:248377878). Archived (https://web.archive.org/web/2022050
5183207/https://www.amacad.org/publication/getting-ai-right-introductory-notes-ai-society) from
the original on 5 May 2022. Retrieved 5 May 2022.
Markoff, John (16 February 2011). "Computer Wins on 'Jeopardy!': Trivial, It's Not" (https://www.ny
times.com/2011/02/17/science/17jeopardy-watson.html). The New York Times. Archived (https://w
eb.archive.org/web/20141022023202/http://www.nytimes.com/2011/02/17/science/17jeopardy-wat
son.html) from the original on 22 October 2014. Retrieved 25 October 2014.
Anadiotis, George (1 October 2020). "The state of AI in 2020: Democratization, industrialization,
and the way to artificial general intelligence" (https://www.zdnet.com/article/the-state-of-ai-in-2020
-democratization-industrialization-and-the-way-to-artificial-general-intelligence/). ZDNet. Archived
(https://web.archive.org/web/20210315103618/https://www.zdnet.com/article/the-state-of-ai-in-202
0-democratization-industrialization-and-the-way-to-artificial-general-intelligence/) from the original
on 15 March 2021. Retrieved 1 March 2021.
Goertzel, Ben; Lian, Ruiting; Arel, Itamar; de Garis, Hugo; Chen, Shuo (December 2010). "A world
survey of artificial brain projects, Part II: Biologically inspired cognitive architectures".
Neurocomputing. 74 (1–3): 30–49. doi:10.1016/j.neucom.2010.08.012 (https://doi.org/10.1016%2
Fj.neucom.2010.08.012).
Robinson, A. J.; Fallside, F. (1987), "The utility driven dynamic error propagation network.",
Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department
Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (https://web.archiv
e.org/web/20150306075401/http://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSc
hmidhuber.pdf) (PDF) (diploma thesis). Munich: Institut f. Informatik, Technische Univ. Archived
from the original (http://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.
pdf) (PDF) on 6 March 2015. Retrieved 16 April 2016.
Williams, R. J.; Zipser, D. (1994), "Gradient-based learning algorithms for recurrent networks and
their computational complexity", Back-propagation: Theory, Architectures and Applications,
Hillsdale, NJ: Erlbaum
Hochreiter, Sepp; Schmidhuber, Jürgen (1997), "Long Short-Term Memory", Neural Computation,
9 (8): 1735–1780, doi:10.1162/neco.1997.9.8.1735 (https://doi.org/10.1162%2Fneco.1997.9.8.173
5), PMID 9377276 (https://pubmed.ncbi.nlm.nih.gov/9377276), S2CID 1915014 (https://api.seman
ticscholar.org/CorpusID:1915014)
Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016), Deep Learning (https://web.archive.org/
web/20160416111010/http://www.deeplearningbook.org/), MIT Press., archived from the original
(http://www.deeplearningbook.org/) on 16 April 2016, retrieved 12 November 2017
Hinton, G.; Deng, L.; Yu, D.; Dahl, G.; Mohamed, A.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen,
P.; Sainath, T.; Kingsbury, B. (2012). "Deep Neural Networks for Acoustic Modeling in Speech
Recognition – The shared views of four research groups". IEEE Signal Processing Magazine. 29
(6): 82–97. Bibcode:2012ISPM...29...82H (https://ui.adsabs.harvard.edu/abs/2012ISPM...29...82
H). doi:10.1109/msp.2012.2205597 (https://doi.org/10.1109%2Fmsp.2012.2205597).
S2CID 206485943 (https://api.semanticscholar.org/CorpusID:206485943).
Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61:
85–117. arXiv:1404.7828 (https://arxiv.org/abs/1404.7828). doi:10.1016/j.neunet.2014.09.003 (htt
ps://doi.org/10.1016%2Fj.neunet.2014.09.003). PMID 25462637 (https://pubmed.ncbi.nlm.nih.go
v/25462637). S2CID 11715509 (https://api.semanticscholar.org/CorpusID:11715509).
Linnainmaa, Seppo (1970). The representation of the cumulative rounding error of an algorithm as
a Taylor expansion of the local rounding errors (Thesis) (in Finnish). Univ. Helsinki, 6–7.
Griewank, Andreas (2012). "Who Invented the Reverse Mode of Differentiation? Optimization
Stories". Documenta Matematica, Extra Volume ISMP: 389–400.
Werbos, Paul (1974). Beyond Regression: New Tools for Prediction and Analysis in the
Behavioral Sciences (Ph.D. thesis). Harvard University.
Werbos, Paul (1982). "Beyond Regression: New Tools for Prediction and Analysis in the
Behavioral Sciences" (https://web.archive.org/web/20160414055503/http://werbos.com/Neural/Se
nsitivityIFIPSeptember1981.pdf) (PDF). System Modeling and Optimization. Applications of
advances in nonlinear sensitivity analysis. Berlin, Heidelberg: Springer. Archived from the original
(http://werbos.com/Neural/SensitivityIFIPSeptember1981.pdf) (PDF) on 14 April 2016. Retrieved
16 April 2016.
"What is 'fuzzy logic'? Are there computers that are inherently fuzzy and do not apply the usual
binary logic?" (https://www.scientificamerican.com/article/what-is-fuzzy-logic-are-t/). Scientific
American. 21 October 1999. Archived (https://web.archive.org/web/20180506035133/https://www.
scientificamerican.com/article/what-is-fuzzy-logic-are-t/) from the original on 6 May 2018.
Retrieved 5 May 2018.
Merkle, Daniel; Middendorf, Martin (2013). "Swarm Intelligence". In Burke, Edmund K.; Kendall,
Graham (eds.). Search Methodologies: Introductory Tutorials in Optimization and Decision
Support Techniques. Springer Science & Business Media. ISBN 978-1-4614-6940-7.
van der Walt, Christiaan; Bernard, Etienne (2006). "Data characteristics that determine classifier
performance" (https://web.archive.org/web/20090325194051/http://www.patternrecognition.co.za/
Müller, Vincent C.; Bostrom, Nick (2014). "Future Progress in Artificial Intelligence: A Poll Among
Experts" (http://www.sophia.de/pdf/2014_PT-AI_polls.pdf) (PDF). AI Matters. 1 (1): 9–11.
doi:10.1145/2639475.2639478 (https://doi.org/10.1145%2F2639475.2639478). S2CID 8510016 (h
ttps://api.semanticscholar.org/CorpusID:8510016). Archived (https://web.archive.org/web/2016011
5114604/http://www.sophia.de/pdf/2014_PT-AI_polls.pdf) (PDF) from the original on 15 January
2016.
Cellan-Jones, Rory (2 December 2014). "Stephen Hawking warns artificial intelligence could end
mankind" (https://www.bbc.com/news/technology-30290540). BBC News. Archived (https://web.ar
chive.org/web/20151030054329/http://www.bbc.com/news/technology-30290540) from the
original on 30 October 2015. Retrieved 30 October 2015.
Rawlinson, Kevin (29 January 2015). "Microsoft's Bill Gates insists AI is a threat" (https://www.bb
c.co.uk/news/31047780). BBC News. Archived (https://web.archive.org/web/20150129183607/htt
p://www.bbc.co.uk/news/31047780) from the original on 29 January 2015. Retrieved 30 January
2015.
Holley, Peter (28 January 2015). "Bill Gates on dangers of artificial intelligence: 'I don't understand
why some people are not concerned' " (https://www.washingtonpost.com/news/the-switch/wp/201
5/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-
concerned/). The Washington Post. ISSN 0190-8286 (https://www.worldcat.org/issn/0190-8286).
Archived (https://web.archive.org/web/20151030054330/https://www.washingtonpost.com/news/th
e-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-
people-are-not-concerned/) from the original on 30 October 2015. Retrieved 30 October 2015.
Gibbs, Samuel (27 October 2014). "Elon Musk: artificial intelligence is our biggest existential
threat" (https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-bi
ggest-existential-threat). The Guardian. Archived (https://web.archive.org/web/20151030054330/h
ttp://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-exis
tential-threat) from the original on 30 October 2015. Retrieved 30 October 2015.
Bostrom, Nick (2015). "What happens when our computers get smarter than we are?" (https://ww
w.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are/t
ranscript). TED (conference). Archived (https://web.archive.org/web/20200725005719/https://ww
w.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are/t
ranscript) from the original on 25 July 2020. Retrieved 30 January 2020.
Thibodeau, Patrick (25 March 2019). "Oracle CEO Mark Hurd sees no reason to fear ERP AI" (htt
ps://searcherp.techtarget.com/news/252460208/Oracle-CEO-Mark-Hurd-sees-no-reason-to-fear-
ERP-AI). SearchERP. Archived (https://web.archive.org/web/20190506173749/https://searcherp.t
echtarget.com/news/252460208/Oracle-CEO-Mark-Hurd-sees-no-reason-to-fear-ERP-AI) from
the original on 6 May 2019. Retrieved 6 May 2019.
Bhardwaj, Prachi (24 May 2018). "Mark Zuckerberg responds to Elon Musk's paranoia about AI:
'AI is going to... help keep our communities safe.' " (https://www.businessinsider.com/mark-zucker
berg-shares-thoughts-elon-musks-ai-2018-5). Business Insider. Archived (https://web.archive.org/
web/20190506173756/https://www.businessinsider.com/mark-zuckerberg-shares-thoughts-elon-m
usks-ai-2018-5) from the original on 6 May 2019. Retrieved 6 May 2019.
Geist, Edward Moore (9 August 2015). "Is artificial intelligence really an existential threat to
humanity?" (http://thebulletin.org/artificial-intelligence-really-existential-threat-humanity8577).
Bulletin of the Atomic Scientists. Archived (https://web.archive.org/web/20151030054330/http://the
bulletin.org/artificial-intelligence-really-existential-threat-humanity8577) from the original on 30
October 2015. Retrieved 30 October 2015.
Madrigal, Alexis C. (27 February 2015). "The case against killer robots, from a guy actually
working on artificial intelligence" (https://www.hrw.org/report/2012/11/19/losing-humanity/case-aga
inst-killer-robots). Fusion.net. Archived (https://web.archive.org/web/20160204175716/http://fusio
Lohr, Steve (28 February 2016). "The Promise of Artificial Intelligence Unfolds in Small Steps" (htt
ps://www.nytimes.com/2016/02/29/technology/the-promise-of-artificial-intelligence-unfolds-in-smal
l-steps.html). The New York Times. Archived (https://web.archive.org/web/20160229171843/http://
www.nytimes.com/2016/02/29/technology/the-promise-of-artificial-intelligence-unfolds-in-small-ste
ps.html) from the original on 29 February 2016. Retrieved 29 February 2016.
Smith, Mark (22 July 2016). "So you think you chose to read this article?" (https://www.bbc.co.uk/n
ews/business-36837824). BBC News. Archived (https://web.archive.org/web/20160725205007/htt
p://www.bbc.co.uk/news/business-36837824) from the original on 25 July 2016.
Aletras, N.; Tsarapatsanis, D.; Preotiuc-Pietro, D.; Lampos, V. (2016). "Predicting judicial
decisions of the European Court of Human Rights: a Natural Language Processing perspective"
(https://doi.org/10.7717%2Fpeerj-cs.93). PeerJ Computer Science. 2: e93. doi:10.7717/peerj-
cs.93 (https://doi.org/10.7717%2Fpeerj-cs.93).
Cadena, Cesar; Carlone, Luca; Carrillo, Henry; Latif, Yasir; Scaramuzza, Davide; Neira, Jose;
Reid, Ian; Leonard, John J. (December 2016). "Past, Present, and Future of Simultaneous
Localization and Mapping: Toward the Robust-Perception Age". IEEE Transactions on Robotics.
32 (6): 1309–1332. arXiv:1606.05830 (https://arxiv.org/abs/1606.05830).
doi:10.1109/TRO.2016.2624754 (https://doi.org/10.1109%2FTRO.2016.2624754).
S2CID 2596787 (https://api.semanticscholar.org/CorpusID:2596787).
Cambria, Erik; White, Bebo (May 2014). "Jumping NLP Curves: A Review of Natural Language
Processing Research [Review Article]". IEEE Computational Intelligence Magazine. 9 (2): 48–57.
doi:10.1109/MCI.2014.2307227 (https://doi.org/10.1109%2FMCI.2014.2307227).
S2CID 206451986 (https://api.semanticscholar.org/CorpusID:206451986).
Vincent, James (7 November 2019). "OpenAI has published the text-generating AI it said was too
dangerous to share" (https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gp
t-2-full-model-release-1-5b-parameters). The Verge. Archived (https://web.archive.org/web/20200
611054114/https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-mo
del-release-1-5b-parameters) from the original on 11 June 2020. Retrieved 11 June 2020.
Jordan, M. I.; Mitchell, T. M. (16 July 2015). "Machine learning: Trends, perspectives, and
prospects". Science. 349 (6245): 255–260. Bibcode:2015Sci...349..255J (https://ui.adsabs.harvar
d.edu/abs/2015Sci...349..255J). doi:10.1126/science.aaa8415 (https://doi.org/10.1126%2Fscienc
e.aaa8415). PMID 26185243 (https://pubmed.ncbi.nlm.nih.gov/26185243). S2CID 677218 (https://
api.semanticscholar.org/CorpusID:677218).
Maschafilm (2010). "Content: Plug & Pray Film – Artificial Intelligence – Robots" (http://www.pluga
ndpray-film.de/en/content.html). plugandpray-film.de. Archived (https://web.archive.org/web/20160
212040134/http://www.plugandpray-film.de/en/content.html) from the original on 12 February
2016.
Evans, Woody (2015). "Posthuman Rights: Dimensions of Transhuman Worlds" (https://doi.org/1
0.5209%2Frev_TK.2015.v12.n2.49072). Teknokultura. 12 (2).
doi:10.5209/rev_TK.2015.v12.n2.49072 (https://doi.org/10.5209%2Frev_TK.2015.v12.n2.49072).
Waddell, Kaveh (2018). "Chatbots Have Entered the Uncanny Valley" (https://www.theatlantic.co
m/technology/archive/2017/04/uncanny-valley-digital-assistants/523806/). The Atlantic. Archived
(https://web.archive.org/web/20180424202350/https://www.theatlantic.com/technology/archive/20
17/04/uncanny-valley-digital-assistants/523806/) from the original on 24 April 2018. Retrieved
24 April 2018.
Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of
affective computing: From unimodal analysis to multimodal fusion" (http://researchrepository.napie
r.ac.uk/Output/1792429). Information Fusion. 37: 98–125. doi:10.1016/j.inffus.2017.02.003 (http
s://doi.org/10.1016%2Fj.inffus.2017.02.003). hdl:1893/25490 (https://hdl.handle.net/1893%2F254
90). S2CID 205433041 (https://api.semanticscholar.org/CorpusID:205433041). Archived (https://w
eb.archive.org/web/20230323165407/https://www.napier.ac.uk/research-and-innovation/research-
search/outputs/a-review-of-affective-computing-from-unimodal-analysis-to-multimodal-fusion) from
the original on 23 March 2023. Retrieved 27 April 2021.
"Robots could demand legal rights" (http://news.bbc.co.uk/2/hi/technology/6200005.stm). BBC
News. 21 December 2006. Archived (https://web.archive.org/web/20191015042628/http://news.bb
c.co.uk/2/hi/technology/6200005.stm) from the original on 15 October 2019. Retrieved 3 February
2011.
Horst, Steven (2005). "The Computational Theory of Mind" (http://plato.stanford.edu/entries/comp
utational-mind). The Stanford Encyclopedia of Philosophy. Archived (https://web.archive.org/web/
20160306083748/http://plato.stanford.edu/entries/computational-mind/) from the original on 6
March 2016. Retrieved 7 March 2016.
Omohundro, Steve (2008). The Nature of Self-Improving Artificial Intelligence. Presented and
distributed at the 2007 Singularity Summit, San Francisco, CA.
Ford, Martin; Colvin, Geoff (6 September 2015). "Will robots create more jobs than they destroy?"
(https://www.theguardian.com/technology/2015/sep/06/will-robots-create-destroy-jobs). The
Guardian. Archived (https://web.archive.org/web/20180616204119/https://www.theguardian.com/t
echnology/2015/sep/06/will-robots-create-destroy-jobs) from the original on 16 June 2018.
Retrieved 13 January 2018.
White Paper: On Artificial Intelligence – A European approach to excellence and trust (https://ec.e
uropa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf) (PDF).
Brussels: European Commission. 2020. Archived (https://web.archive.org/web/20200220173419/h
ttps://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.p
df) (PDF) from the original on 20 February 2020. Retrieved 20 February 2020.
Anderson, Michael; Anderson, Susan Leigh (2011). Machine Ethics. Cambridge University Press.
"Machine Ethics" (https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Symp
osia/Fall/fs05-06). aaai.org. Archived from the original (http://www.aaai.org/Library/Symposia/Fall/f
s05-06) on 29 November 2014.
Russell, Stuart (2019). Human Compatible: Artificial Intelligence and the Problem of Control.
United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322 (https://www.worldcat.org/ocl
c/1083694322).
"AI set to exceed human brain power" (http://www.cnn.com/2006/TECH/science/07/24/ai.bostro
m/). CNN. 9 August 2006. Archived (https://web.archive.org/web/20080219001624/http://www.cn
n.com/2006/TECH/science/07/24/ai.bostrom/) from the original on 19 February 2008.
"Robots could demand legal rights" (http://news.bbc.co.uk/2/hi/technology/6200005.stm). BBC
News. 21 December 2006. Archived (https://web.archive.org/web/20191015042628/http://news.bb
c.co.uk/2/hi/technology/6200005.stm) from the original on 15 October 2019. Retrieved 3 February
2011.
"Kismet" (http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.html). MIT
Artificial Intelligence Laboratory, Humanoid Robotics Group. Archived (https://web.archive.org/we
b/20141017040432/http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.html)
from the original on 17 October 2014. Retrieved 25 October 2014.
Smoliar, Stephen W.; Zhang, HongJiang (1994). "Content based video indexing and retrieval".
IEEE MultiMedia. 1 (2): 62–72. doi:10.1109/93.311653 (https://doi.org/10.1109%2F93.311653).
S2CID 32710913 (https://api.semanticscholar.org/CorpusID:32710913).
Neumann, Bernd; Möller, Ralf (January 2008). "On scene interpretation with description logics".
Image and Vision Computing. 26 (1): 82–101. doi:10.1016/j.imavis.2007.08.013 (https://doi.org/1
0.1016%2Fj.imavis.2007.08.013). S2CID 10767011 (https://api.semanticscholar.org/CorpusID:107
67011).
Kuperman, G. J.; Reichley, R. M.; Bailey, T. C. (1 July 2006). "Using Commercial Knowledge
Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations" (https://www.
Hawkins, Jeff; Blakeslee, Sandra (2005). On Intelligence. New York: Owl Books. ISBN 978-0-
8050-7853-4.
Henderson, Mark (24 April 2007). "Human rights for robots? We're getting carried away" (http://ww
w.thetimes.co.uk/tto/technology/article1966391.ece). The Times Online. London. Archived (https://
web.archive.org/web/20140531104850/http://www.thetimes.co.uk/tto/technology/article1966391.e
ce) from the original on 31 May 2014. Retrieved 31 May 2014.
Kahneman, Daniel; Slovic, D.; Tversky, Amos (1982). Judgment under uncertainty: Heuristics and
biases. Science. Vol. 185. New York: Cambridge University Press. pp. 1124–1131.
Bibcode:1974Sci...185.1124T (https://ui.adsabs.harvard.edu/abs/1974Sci...185.1124T).
doi:10.1126/science.185.4157.1124 (https://doi.org/10.1126%2Fscience.185.4157.1124).
ISBN 978-0-521-28414-1. PMID 17835457 (https://pubmed.ncbi.nlm.nih.gov/17835457).
S2CID 143452957 (https://api.semanticscholar.org/CorpusID:143452957).
Katz, Yarden (1 November 2012). "Noam Chomsky on Where Artificial Intelligence Went Wrong"
(https://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intellig
ence-went-wrong/261637/?single_page=true). The Atlantic. Archived (https://web.archive.org/we
b/20190228154403/https://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-w
here-artificial-intelligence-went-wrong/261637/?single_page=true) from the original on 28
February 2019. Retrieved 26 October 2014.
Kurzweil, Ray (2005). The Singularity is Near. Penguin Books. ISBN 978-0-670-03384-3.
Langley, Pat (2011). "The changing science of machine learning" (https://doi.org/10.1007%2Fs109
94-011-5242-y). Machine Learning. 82 (3): 275–279. doi:10.1007/s10994-011-5242-y (https://doi.o
rg/10.1007%2Fs10994-011-5242-y).
Legg, Shane; Hutter, Marcus (15 June 2007). "A Collection of Definitions of Intelligence".
arXiv:0706.3639 (https://arxiv.org/abs/0706.3639) [cs.AI (https://arxiv.org/archive/cs.AI)].
Lenat, Douglas; Guha, R. V. (1989). Building Large Knowledge-Based Systems. Addison-Wesley.
ISBN 978-0-201-51752-1.
Lighthill, James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper
symposium. Science Research Council.
Lombardo, P; Boehm, I; Nairz, K (2020). "RadioComics – Santa Claus and the future of radiology"
(https://doi.org/10.1016%2Fj.ejrad.2019.108771). Eur J Radiol. 122 (1): 108771.
doi:10.1016/j.ejrad.2019.108771 (https://doi.org/10.1016%2Fj.ejrad.2019.108771).
PMID 31835078 (https://pubmed.ncbi.nlm.nih.gov/31835078).
Lungarella, M.; Metta, G.; Pfeifer, R.; Sandini, G. (2003). "Developmental robotics: a survey".
Connection Science. 15 (4): 151–190. CiteSeerX 10.1.1.83.7615 (https://citeseerx.ist.psu.edu/vie
wdoc/summary?doi=10.1.1.83.7615). doi:10.1080/09540090310001655110 (https://doi.org/10.108
0%2F09540090310001655110). S2CID 1452734 (https://api.semanticscholar.org/CorpusID:14527
34).
Maker, Meg Houston (2006). "AI@50: AI Past, Present, Future" (https://web.archive.org/web/2007
0103222615/http://www.engagingexperience.com/2006/07/ai50_ai_past_pr.html). Dartmouth
College. Archived from the original (http://www.engagingexperience.com/2006/07/ai50_ai_past_p
r.html) on 3 January 2007. Retrieved 16 October 2008.
McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955). "A Proposal for the
Dartmouth Summer Research Project on Artificial Intelligence" (https://web.archive.org/web/20070
826230310/http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html). Archived from
the original (http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html) on 26 August
2007. Retrieved 30 August 2007.
Minsky, Marvin (1967). Computation: Finite and Infinite Machines (https://archive.org/details/comp
utationfinit0000mins). Englewood Cliffs, N.J.: Prentice-Hall. ISBN 978-0-13-165449-5. Archived (ht
tps://web.archive.org/web/20200726131743/https://archive.org/details/computationfinit0000mins)
from the original on 26 July 2020. Retrieved 18 November 2019.
Moravec, Hans (1988). Mind Children (https://archive.org/details/mindchildrenfutu00mora).
Harvard University Press. ISBN 978-0-674-57616-2. Archived (https://web.archive.org/web/20200
726131644/https://archive.org/details/mindchildrenfutu00mora) from the original on 26 July 2020.
Retrieved 18 November 2019.
NRC (United States National Research Council) (1999). "Developments in Artificial Intelligence".
Funding a Revolution: Government Support for Computing Research. National Academy Press.
Newell, Allen; Simon, H. A. (1976). "Computer Science as Empirical Inquiry: Symbols and Search"
(https://doi.org/10.1145%2F360018.360022). Communications of the ACM. 19 (3): 113–126.
doi:10.1145/360018.360022 (https://doi.org/10.1145%2F360018.360022).
Nilsson, Nils (1983). "Artificial Intelligence Prepares for 2001" (https://ai.stanford.edu/~nilsson/Onli
nePubs-Nils/General%20Essays/AIMag04-04-002.pdf) (PDF). AI Magazine. 1 (1). Archived (http
s://web.archive.org/web/20200817194457/http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/Genera
l%20Essays/AIMag04-04-002.pdf) (PDF) from the original on 17 August 2020. Retrieved
22 August 2020. Presidential Address to the Association for the Advancement of Artificial
Intelligence.
Oudeyer, P-Y. (2010). "On the impact of robotics in behavioral and cognitive sciences: from insect
navigation to human cognitive development" (http://www.pyoudeyer.com/IEEETAMDOudeyer10.p
df) (PDF). IEEE Transactions on Autonomous Mental Development. 2 (1): 2–16.
doi:10.1109/tamd.2009.2039057 (https://doi.org/10.1109%2Ftamd.2009.2039057).
S2CID 6362217 (https://api.semanticscholar.org/CorpusID:6362217). Archived (https://web.archiv
e.org/web/20181003202543/http://www.pyoudeyer.com/IEEETAMDOudeyer10.pdf) (PDF) from
the original on 3 October 2018. Retrieved 4 June 2013.
Schank, Roger C. (1991). "Where's the AI" (https://ojs.aaai.org/aimagazine/index.php/aimagazine/
issue/view/94). AI Magazine. Vol. 12, no. 4.
Searle, John (1980). "Minds, Brains and Programs" (http://cogprints.org/7150/1/10.1.1.83.5248.pd
f) (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756 (http
s://doi.org/10.1017%2FS0140525X00005756). S2CID 55303721 (https://api.semanticscholar.org/
CorpusID:55303721). Archived (https://web.archive.org/web/20190317230215/http://cogprints.org/
7150/1/10.1.1.83.5248.pdf) (PDF) from the original on 17 March 2019. Retrieved 22 August 2020.
Searle, John (1999). Mind, language and society (https://archive.org/details/mindlanguagesoci00s
ear). New York: Basic Books. ISBN 978-0-465-04521-1. OCLC 231867665 (https://www.worldcat.
org/oclc/231867665). Archived (https://web.archive.org/web/20200726220615/https://archive.org/
details/mindlanguagesoci00sear) from the original on 26 July 2020. Retrieved 22 August 2020.
Simon, H. A. (1965). The Shape of Automation for Men and Management (https://archive.org/detai
ls/shapeofautomatio00simo). New York: Harper & Row. Archived (https://web.archive.org/web/202
00726131655/https://archive.org/details/shapeofautomatio00simo) from the original on 26 July
2020. Retrieved 18 November 2019.
Solomonoff, Ray (1956). An Inductive Inference Machine (http://world.std.com/~rjs/indinf56.pdf)
(PDF). Dartmouth Summer Research Conference on Artificial Intelligence. Archived (https://web.a
rchive.org/web/20110426161749/http://world.std.com/~rjs/indinf56.pdf) (PDF) from the original on
26 April 2011. Retrieved 22 March 2011 – via std.com, pdf scanned copy of the original. Later
published as
Solomonoff, Ray (1957). "An Inductive Inference Machine". IRE Convention Record. Vol. Section
on Information Theory, part 2. pp. 56–62.
Spadafora, Anthony (21 October 2016). "Stephen Hawking believes AI could be mankind's last
accomplishment" (https://betanews.com/2016/10/21/artificial-intelligence-stephen-hawking/).
BetaNews. Archived (https://web.archive.org/web/20170828183930/https://betanews.com/2016/1
0/21/artificial-intelligence-stephen-hawking/) from the original on 28 August 2017.
Tao, Jianhua; Tan, Tieniu (2005). Affective Computing and Intelligent Interaction. Affective
Computing: A Review. Lecture Notes in Computer Science. Vol. LNCS 3784. Springer. pp. 981–
995. doi:10.1007/11573548 (https://doi.org/10.1007%2F11573548). ISBN 978-3-540-29621-8.
Tecuci, Gheorghe (March–April 2012). "Artificial Intelligence". Wiley Interdisciplinary Reviews:
Computational Statistics. 4 (2): 168–180. doi:10.1002/wics.200 (https://doi.org/10.1002%2Fwics.2
00). S2CID 196141190 (https://api.semanticscholar.org/CorpusID:196141190).
Thro, Ellen (1993). Robotics: The Marriage of Computers and Machines (https://archive.org/detail
s/isbn_9780816026289). New York: Facts on File. ISBN 978-0-8160-2628-9. Archived (https://we
b.archive.org/web/20200726131505/https://archive.org/details/isbn_9780816026289) from the
original on 26 July 2020. Retrieved 22 August 2020.
Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460,
doi:10.1093/mind/LIX.236.433 (https://doi.org/10.1093%2Fmind%2FLIX.236.433), ISSN 0026-
4423 (https://www.worldcat.org/issn/0026-4423).
UNESCO Science Report: the Race Against Time for Smarter Development (https://unesdoc.unes
co.org/ark:/48223/pf0000377433/PDF/377433eng.pdf.multi). Paris: UNESCO. 2021. ISBN 978-
92-3-100450-6. Archived (https://web.archive.org/web/20220618233752/https://unesdoc.unesco.o
rg/ark:/48223/pf0000377433/PDF/377433eng.pdf.multi) from the original on 18 June 2022.
Retrieved 18 September 2021.
Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human
Era" (https://web.archive.org/web/20070101133646/http://www-rohan.sdsu.edu/faculty/vinge/misc/
singularity.html). Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace:
11. Bibcode:1993vise.nasa...11V (https://ui.adsabs.harvard.edu/abs/1993vise.nasa...11V).
Archived from the original (http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html) on 1
January 2007. Retrieved 14 November 2011.
Wason, P. C.; Shapiro, D. (1966). "Reasoning" (https://archive.org/details/newhorizonsinpsy0000f
oss). In Foss, B. M. (ed.). New horizons in psychology. Harmondsworth: Penguin. Archived (http
s://web.archive.org/web/20200726131518/https://archive.org/details/newhorizonsinpsy0000foss)
from the original on 26 July 2020. Retrieved 18 November 2019.
Weng, J.; McClelland; Pentland, A.; Sporns, O.; Stockman, I.; Sur, M.; Thelen, E. (2001).
"Autonomous mental development by robots and animals" (http://www.cse.msu.edu/dl/SciencePa
per.pdf) (PDF). Science. 291 (5504): 599–600. doi:10.1126/science.291.5504.599 (https://doi.org/
10.1126%2Fscience.291.5504.599). PMID 11229402 (https://pubmed.ncbi.nlm.nih.gov/1122940
2). S2CID 54131797 (https://api.semanticscholar.org/CorpusID:54131797). Archived (https://web.
archive.org/web/20130904235242/http://www.cse.msu.edu/dl/SciencePaper.pdf) (PDF) from the
original on 4 September 2013. Retrieved 4 June 2013 – via msu.edu.
AI & ML in Fusion (https://suli.pppl.gov/2023/course/Rea-PPPL-SULI2023.pdf)
AI & ML in Fusion, video lecture (https://drive.google.com/file/d/1npCTrJ8XJn20ZGDA_DfMpANu
QZFMzKPh/view?usp=drive_link) Archived (https://web.archive.org/web/20230702164332/https://
drive.google.com/file/d/1npCTrJ8XJn20ZGDA_DfMpANuQZFMzKPh/view?usp=drive_link) 2 July
2023 at the Wayback Machine
Further reading
Autor, David H., "Why Are There Still So Many Jobs? The History and Future of Workplace
Automation" (2015) 29(3) Journal of Economic Perspectives 3.
Boden, Margaret, Mind As Machine, Oxford University Press, 2006.
Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol.
98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what
might be called "Dyson's Law") that "Any system simple enough to be understandable will not be
complicated enough to behave intelligently, while any system complicated enough to behave
intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland
writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work,
but they work by brute force." (p. 198.)
Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it", Scientific
American, vol. 319, no. 3 (September 2018), pp. 88–93.
Gertner, Jon. (2023) "Wikipedia's Moment of Truth: Can the online encyclopedia help teach A.I.
chatbots to get their facts right — without destroying itself in the process?" New York Times
Magazine (July 18, 2023) online (https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-cha
tgpt.html)
Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT
Press.
Jumper, John; Evans, Richard; Pritzel, Alexander; et al. (26 August 2021). "Highly accurate
protein structure prediction with AlphaFold" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC83716
05). Nature. 596 (7873): 583–589. Bibcode:2021Natur.596..583J (https://ui.adsabs.harvard.edu/a
bs/2021Natur.596..583J). doi:10.1038/s41586-021-03819-2 (https://doi.org/10.1038%2Fs41586-0
21-03819-2). PMC 8371605 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8371605).
PMID 34265844 (https://pubmed.ncbi.nlm.nih.gov/34265844). S2CID 235959867 (https://api.sem
anticscholar.org/CorpusID:235959867).
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (28 May 2015). "Deep learning" (https://www.natu
re.com/articles/nature14539). Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L (http
s://ui.adsabs.harvard.edu/abs/2015Natur.521..436L). doi:10.1038/nature14539 (https://doi.org/10.
1038%2Fnature14539). PMID 26017442 (https://pubmed.ncbi.nlm.nih.gov/26017442). Archived (h
ttps://web.archive.org/web/20230605235832/https://www.nature.com/articles/nature14539) from
the original on 5 June 2023. Retrieved 19 June 2023.
Marcus, Gary, "Artificial Confidence: Even the newest, buzziest systems of artificial general
intelligence are stymied by the same old problems", Scientific American, vol. 327, no. 4 (October
2022), pp. 42–45.
Mitchell, Melanie (2019). Artificial intelligence: a guide for thinking humans. New York: Farrar,
Straus and Giroux. ISBN 9780374257835.
Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; et al. (26 February 2015). "Human-level control through deep reinforcement learning" (https://www.nature.com/articles/nature14236/). Nature. 518 (7540): 529–533. Bibcode:2015Natur.518..529M (https://ui.adsabs.harvard.edu/abs/2015Natur.518..529M). doi:10.1038/nature14236 (https://doi.org/10.1038%2Fnature14236). PMID 25719670 (https://pubmed.ncbi.nlm.nih.gov/25719670). S2CID 205242740 (https://api.semanticscholar.org/CorpusID:205242740). Archived (https://web.archive.org/web/20230619055525/https://www.nature.com/articles/nature14236/) from the original on 19 June 2023. Retrieved 19 June 2023. Introduced DQN, which produced human-level performance on some Atari games.
Eka Roivainen, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence
cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7.
"Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an
understanding of the physical and social world.... ChatGPT seemed unable to reason logically and
tried to rely on its vast database of... facts derived from online texts."
Serenko, Alexander; Michael Dohan (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (http://www.aserenko.com/papers/JOI_AI_Journal_Ranking_Serenko.pdf) (PDF). Journal of Informetrics. 5 (4): 629–49. doi:10.1016/j.joi.2011.06.002 (https://doi.org/10.1016%2Fj.joi.2011.06.002). Archived (https://web.archive.org/web/20131004212839/http://www.aserenko.com/papers/JOI_AI_Journal_Ranking_Serenko.pdf) (PDF) from the original on 4 October 2013. Retrieved 12 September 2013.
Silver, David; Huang, Aja; Maddison, Chris J.; et al. (28 January 2016). "Mastering the game of Go with deep neural networks and tree search" (https://www.nature.com/articles/nature16961). Nature. 529 (7587): 484–489. Bibcode:2016Natur.529..484S (https://ui.adsabs.harvard.edu/abs/2016Natur.529..484S). doi:10.1038/nature16961 (https://doi.org/10.1038%2Fnature16961). PMID 26819042 (https://pubmed.ncbi.nlm.nih.gov/26819042). S2CID 515925 (https://api.semanticscholar.org/CorpusID:515925). Archived (https://web.archive.org/web/20230618213059/https://www.nature.com/articles/nature16961) from the original on 18 June 2023. Retrieved 19 June 2023.
Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; et al. "Attention is all you need." Advances in Neural Information Processing Systems 30 (2017). Seminal paper on transformers.
External links
"Artificial Intelligence" (http://www.iep.utm.edu/art-inte). Internet Encyclopedia of Philosophy.
Thomason, Richmond. "Logic and Artificial Intelligence" (https://plato.stanford.edu/entries/logic-ai/). In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Artificial Intelligence (https://www.bbc.co.uk/programmes/p003k9fc). BBC Radio 4 discussion with
John Agar, Alison Adam & Igor Aleksander (In Our Time, 8 December 2005).
Theranostics and AI—The Next Advance in Cancer Precision Medicine (https://datascience.cancer.gov/news-events/blog/theranostics-and-ai-next-advance-cancer-precision-medicine).