Artificial intelligence
From Wikipedia, the free encyclopedia
"AI" redirects here. For other uses, see AI (disambiguation) and Artificial
intelligence (disambiguation).

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed
to the natural intelligence displayed by animals including humans. AI research has
been defined as the field of study of intelligent agents, which refers to any
system that perceives its environment and takes actions that maximize its chance of
achieving its goals.[a]

The term "artificial intelligence" had previously been used to describe machines
that mimic and display "human" cognitive skills that are associated with the human
mind, such as "learning" and "problem-solving". This definition has since been
rejected by major AI researchers who now describe AI in terms of rationality and
acting rationally, which does not limit how intelligence can be articulated.[b]

AI applications include advanced web search engines (e.g., Google), recommendation
systems (used by YouTube, Amazon and Netflix), understanding human speech (such as
Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and
competing at the highest level in strategic game systems (such as chess and Go).[2]
As machines become increasingly capable, tasks considered to require "intelligence"
are often removed from the definition of AI, a phenomenon known as the AI effect.
[3] For instance, optical character recognition is frequently excluded from things
considered to be AI,[4] having become a routine technology.[5]

Artificial intelligence was founded as an academic discipline in 1956, and in the
years since has experienced several waves of optimism,[6][7] followed by
disappointment and the loss of funding (known as an "AI winter"),[8][9] followed by
new approaches, success and renewed funding.[7][10] AI research has tried and
discarded many different approaches since its founding, including simulating the
brain, modeling human problem solving, formal logic, large databases of knowledge
and imitating animal behavior. In the first decades of the 21st century, highly
mathematical-statistical machine learning has dominated the field, and this
technique has proved highly successful, helping to solve many challenging problems
throughout industry and academia.[10][11]

The various sub-fields of AI research are centered around particular goals and the
use of particular tools. The traditional goals of AI research include reasoning,
knowledge representation, planning, learning, natural language processing,
perception, and the ability to move and manipulate objects.[c] General intelligence
(the ability to solve an arbitrary problem) is among the field's long-term goals.
[12] To solve these problems, AI researchers have adapted and integrated a wide
range of problem-solving techniques—including search and mathematical optimization,
formal logic, artificial neural networks, and methods based on statistics,
probability and economics. AI also draws upon computer science, psychology,
linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so
precisely described that a machine can be made to simulate it".[d] This raised
philosophical arguments about the mind and the ethical consequences of creating
artificial beings endowed with human-like intelligence; these issues have
previously been explored by myth, fiction and philosophy since antiquity.[14]
Computer scientists and philosophers have since suggested that AI may become an
existential risk to humanity if its rational capacities are not steered towards
beneficial goals.[e]

Contents
1 History
1.1 Fictions and early concepts
1.2 Early research
1.3 From expert systems to machine learning
2 Goals
2.1 Reasoning, problem-solving
2.2 Knowledge representation
2.3 Planning
2.4 Learning
2.5 Natural language processing
2.6 Perception
2.7 Motion and manipulation
2.8 Social intelligence
2.9 General intelligence
3 Tools
3.1 Search and optimization
3.2 Logic
3.3 Probabilistic methods for uncertain reasoning
3.4 Classifiers and statistical learning methods
3.5 Artificial neural networks
3.5.1 Deep learning
3.6 Specialized languages and hardware
4 Applications
4.1 Legal aspects
5 Philosophy
5.1 Defining artificial intelligence
5.1.1 Thinking vs. acting: the Turing test
5.1.2 Acting humanly vs. acting intelligently: intelligent agents
5.2 Evaluating approaches to AI
5.2.1 Symbolic AI and its limits
5.2.2 Neat vs. scruffy
5.2.3 Soft vs. hard computing
5.2.4 Narrow vs. general AI
5.3 Machine consciousness, sentience and mind
5.3.1 Consciousness
5.3.2 Computationalism and functionalism
5.3.3 Robot rights
6 Future
6.1 Superintelligence
6.2 Risks
6.2.1 Technological unemployment
6.2.2 Bad actors and weaponized AI
6.2.3 Algorithmic bias
6.2.4 Existential risk
6.3 Ethical machines
6.4 Regulation
7 In fiction
8 Scientific diplomacy
8.1 Warfare
8.1.1 Russo-Ukrainian War
8.1.2 Warfare regulations
8.2 Cybersecurity
8.2.1 Czech Republic's approach
8.2.2 Germany's approach
8.2.3 European Union's approach
8.2.4 Russo-Ukrainian War
8.3 Election security
8.4 Future of work
8.4.1 Facial recognition
8.4.2 AI and school
8.4.3 AI and medicine
8.4.4 AI in business
8.4.5 Business and diplomacy
8.5 AI and foreign policy
9 See also
10 Explanatory notes
11 Citations
12 References
12.1 AI textbooks
12.2 History of AI
12.3 Other sources
13 Further reading
14 External links
History
Main articles: History of artificial intelligence and Timeline of artificial
intelligence
Fictions and early concepts

Silver didrachma from Crete depicting Talos, an ancient mythical automaton with
artificial intelligence
Artificial beings with intelligence appeared as storytelling devices in antiquity,
[15] and have been common in fiction, as in Mary Shelley's Frankenstein or Karel
Čapek's R.U.R.[16] These characters and their fates raised many of the same issues
now discussed in the ethics of artificial intelligence.[17]

The study of mechanical or "formal" reasoning began with philosophers and
mathematicians in antiquity. The study of mathematical logic led directly to Alan
Turing's theory of computation, which suggested that a machine, by shuffling
symbols as simple as "0" and "1", could simulate any conceivable act of
mathematical deduction. This insight that digital computers can simulate any
process of formal reasoning is known as the Church–Turing thesis.[18]

The Church–Turing thesis, along with concurrent discoveries in neurobiology,
information theory and cybernetics, led researchers to consider the possibility of
building an electronic brain.[19] The first work that is now generally recognized
as AI was McCullouch and Pitts' 1943 formal design for Turing-complete "artificial
neurons".[20]

Early research
By the 1950s, two visions for how to achieve machine intelligence emerged. One
vision, known as Symbolic AI or GOFAI, was to use computers to create a symbolic
representation of the world and systems that could reason about the world.
Proponents included Allen Newell, Herbert A. Simon, and Marvin Minsky. Closely
associated with this approach was the "heuristic search" approach, which likened
intelligence to a problem of exploring a space of possibilities for answers. The
second vision, known as the connectionist approach, sought to achieve intelligence
through learning. Proponents of this approach, most prominently Frank Rosenblatt,
sought to connect perceptrons in ways inspired by the connections between neurons.[21]
James Manyika and others have compared the two approaches to the mind (Symbolic AI) and
the brain (connectionist). Manyika argues that symbolic approaches dominated the
push for artificial intelligence in this period, due in part to their connection to
the intellectual traditions of Descartes, Boole, Gottlob Frege, Bertrand Russell, and
others. Connectionist approaches based on cybernetics or artificial neural networks
were pushed to the background but have gained new prominence in recent decades.[22]

The field of AI research was born at a workshop at Dartmouth College in 1956.[f][25]
The attendees became the founders and leaders of AI research.[g] They and
their students produced programs that the press described as "astonishing":[h]
computers were learning checkers strategies, solving word problems in algebra,
proving logical theorems and speaking English.[i][27] By the middle of the 1960s,
research in the U.S. was heavily funded by the Department of Defense[28] and
laboratories had been established around the world.[29]

Researchers in the 1960s and the 1970s were convinced that symbolic approaches
would eventually succeed in creating a machine with artificial general intelligence
and considered this the goal of their field.[30] Herbert Simon predicted, "machines
will be capable, within twenty years, of doing any work a man can do".[31] Marvin
Minsky agreed, writing, "within a generation ... the problem of creating
'artificial intelligence' will substantially be solved".[32]

They failed to recognize the difficulty of some of the remaining tasks. Progress
slowed and in 1974, in response to the criticism of Sir James Lighthill[33] and
ongoing pressure from the US Congress to fund more productive projects, both the
U.S. and British governments cut off exploratory research in AI. The next few years
would later be called an "AI winter", a period when obtaining funding for AI
projects was difficult.[8]

From expert systems to machine learning
In the early 1980s, AI research was revived by the commercial success of expert
In the early 1980s, AI research was revived by the commercial success of expert
systems,[34] a form of AI program that simulated the knowledge and analytical
skills of human experts. By 1985, the market for AI had reached over a billion
dollars. At the same time, Japan's fifth generation computer project inspired the
U.S. and British governments to restore funding for academic research.[7] However,
beginning with the collapse of the Lisp Machine market in 1987, AI once again fell
into disrepute, and a second, longer-lasting winter began.[9]

Many researchers began to doubt that the symbolic approach would be able to imitate
all the processes of human cognition, especially perception, robotics, learning and
pattern recognition. A number of researchers began to look into "sub-symbolic"
approaches to specific AI problems.[35] Robotics researchers, such as Rodney
Brooks, rejected symbolic AI and focused on the basic engineering problems that
would allow robots to move, survive, and learn their environment.[j] Interest in
neural networks and "connectionism" was revived by Geoffrey Hinton, David Rumelhart
and others in the middle of the 1980s.[40] Soft computing tools were developed in
the 80s, such as neural networks, fuzzy systems, Grey system theory, evolutionary
computation and many tools drawn from statistics or mathematical optimization.

AI gradually restored its reputation in the late 1990s and early 21st century by
finding specific solutions to specific problems. The narrow focus allowed
researchers to produce verifiable results, exploit more mathematical methods, and
collaborate with other fields (such as statistics, economics and mathematics).[41]
By 2000, solutions developed by AI researchers were being widely used, although in
the 1990s they were rarely described as "artificial intelligence".[11]

Faster computers, algorithmic improvements, and access to large amounts of data
enabled advances in machine learning and perception; data-hungry deep learning
methods started to dominate accuracy benchmarks around 2012.[42] According to
Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with
the number of software projects that use AI within Google increasing from
"sporadic usage" in 2012 to more than 2,700 projects.[k] He attributes this to an
increase in affordable neural networks, due to a rise in cloud computing
infrastructure and to an increase in research tools and datasets.[10] In a 2017
survey, one in five companies reported they had "incorporated AI in some offerings
or processes".[43] The amount of research into AI (measured by total publications)
increased by 50% in the years 2015–2019.[44]

Numerous academic researchers became concerned that AI was no longer pursuing the
original goal of creating versatile, fully intelligent machines. Much of current
research involves statistical AI, which is overwhelmingly used to solve specific
problems, even highly successful techniques such as deep learning. This concern has
led to the subfield of artificial general intelligence (or "AGI"), which had
several well-funded institutions by the 2010s.[12]

Goals
The general problem of simulating (or creating) intelligence has been broken down
into sub-problems. These consist of particular traits or capabilities that
researchers expect an intelligent system to display. The traits described below
have received the most attention.[c]

Reasoning, problem-solving
Early researchers developed algorithms that imitated step-by-step reasoning that
humans use when they solve puzzles or make logical deductions.[45] By the late
1980s and 1990s, AI research had developed methods for dealing with uncertain or
incomplete information, employing concepts from probability and economics.[46]

Many of these algorithms proved to be insufficient for solving large reasoning
problems because they experienced a "combinatorial explosion": they became
exponentially slower as the problems grew larger.[47] Even humans rarely use the
step-by-step deduction that early AI research could model. They solve most of their
problems using fast, intuitive judgments.[48]

Knowledge representation
Main articles: Knowledge representation, Commonsense knowledge, Description logic,
and Ontology

An ontology represents knowledge as a set of concepts within a domain and the
relationships between those concepts.
Knowledge representation and knowledge engineering[49] allow AI programs to answer
questions intelligently and make deductions about real-world facts.

A representation of "what exists" is an ontology: the set of objects, relations,
concepts, and properties formally described so that software agents can interpret
them.[50] The most general ontologies are called upper ontologies, which attempt to
provide a foundation for all other knowledge and act as mediators between domain
ontologies that cover specific knowledge about a particular knowledge domain (field
of interest or area of concern). A truly intelligent program would also need access
to commonsense knowledge: the set of facts that an average person knows. The
semantics of an ontology is typically represented in description logic, such as the
Web Ontology Language.[51]

AI research has developed tools to represent specific domains, such as objects,
properties, categories and relations between objects;[51] situations, events,
states and time;[52] causes and effects;[53] knowledge about knowledge (what we
know about what other people know);[54] default reasoning (things that humans
assume are true until they are told differently and will remain true even when
other facts are changing);[55] as well as other domains. Among the most difficult
problems in AI are: the breadth of commonsense knowledge (the number of atomic
facts that the average person knows is enormous);[56] and the sub-symbolic form of
most commonsense knowledge (much of what people know is not represented as "facts"
or "statements" that they could express verbally).[48]

Formal knowledge representations are used in content-based indexing and retrieval,[57]
scene interpretation,[58] clinical decision support,[59] knowledge discovery
(mining "interesting" and actionable inferences from large databases),[60] and
other areas.[61]

Planning
Main article: Automated planning and scheduling
An intelligent agent that can plan makes a representation of the state of the
world, makes predictions about how its actions will change it, and makes choices
that maximize the utility (or "value") of the available choices.[62] In classical
planning problems, the agent can assume that it is the only system acting in the
world, allowing the agent to be certain of the consequences of its actions.[63]
However, if the agent is not the only actor, then it must reason under uncertainty,
and continuously re-assess its environment and adapt.[64] Multi-
agent planning uses the cooperation and competition of many agents to achieve a
given goal. Emergent behavior such as this is used by evolutionary algorithms and
swarm intelligence.[65]
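
A minimal sketch of classical planning, under the assumption that the agent is the
only actor so every action has a certain outcome: the toy states and actions below
are invented, and the planner simply searches for a sequence of actions that
reaches the goal.

    from collections import deque

    # Toy deterministic planning problem (invented): a state is
    # (robot location, has key); each action maps a state to a new state.
    def available_actions(state):
        loc, has_key = state
        moves = []
        if loc == "hall":
            moves += [("go to office", ("office", has_key)),
                      ("go to lab", ("lab", has_key))]
        else:
            moves += [("go to hall", ("hall", has_key))]
        if loc == "office" and not has_key:
            moves += [("pick up key", ("office", True))]
        return moves

    def plan(start, is_goal):
        """Breadth-first search over action sequences (classical planning sketch)."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, steps = frontier.popleft()
            if is_goal(state):
                return steps
            for name, nxt in available_actions(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
        return None

    print(plan(("hall", False), lambda s: s == ("lab", True)))
    # ['go to office', 'pick up key', 'go to hall', 'go to lab']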

Learning
Main article: Machine learning
Machine learning (ML), a fundamental concept of AI research since the field's
inception,[l] is the study of computer algorithms that improve automatically
through experience.[m]

Unsupervised learning finds patterns in a stream of input. Supervised learning
requires a human to label the input data first, and comes in two main varieties:
classification and numerical regression. Classification is used to determine what
category something belongs in—the program sees a number of examples of things from
several categories and will learn to classify new inputs. Regression is the attempt
to produce a function that describes the relationship between inputs and outputs
and predicts how the outputs should change as the inputs change. Both classifiers
and regression learners can be viewed as "function approximators" trying to learn
an unknown (possibly implicit) function; for example, a spam classifier can be
viewed as learning a function that maps from the text of an email to one of two
categories, "spam" or "not spam".[69] In reinforcement learning the agent is
rewarded for good responses and punished for bad ones. The agent classifies its
responses to form a strategy for operating in its problem space.[70] Transfer
learning is when the knowledge gained from one problem is applied to a new problem.
[71]
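
As an illustrative sketch of a classifier as a learned function from inputs to
categories, the snippet below trains a toy spam filter; it assumes the scikit-learn
library is available, and the example e-mails and labels are invented.

    # Assumes scikit-learn is installed; the e-mails and labels are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = ["win money now", "cheap pills offer", "meeting at noon",
              "lunch tomorrow?", "win a free offer", "project status meeting"]
    labels = ["spam", "spam", "not spam", "not spam", "spam", "not spam"]

    vectorizer = CountVectorizer()               # turn text into word-count features
    X = vectorizer.fit_transform(emails)
    classifier = MultinomialNB().fit(X, labels)  # learn the text -> label mapping

    print(classifier.predict(vectorizer.transform(["free money offer"])))
    # expected to print ['spam'] for this toy training set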

Computational learning theory can assess learners by computational complexity, by
sample complexity (how much data is required), or by other notions of optimization.[72]

Natural language processing
Main article: Natural language processing

A parse tree represents the syntactic structure of a sentence according to some
formal grammar.
Natural language processing (NLP)[73] allows machines to read and understand human
language. A sufficiently powerful natural language processing system would enable
natural-language user interfaces and the acquisition of knowledge directly from
human-written sources, such as newswire texts. Some straightforward applications of
NLP include information retrieval, question answering and machine translation.[74]

Symbolic AI used formal syntax to translate the deep structure of sentences into
logic. This failed to produce useful applications, due to the intractability of
logic[47] and the breadth of commonsense knowledge.[56] Modern statistical
techniques include co-occurrence frequencies (how often one word appears near
another), "Keyword spotting" (searching for a particular word to retrieve
information), transformer-based deep learning (which finds patterns in text), and
others.[75] They have achieved acceptable accuracy at the page or paragraph level,
and, by 2019, could generate coherent text.[76]
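
A minimal sketch of one such statistical technique, counting word co-occurrence
within a fixed window over an invented two-sentence corpus (real systems estimate
these statistics from very large text collections):

    from collections import Counter

    corpus = ["the cat sat on the mat", "the dog sat on the rug"]  # invented corpus
    window = 2            # count words appearing within 2 positions of each other
    cooccur = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    cooccur[(w, words[j])] += 1

    print(cooccur[("sat", "on")])  # 2: "on" appears near "sat" in both sentences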

Perception
Main articles: Machine perception, Computer vision, and Speech recognition

Feature detection (pictured: edge detection) helps AI compose informative abstract
structures out of raw data.
Machine perception[77] is the ability to use input from sensors (such as cameras,
microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors)
to deduce aspects of the world. Applications include speech recognition,[78] facial
recognition, and object recognition.[79] Computer vision is the ability to analyze
visual input.[80]

Motion and manipulation
Main article: Robotics
AI is heavily used in robotics.[81] Localization is how a robot knows its location
and maps its environment. When given a small, static, and visible environment, this
is easy; however, dynamic environments, such as (in endoscopy) the interior of a
patient's breathing body, pose a greater challenge.[82]

Motion planning is the process of breaking down a movement task into "primitives"
such as individual joint movements. Such movement often involves compliant motion,
a process where movement requires maintaining physical contact with an object.
Robots can learn from experience how to move efficiently despite the presence of
friction and gear slippage.[83]

Social intelligence
Main article: Affective computing

Kismet, a robot with rudimentary social skills[84]
Affective computing is an interdisciplinary umbrella that comprises systems that
recognize, interpret, process or simulate human feeling, emotion and mood.[85] For
example, some virtual assistants are programmed to speak conversationally or even
to banter humorously; this makes them appear more sensitive to the emotional dynamics
of human interaction, or otherwise facilitates human–computer interaction.
However, this tends to give naïve users an unrealistic conception of how
intelligent existing computer agents actually are.[86] Moderate successes related
to affective computing include textual sentiment analysis and, more recently,
multimodal sentiment analysis, wherein AI classifies the affects displayed by a
videotaped subject.[87]

General intelligence
Main article: Artificial general intelligence
A machine with general intelligence can solve a wide variety of problems with
breadth and versatility similar to human intelligence. There are several competing
ideas about how to develop artificial general intelligence. Hans Moravec and Marvin
Minsky argue that work in different individual domains can be incorporated into an
advanced multi-agent system or cognitive architecture with general intelligence.
[88] Pedro Domingos hopes that there is a conceptually straightforward, but
mathematically difficult, "master algorithm" that could lead to AGI.[89] Others
believe that anthropomorphic features like an artificial brain[90] or simulated
child development[n] will someday reach a critical point where general intelligence
emerges.

Tools
Search and optimization
Main articles: Search algorithm, Mathematical optimization, and Evolutionary
computation
Many problems in AI can be solved theoretically by intelligently searching through
many possible solutions:[91] Reasoning can be reduced to performing a search. For
example, logical proof can be viewed as searching for a path that leads from
premises to conclusions, where each step is the application of an inference rule.
[92] Planning algorithms search through trees of goals and subgoals, attempting to
find a path to a target goal, a process called means-ends analysis.[93] Robotics
algorithms for moving limbs and grasping objects use local searches in
configuration space.[94]

Simple exhaustive searches[95] are rarely sufficient for most real-world problems:
the search space (the number of places to search) quickly grows to astronomical
numbers. The result is a search that is too slow or never completes. The solution,
for many problems, is to use "heuristics" or "rules of thumb" that prioritize
choices in favor of those more likely to reach a goal and to do so in a shorter
number of steps. In some search methodologies, heuristics can also serve to
eliminate some choices unlikely to lead to a goal (called "pruning the search
tree"). Heuristics supply the program with a "best guess" for the path on which the
solution lies.[96] Heuristics limit the search for solutions to a smaller subset of
candidates.[97]
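
A minimal sketch of heuristic search on an invented graph: the heuristic h supplies
the "best guess" of the remaining cost to the goal, and the search expands the most
promising node first (an A*-style procedure).

    import heapq

    # Invented graph and heuristic h (an estimate of the remaining cost to "D").
    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
             "C": [("D", 1)], "D": []}
    h = {"A": 3, "B": 2, "C": 1, "D": 0}

    def heuristic_search(start, goal):
        # frontier entries: (estimated total cost, cost so far, node, path)
        frontier = [(h[start], 0, start, [start])]
        best = {start: 0}
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            for nxt, step in graph[node]:
                new_cost = cost + step
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(frontier,
                                   (new_cost + h[nxt], new_cost, nxt, path + [nxt]))
        return None

    print(heuristic_search("A", "D"))  # (3, ['A', 'B', 'C', 'D'])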

A particle swarm seeking the global minimum
A very different kind of search came to prominence in the 1990s, based on the
mathematical theory of optimization. For many problems, it is possible to begin the
search with some form of a guess and then refine the guess incrementally until no
more refinements can be made. These algorithms can be visualized as blind hill
climbing: we begin the search at a random point on the landscape, and then, by
jumps or steps, we keep moving our guess uphill, until we reach the top. Other
related optimization algorithms include random optimization, beam search and
metaheuristics like simulated annealing.[98] Evolutionary computation uses a form
of optimization search. For example, evolutionary algorithms may begin with a
population of organisms (the guesses) and then allow them to mutate and recombine,
selecting only the
fittest to survive each generation (refining the guesses). Classic evolutionary
algorithms include genetic algorithms, gene expression programming, and genetic
programming.[99] Alternatively, distributed search processes can coordinate via
swarm intelligence algorithms. Two popular swarm algorithms used in search are
particle swarm optimization (inspired by bird flocking) and ant colony optimization
(inspired by ant trails).[100]
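
A minimal sketch of local search by hill climbing on an invented one-dimensional
objective function: the guess is repeatedly nudged and kept only when it improves.

    import random

    def objective(x):
        return -(x - 3.0) ** 2    # invented objective with a single peak at x = 3

    def hill_climb(start, step=0.1, iterations=10000):
        x = start
        for _ in range(iterations):
            candidate = x + random.uniform(-step, step)  # small random move
            if objective(candidate) > objective(x):      # keep only improvements
                x = candidate
        return x

    print(round(hill_climb(random.uniform(-10.0, 10.0)), 2))  # approaches 3.0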

Logic
Main articles: Logic programming and Automated reasoning
Logic[101] is used for knowledge representation and problem-solving, but it can be
applied to other problems as well. For example, the satplan algorithm uses logic
for planning[102] and inductive logic programming is a method for learning.[103]

Several different forms of logic are used in AI research. Propositional logic[104]
involves truth functions such as "or" and "not". First-order logic[105] adds
quantifiers and predicates and can express facts about objects, their properties,
and their relations with each other. Fuzzy logic assigns a "degree of truth"
(between 0 and 1) to vague statements such as "Alice is old" (or rich, or tall, or
hungry), that are too linguistically imprecise to be completely true or false.[106]
Default logics, non-monotonic logics and circumscription are forms of logic
designed to help with default reasoning and the qualification problem.[55] Several
extensions of logic have been designed to handle specific domains of knowledge,
such as description logics;[51] situation calculus, event calculus and fluent
calculus (for representing events and time);[52] causal calculus;[53] belief
calculus (belief revision); and modal logics.[54] Logics to model contradictory or
inconsistent statements arising in multi-agent systems have also been designed,
such as paraconsistent logics.[107]
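
A minimal sketch of fuzzy logic: the membership function below, which assigns
"Alice is old" a degree of truth between 0 and 1, is an arbitrary illustrative
choice, as is the use of min and max for conjunction and disjunction.

    # The membership function and the min/max operators are illustrative choices.
    def degree_old(age):
        if age <= 40:
            return 0.0
        if age >= 80:
            return 1.0
        return (age - 40) / 40.0      # linear ramp between 40 and 80

    def fuzzy_and(a, b):
        return min(a, b)

    def fuzzy_or(a, b):
        return max(a, b)

    print(degree_old(65))                             # 0.625
    print(fuzzy_and(degree_old(65), degree_old(35)))  # 0.0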

Probabilistic methods for uncertain reasoning
Main articles: Bayesian network, Hidden Markov model, Kalman filter, Particle
filter, Decision theory, and Utility theory

Expectation-maximization clustering of Old Faithful eruption data starts from a
random guess but then successfully converges on an accurate clustering of the two
physically distinct modes of eruption.
Many problems in AI (including in reasoning, planning, learning, perception, and
robotics) require the agent to operate with incomplete or uncertain information. AI
researchers have devised a number of tools to solve these problems using methods
from probability theory and economics.[108] Bayesian networks[109] are a very
general tool that can be used for various problems, including reasoning (using the
Bayesian inference algorithm),[o][111] learning (using the expectation-maximization
algorithm),[p][113] planning (using decision networks)[114] and perception (using
dynamic Bayesian networks).[115] Probabilistic algorithms can also be used for
filtering, prediction, smoothing and finding explanations for streams of data,
helping perception systems to analyze processes that occur over time (e.g., hidden
Markov models or Kalman filters).[115]
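
A minimal sketch of exact inference in a two-variable Bayesian network by
enumeration; the network (Rain influencing WetGrass) and its probabilities are
invented.

    # Invented two-node network: Rain -> WetGrass, with made-up probabilities.
    P_rain = {True: 0.2, False: 0.8}
    P_wet_given_rain = {True: 0.9, False: 0.1}      # P(WetGrass=True | Rain)

    def posterior_rain_given_wet():
        """P(Rain=True | WetGrass=True) by enumerating the joint distribution."""
        joint = {r: P_rain[r] * P_wet_given_rain[r] for r in (True, False)}
        evidence = sum(joint.values())              # P(WetGrass=True)
        return joint[True] / evidence

    print(round(posterior_rain_given_wet(), 3))     # 0.692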

A key concept from the science of economics is "utility", a measure of how valuable
something is to an intelligent agent. Precise mathematical tools have been
developed that analyze how an agent can make choices and plan, using decision
theory, decision analysis,[116] and information value theory.[117] These tools
include models such as Markov decision processes,[118] dynamic decision networks,
[115] game theory and mechanism design.[119]
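
A minimal sketch of utility-based planning: value iteration on an invented
two-state Markov decision process, where repeated Bellman updates converge to the
utility of each state.

    # Invented two-state, two-action Markov decision process.
    # transitions[state][action] = list of (probability, next state, reward).
    transitions = {
        "idle":    {"wait": [(1.0, "idle", 0.0)],
                    "work": [(0.8, "working", 1.0), (0.2, "idle", 0.0)]},
        "working": {"wait": [(1.0, "idle", 0.0)],
                    "work": [(1.0, "working", 2.0)]},
    }
    gamma = 0.9                                     # discount factor

    V = {s: 0.0 for s in transitions}               # initial utility estimates
    for _ in range(200):                            # repeated Bellman updates
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in transitions[s].values())
             for s in transitions}

    print({s: round(v, 2) for s, v in V.items()})
    # approximately {'idle': 18.54, 'working': 20.0}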

Classifiers and statistical learning methods
Main articles: Classifier (mathematics), Statistical classification, and Machine
learning
The simplest AI applications can be divided into two types: classifiers ("if shiny
then diamond") and controllers ("if diamond then pick up"). Controllers do,
however, also classify conditions before inferring actions, and therefore
classification forms a central part of many AI systems. Classifiers are functions
that use pattern matching to determine the closest match. They can be tuned
according to examples, making them very attractive for use in AI. These examples
are known as observations or patterns. In supervised learning, each pattern belongs
to a certain predefined class. A class is a decision that has to be made. All the
observations combined with their class labels are known as a data set. When a new
observation is received, that observation is classified based on previous
experience.[120]

A classifier can be trained in various ways; there are many statistical and machine
learning approaches. The decision tree is the simplest and most widely used
symbolic machine learning algorithm.[121] The k-nearest neighbor algorithm was the most
widely used analogical AI until the mid-1990s.[122] Kernel methods such as the
support vector machine (SVM) displaced k-nearest neighbor in the 1990s.[123] The
naive Bayes classifier is reportedly the "most widely used learner"[124] at Google,
due in part to its scalability.[125] Neural networks are also used for
classification.[126]
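
A minimal sketch of one such classifier, a k-nearest-neighbor rule that labels a
new observation by a majority vote among the closest training examples; the data
points are invented.

    import math
    from collections import Counter

    # Invented training examples: ((feature 1, feature 2), label).
    training = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
                ((4.0, 4.2), "dog"), ((3.8, 4.0), "dog")]

    def knn_predict(point, k=3):
        nearest = sorted(training, key=lambda ex: math.dist(point, ex[0]))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]     # majority vote of the k nearest

    print(knn_predict((1.1, 0.9)))            # 'cat'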

Classifier performance depends greatly on the characteristics of the data to be
classified, such as the dataset size, distribution of samples across classes,
dimensionality, and the level of noise. Model-based classifiers perform well if the
assumed model is an extremely good fit for the actual data. Otherwise, if no
matching model is available, and if accuracy (rather than speed or scalability) is
the sole concern, conventional wisdom is that discriminative classifiers
(especially SVM) tend to be more accurate than model-based classifiers such as
"naive Bayes" on most practical data sets.[127]

Artificial neural networks
Main articles: Artificial neural network and Connectionism

A neural network is an interconnected group of nodes, akin to the vast network of
neurons in the human brain.
Neural networks[126] were inspired by the architecture of neurons in the human
brain. A simple "neuron" N accepts input from other neurons, each of which, when
activated (or "fired"), casts a weighted "vote" for or against whether neuron N
should itself activate. Learning requires an algorithm to adjust these weights
based on the training data; one simple algorithm (dubbed "fire together, wire
together") is to increase the weight between two connected neurons when the
activation of one triggers the successful activation of another. Neurons have a
continuous spectrum of activation; in addition, neurons can process inputs in a
nonlinear way rather than weighing straightforward votes.

Modern neural networks model complex relationships between inputs and outputs and
find patterns in data. They can learn continuous functions and even digital logical
operations. Neural networks can be viewed as a type of mathematical optimization—
they perform gradient descent on a multi-dimensional topology that was created by
training the network. The most common training technique is the backpropagation
algorithm.[128] Other learning techniques for neural networks are Hebbian learning
("fire together, wire together"), GMDH or competitive learning.[129]

The main categories of networks are acyclic or feedforward neural networks (where
the signal passes in only one direction) and recurrent neural networks (which allow
feedback and short-term memories of previous input events). Among the most popular
feedforward networks are perceptrons, multi-layer perceptrons and radial basis
networks.[130]

Deep learning
Representing images on multiple layers of abstraction in deep learning[131]
Deep learning[132] uses several layers of neurons between the network's inputs and
outputs. The multiple layers can progressively extract higher-level features from
the raw input. For example, in image processing, lower layers may identify edges,
while higher layers may identify the concepts relevant to a human such as digits or
letters or faces.[133] Deep learning has drastically improved the performance of
programs in many important subfields of artificial intelligence, including computer
vision, speech recognition, image classification[134] and others.

Deep learning often uses convolutional neural networks for many or all of its
layers. In a convolutional layer, each neuron receives input from only a restricted
area of the previous layer called the neuron's receptive field. This can
substantially reduce the number of weighted connections between neurons,[135] and
creates a hierarchy similar to the organization of the animal visual cortex.[136]
In a recurrent neural network the signal will propagate through a layer more than
once;[137] thus, an RNN is an example of deep learning.[138] RNNs can be trained by
gradient descent,[139] but the long-term gradients that are back-propagated can
"vanish" (that is, they can tend to zero) or "explode" (that is, they can tend to
infinity), a difficulty known as the vanishing gradient problem.[140] The long
short-term memory (LSTM) technique can prevent this in most cases.[141]
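
A minimal sketch of the receptive-field idea in a convolutional layer, assuming
NumPy is available: each output value is a weighted sum over only a small patch of
the input (the 3x3 filter and toy image are invented).

    import numpy as np

    def convolve2d(image, kernel):
        """Slide the kernel over the image; each output is a weighted sum of one patch."""
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                patch = image[i:i + kh, j:j + kw]   # the output neuron's receptive field
                out[i, j] = np.sum(patch * kernel)
        return out

    image = np.zeros((6, 6))
    image[:, 3:] = 1.0                              # invented image: dark left, bright right
    vertical_edge = np.array([[-1.0, 0.0, 1.0]] * 3)  # invented 3x3 edge filter
    print(convolve2d(image, vertical_edge))        # strong responses where the edge lies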

Specialized languages and hardware
Main articles: Programming languages for artificial intelligence and Hardware for
artificial intelligence
Specialized languages for artificial intelligence have been developed, such as
Lisp, Prolog, TensorFlow and many others. Hardware developed for AI includes AI
accelerators and neuromorphic computing.

Applications
Main article: Applications of artificial intelligence
See also: Embodied cognition and Legal informatics

For this project the AI had to learn the typical patterns in the colors and
brushstrokes of Renaissance painter Raphael. The portrait shows the face of the
actress Ornella Muti, "painted" by AI in the style of Raphael.
AI is relevant to any intellectual task.[142] Modern artificial intelligence
techniques are pervasive and are too numerous to list here.[143] Frequently, when a
technique reaches mainstream use, it is no longer considered artificial
intelligence; this phenomenon is described as the AI effect.[144]

In the 2010s, AI applications were at the heart of the most commercially successful
areas of computing, and have become a ubiquitous feature of daily life. AI is used
in search engines (such as Google Search), targeting online advertisements,[145]
recommendation systems (offered by Netflix, YouTube or Amazon), driving internet
traffic,[146][147] targeted advertising (AdSense, Facebook), virtual assistants
(such as Siri or Alexa),[148] autonomous vehicles (including drones and self-
driving cars), automatic language translation (Microsoft Translator, Google
Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace), image
labeling (used by Facebook, Apple's iPhoto and TikTok) and spam filtering.

There are also thousands of successful AI applications used to solve problems for
specific industries or institutions. A few examples are energy storage,[149]
deepfakes,[150] medical diagnosis, military logistics, or supply chain management.

Game playing has been a test of AI's strength since the 1950s. Deep Blue became the
first computer chess-playing system to beat a reigning world chess champion, Garry
Kasparov, on 11 May 1997.[151] In 2011, in a Jeopardy! quiz show exhibition match,
IBM's question answering system, Watson, defeated the two greatest Jeopardy!
champions, Brad Rutter and Ken Jennings, by a significant margin.[152] In March
2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol,
becoming the first computer Go-playing system to beat a professional Go player
without handicaps.[153] Other programs handle imperfect-information games, such as
the superhuman-level poker programs Pluribus[q] and Cepheus.[155] DeepMind in the
2010s developed a "generalized artificial intelligence" that could learn many
diverse Atari games on its own.[156]

By 2020, natural language processing systems such as the enormous GPT-3 (then by
far the largest artificial neural network) were matching human performance on pre-
existing benchmarks, albeit without the system attaining a commonsense
understanding of the contents of the benchmarks.[157] DeepMind's AlphaFold 2 (2020)
demonstrated the ability to approximate, in hours rather than months, the 3D
structure of a protein.[158] Other applications predict the result of judicial
decisions,[159] create art (such as poetry or painting) and prove mathematical
theorems.

AI patent families for functional application categories and sub-categories.
Computer vision represents 49 percent of patent families related to a functional
application in 2016.
In 2019, WIPO reported that AI was the most prolific emerging technology in terms
of number of patent applications and granted patents, while the Internet of things was
estimated to be the largest in terms of market size. It was followed, again in
market size, by big data technologies, robotics, AI, 3D printing and the fifth
generation of mobile services (5G).[160] Since AI emerged in the 1950s, 340,000 AI-
related patent applications were filed by innovators and 1.6 million scientific
papers have been published by researchers, with the majority of all AI-related
patent filings published since 2013. Companies represent 26 out of the top 30 AI
patent applicants, with universities or public research organizations accounting
for the remaining four.[161] The ratio of scientific papers to inventions has
significantly decreased from 8:1 in 2010 to 3:1 in 2016, a change taken to indicate
a shift from theoretical research to the use of AI technologies in
commercial products and services. Machine learning is the dominant AI technique
disclosed in patents and is included in more than one-third of all identified
inventions (134,777 machine learning patents filed out of a total of 167,038 AI patents
filed in 2016), with computer vision being the most popular functional application.
AI-related patents not only disclose AI techniques and applications, they often
also refer to an application field or industry. Twenty application fields were
identified in 2016 and included, in order of magnitude: telecommunications (15
percent), transportation (15 percent), life and medical sciences (12 percent), and
personal devices, computing and human–computer interaction (11 percent). Other
sectors included banking, entertainment, security, industry and manufacturing,
agriculture, and networks (including social networks, smart cities and the Internet
of things). IBM has the largest portfolio of AI patents with 8,290 patent
applications, followed by Microsoft with 5,930 patent applications.[161]

Legal aspects
AI's decision-making abilities raise questions of legal responsibility and the
copyright status of created works. These issues are being refined in various
jurisdictions.[162]

Philosophy
Main article: Philosophy of artificial intelligence
Defining artificial intelligence
Thinking vs. acting: the Turing test
Main articles: Turing test, Dartmouth Workshop, and Synthetic intelligence
Alan Turing wrote in 1950 "I propose to consider the question 'can machines
think?'"[163] He advised changing the question from whether a machine "thinks", to
"whether or not it is possible for machinery to show intelligent behaviour".[164]
The only thing visible is the behavior of the machine, so it does not matter if the
machine is conscious, or has a mind, or whether the intelligence is merely a
"simulation" and not "the real thing". He noted that we also don't know these
things about other people, but that we extend a "polite convention" that they are
actually "thinking". This idea forms the basis of the Turing test.[165][r]

Acting humanly vs. acting intelligently: intelligent agents
Main article: Intelligent agents
AI founder John McCarthy said: "Artificial intelligence is not, by definition,
simulation of human intelligence".[167] Russell and Norvig agree and criticize the
Turing test. They wrote: "Aeronautical engineering texts do not define the goal of
their field as making 'machines that fly so exactly like pigeons that they can fool
other pigeons.'"[168] Other researchers and analysts disagree and have argued that
AI should simulate natural intelligence by studying psychology or neurobiology.[s]

The intelligent agent paradigm[170] defines intelligent behavior in general,
without reference to human beings. An intelligent agent is a system that perceives
its environment and takes actions that maximize its chances of success. Any system
that has goal-directed behavior can be analyzed as an intelligent agent: something
as simple as a thermostat, as complex as a human being, as well as large systems
such as firms, biomes or nations. The intelligent agent paradigm became widely
accepted during the 1990s, and currently serves as the definition of the field.[a]

The paradigm has other advantages for AI. It provides a reliable and scientific way
to test programs; researchers can directly compare or even combine different
approaches to isolated problems, by asking which agent is best at maximizing a
given "goal function". It also gives them a common language to communicate with
other fields – such as mathematical optimization (which is defined in terms of
"goals") or economics (which uses the same definition of a "rational agent").[171]

Evaluating approaches to AI
No established unifying theory or paradigm has guided AI research for most of its
history.[t] The unprecedented success of statistical machine learning in the 2010s
eclipsed all other approaches (so much so that some sources, especially in the
business world, use the term "artificial intelligence" to mean "machine learning
with neural networks"). This approach is mostly sub-symbolic, neat, soft and narrow
(see below). Critics argue that these questions may have to be revisited by future
generations of AI researchers.

Symbolic AI and its limits
Main articles: Symbolic AI, Physical symbol systems hypothesis, Moravec's paradox,
and Dreyfus' critique of artificial intelligence
Symbolic AI (or "GOFAI")[173] simulated the high-level conscious reasoning that
people use when they solve puzzles, express legal reasoning and do mathematics.
Such programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In
the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A
physical symbol system has the necessary and sufficient means of general
intelligent action."[174]

However, the symbolic approach failed dismally on many tasks that humans solve
easily, such as learning, recognizing an object or commonsense reasoning. Moravec's
paradox is the discovery that high-level "intelligent" tasks were easy for AI, but
low level "instinctive" tasks were extremely difficult.[175] Philosopher Hubert
Dreyfus had argued since the 1960s that human expertise depends on unconscious
instinct rather than conscious symbol manipulation, and on having a "feel" for the
situation, rather than explicit symbolic knowledge.[176] Although his arguments had
been ridiculed and ignored when they were first presented, eventually, AI research
came to agree.[u][48]

The issue is not resolved: sub-symbolic reasoning can make many of the same
inscrutable mistakes that human intuition does, such as algorithmic bias. Critics
such as Noam Chomsky argue continuing research into symbolic AI will still be
necessary to attain general intelligence,[178][179] in part because sub-symbolic AI
is a move away from explainable AI: it can be difficult or impossible to understand
why a modern statistical AI program made a particular decision.

Neat vs. scruffy
Main article: Neats and scruffies
"Neats" hope that intelligent behavior is described using simple, elegant
principles (such as logic, optimization, or neural networks). "Scruffies" expect
that it necessarily requires solving a large number of unrelated problems. This
issue was actively discussed in the 70s and 80s,[180] but in the 1990s mathematical
methods and solid scientific standards became the norm, a transition that Russell
and Norvig termed "the victory of the neats".[181]

Soft vs. hard computing
Main article: Soft computing
Finding a provably correct or optimal solution is intractable for many important
problems.[47] Soft computing is a set of techniques, including genetic algorithms,
fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty,
partial truth and approximation. Soft computing was introduced in the late 80s and
most successful AI programs in the 21st century are examples of soft computing with
neural networks.

Narrow vs. general AI
Main article: Artificial general intelligence
AI researchers are divided as to whether to pursue the goals of artificial general
intelligence and superintelligence (general AI) directly or to solve as many
specific problems as possible (narrow AI) in hopes these solutions will lead
indirectly to the field's long-term goals.[182][183] General intelligence is
difficult to define and difficult to measure, and modern AI has had more verifiable
successes by focusing on specific problems with specific solutions. The
experimental sub-field of artificial general intelligence studies this area
exclusively.

Machine consciousness, sentience and mind
Main articles: Philosophy of artificial intelligence and Artificial Consciousness
It is an open question in the philosophy of mind whether a machine can have a mind,
consciousness and mental states, in the same sense that human beings do. This issue
considers the internal experiences of the machine, rather than its external
behavior. Mainstream AI research considers this issue irrelevant because it does
not affect the goals of the field. Stuart Russell and Peter Norvig observe that
most AI researchers "don't care about the [philosophy of AI]—as long as the program
works, they don't care whether you call it a simulation of intelligence or real
intelligence."[184] However, the question has become central to the philosophy of
mind. It is also typically the central question at issue in artificial intelligence
in fiction.

Consciousness
Main articles: Hard problem of consciousness and Theory of mind
David Chalmers identified two problems in understanding the mind, which he named
the "hard" and "easy" problems of consciousness.[185] The easy problem is
understanding how the brain processes signals, makes plans and controls behavior.
The hard problem is explaining how this feels or why it should feel like anything
at all. Human information processing is easy to explain, however, human subjective
experience is difficult to explain. For example, it is easy to imagine a color-
blind person who has learned to identify which objects in their field of view are
red, but it is not clear what would be required for the person to know what red
looks like.[186]

Computationalism and functionalism
Main articles: Computationalism, Functionalism (philosophy of mind), and Chinese
room
Computationalism is the position in the philosophy of mind that the human mind is
an information processing system and that thinking is a form of computing.
Computationalism argues that the relationship between mind and body is similar or
identical to the relationship between software and hardware and thus may be a
solution to the mind-body problem. This philosophical position was inspired by the
work of AI researchers and cognitive scientists in the 1960s and was originally
proposed by philosophers Jerry Fodor and Hilary Putnam.[187]

Philosopher John Searle characterized this position as "strong AI": "The
appropriately programmed computer with the right inputs and outputs would thereby
have a mind in exactly the same sense human beings have minds."[v] Searle counters
this assertion with his Chinese room argument, which attempts to show that, even if
a machine perfectly simulates human behavior, there is still no reason to suppose
it also has a mind.[190]

Robot rights
Main article: Robot rights
If a machine has a mind and subjective experience, then it may also have sentience
(the ability to feel), and if so, then it could also suffer, and thus it would be
entitled to certain rights.[191] Any hypothetical robot rights would lie on a
spectrum with animal rights and human rights.[192] This issue has been considered
in fiction for centuries,[193] and is now being considered by, for example,
California's Institute for the Future; however, critics argue that the discussion
is premature.[194]

Future
Superintelligence
Main articles: Superintelligence, Technological singularity, and Transhumanism
A superintelligence, hyperintelligence, or superhuman intelligence, is a
hypothetical agent that would possess intelligence far surpassing that of the
brightest and most gifted human mind. Superintelligence may also refer to the form
or degree of intelligence possessed by such an agent.[183]

If research into artificial general intelligence produced sufficiently intelligent
software, it might be able to reprogram and improve itself. The improved software
would be even better at improving itself, leading to recursive self-improvement.
[195] Its intelligence would increase exponentially in an intelligence explosion
and could dramatically surpass humans. Science fiction writer Vernor Vinge named
this scenario the "singularity".[196] Because it is difficult or impossible to know
the limits of intelligence or the capabilities of superintelligent machines, the
technological singularity is an occurrence beyond which events are unpredictable or
even unfathomable.[197]

Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil
have predicted that humans and machines will merge in the future into cyborgs that
are more capable and powerful than either. This idea, called transhumanism, has
roots in Aldous Huxley and Robert Ettinger.[198]

Edward Fredkin argues that "artificial intelligence is the next stage in
evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines"
as far back as 1863, and expanded upon by George Dyson in his book of the same name
in 1998.[199]

Risks
Technological unemployment
Main articles: Workplace impact of artificial intelligence and Technological
unemployment
In the past technology has tended to increase rather than reduce total employment,
but economists acknowledge that "we're in uncharted territory" with AI.[200] A
survey of economists showed disagreement about whether the increasing use of robots
and AI will cause a substantial increase in long-term unemployment, but they
generally agree that it could be a net benefit if productivity gains are
redistributed.[201] Subjective estimates of the risk vary widely; for example,
Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk"
of potential automation, while an OECD report classifies only 9% of U.S. jobs as
"high risk".[w][203]
Unlike previous waves of automation, many middle-class jobs may be eliminated by
artificial intelligence; The Economist states that "the worry that AI could do to
white-collar jobs what steam power did to blue-collar ones during the Industrial
Revolution" is "worth taking seriously".[204] Jobs at extreme risk range from
paralegals to fast food cooks, while job demand is likely to increase for care-
related professions ranging from personal healthcare to the clergy.[205]

Bad actors and weaponized AI
Main articles: Lethal autonomous weapon and Artificial intelligence arms race
AI provides a number of tools that are particularly useful for authoritarian
governments: smart spyware, face recognition and voice recognition allow widespread
surveillance; such surveillance allows machine learning to classify potential
enemies of the state and can prevent them from hiding; recommendation systems can
precisely target propaganda and misinformation for maximum effect; deepfakes aid in
producing misinformation; advanced AI can make centralized decision making more
competitive with liberal and decentralized systems such as markets.[206]

Terrorists, criminals and rogue states may use other forms of weaponized AI such as
advanced digital warfare and lethal autonomous weapons. By 2015, over fifty
countries were reported to be researching battlefield robots.[207]

Machine-learning AI is also able to design tens of thousands of toxic molecules in
a matter of hours.[208]

Algorithmic bias
Main article: Algorithmic bias
AI programs can become biased after learning from real-world data. The bias is not
typically introduced by the system designers but is learned by the program, and
thus the programmers are often unaware that the bias exists.[209] Bias can be
inadvertently introduced by the way training data is selected.[210] It can also
emerge from correlations: AI is used to classify individuals into groups and then
make predictions assuming that the individual will resemble other members of the
group. In some cases, this assumption may be unfair.[211] An example of this is
COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of
a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned
recidivism risk level of black defendants is far more likely to be overestimated
than that of white defendants, despite the fact that the program was not told the
races of the defendants.[212] Other examples where algorithmic bias can lead to
unfair outcomes are when AI is used for credit rating or hiring.

At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT
2022), held in Seoul, South Korea, the Association for Computing Machinery presented and
published findings recommending that until AI and robotics systems are demonstrated
to be free of bias mistakes, they are unsafe and the use of self-learning neural
networks trained on vast, unregulated sources of flawed internet data should be
curtailed.[213]

Existential risk
Main articles: Existential risk from artificial general intelligence and
Superintelligence
Superintelligent AI may be able to improve itself to the point that humans could
not control it. This could, as physicist Stephen Hawking puts it, "spell the end of
the human race".[214] Philosopher Nick Bostrom argues that sufficiently intelligent
AI, if it chooses actions based on achieving some goal, will exhibit convergent
behavior such as acquiring resources or protecting itself from being shut down. If
this AI's goals do not fully reflect humanity's, it might need to harm humanity to
acquire more resources or prevent itself from being shut down, ultimately to better
achieve its goal. He concludes that AI poses a risk to mankind, however humble or
"friendly" its stated goals might be.[215] Political scientist Charles T. Rubin
argues that "any sufficiently advanced benevolence may be indistinguishable from
malevolence." Humans should not assume machines or robots would treat us favorably
because there is no a priori reason to believe that they would share our system of
morality.[216]

The opinion of experts and industry insiders is mixed, with sizable fractions both
concerned and unconcerned by risk from eventual superhumanly-capable AI.[217]
Stephen Hawking, Microsoft founder Bill Gates, history professor Yuval Noah Harari,
and SpaceX founder Elon Musk have all expressed serious misgivings about the future
of AI.[218] Prominent tech figures and companies, including Peter Thiel, Amazon Web
Services and Musk, have committed more than $1 billion to nonprofit companies that
champion responsible AI development, such as OpenAI and the Future of Life
Institute.[219]
Mark Zuckerberg (CEO, Facebook) has said that artificial intelligence is helpful in
its current form and will continue to assist humans.[220] Other experts argue
that the risks are far enough in the future to not be worth researching, or that
humans will be valuable from the perspective of a superintelligent machine.[221]
Rodney Brooks, in particular, has said that "malevolent" AI is still centuries
away.[x]

Ethical machines
Main articles: Machine ethics, Friendly AI, Artificial moral agents, and Human
Compatible
Friendly AI comprises machines that have been designed from the beginning to minimize
risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the
term, argues that developing friendly AI should be a higher research priority: it
may require a large investment and it must be completed before AI becomes an
existential risk.[223]

Machines with intelligence have the potential to use their intelligence to make
ethical decisions. The field of machine ethics provides machines with ethical
principles and procedures for resolving ethical dilemmas.[224] Machine ethics is
also called machine morality, computational ethics or computational morality,[224]
and was founded at an AAAI symposium in 2005.[225]

Other approaches include Wendell Wallach's "artificial moral agents"[226] and
Stuart J. Russell's three principles for developing provably beneficial machines.
[227]

Regulation
Main articles: Regulation of artificial intelligence, Regulation of algorithms, and
AI control problem
The regulation of artificial intelligence is the development of public sector
policies and laws for promoting and regulating artificial intelligence (AI); it is
therefore related to the broader regulation of algorithms.[228] The regulatory and
policy landscape for AI is an emerging issue in jurisdictions globally.[229]
Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.
[44] Most EU member states had released national AI strategies, as had Canada,
China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab
Emirates, the United States, and Vietnam. Others were in the process of elaborating their own AI
strategy, including Bangladesh, Malaysia and Tunisia.[44] The Global Partnership on
Artificial Intelligence was launched in June 2020, stating a need for AI to be
developed in accordance with human rights and democratic values, to ensure public
confidence and trust in the technology.[44] Henry Kissinger, Eric Schmidt, and
Daniel Huttenlocher published a joint statement in November 2021 calling for a
government commission to regulate AI.[230]

In fiction
Main article: Artificial intelligence in fiction
The word "robot" itself was coined by Karel Čapek in his 1921 play R.U.R., the
title standing for "Rossum's Universal Robots".
Thought-capable artificial beings have appeared as storytelling devices since
antiquity,[15] and have been a persistent theme in science fiction.[17]

A common trope in these works began with Mary Shelley's Frankenstein, where a human
creation becomes a threat to its masters. This includes such works as Arthur C.
Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000,
the murderous computer in charge of the Discovery One spaceship, as well as The
Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as
Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are
less prominent in popular culture.[231]

Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most
notably the "Multivac" series about a super-intelligent computer of the same name.
Asimov's laws are often brought up during lay discussions of machine ethics;[232]
while almost all artificial intelligence researchers are familiar with Asimov's
laws through popular culture, they generally consider the laws useless for many
reasons, one of which is their ambiguity.[233]

Transhumanism (the merging of humans and machines) is explored in the manga Ghost
in the Shell and the science-fiction series Dune.

Several works use AI to force us to confront the fundamental question of what makes
us human, showing us artificial beings that have the ability to feel, and thus to
suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial
Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric
Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human
subjectivity is altered by technology created with artificial intelligence.[234]

Scientific diplomacy
Warfare
As technology and research evolve and the world enters the third revolution of
warfare, following gunpowder and nuclear weapons, an artificial intelligence arms
race has emerged among the United States, China, and Russia, three of the countries
with the world's five highest military budgets.[235] China's leader Xi Jinping has
declared his country's intention to be the world leader in AI research by 2030,[236]
and President Putin of Russia has stated that "Whoever becomes the leader in this
sphere will become the ruler of the world".[237] Were Russia to become the leader in
AI research, President Putin has stated Russia's intent to share some of its
research with the world so as not to monopolize the field,[237] similar to its
current sharing of nuclear technologies, thereby maintaining science-diplomacy
relations. The United States, China, and Russia have all taken stances on military
artificial intelligence since as early as 2014, establishing military programs to
develop cyber weapons, control lethal autonomous weapons, and deploy drones for
surveillance.

Russo-Ukrainian War
President Putin has announced that artificial intelligence is the future for all
mankind[237] and recognizes the power and opportunities that the development and
deployment of lethal autonomous weapons and AI technology can hold in warfare and
homeland security, as well as their threats. President Putin's prediction that
future wars will be fought using AI has begun to come to fruition after Russia
invaded Ukraine on 24 February 2022. The Ukrainian military is making use of
Turkish Bayraktar TB2 drones,[238] which still require human operation to deploy
laser-guided bombs but can take off, land, and cruise autonomously. Ukraine has
also been using Switchblade drones supplied by the US and receiving battlefield
intelligence and national-security information about Russia gathered by the United
States' own surveillance operations.[239] Similarly, Russia can use AI to help
analyze battlefield data from surveillance footage taken by drones. Reports and
images show that Russia's military has deployed KUB-BLA suicide drones[240] in
Ukraine, with speculation of intentions to assassinate Ukrainian President
Volodymyr Zelenskyy.

Warfare regulations
As research in the AI realm progresses, there has been pushback on the use of AI
from the Campaign to Stop Killer Robots, and in 2017 world technology leaders sent
a petition[241] to the United Nations calling for new regulations on the
development and use of AI technologies, including a ban on lethal autonomous
weapons, due to ethical concerns for innocent civilian populations.

Cybersecurity
With ever-evolving cyber-attacks and generations of devices, AI can be used for
threat detection and more effective response through risk prioritization. The tool
also presents challenges, such as privacy, informed consent, and responsible
use.[242] According to CISA, cyberspace is difficult to secure for the following
reasons: the ability of malicious actors to operate from anywhere in the world, the
linkages between cyberspace and physical systems, and the difficulty of reducing
vulnerabilities and consequences in complex cyber networks.[243] As technology
advances worldwide, the risk of wide-scale consequential events rises.
Paradoxically, the ability to protect information and create a line of
communication between the scientific and diplomatic communities also thrives. The
role of cybersecurity in diplomacy has become increasingly relevant, giving rise to
the term cyber diplomacy, which is not uniformly defined and is not synonymous with
cyber defence.[244] Many nations have developed unique approaches to scientific
diplomacy in cyberspace.
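
As a hedged illustration of the threat-detection-with-risk-prioritization idea above (the features and data are synthetic assumptions, not any real deployment), an unsupervised anomaly detector can score network events so responders triage the riskiest first:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_events = rng.normal(0, 1, size=(1000, 3))   # e.g. bytes sent, duration, port entropy
attack_events = rng.normal(5, 1, size=(10, 3))     # a handful of outlying events
events = np.vstack([normal_events, attack_events])

detector = IsolationForest(random_state=0).fit(events)
risk = -detector.score_samples(events)             # higher value = more anomalous

top = np.argsort(risk)[::-1][:5]                   # triage the highest-risk events first
print("events to triage first:", top)              # indices >= 1000 are the injected attacks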

Czech Republic's approach
The Czech approach dates back to 2011, when the Czech National Security Authority
(NSA) was appointed as the national authority for the cyber agenda. The role of
cyber diplomacy strengthened in 2017, when the Czech Ministry of Foreign Affairs
(MFA) detected a serious cyber campaign directed against its own computer
networks.[245] In 2016, three cyber diplomats had been deployed to Washington,
D.C., Brussels and Tel Aviv, with the goal of establishing active international
cooperation focused on engagement with the EU and NATO. The main agenda of these
scientific diplomacy efforts is to bolster research on artificial intelligence and
how it can be utilized in cybersecurity research, development, and overall consumer
trust.[246] CzechInvest is a key stakeholder in scientific diplomacy and
cybersecurity; in September 2018, for example, it organized a mission to Canada
with a special focus on artificial intelligence. The main goal of this particular
mission was a promotional effort on behalf of Prague, attempting to establish it as
a future knowledge hub for the industry for interested Canadian firms.[247]

Germany's approach
In Germany, cybersecurity is recognized as a governmental task divided among three
ministries of responsibility: the Federal Ministry of the Interior, the Federal
Ministry of Defence, and the Federal Foreign Office.[248] These distinctions
prompted the creation of various institutions, such as the German National Office
for Information Security, the National Cyberdefence Centre, the German National
Cyber Security Council, and the Cyber and Information Domain Service.[246] In 2018,
the German government established a new strategy for artificial intelligence, with
the creation of a German-French virtual research and innovation network,[249]
holding opportunities for research expansion into cybersecurity.

European Union's approach
The European Commission's adoption in 2013 of the Cybersecurity Strategy of the
European Union – An Open, Safe and Secure Cyberspace[246] pushed forward
cybersecurity efforts integrated with scientific diplomacy and artificial
intelligence. The EU funds various programs and institutions in the effort to bring
science to diplomacy and diplomacy to science. Examples include the cybersecurity
programme Competence Research Innovation (CONCORDIA), which brings together 14
member states,[250] and Cybersecurity for Europe (CSE), which brings together 43
partners from 20 member states.[251] In addition, the European Network of
Cybersecurity Centres and Competence Hub for Innovation and Operations (ECHO)
gathers 30 partners from 15 member states,[252] and SPARTA gathers 44 partners from
14 member states.[253] These efforts reflect the EU's overall goals: to innovate
cybersecurity for defense and protection, to establish a highly integrated
cyberspace among many nations, and to further contribute to the security of
artificial intelligence.[246]

Russo-Ukrainian War
With the 2022 invasion of Ukraine, there has been a rise in malicious cyber
activity against the United States,[254] Ukraine, and Russia. A prominent and rare
documented use of artificial intelligence in the conflict is on behalf of Ukraine,
which is using facial recognition software to identify Russian assailants and
Ukrainians killed in the ongoing war.[255] Though these governmental efforts are
not primarily focused on scientific and cyber diplomacy, other institutions are
examining the use of artificial intelligence in cybersecurity with that focus. For
example, Georgetown University's Center for Security and Emerging Technology
(CSET) runs the Cyber-AI Project, one goal of which is to draw policymakers'
attention to the growing body of academic research exposing the exploitable
vulnerabilities of AI and machine-learning (ML) algorithms.[256] According to
Andrew Lohn, a senior fellow at CSET, this vulnerability is a plausible explanation
for why Russia is not making extensive use of AI in the conflict. In addition to
its use on the battlefield, AI is being used by the Pentagon to analyze data from
the war in order to strengthen cybersecurity and warfare intelligence for the
United States.[239][257]

Election security
As artificial intelligence grows and the amount of news delivered through
cyberspace expands, it becomes increasingly overwhelming for a voter to know what
to believe. Many intelligent programs, referred to as bots, are written to
impersonate people on social media with the goal of spreading misinformation.[258]
The 2016 US election was a victim of such actions. During the Hillary Clinton and
Donald Trump campaigns, artificially intelligent bots from Russia spread
misinformation about the candidates in order to help the Trump campaign.[259]
Analysts concluded that approximately 19% of tweets centered on the 2016 election
were detected to come from bots.[259] YouTube has in recent years been used to
spread political information as well. Although there is no proof that the platform
attempts to manipulate its viewers' opinions, YouTube's AI algorithm recommends
videos of a similar variety:[260] if a person begins to watch right-wing political
podcasts, YouTube's algorithm will recommend more right-wing videos.[261] The rise
of deepfakes, software used to replicate a person's face and words, has also shown
its potential threat. In 2018 a deepfake video of Barack Obama was released in
which he appeared to say words he claims never to have said.[262] While in a
national election a deepfake would quickly be debunked, the software has the
capability to heavily sway a smaller local election. This tool holds a lot of
potential for spreading misinformation and is monitored with great attention.[263]
Although it may be seen as a tool for harm, AI can help enhance election campaigns
as well. AI bots can be programmed to target articles with known misinformation;
the bots can then flag what is misinformed to help shine light on the truth. AI can
also be used to inform a person where each party stands on a certain topic such as
healthcare or climate change.[264] The political leaders of a nation have heavy
sway on international affairs; thus, a political leader with a lack of interest in
international collaborative scientific advancement can have a negative impact on
the scientific diplomacy of that nation.[265]
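
The similarity-driven recommendation loop described above can be sketched in a few lines of Python; the video titles and embedding vectors are invented for illustration and are not YouTube's actual algorithm:

import numpy as np

videos = {
    "political podcast ep. 1": np.array([0.9, 0.1]),
    "political podcast ep. 2": np.array([0.8, 0.2]),
    "cooking show":            np.array([0.1, 0.9]),
    "gardening tips":          np.array([0.2, 0.8]),
}

def recommend(watched, k=2):
    """Return the k videos most similar (cosine) to the one just watched."""
    w = videos[watched]
    scores = {
        title: float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))
        for title, v in videos.items() if title != watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Watching one podcast episode surfaces more of the same topic cluster.
print(recommend("political podcast ep. 1"))
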
Future of work
Facial recognition
The use of artificial intelligence (AI) has subtly grown to become part of everyday
life. It is used daily in facial recognition software, which serves as a first
measure of security for many companies in the form of biometric authentication.
This means of authentication allows even the most official organizations, such as
the United States Internal Revenue Service, to verify a person's identity[266] via
a database generated from machine learning. As of 2022, the United States IRS
requires those who do not undergo a live interview with an agent to complete a
biometric verification of their identity via ID.me's facial recognition tool.[266]
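
A minimal sketch of this style of biometric verification, assuming a hypothetical face encoder that maps images to 128-dimensional embeddings (the random vectors stand in for real encoder output, and the threshold is an invented operating point):

import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(7)
enrolled = rng.normal(size=128)                     # template stored at enrollment
live = enrolled + rng.normal(scale=0.1, size=128)   # new capture of the same person

THRESHOLD = 0.8                                     # invented operating point
print("identity verified:", cosine(enrolled, live) > THRESHOLD)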

AI and school
In Japan and South Korea, artificial intelligence software is used in the
instruction of the English language by the company Riiid.[267] Riiid is a Korean
education company working alongside Japan to give students the means to learn and
use their English communication skills by engaging with artificial intelligence in
a live chat.[267] Riiid is not the only such company. The American company Duolingo
is well known for its automated teaching of 41 languages. Babbel, a German
language-learning program, also uses artificial intelligence in its teaching
automation, allowing European students to learn vital communication skills needed
in social, economic, and diplomatic settings. Artificial intelligence will also
automate the routine tasks that teachers need to do, such as grading, taking
attendance, and handling routine student inquiries.[268] This enables the teacher
to carry on with the complexities of teaching that an automated machine cannot
handle, including creating exams, explaining complex material in a way that
benefits students individually, and handling unique questions from students.

AI and medicine
Unlike the human brain, which possesses generalized intelligence, the specialized
intelligence of AI can serve as a means of support to physicians internationally.
The medical field has a diverse and profound amount of data which AI can employ to
generate predictive diagnoses. Researchers at an Oxford hospital have developed
artificial intelligence that can read heart scans for heart disease and
cancer.[269] This artificial intelligence can pick up diminutive details in the
scans that doctors may miss. As such, artificial intelligence in medicine will
better the industry, giving doctors the means to precisely diagnose their patients
using the tools available. The artificial intelligence algorithms will also be used
to further improve diagnosis over time, via an application of machine learning
called precision medicine.[270] Furthermore, the narrow application of artificial
intelligence can use deep learning to improve medical image analysis. In radiology
imaging, AI uses deep learning algorithms to identify potentially cancerous
lesions, an important process assisting in early diagnosis.[271]
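
As a hedged sketch of the deep-learning image analysis described above (a toy PyTorch model applied to a random tensor, not the Oxford system or any clinical tool), a small convolutional network can score an image patch for the presence of a lesion:

import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Toy CNN that scores a 64x64 grayscale patch for lesion presence."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))   # raw logit; sigmoid gives a probability

model = LesionClassifier()
scan = torch.randn(1, 1, 64, 64)              # stand-in for a preprocessed scan patch
prob = torch.sigmoid(model(scan))
print(f"lesion probability: {prob.item():.2f}")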

AI in business
Data analysis is a fundamental property of artificial intelligence that enables its
use in every facet of life, from search results to the way people buy products.
According to NewVantage Partners,[272] over 90% of top businesses have ongoing
investments in artificial intelligence. According to IBM, one of the world's
leaders in technology, 45% of respondents from companies with over 1,000 employees
have adopted AI.[273] Recent data show that the business market[274] for artificial
intelligence in 2020 was valued at $51.08 billion and is projected to exceed $640.3
billion by 2028.[274] To prevent harm, AI-deploying organizations need to play a
central role in creating and deploying trustworthy AI in line with the principles
of trustworthy AI,[275] and take accountability to mitigate the risks.[276]

Business and diplomacy
With the exponential surge of artificial intelligence and communication technology,
the distribution of one's ideals and values has become evident in daily life.
Digital information is spread via communication apps such as WhatsApp,
Facebook/Meta, Snapchat, Instagram and Twitter. However, these sites relay specific
information corresponding to data analysis: if a right-wing individual were to do a
Google search, Google's algorithms would target that individual and relay data
pertinent to that target audience. US President Bill Clinton noted in 2000: "In the
new century, liberty will spread by cell phone and cable modem. [...] We know how
much the Internet has changed America, and we are already an open society."[277]
However, when the private sector uses artificial intelligence to gather data, a
shift in power from the state to the private sector may be seen. This shift in
power, specifically toward large technological corporations, could profoundly
change how diplomacy functions in society. The rise of digital technology and the
usage of artificial intelligence have enabled the private sector to gather immense
data on the public, which is then further categorized by race, location, age,
gender, etc.[278] The New York Times calculates that "the ten largest tech firms,
which have become gatekeepers in commerce, finance, entertainment and
communications, now have a combined market capitalization of more than $10
trillion. In gross domestic product terms, that would rank them as the world's
third-largest economy."[279] Beyond general lobbying of members of Congress,
companies such as Facebook/Meta and Google use collected data to reach their
intended audiences with targeted information.[279]

AI and foreign policy
Multiple nations around the globe employ artificial intelligence to assist with
their foreign policy decisions. The Chinese Department of External Security Affairs
– under the Ministry of Foreign Affairs – uses AI to review almost all its foreign
investment projects for risk mitigation.[280] The government of China plans to
utilize artificial intelligence in its $900 billion global infrastructure
development plan, called the "Belt and Road Initiative" for political, economic,
and environmental risk alleviation.[281]

Over 200 applications of artificial intelligence are being used by over 46 United
Nations agencies, in sectors ranging from health care dealing with issues such as
combating COVID-19 to smart agriculture, to assist the UN in political and
diplomatic relations.[282] One example is the use of AI by the UN Global Pulse
program to model the effect of the spread of COVID-19 on internally displaced
people (IDP) and refugee settlements to assist them in creating an appropriate
global health policy.[283][284]

Novel AI tools such as remote sensing can also be employed by diplomats for
collecting and analyzing data and near-real-time tracking of objects such as troop
or refugee movements along borders in violent conflict zones.[283][285]

Artificial intelligence can be used in vital cross-national diplomatic talks to
prevent translation errors caused by human translators.[286] A major example is the
2021 Anchorage meetings held between the US and China, aimed at stabilizing foreign
relations, which instead had the opposite effect, increasing tension and
aggressiveness between the two nations due to translation errors caused by human
translators.[287] In the meeting, when Jacob Jeremiah Sullivan, United States
National Security Advisor to President Joe Biden, stated, "We do not seek conflict,
but we welcome stiff competition and we will always stand up for our principles,
for our people, and for our friends", it was mistranslated into Chinese as "we will
face competition between us, and will present our stance in a very clear manner",
adding an aggressive tone to the speech.[287] AI's capacity for fast and efficient
natural language processing, real-time translation, and transliteration makes it an
important tool for foreign-policy communication between nations and helps prevent
unintended mistranslation.[288]
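
A brief sketch of machine translation in this setting, using the open-source Hugging Face transformers library; the Helsinki-NLP/opus-mt-en-zh checkpoint is an assumption chosen for the example, not a tool identified by the article's sources:

from transformers import pipeline

# Model checkpoint is an assumption chosen for this example.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")

statement = ("We do not seek conflict, but we welcome stiff competition and "
             "we will always stand up for our principles, for our people, "
             "and for our friends")
result = translator(statement)
print(result[0]["translation_text"])   # machine rendering of the statement in Chinese
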
See also
Computer programming portal
A.I. Rising
AI control problem
Artificial intelligence arms race
Artificial general intelligence
Behavior selection algorithm
Business process automation
Case-based reasoning
Citizen science
Emergent algorithm
Female gendering of AI technologies
Glossary of artificial intelligence
Robotic process automation
Synthetic intelligence
Universal basic income
Weak AI
Explanatory notes
Definition of AI as the study of intelligent agents, drawn from leading AI
textbooks.
Poole, Mackworth & Goebel (1998, p. 1), which provides the version that is used in
this article. These authors use the term "computational intelligence" as a synonym
for artificial intelligence.
Russell & Norvig (2003, p. 55) (who prefer the term "rational agent") and write
"The whole-agent view is now widely accepted in the field".
Nilsson (1998)
Legg & Hutter (2007)
Stuart Russell and Peter Norvig characterize this definition as "thinking humanly"
and reject it in favor of "acting rationally".[1]
This list of intelligent traits is based on the topics covered by the major AI
textbooks, including: Russell & Norvig (2003), Luger & Stubblefield (2004), Poole,
Mackworth & Goebel (1998) and Nilsson (1998)
This statement comes from the proposal for the Dartmouth workshop of 1956, which
reads: "Every aspect of learning or any other feature of intelligence can be so
precisely described that a machine can be made to simulate it."[13]
Russell and Norvig note in the textbook Artificial Intelligence: A Modern Approach
(4th ed.), section 1.5: "In the longer term, we face the difficult problem of
controlling superintelligent AI systems that may evolve in unpredictable ways."
while referring to computer scientists, philosophers, and technologists.
Daniel Crevier wrote "the conference is generally recognized as the official
birthdate of the new science."[23] Russell and Norvig call the conference "the
birth of artificial intelligence."[24]
Russell and Norvig wrote "for the next 20 years the field would be dominated by
these people and their students."[24]
Russell and Norvig wrote "it was astonishing whenever a computer did anything kind
of smartish".[26]
The programs described are Arthur Samuel's checkers program for the IBM 701,
Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's
SHRDLU.
Embodied approaches to AI[36] were championed by Hans Moravec[37] and Rodney
Brooks[38] and went by many names: Nouvelle AI,[38] Developmental robotics,[39]
situated AI, behavior-based AI as well as others. A similar movement in cognitive
science was the embodied mind thesis.
Clark wrote: "After a half-decade of quiet breakthroughs in artificial
intelligence, 2015 has been a landmark year. Computers are smarter and learning
faster than ever."[10]
Alan Turing discussed the centrality of learning as early as 1950, in his classic
paper "Computing Machinery and Intelligence".[66] In 1956, at the original
Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised
probabilistic machine learning: "An Inductive Inference Machine".[67]
This is a form of Tom Mitchell's widely quoted definition of machine learning: "A
computer program is said to learn from experience E with respect to some task T
and some performance measure P if its performance on T, as measured by P, improves
with experience E."[68]
Alan Turing suggested in "Computing Machinery and Intelligence" that a "thinking
machine" would need to be educated like a child.[66] Developmental robotics is a
modern version of the idea.[39]
Compared with symbolic logic, formal Bayesian inference is computationally
expensive. For inference to be tractable, most observations must be conditionally
independent of one another. AdSense uses a Bayesian network with over 300 million
edges to learn which ads to serve.[110]
Expectation-maximization, one of the most popular algorithms in machine learning,
allows clustering in the presence of unknown latent variables.[112]
The Smithsonian reports: "Pluribus has bested poker pros in a series of six-player
no-limit Texas Hold'em games, reaching a milestone in artificial intelligence
research. It is the first bot to beat humans in a complex multiplayer
competition."[154]
The distinction between "acting" and "thinking" is due to Russell and Norvig.[166]
The distinction between "acting humanly" and "acting rationally" is due to Russell
and Norvig.[166] Pamela McCorduck wrote in 2004 that there are "two major branches
of artificial intelligence: one aimed at producing intelligent behavior regardless
of how it was accomplished, and the other aimed at modeling intelligent processes
found in nature, particularly human ones."[169]
Nils Nilsson wrote in 1983: "Simply put, there is wide disagreement in the field
about what AI is all about."[172]
Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some
of Dreyfus's comments. Had he formulated them less aggressively, constructive
actions they suggested might have been taken much earlier."[177]
Searle presented this definition of "Strong AI" in 1999.[188] Searle's original
formulation was "The appropriately programmed computer really is a mind, in the
sense that computers given the right programs can be literally said to understand
and have other cognitive states."[189] Strong AI is defined similarly by Russell
and Norvig: "The assertion that machines could possibly act intelligently (or,
perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis
by philosophers, and the assertion that machines that do so are actually thinking
(as opposed to simulating thinking) is called the 'strong AI' hypothesis."[184]
See table 4; 9% is both the OECD average and the US average.[202]
Rodney Brooks writes, "I think it is a mistake to be worrying about us developing
malevolent AI anytime in the next few hundred years. I think the worry stems from a
fundamental error in not distinguishing the difference between the very real recent
advances in a particular aspect of AI and the enormity and complexity of building
sentient volitional intelligence."[222]
Citations
Russell & Norvig (2021), p. 2.
Google (2016).
McCorduck (2004), p. 204.
Ashok83 (2019).
Schank (1991), p. 38.
Crevier (1993), p. 109.
Funding initiatives in the early 80s: Fifth Generation Project (Japan), Alvey
(UK), Microelectronics and Computer Technology Corporation (US), Strategic
Computing Initiative (US):
McCorduck (2004, pp. 426–441)
Crevier (1993, pp. 161–162, 197–203, 211, 240)
Russell & Norvig (2003, p. 24)
NRC (1999, pp. 210–211)
Newquist (1994, pp. 235–248)
First AI Winter, Lighthill report, Mansfield Amendment
Crevier (1993, pp. 115–117)
Russell & Norvig (2003, p. 22)
NRC (1999, pp. 212–213)
Howe (1994)
Newquist (1994, pp. 189–201)
Second AI Winter:
McCorduck (2004, pp. 430–435)
Crevier (1993, pp. 209–210)
NRC (1999, pp. 214–216)
Newquist (1994, pp. 301–318)
Clark (2015b).
AI widely used in late 1990s:
Russell & Norvig (2003, p. 28)
Kurzweil (2005, p. 265)
NRC (1999, pp. 216–222)
Newquist (1994, pp. 189–201)
Pennachin & Goertzel (2007); Roberts (2016)
McCarthy et al. (1955).
Newquist (1994), pp. 45–53.
AI in myth:
McCorduck (2004, pp. 4–5)
Russell & Norvig (2003, p. 939)
McCorduck (2004), pp. 17–25.
McCorduck (2004), pp. 340–400.
Berlinski (2000).
AI's immediate precursors:
McCorduck (2004, pp. 51–107)
Crevier (1993, pp. 27–32)
Russell & Norvig (2003, pp. 15, 940)
Moravec (1988, p. 3)
Russell & Norvig (2009), p. 16.
Manyika 2022, p. 9.
Manyika 2022, p. 10.
Crevier (1993), pp. 47–49.
Russell & Norvig (2003), p. 17.
Dartmouth workshop:
Russell & Norvig (2003, p. 17)
McCorduck (2004, pp. 111–136)
NRC (1999, pp. 200–201)
The proposal:
McCarthy et al. (1955)
Russell & Norvig (2003), p. 18.
Successful Symbolic AI programs:
McCorduck (2004, pp. 243–252)
Crevier (1993, pp. 52–107)
Moravec (1988, p. 9)
Russell & Norvig (2003, pp. 18–21)
AI heavily funded in 1960s:
McCorduck (2004, p. 131)
Crevier (1993, pp. 51, 64–65)
NRC (1999, pp. 204–205)
Howe (1994).
Newquist (1994), pp. 86–86.
Simon (1965, p. 96) quoted in Crevier (1993, p. 109)
Minsky (1967, p. 2) quoted in Crevier (1993, p. 109)
Lighthill (1973).
Expert systems:
Russell & Norvig (2003, pp. 22–24)
Luger & Stubblefield (2004, pp. 227–331)
Nilsson (1998, chpt. 17.4)
McCorduck (2004, pp. 327–335, 434–435)
Crevier (1993, pp. 145–62, 197–203)
Newquist (1994, pp. 155–183)
Nilsson (1998), p. 7.
McCorduck (2004), pp. 454–462.
Moravec (1988).
Brooks (1990).
Developmental robotics:
Weng et al. (2001)
Lungarella et al. (2003)
Asada et al. (2009)
Oudeyer (2010)
Revival of connectionism:
Crevier (1993, pp. 214–215)
Russell & Norvig (2003, p. 25)
Formal and narrow methods adopted in the 1990s:
Russell & Norvig (2003, pp. 25–26)
McCorduck (2004, pp. 486–487)
McKinsey (2018).
MIT Sloan Management Review (2018); Lorica (2017)
UNESCO (2021).
Problem solving, puzzle solving, game playing and deduction:
Russell & Norvig (2003, chpt. 3–9)
Poole, Mackworth & Goebel (1998, chpt. 2,3,7,9)
Luger & Stubblefield (2004, chpt. 3,4,6,8)
Nilsson (1998, chpt. 7–12)
Uncertain reasoning:
Russell & Norvig (2003, pp. 452–644)
Poole, Mackworth & Goebel (1998, pp. 345–395)
Luger & Stubblefield (2004, pp. 333–381)
Nilsson (1998, chpt. 19)
Intractability and efficiency and the combinatorial explosion:
Russell & Norvig (2003, pp. 9, 21–22)
Psychological evidence of the prevalence of sub-symbolic reasoning and knowledge:
Kahneman (2011)
Wason & Shapiro (1966)
Kahneman, Slovic & Tversky (1982)
Dreyfus & Dreyfus (1986)
Knowledge representation and knowledge engineering:
Russell & Norvig (2003, pp. 260–266, 320–363)
Poole, Mackworth & Goebel (1998, pp. 23–46, 69–81, 169–233, 235–277, 281–298, 319–
345)
Luger & Stubblefield (2004, pp. 227–243),
Nilsson (1998, chpt. 17.1–17.4, 18)
Russell & Norvig (2003), pp. 320–328.
Representing categories and relations: Semantic networks, description logics,
inheritance (including frames and scripts):
Russell & Norvig (2003, pp. 349–354),
Poole, Mackworth & Goebel (1998, pp. 174–177),
Luger & Stubblefield (2004, pp. 248–258),
Nilsson (1998, chpt. 18.3)
Representing events and time: Situation calculus, event calculus, fluent calculus
(including solving the frame problem):
Russell & Norvig (2003, pp. 328–341),
Poole, Mackworth & Goebel (1998, pp. 281–298),
Nilsson (1998, chpt. 18.2)
Causal calculus:
Poole, Mackworth & Goebel (1998, pp. 335–337)
Representing knowledge about knowledge: Belief calculus, modal logics:
Russell & Norvig (2003, pp. 341–344),
Poole, Mackworth & Goebel (1998, pp. 275–277)
Default reasoning, Frame problem, default logic, non-monotonic logics,
circumscription, closed world assumption, abduction:
Russell & Norvig (2003, pp. 354–360)
Poole, Mackworth & Goebel (1998, pp. 248–256, 323–335)
Luger & Stubblefield (2004, pp. 335–363)
Nilsson (1998, ~18.3.3)
(Poole et al. places abduction under "default reasoning". Luger et al. places this
under "uncertain reasoning").
Breadth of commonsense knowledge:
Russell & Norvig (2003, p. 21),
Crevier (1993, pp. 113–114),
Moravec (1988, p. 13),
Lenat & Guha (1989, Introduction)
Smoliar & Zhang (1994).
Neumann & Möller (2008).
Kuperman, Reichley & Bailey (2006).
McGarry (2005).
Bertini, Del Bimbo & Torniai (2006).
Planning:
Russell & Norvig (2003, pp. 375–459)
Poole, Mackworth & Goebel (1998, pp. 281–316)
Luger & Stubblefield (2004, pp. 314–329)
Nilsson (1998, chpt. 10.1–2, 22)
Information value theory:
Russell & Norvig (2003, pp. 600–604)
Classical planning:
Russell & Norvig (2003, pp. 375–430)
Poole, Mackworth & Goebel (1998, pp. 281–315)
Luger & Stubblefield (2004, pp. 314–329)
Nilsson (1998, chpt. 10.1–2, 22)
Planning and acting in non-deterministic domains: conditional planning, execution
monitoring, replanning and continuous planning:
Russell & Norvig (2003, pp. 430–449)
Multi-agent planning and emergent behavior:
Russell & Norvig (2003, pp. 449–455)
Turing (1950).
Solomonoff (1956).
Russell & Norvig (2003), pp. 649–788.
Learning:
Russell & Norvig (2003, pp. 649–788)
Poole, Mackworth & Goebel (1998, pp. 397–438)
Luger & Stubblefield (2004, pp. 385–542)
Nilsson (1998, chpt. 3.3, 10.3, 17.5, 20)
Reinforcement learning:
Russell & Norvig (2003, pp. 763–788)
Luger & Stubblefield (2004, pp. 442–449)
The Economist (2016).
Jordan & Mitchell (2015).
Natural language processing (NLP):
Russell & Norvig (2003, pp. 790–831)
Poole, Mackworth & Goebel (1998, pp. 91–104)
Luger & Stubblefield (2004, pp. 591–632)
Applications of NLP:
Russell & Norvig (2003, pp. 840–857)
Luger & Stubblefield (2004, pp. 623–630)
Modern statistical approaches to NLP:
Cambria & White (2014)
Vincent (2019).
Machine perception:
Russell & Norvig (2003, pp. 537–581, 863–898)
Nilsson (1998, ~chpt. 6)
Speech recognition:
Russell & Norvig (2003, pp. 568–578)
Object recognition:
Russell & Norvig (2003, pp. 885–892)
Computer vision:
Russell & Norvig (2003, pp. 863–898)
Nilsson (1998, chpt. 6)
Robotics:
Russell & Norvig (2003, pp. 901–942)
Poole, Mackworth & Goebel (1998, pp. 443–460)
Robotic mapping and Localization:
Russell & Norvig (2003, pp. 908–915)
Cadena et al. (2016)
Motion planning and configuration space:
Russell & Norvig (2003, pp. 916–932)
Tecuci (2012)
MIT AIL (2014).
Affective computing:
Thro (1993)
Edelson (1991)
Tao & Tan (2005)
Scassellati (2002)
Waddell (2018).
Poria et al. (2017).
The Society of Mind:
Minsky (1986)
Moravec's "golden spike":
Moravec (1988, p. 20)
Multi-agent systems, hybrid intelligent systems, agent architectures, cognitive
architecture:
Russell & Norvig (2003, pp. 27, 932, 970–972)
Nilsson (1998, chpt. 25)
Domingos (2015), Chpt. 9.
Artificial brain as an approach to AGI:
Russell & Norvig (2003, p. 957)
Crevier (1993, pp. 271 & 279)
Goertzel et al. (2010)
A few of the people who make some form of the argument:
Moravec (1988, p. 20)
Kurzweil (2005, p. 262)
Hawkins & Blakeslee (2005)
Search algorithms:
Russell & Norvig (2003, pp. 59–189)
Poole, Mackworth & Goebel (1998, pp. 113–163)
Luger & Stubblefield (2004, pp. 79–164, 193–219)
Nilsson (1998, chpt. 7–12)
Forward chaining, backward chaining, Horn clauses, and logical deduction as
search:
Russell & Norvig (2003, pp. 217–225, 280–294)
Poole, Mackworth & Goebel (1998, pp. ~46–52)
Luger & Stubblefield (2004, pp. 62–73)
Nilsson (1998, chpt. 4.2, 7.2)
State space search and planning:
Russell & Norvig (2003, pp. 382–387)
Poole, Mackworth & Goebel (1998, pp. 298–305)
Nilsson (1998, chpt. 10.1–2)
Moving and configuration space:
Russell & Norvig (2003, pp. 916–932)
Uninformed searches (breadth first search, depth first search and general state
space search):
Russell & Norvig (2003, pp. 59–93)
Poole, Mackworth & Goebel (1998, pp. 113–132)
Luger & Stubblefield (2004, pp. 79–121)
Nilsson (1998, chpt. 8)
Heuristic or informed searches (e.g., greedy best first and A*):
Russell & Norvig (2003, pp. 94–109)
Poole, Mackworth & Goebel (1998, pp. 132–147)
Poole & Mackworth (2017, Section 3.6)
Luger & Stubblefield (2004, pp. 133–150)
Tecuci (2012).
Optimization searches:
Russell & Norvig (2003, pp. 110–116, 120–129)
Poole, Mackworth & Goebel (1998, pp. 56–163)
Luger & Stubblefield (2004, pp. 127–133)
Genetic programming and genetic algorithms:
Luger & Stubblefield (2004, pp. 509–530)
Nilsson (1998, chpt. 4.2)
Artificial life and society based learning:
Luger & Stubblefield (2004, pp. 530–541)
Merkle & Middendorf (2013)
Logic:
Russell & Norvig (2003, pp. 194–310),
Luger & Stubblefield (2004, pp. 35–77),
Nilsson (1998, chpt. 13–16)
Satplan:
Russell & Norvig (2003, pp. 402–407),
Poole, Mackworth & Goebel (1998, pp. 300–301),
Nilsson (1998, chpt. 21)
Explanation based learning, relevance based learning, inductive logic programming,
case based reasoning:
Russell & Norvig (2003, pp. 678–710),
Poole, Mackworth & Goebel (1998, pp. 414–416),
Luger & Stubblefield (2004, pp. ~422–442),
Nilsson (1998, chpt. 10.3, 17.5)
Propositional logic:
Russell & Norvig (2003, pp. 204–233),
Luger & Stubblefield (2004, pp. 45–50)
Nilsson (1998, chpt. 13)
First-order logic and features such as equality:
Russell & Norvig (2003, pp. 240–310),
Poole, Mackworth & Goebel (1998, pp. 268–275),
Luger & Stubblefield (2004, pp. 50–62),
Nilsson (1998, chpt. 15)
Fuzzy logic:
Russell & Norvig (2003, pp. 526–527)
Scientific American (1999)
Abe, Jair Minoro; Nakamatsu, Kazumi (2009). "Multi-agent Systems and
Paraconsistent Knowledge". Knowledge Processing and Decision Making in Agent-Based
Systems. Studies in Computational Intelligence. Vol. 170. Springer Berlin
Heidelberg. pp. 101–121. doi:10.1007/978-3-540-88049-3_5. eISSN 1860-9503. ISBN
978-3-540-88048-6. ISSN 1860-949X. Retrieved 2 August 2022.
Stochastic methods for uncertain reasoning:
Russell & Norvig (2003, pp. 462–644),
Poole, Mackworth & Goebel (1998, pp. 345–395),
Luger & Stubblefield (2004, pp. 165–191, 333–381),
Nilsson (1998, chpt. 19)
Bayesian networks:
Russell & Norvig (2003, pp. 492–523),
Poole, Mackworth & Goebel (1998, pp. 361–381),
Luger & Stubblefield (2004, pp. ~182–190, ≈363–379),
Nilsson (1998, chpt. 19.3–4)
Domingos (2015), chapter 6.
Bayesian inference algorithm:
Russell & Norvig (2003, pp. 504–519),
Poole, Mackworth & Goebel (1998, pp. 361–381),
Luger & Stubblefield (2004, pp. ~363–379),
Nilsson (1998, chpt. 19.4 & 7)
Domingos (2015), p. 210.
Bayesian learning and the expectation-maximization algorithm:
Russell & Norvig (2003, pp. 712–724),
Poole, Mackworth & Goebel (1998, pp. 424–433),
Nilsson (1998, chpt. 20)
Domingos (2015, p. 210)
Bayesian decision theory and Bayesian decision networks:
Russell & Norvig (2003, pp. 597–600)
Stochastic temporal models:
Russell & Norvig (2003, pp. 537–581)
Dynamic Bayesian networks:
Russell & Norvig (2003, pp. 551–557)
Hidden Markov model:
(Russell & Norvig 2003, pp. 549–551)
Kalman filters:
Russell & Norvig (2003, pp. 551–557)
decision theory and decision analysis:
Russell & Norvig (2003, pp. 584–597),
Poole, Mackworth & Goebel (1998, pp. 381–394)
Information value theory:
Russell & Norvig (2003, pp. 600–604)
Markov decision processes and dynamic decision networks:
Russell & Norvig (2003, pp. 613–631)
Game theory and mechanism design:
Russell & Norvig (2003, pp. 631–643)
Statistical learning methods and classifiers:
Russell & Norvig (2003, pp. 712–754),
Luger & Stubblefield (2004, pp. 453–541)
Decision tree:
Domingos (2015, p. 88)
Russell & Norvig (2003, pp. 653–664),
Poole, Mackworth & Goebel (1998, pp. 403–408),
Luger & Stubblefield (2004, pp. 408–417)
K-nearest neighbor algorithm:
Domingos (2015, p. 187)
Russell & Norvig (2003, pp. 733–736)
kernel methods such as the support vector machine:
Domingos (2015, p. 88)
Russell & Norvig (2003, pp. 749–752)
Gaussian mixture model:
Russell & Norvig (2003, pp. 725–727)
Domingos (2015), p. 152.
Naive Bayes classifier:
Domingos (2015, p. 152)
Russell & Norvig (2003, p. 718)
Neural networks:
Russell & Norvig (2003, pp. 736–748),
Poole, Mackworth & Goebel (1998, pp. 408–414),
Luger & Stubblefield (2004, pp. 453–505),
Nilsson (1998, chpt. 3)
Domingos (2015, Chapter 4)
Classifier performance:
van der Walt & Bernard (2006)
Russell & Norvig (2009, 18.12: Learning from Examples: Summary)
Backpropagation:
Russell & Norvig (2003, pp. 744–748),
Luger & Stubblefield (2004, pp. 467–474),
Nilsson (1998, chpt. 3.3)
Paul Werbos' introduction of backpropagation to AI:
Werbos (1974); Werbos (1982)
Automatic differentiation, an essential precursor:
Linnainmaa (1970); Griewank (2012)
Competitive learning, Hebbian coincidence learning, Hopfield networks and
attractor networks:
Luger & Stubblefield (2004, pp. 474–505)
Feedforward neural networks, perceptrons and radial basis networks:
Russell & Norvig (2003, pp. 739–748, 758)
Luger & Stubblefield (2004, pp. 458–467)
Schulz & Behnke (2012).
Deep learning:
Goodfellow, Bengio & Courville (2016)
Hinton et al. (2016)
Schmidhuber (2015)
Deng & Yu (2014), pp. 199–200.
Ciresan, Meier & Schmidhuber (2012).
Habibi (2017).
Fukushima (2007).
Recurrent neural networks, Hopfield nets:
Russell & Norvig (2003, p. 758)
Luger & Stubblefield (2004, pp. 474–505)
Schmidhuber (2015)
Schmidhuber (2015).
Werbos (1988); Robinson & Fallside (1987); Williams & Zipser (1994)
Goodfellow, Bengio & Courville (2016); Hochreiter (1991)
Hochreiter & Schmidhuber (1997); Gers, Schraudolph & Schmidhuber (2002)
Russell & Norvig (2009), p. 1.
European Commission (2020), p. 1.
CNN (2006).
Targeted advertising:
Russell & Norvig (2009, p. 1)
Economist (2016)
Lohr (2016)
Lohr (2016).
Smith (2016).
Rowinski (2013).
Frangoul (2019).
Brown (2019).
McCorduck (2004), pp. 480–483.
Markoff (2011).
Google (2016); BBC (2016)
Solly (2019).
Bowling et al. (2015).
Sample (2017).
Anadiotis (2020).
Heath (2020).
Aletras et al. (2016).
"Intellectual Property and Frontier Technologies". WIPO.
"WIPO Technology Trends 2019 – Artificial Intelligence" (PDF). WIPO. 2019.
"Artificial intelligence and copyright". www.wipo.int. Retrieved 27 May 2022.
Turing (1950), p. 1.
Turing (1948).
Turing's original publication of the Turing test in "Computing machinery and
intelligence":
Turing (1950)
Historical influence and philosophical implications:
Haugeland (1985, pp. 6–9)
Crevier (1993, p. 24)
McCorduck (2004, pp. 70–71)
Russell & Norvig (2021, pp. 2 and 984)
Russell & Norvig (2021), p. 2-3.
Maker (2006).
Russell & Norvig (2021), p. 3.
McCorduck (2004), pp. 100–101.
The intelligent agent paradigm:
Russell & Norvig (2021, p. 4, chpt. 2)
Poole, Mackworth & Goebel (1998, pp. 7–21)
Luger & Stubblefield (2004, pp. 235–240)
Hutter (2005, pp. 125–126)
The definition used in this article, in terms of goals, actions, perception and
environment, is due to Russell & Norvig (2021, p. 40). Other definitions also
include knowledge, learning and autonomy as additional criteria.
Russell & Norvig (2021), p. 4.
Nilsson (1983), p. 10.
Haugeland (1985), pp. 112–117.
Physical symbol system hypothesis:
Newell & Simon (1976, p. 116)
Historical significance:
McCorduck (2004, p. 153)
Russell & Norvig (2003, p. 18)
Moravec's paradox:
Moravec (1988, pp. 15–16)
Minsky (1986, p. 29)
Pinker (2007, pp. 190–91)
Dreyfus' critique of AI:
Dreyfus (1972)
Dreyfus & Dreyfus (1986)
Historical significance and philosophical implications:
Crevier (1993, pp. 120–132)
McCorduck (2004, pp. 211–239)
Russell & Norvig (2003, pp. 950–952)
Fearn (2007, Chpt. 3)
Crevier (1993), p. 125.
Langley (2011).
Katz (2012).
Neats vs. scruffies, the historic debate:
McCorduck (2004, pp. 421–424, 486–489)
Crevier (1993, p. 168)
Nilsson (1983, pp. 10–11)
A classic example of the "scruffy" approach to intelligence:
Minsky (1986)
A modern example of neat AI and its aspirations:
Domingos (2015)
Russell & Norvig (2003), pp. 25–26.
Pennachin & Goertzel (2007).
Roberts (2016).
Russell & Norvig (2003), p. 947.
Chalmers (1995).
Dennett (1991).
Horst (2005).
Searle (1999).
Searle (1980), p. 1.
Searle's Chinese room argument:
Searle (1980). Searle's original presentation of the thought experiment.
Searle (1999).
Discussion:
Russell & Norvig (2003, pp. 958–960)
McCorduck (2004, pp. 443–445)
Crevier (1993, pp. 269–271)
Robot rights:
Russell & Norvig (2003, p. 964)
BBC (2006)
Maschafilm (2010) (the film Plug & Pray)
Evans (2015).
McCorduck (2004), pp. 19–25.
Henderson (2007).
Omohundro (2008).
Vinge (1993).
Russell & Norvig (2003), p. 963.
Transhumanism:
Moravec (1988)
Kurzweil (2005)
Russell & Norvig (2003, p. 963)
AI as evolution:
Edward Fredkin is quoted in McCorduck (2004, p. 401)
Butler (1863)
Dyson (1998)
Ford & Colvin (2015); McGaughey (2018)
IGM Chicago (2017).
Arntz, Gregory & Zierahn (2016), p. 33.
Lohr (2017); Frey & Osborne (2017); Arntz, Gregory & Zierahn (2016, p. 33)
Morgenstern (2015).
Mahdawi (2017); Thompson (2014)
Harari (2018).
Weaponized AI:
Robitzski (2018)
Sainato (2015)
Urbina, Fabio; Lentzos, Filippa; Invernizzi, Cédric; Ekins, Sean (7 March 2022).
"Dual use of artificial-intelligence-powered drug discovery". Nature Machine
Intelligence. 4 (3): 189–191. doi:10.1038/s42256-022-00465-9. S2CID 247302391.
Retrieved 15 March 2022.
CNA (2019).
Goffrey (2008), p. 17.
Lipartito (2011, p. 36); Goodman & Flaxman (2017, p. 6)
Larson & Angwin (2016).
Dockrill, Peter, Robots With Flawed AI Make Sexist And Racist Decisions,
Experiment Shows, Science Alert, 27 June 2022
Cellan-Jones (2014).
Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015)
Rubin (2003).
Müller & Bostrom (2014).
Leaders' concerns about the existential risks of AI:
Rawlinson (2015)
Holley (2015)
Gibbs (2014)
Churm (2019)
Sainato (2015)
Funding to mitigate risks of AI:
Post (2015)
Del Prado (2015)
Clark (2015a)
FastCompany (2015)
Leaders who argue the benefits of AI outweigh the risks:
Thibodeau (2019)
Bhardwaj (2018)
Arguments that AI is not an imminent risk:
Brooks (2014)
Geist (2015)
Madrigal (2015)
Lee (2014)
Brooks (2014).
Yudkowsky (2008).
Anderson & Anderson (2011).
AAAI (2014).
Wallach (2010).
Russell (2019), p. 173.
Regulation of AI to mitigate risks:
Berryhill et al. (2019)
Barfield & Pagallo (2018)
Iphofen & Kritikos (2019)
Wirtz, Weyerer & Geyer (2018)
Buiten (2019)
Law Library of Congress (U.S.). Global Legal Research Directorate (2019).
Kissinger, Henry (1 November 2021). "The Challenge of Being Human in the Age of
AI". The Wall Street Journal.
Buttazzo (2001).
Anderson (2008).
McCauley (2007).
Galvan (1997).
"A Visual Guide to the World's Military Budgets". Bloomberg.com. 11 March 2022.
Retrieved 11 May 2022.
Kharpal, Arjun (21 July 2017). "China wants to be a $150 billion world leader in
AI in less than 15 years". CNBC. Retrieved 11 May 2022.
Radina Gigova (2 September 2017). "Who Putin thinks will rule the world". CNN.
Retrieved 18 May 2022.
"In Ukraine, A.I. is going to war". Fortune. Retrieved 11 May 2022.
"AI Is Already Learning from Russia's War in Ukraine, DOD Says". Defense One.
Retrieved 11 May 2022.
"A.I. drones used in the Ukraine war raise fears of killer robots wreaking havoc
across future battlefields". Fortune. Retrieved 11 May 2022.
Vincent, James (21 August 2017). "Elon Musk and AI leaders call for a ban on
killer robots". The Verge. Retrieved 11 May 2022.
Kerry, Cameron F. (10 February 2020). "Protecting privacy in an AI-driven world".
Brookings. Retrieved 11 May 2022.
"CYBERSECURITY | CISA". www.cisa.gov. Retrieved 11 May 2022.
Inkster, Nigel (4 November 2021), Cornish, Paul (ed.), "Semi-Formal Diplomacy",
The Oxford Handbook of Cyber Security, Oxford University Press, pp. 530–542,
doi:10.1093/oxfordhb/9780198800682.013.33, ISBN 978-0-19-880068-2, retrieved 11 May
2022
"Connect the Dots on State-Sponsored Cyber Incidents – Compromise of the Czech
foreign minister's computer". Council on Foreign Relations. Retrieved 11 May 2022.
Kadlecová, Lucie; Meyer, Nadia; Cos, Rafaël; Ravinet, Pauline (2020). "Cyber
Security: Mapping the Role of Science Diplomacy in the Cyber Field". In: Young, M.,
T. Flink, E. Dall (eds). Science Diplomacy in the Making: Case-based insights from
the S4D4C project.
Canada, Global Affairs (19 November 2019). "European innovation partnerships – A
guide for Canadian small and medium enterprises". GAC. Retrieved 11 May 2022.
"Homepage". Federal Office for Information Security. Retrieved 11 May 2022.
"Strategie Künstliche Intelligenz der Bundesregierung". Federal Ministry for
Economic Affairs and Energy: 6. 2018 – via Economy and climate protection (BMWK).
"Europe counts on Luxembourg's expertise". San Francisco. Retrieved 11 May 2022.
"Cyber Security Europe | Cyber security insight for boardroom and c-suite
executives". Cyber Security Europe. Retrieved 11 May 2022.
EMK, SU. "ECHO Network". Retrieved 11 May 2022.
"SPARTA Consortium". www.cybersecurityintelligence.com. Retrieved 11 May 2022.
"Shields Up | CISA". www.cisa.gov. Retrieved 11 May 2022.
Tegler, Eric. "The Vulnerability of AI Systems May Explain Why Russia Isn't Using
Them Extensively in Ukraine". Forbes. Retrieved 11 May 2022.
"CyberAI Project". Center for Security and Emerging Technology. Retrieved 11 May
2022.
"AI Weekly: The Russia-Ukraine conflict is a test case for AI in warfare".
VentureBeat. 4 March 2022. Retrieved 11 May 2022.
Kamarck, Elaine (29 November 2018). "Malevolent soft power, AI, and the threat to
democracy". Brookings. Retrieved 18 May 2022.
Guglielmi, Giorgia (28 October 2020). "The next-generation bots interfering with
the US election". Nature. 587 (7832): 21. Bibcode:2020Natur.587...21G.
doi:10.1038/d41586-020-03034-5. PMID 33116324. S2CID 226052075.
Hosseinmardi, Homa; Ghasemian, Amir; Clauset, Aaron; Mobius, Markus; Rothschild,
David M.; Watts, Duncan J. (10 August 2021). "Examining the consumption of radical
content on YouTube". Proceedings of the National Academy of Sciences. 118 (32):
e2101967118. doi:10.1073/pnas.2101967118. ISSN 0027-8424. PMC 8364190. PMID
34341121.
"Explained: Here is how YouTube recommendations work". The Indian Express. 26
October 2021. Retrieved 18 May 2022.
You Won't Believe What Obama Says In This Video! 😉, retrieved 18 May 2022
"Where Are The Deepfakes In This Presidential Election?". NPR.org. Retrieved 18
May 2022.
Polonski, Slava (4 February 2018). "Artificial intelligence can save
democracy, unless it destroys it first". Medium. Retrieved 18 May 2022.
"Science Diplomacy and Future Worlds". Science & Diplomacy. Retrieved 18 May 2022.
"Internal Revenue Service (IRS)", Encyclopedia of Business Ethics and Society,
Thousand Oaks, California: SAGE Publications, Inc., 2008,
doi:10.4135/9781412956260.n436, ISBN 9781412916523, retrieved 19 May 2022
Ga-young, Park (9 October 2021). "Korea's edu tech startup Riiid acquires Japanese
mobile app distributor". The Korea Herald. Retrieved 19 May 2022.
"You are being redirected..." www.analyticsinsight.net. Retrieved 19 May 2022.
"AI early diagnosis could save heart and cancer patients". BBC News. 2 January
2018. Retrieved 19 May 2022.
Davenport, Thomas; Kalakota, Ravi (June 2019). "The potential for artificial
intelligence in healthcare". Future Healthcare Journal. 6 (2): 94–98.
doi:10.7861/futurehosp.6-2-94. ISSN 2514-6645. PMC 6616181. PMID 31363513.
Jain |, Pragya. "AI and the Future of Work in the United States". American
University. Retrieved 19 May 2022.
"BASF and its partners publish results for 'Pragati', world's first sustainable
castor bean program". Focus on Powder Coatings. 2022 (2): 5. February 2022.
doi:10.1016/j.fopow.2022.01.019. ISSN 1364-5439. S2CID 246561954.
"AI in 2020: From Experimentation to Adoption". THINK Blog. 3 January 2020.
Retrieved 19 May 2022.
"Artificial Intelligence Market Size, Share, Trends, Opportunities & Forecast".
Verified Market Research. Retrieved 19 May 2022.
"European Commission.: Ethics guidelines for trustworthy AI. EC HLEG". 2019.
Curtis, Caitlin; Gillespie, Nicole; Lockey, Steven (24 May 2022). "AI-deploying
organizations are key to addressing 'perfect storm' of AI risks". AI and Ethics: 1–
9. doi:10.1007/s43681-022-00163-7. ISSN 2730-5961. PMC 9127285. PMID 35634256.
"Fakers Who Realize That They're the Real Thing", The New York Times Television
Reviews 2000, Routledge, pp. 423–445, 5 June 2003, doi:10.4324/9780203508305-31,
ISBN 9780203508305, retrieved 19 May 2022
Barattero, Alberto (2018). "The People Vs Tech: How the Internet is Killing
Democracy (and How We Save It)". The Incarnate Word. 5 (2): 204–207.
doi:10.5840/tiw20185221. ISSN 2150-9824.
Tapsell, Paul (December 2021), "He Tohu: Tipping Point", Kāinga: People, Land,
Belonging, Bridget Williams Books, pp. 11–24, doi:10.7810/9781988587585_1, ISBN
9781988587585, S2CID 245941972, retrieved 19 May 2022
"Will algorithms make safe decisions in foreign affairs? – Diplo". 17 December
2019. Retrieved 19 May 2022.
"How AI Is Running China's Foreign Policy". Analytics India Magazine. 31 July
2018. Retrieved 19 May 2022.
"United Nations Activities on Artificial Intelligence (AI)". ITU. Retrieved 19 May
2022.
"Engineering Diplomacy: How AI and Human Augmentation Could Remake the Art of
Foreign Relations". Science & Diplomacy. Retrieved 19 May 2022.
Aylett-Bullock, Joseph; Cuesta-Lazaro, Carolina; Quera-Bofarull, Arnau; Katta,
Anjali; Pham, Katherine Hoffmann; Hoover, Benjamin; Strobelt, Hendrik; Jimenez,
Rebeca Moreno; Sedgewick, Aidan; Evers, Egmond Samir; Kennedy, David (6 July 2021).
"Operational response simulation tool for epidemics within refugee and IDP
settlements": 2021.01.27.21250611. doi:10.1101/2021.01.27.21250611. S2CID
231722795.
Witmer, Frank D. W. (3 May 2015). "Remote sensing of violent conflict: eyes from
above". International Journal of Remote Sensing. 36 (9): 2326–2352.
Bibcode:2015IJRS...36.2326W. doi:10.1080/01431161.2015.1035412. ISSN 0143-1161.
S2CID 140656194.
Wu, Jeff; Ouyang, Long; Ziegler, Daniel M.; Stiennon, Nisan; Lowe, Ryan; Leike,
Jan; Christiano, Paul (27 September 2021). "Recursively Summarizing Books with
Human Feedback". arXiv:2109.10862 [cs.CL].
"What the American Interpreter Got Wrong at Tense US-China Talks in Alaska".
www.vice.com. Retrieved 19 May 2022.
Lauriola, Ivano; Lavelli, Alberto; Aiolli, Fabio (22 January 2022). "An
introduction to Deep Learning in Natural Language Processing: Models, techniques,
and tools". Neurocomputing. 470: 443–456. doi:10.1016/j.neucom.2021.05.103. ISSN
0925-2312. S2CID 238835461.
References
Attribution
This article incorporates text from a free content work, licensed under CC BY-SA
3.0 IGO. Text taken from UNESCO Science Report: the Race Against Time for Smarter
Development, Schneegans, S., T. Straza and J. Lewis (eds), UNESCO.
AI textbooks
These were the four most widely used AI textbooks in 2008:

Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures
and Strategies for Complex Problem Solving (5th ed.). Benjamin/Cummings. ISBN 978-
0-8053-4780-7. Archived from the original on 26 July 2020. Retrieved 17 December
2019.
Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann.
ISBN 978-1-55860-467-4. Archived from the original on 26 July 2020. Retrieved 18
November 2019.
Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern
Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-
790395-2.
Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A
Logical Approach. New York: Oxford University Press. ISBN 978-0-19-510270-3.
Archived from the original on 26 July 2020. Retrieved 22 August 2020.
Later editions:
Russell, Stuart J.; Norvig, Peter (2009). Artificial Intelligence: A Modern
Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN
978-0-13-604259-4.
Poole, David; Mackworth, Alan (2017). Artificial Intelligence: Foundations of
Computational Agents (2nd ed.). Cambridge University Press. ISBN 978-1-107-19539-4.
The two most widely used textbooks in 2021, according to Open Syllabus: Explorer:
Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern
Approach (4th ed.). Hoboken: Pearson. ISBN 9780134610993. LCCN 20190474.
Knight, Kevin; Rich, Elaine (1 January 2010). Artificial Intelligence (3rd ed.). Mc
Graw Hill India. ISBN 9780070087705.
History of AI
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New
York, NY: BasicBooks. ISBN 0-465-02997-3.
McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters,
Ltd., ISBN 1-56881-205-1.
Newquist, HP (1994). The Brain Makers: Genius, Ego, And Greed In The Quest For
Machines That Think. New York: Macmillan/SAMS. ISBN 978-0-672-30412-5.
Nilsson, Nils (2009). The Quest for Artificial Intelligence: A History of Ideas and
Achievements. New York: Cambridge University Press. ISBN 978-0-521-12293-1.
Other sources
Werbos, P. J. (1988), "Generalization of backpropagation with application to a
recurrent gas market model", Neural Networks, 1 (4): 339–356, doi:10.1016/0893-
6080(88)90007-X
Gers, Felix A.; Schraudolph, Nicol N.; Schmidhuber, Jürgen (2002). "Learning
Precise Timing with LSTM Recurrent Networks" (PDF). Journal of Machine Learning
Research. 3: 115–143. Retrieved 13 June 2017.
Deng, L.; Yu, D. (2014). "Deep Learning: Methods and Applications" (PDF).
Foundations and Trends in Signal Processing. 7 (3–4): 1–199.
doi:10.1561/2000000039. Archived (PDF) from the original on 14 March 2016.
Retrieved 18 October 2014.
Schulz, Hannes; Behnke, Sven (1 November 2012). "Deep Learning". KI – Künstliche
Intelligenz. 26 (4): 357–363. doi:10.1007/s13218-012-0198-z. ISSN 1610-1987. S2CID
220523562.
Fukushima, K. (2007). "Neocognitron". Scholarpedia. 2 (1): 1717.
Bibcode:2007SchpJ...2.1717F. doi:10.4249/scholarpedia.1717. The neocognitron was
introduced by Kunihiko Fukushima in 1980.
Habibi Aghdam, Hamed; Heravi, Elnaz Jahani (30 May 2017). Guide to Convolutional
Neural Networks: A Practical Application to Traffic-Sign Detection and
Classification. Cham, Switzerland. ISBN 9783319575490. OCLC 987790957.
Ciresan, D.; Meier, U.; Schmidhuber, J. (2012). "Multi-column deep neural networks
for image classification". 2012 IEEE Conference on Computer Vision and Pattern
Recognition. pp. 3642–3649. arXiv:1202.2745. doi:10.1109/cvpr.2012.6248110. ISBN
978-1-4673-1228-8. S2CID 2161592.
"From not working to neural networking". The Economist. 2016. Archived from the
original on 31 December 2016. Retrieved 26 April 2018.
Thompson, Derek (23 January 2014). "What Jobs Will the Robots Take?". The Atlantic.
Archived from the original on 24 April 2018. Retrieved 24 April 2018.
Scassellati, Brian (2002). "Theory of mind for a humanoid robot". Autonomous
Robots. 12 (1): 13–24. doi:10.1023/A:1013298507114. S2CID 1979315.
Sample, Ian (14 March 2017). "Google's DeepMind makes AI program that can learn
like a human". The Guardian. Archived from the original on 26 April 2018. Retrieved
26 April 2018.
Heath, Nick (11 December 2020). "What is AI? Everything you need to know about
Artificial Intelligence". ZDNet. Retrieved 1 March 2021.
Bowling, Michael; Burch, Neil; Johanson, Michael; Tammelin, Oskari (9 January
2015). "Heads-up limit hold'em poker is solved". Science. 347 (6218): 145–149.
Bibcode:2015Sci...347..145B. doi:10.1126/science.1259433. ISSN 0036-8075. PMID
25574016. S2CID 3796371.
Solly, Meilan (15 July 2019). "This Poker-Playing A.I. Knows When to Hold 'Em and
When to Fold 'Em". Smithsonian.
"Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol". BBC News.
12 March 2016. Archived from the original on 26 August 2016. Retrieved 1 October
2016.
Rowinski, Dan (15 January 2013). "Virtual Personal Assistants & The Future Of Your
Smartphone [Infographic]". ReadWrite. Archived from the original on 22 December
2015.
Manyika, James (2022). "Getting AI Right: Introductory Notes on AI & Society".
Daedalus. 151 (2): 5–27. doi:10.1162/daed_e_01897. S2CID 248377878. Retrieved 5 May
2022.
Markoff, John (16 February 2011). "Computer Wins on 'Jeopardy!': Trivial, It's
Not". The New York Times. Archived from the original on 22 October 2014. Retrieved
25 October 2014.
Anadiotis, George (1 October 2020). "The state of AI in 2020: Democratization,
industrialization, and the way to artificial general intelligence". ZDNet.
Retrieved 1 March 2021.
Goertzel, Ben; Lian, Ruiting; Arel, Itamar; de Garis, Hugo; Chen, Shuo (December
2010). "A world survey of artificial brain projects, Part II: Biologically inspired
cognitive architectures". Neurocomputing. 74 (1–3): 30–49.
doi:10.1016/j.neucom.2010.08.012.
Robinson, A. J.; Fallside, F. (1987), "The utility driven dynamic error propagation
network.", Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering
Department
Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF)
(diploma thesis). Munich: Institut f. Informatik, Technische Univ. Archived from
the original (PDF) on 6 March 2015. Retrieved 16 April 2016.
Williams, R. J.; Zipser, D. (1994), "Gradient-based learning algorithms for
recurrent networks and their computational complexity", Back-propagation: Theory,
Architectures and Applications, Hillsdale, NJ: Erlbaum
Hochreiter, Sepp; Schmidhuber, Jürgen (1997), "Long Short-Term Memory", Neural
Computation, 9 (8): 1735–1780, doi:10.1162/neco.1997.9.8.1735, PMID 9377276, S2CID
1915014
Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016), Deep Learning, MIT
Press., archived from the original on 16 April 2016, retrieved 12 November 2017
Hinton, G.; Deng, L.; Yu, D.; Dahl, G.; Mohamed, A.; Jaitly, N.; Senior, A.;
Vanhoucke, V.; Nguyen, P.; Sainath, T.; Kingsbury, B. (2012). "Deep Neural Networks
for Acoustic Modeling in Speech Recognition – The shared views of four research
groups". IEEE Signal Processing Magazine. 29 (6): 82–97.
Bibcode:2012ISPM...29...82H. doi:10.1109/msp.2012.2205597. S2CID 206485943.
Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural
Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID
25462637. S2CID 11715509.
Linnainmaa, Seppo (1970). The representation of the cumulative rounding error of an
algorithm as a Taylor expansion of the local rounding errors (Thesis) (in Finnish).
Univ. Helsinki, 6–7.
Griewank, Andreas (2012). "Who Invented the Reverse Mode of Differentiation?".
Optimization Stories, Documenta Mathematica, Extra Volume ISMP: 389–400.
Werbos, Paul (1974). Beyond Regression: New Tools for Prediction and Analysis in
the Behavioral Sciences (Ph.D. thesis). Harvard University.
Werbos, Paul (1982). "Applications of advances in nonlinear sensitivity analysis"
(PDF). System Modeling and Optimization. Berlin, Heidelberg: Springer. Archived
from the original (PDF) on 14 April 2016. Retrieved 16 April 2016.
"What is 'fuzzy logic'? Are there computers that are inherently fuzzy and do not
apply the usual binary logic?". Scientific American. 21 October 1999. Retrieved 5
May 2018.
Merkle, Daniel; Middendorf, Martin (2013). "Swarm Intelligence". In Burke, Edmund
K.; Kendall, Graham (eds.). Search Methodologies: Introductory Tutorials in
Optimization and Decision Support Techniques. Springer Science & Business Media.
ISBN 978-1-4614-6940-7.
van der Walt, Christiaan; Bernard, Etienne (2006). "Data characteristics that
determine classifier performance" (PDF). Archived from the original (PDF) on 25
March 2009. Retrieved 5 August 2009.
Hutter, Marcus (2005). Universal Artificial Intelligence. Berlin: Springer. ISBN
978-3-540-22139-5.
Howe, J. (November 1994). "Artificial Intelligence at Edinburgh University: a
Perspective". Archived from the original on 15 May 2007. Retrieved 30 August 2007.
Galvan, Jill (1 January 1997). "Entering the Posthuman Collective in Philip K.
Dick's "Do Androids Dream of Electric Sheep?"". Science Fiction Studies. 24 (3):
413–429. JSTOR 4240644.
McCauley, Lee (2007). "AI armageddon and the three laws of robotics". Ethics and
Information Technology. 9 (2): 153–164. CiteSeerX 10.1.1.85.8904.
doi:10.1007/s10676-007-9138-2. S2CID 37272949.
Buttazzo, G. (July 2001). "Artificial consciousness: Utopia or real possibility?".
Computer. 34 (7): 24–30. doi:10.1109/2.933500.
Anderson, Susan Leigh (2008). "Asimov's "three laws of robotics" and machine
metaethics". AI & Society. 22 (4): 477–493. doi:10.1007/s00146-007-0094-5. S2CID
1809459.
Yudkowsky, E (2008), "Artificial Intelligence as a Positive and Negative Factor in
Global Risk" (PDF), Global Catastrophic Risks, Oxford University Press, 2008,
Bibcode:2008gcr..book..303Y
McGaughey, E (2018), Will Robots Automate Your Job Away? Full Employment, Basic
Income, and Economic Democracy, p. SSRN part 2(3), SSRN 3044448, archived from the
original on 24 May 2018, retrieved 12 January 2018
IGM Chicago (30 June 2017). "Robots and Artificial Intelligence".
www.igmchicago.org. Archived from the original on 1 May 2019. Retrieved 3 July
2019.
Lohr, Steve (2017). "Robots Will Take Jobs, but Not as Fast as Some Fear, New
Report Says". The New York Times. Archived from the original on 14 January 2018.
Retrieved 13 January 2018.
Frey, Carl Benedikt; Osborne, Michael A (1 January 2017). "The future of
employment: How susceptible are jobs to computerisation?". Technological
Forecasting and Social Change. 114: 254–280. CiteSeerX 10.1.1.395.416.
doi:10.1016/j.techfore.2016.08.019. ISSN 0040-1625.
Arntz, Melanie; Gregory, Terry; Zierahn, Ulrich (2016), "The risk of automation for
jobs in OECD countries: A comparative analysis", OECD Social, Employment, and
Migration Working Papers 189
Morgenstern, Michael (9 May 2015). "Automation and anxiety". The Economist.
Archived from the original on 12 January 2018. Retrieved 13 January 2018.
Mahdawi, Arwa (26 June 2017). "What jobs will still be around in 20 years? Read
this to prepare your future". The Guardian. Archived from the original on 14
January 2018. Retrieved 13 January 2018.
Rubin, Charles (Spring 2003). "Artificial Intelligence and Human Nature". The New
Atlantis. 1: 88–100. Archived from the original on 11 June 2012.
Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford
University Press.
Brooks, Rodney (10 November 2014). "artificial intelligence is a tool, not a
threat". Archived from the original on 12 November 2014.
Sainato, Michael (19 August 2015). "Stephen Hawking, Elon Musk, and Bill Gates Warn
About Artificial Intelligence". Observer. Archived from the original on 30 October
2015. Retrieved 30 October 2015.
Harari, Yuval Noah (October 2018). "Why Technology Favors Tyranny". The Atlantic.
Robitzski, Dan (5 September 2018). "Five experts share what scares them the most
about AI". Archived from the original on 8 December 2019. Retrieved 8 December
2019.
Goffrey, Andrew (2008). "Algorithm". In Fuller, Matthew (ed.). Software studies: a
lexicon. Cambridge, Mass.: MIT Press. pp. 15–20. ISBN 978-1-4356-4787-9.
Lipartito, Kenneth (6 January 2011), The Narrative and the Algorithm: Genres of
Credit Reporting from the Nineteenth Century to Today (PDF) (Unpublished
manuscript), doi:10.2139/ssrn.1736283, S2CID 166742927
Goodman, Bryce; Flaxman, Seth (2017). "EU regulations on algorithmic decision-
making and a "right to explanation"". AI Magazine. 38 (3): 50. arXiv:1606.08813.
doi:10.1609/aimag.v38i3.2741. S2CID 7373959.
CNA (12 January 2019). "Commentary: Bad news. Artificial intelligence is biased".
CNA. Archived from the original on 12 January 2019. Retrieved 19 June 2020.
Larson, Jeff; Angwin, Julia (23 May 2016). "How We Analyzed the COMPAS Recidivism
Algorithm". ProPublica. Archived from the original on 29 April 2019. Retrieved 19
June 2020.
Müller, Vincent C.; Bostrom, Nick (2014). "Future Progress in Artificial
Intelligence: A Poll Among Experts" (PDF). AI Matters. 1 (1): 9–11.
doi:10.1145/2639475.2639478. S2CID 8510016. Archived (PDF) from the original on 15
January 2016.
Cellan-Jones, Rory (2 December 2014). "Stephen Hawking warns artificial
intelligence could end mankind". BBC News. Archived from the original on 30 October
2015. Retrieved 30 October 2015.
Rawlinson, Kevin (29 January 2015). "Microsoft's Bill Gates insists AI is a
threat". BBC News. Archived from the original on 29 January 2015. Retrieved 30
January 2015.
Holley, Peter (28 January 2015). "Bill Gates on dangers of artificial intelligence:
'I don't understand why some people are not concerned'". The Washington Post. ISSN
0190-8286. Archived from the original on 30 October 2015. Retrieved 30 October
2015.
Gibbs, Samuel (27 October 2014). "Elon Musk: artificial intelligence is our biggest
existential threat". The Guardian. Archived from the original on 30 October 2015.
Retrieved 30 October 2015.
Churm, Philip Andrew (14 May 2019). "Yuval Noah Harari talks politics, technology
and migration". euronews. Archived from the original on 14 May 2019. Retrieved 15
November 2020.
Bostrom, Nick (2015). "What happens when our computers get smarter than we are?".
TED (conference). Archived from the original on 25 July 2020. Retrieved 30 January
2020.
Washington Post (2015). "Tech titans like Elon Musk are spending $1 billion to
save you from terminators". Chicago Tribune. Archived from the original on 7 June
2016.
Del Prado, Guia Marie (9 October 2015). "The mysterious artificial intelligence
company Elon Musk invested in is developing game-changing smart computers". Tech
Insider. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
FastCompany (15 January 2015). "Elon Musk Is Donating $10M Of His Own Money To
Artificial Intelligence Research". Fast Company. Archived from the original on 30
October 2015. Retrieved 30 October 2015.
Thibodeau, Patrick (25 March 2019). "Oracle CEO Mark Hurd sees no reason to fear
ERP AI". SearchERP. Archived from the original on 6 May 2019. Retrieved 6 May 2019.
Bhardwaj, Prachi (24 May 2018). "Mark Zuckerberg responds to Elon Musk's paranoia
about AI: 'AI is going to... help keep our communities safe.'". Business Insider.
Archived from the original on 6 May 2019. Retrieved 6 May 2019.
Geist, Edward Moore (9 August 2015). "Is artificial intelligence really an
existential threat to humanity?". Bulletin of the Atomic Scientists. Archived from
the original on 30 October 2015. Retrieved 30 October 2015.
Madrigal, Alexis C. (27 February 2015). "The case against killer robots, from a guy
actually working on artificial intelligence". Fusion.net. Archived from the
original on 4 February 2016. Retrieved 31 January 2016.
Lee, Timothy B. (22 August 2014). "Will artificial intelligence destroy humanity?
Here are 5 reasons not to worry". Vox. Archived from the original on 30 October
2015. Retrieved 30 October 2015.
Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body.
(2019). Regulation of artificial intelligence in selected jurisdictions. LCCN
2019668143. OCLC 1110727808.
UNESCO Science Report: the Race Against Time for Smarter Development. Paris:
UNESCO. 11 June 2021. ISBN 978-92-3-100450-6.
Berryhill, Jamie; Heang, Kévin Kok; Clogher, Rob; McBride, Keegan (2019). Hello,
World: Artificial Intelligence and its Use in the Public Sector (PDF). Paris: OECD
Observatory of Public Sector Innovation. Archived (PDF) from the original on 20
December 2019. Retrieved 9 August 2020.
Barfield, Woodrow; Pagallo, Ugo (2018). Research handbook on the law of artificial
intelligence. Cheltenham, UK. ISBN 978-1-78643-904-8. OCLC 1039480085.
Iphofen, Ron; Kritikos, Mihalis (3 January 2019). "Regulating artificial
intelligence and robotics: ethics by design in a digital society". Contemporary
Social Science. 16 (2): 170–184. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041.
S2CID 59298502.
Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (24 July 2018). "Artificial
Intelligence and the Public Sector – Applications and Challenges". International
Journal of Public Administration. 42 (7): 596–615.
doi:10.1080/01900692.2018.1498103. ISSN 0190-0692. S2CID 158829602. Archived from
the original on 18 August 2020. Retrieved 22 August 2020.
Buiten, Miriam C (2019). "Towards Intelligent Regulation of Artificial
Intelligence". European Journal of Risk Regulation. 10 (1): 41–59.
doi:10.1017/err.2019.8. ISSN 1867-299X.
Wallach, Wendell (2010). Moral Machines. Oxford University Press.
Brown, Eileen (5 November 2019). "Half of Americans do not believe deepfake news
could target them online". ZDNet. Archived from the original on 6 November 2019.
Retrieved 3 December 2019.
Frangoul, Anmar (14 June 2019). "A Californian business is using A.I. to change the
way we think about energy storage". CNBC. Archived from the original on 25 July
2020. Retrieved 5 November 2019.
"The Economist Explains: Why firms are piling into artificial intelligence". The
Economist. 31 March 2016. Archived from the original on 8 May 2016. Retrieved 19
May 2016.
Lohr, Steve (28 February 2016). "The Promise of Artificial Intelligence Unfolds in
Small Steps". The New York Times. Archived from the original on 29 February 2016.
Retrieved 29 February 2016.
Smith, Mark (22 July 2016). "So you think you chose to read this article?". BBC
News. Archived from the original on 25 July 2016.
Aletras, N.; Tsarapatsanis, D.; Preotiuc-Pietro, D.; Lampos, V. (2016). "Predicting
judicial decisions of the European Court of Human Rights: a Natural Language
Processing perspective". PeerJ Computer Science. 2: e93. doi:10.7717/peerj-cs.93.
Cadena, Cesar; Carlone, Luca; Carrillo, Henry; Latif, Yasir; Scaramuzza, Davide;
Neira, Jose; Reid, Ian; Leonard, John J. (December 2016). "Past, Present, and
Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age".
IEEE Transactions on Robotics. 32 (6): 1309–1332. arXiv:1606.05830.
doi:10.1109/TRO.2016.2624754. S2CID 2596787.
Cambria, Erik; White, Bebo (May 2014). "Jumping NLP Curves: A Review of Natural
Language Processing Research [Review Article]". IEEE Computational Intelligence
Magazine. 9 (2): 48–57. doi:10.1109/MCI.2014.2307227. S2CID 206451986.
Vincent, James (7 November 2019). "OpenAI has published the text-generating AI it
said was too dangerous to share". The Verge. Archived from the original on 11 June
2020. Retrieved 11 June 2020.
Jordan, M. I.; Mitchell, T. M. (16 July 2015). "Machine learning: Trends,
perspectives, and prospects". Science. 349 (6245): 255–260.
Bibcode:2015Sci...349..255J. doi:10.1126/science.aaa8415. PMID 26185243. S2CID
677218.
Maschafilm (2010). "Content: Plug & Pray Film – Artificial Intelligence – Robots
-". plugandpray-film.de. Archived from the original on 12 February 2016.
Evans, Woody (2015). "Posthuman Rights: Dimensions of Transhuman Worlds".
Teknokultura. 12 (2). doi:10.5209/rev_TK.2015.v12.n2.49072.
Waddell, Kaveh (2018). "Chatbots Have Entered the Uncanny Valley". The Atlantic.
Archived from the original on 24 April 2018. Retrieved 24 April 2018.
Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A
review of affective computing: From unimodal analysis to multimodal fusion".
Information Fusion. 37: 98–125. doi:10.1016/j.inffus.2017.02.003. hdl:1893/25490.
"Robots could demand legal rights". BBC News. 21 December 2006. Archived from the
original on 15 October 2019. Retrieved 3 February 2011.
Horst, Steven (2005). "The Computational Theory of Mind". The Stanford Encyclopedia
of Philosophy.
Omohundro, Steve (2008). The Nature of Self-Improving Artificial Intelligence.
Presented and distributed at the 2007 Singularity Summit, San Francisco, CA.
Ford, Martin; Colvin, Geoff (6 September 2015). "Will robots create more jobs than
they destroy?". The Guardian. Archived from the original on 16 June 2018. Retrieved
13 January 2018.
White Paper: On Artificial Intelligence – A European approach to excellence and
trust (PDF). Brussels: European Commission. 2020. Archived (PDF) from the original
on 20 February 2020. Retrieved 20 February 2020.
Anderson, Michael; Anderson, Susan Leigh (2011). Machine Ethics. Cambridge
University Press.
"Machine Ethics". aaai.org. Archived from the original on 29 November 2014.
Russell, Stuart (8 October 2019). Human Compatible: Artificial Intelligence and the
Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
"AI set to exceed human brain power". CNN. 9 August 2006. Archived from the
original on 19 February 2008.
"Robots could demand legal rights". BBC News. 21 December 2006. Archived from the
original on 15 October 2019. Retrieved 3 February 2011.
"Kismet". MIT Artificial Intelligence Laboratory, Humanoid Robotics Group. Archived
from the original on 17 October 2014. Retrieved 25 October 2014.
Smoliar, Stephen W.; Zhang, HongJiang (1994). "Content based video indexing and
retrieval". IEEE MultiMedia. 1 (2): 62–72. doi:10.1109/93.311653. S2CID 32710913.
Neumann, Bernd; Möller, Ralf (January 2008). "On scene interpretation with
description logics". Image and Vision Computing. 26 (1): 82–101.
doi:10.1016/j.imavis.2007.08.013.
Kuperman, G. J.; Reichley, R. M.; Bailey, T. C. (1 July 2006). "Using Commercial
Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and
Recommendations". Journal of the American Medical Informatics Association. 13 (4):
369–371. doi:10.1197/jamia.M2055. PMC 1513681. PMID 16622160.
McGarry, Ken (1 December 2005). "A survey of interestingness measures for knowledge
discovery". The Knowledge Engineering Review. 20 (1): 39–61.
doi:10.1017/S0269888905000408. S2CID 14987656.
Bertini, M; Del Bimbo, A; Torniai, C (2006). "Automatic annotation and semantic
retrieval of video sequences using multimedia ontologies". MM '06 Proceedings of
the 14th ACM international conference on Multimedia. 14th ACM international
conference on Multimedia. Santa Barbara: ACM. pp. 679–682.
Kahneman, Daniel (25 October 2011). Thinking, Fast and Slow. Macmillan. ISBN 978-1-
4299-6935-2. Retrieved 8 April 2012.
Turing, Alan (1948), "Machine Intelligence", in Copeland, B. Jack (ed.), The
Essential Turing: The ideas that gave birth to the computer age, Oxford: Oxford
University Press, p. 412, ISBN 978-0-19-825080-7
Domingos, Pedro (22 September 2015). The Master Algorithm: How the Quest for the
Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0465065707.
Minsky, Marvin (1986), The Society of Mind, Simon and Schuster
Pinker, Steven (4 September 2007) [1994], The Language Instinct, Perennial Modern
Classics, Harper, ISBN 978-0-06-133646-1
Chalmers, David (1995). "Facing up to the problem of consciousness". Journal of
Consciousness Studies. 2 (3): 200–219. Archived from the original on 8 March 2005.
Retrieved 11 October 2018.
Roberts, Jacob (2016). "Thinking Machines: The Search for Artificial Intelligence".
Distillations. Vol. 2, no. 2. pp. 14–23. Archived from the original on 19 August
2018. Retrieved 20 March 2018.
Pennachin, C.; Goertzel, B. (2007). "Contemporary Approaches to Artificial General
Intelligence". Artificial General Intelligence. Cognitive Technologies. Berlin,
Heidelberg: Springer. doi:10.1007/978-3-540-68677-4_1. ISBN 978-3-540-23733-4.
"Ask the AI experts: What's driving today's progress in AI?". McKinsey & Company.
Archived from the original on 13 April 2018. Retrieved 13 April 2018.
Ransbotham, Sam; Kiron, David; Gerbert, Philipp; Reeves, Martin (6 September 2017).
"Reshaping Business With Artificial Intelligence". MIT Sloan Management Review.
Archived from the original on 19 May 2018. Retrieved 2 May 2018.
Lorica, Ben (18 December 2017). "The state of AI adoption". O'Reilly Media.
Archived from the original on 2 May 2018. Retrieved 2 May 2018.
"AlphaGo – Google DeepMind". Archived from the original on 20 October 2021.
Asada, M.; Hosoda, K.; Kuniyoshi, Y.; Ishiguro, H.; Inui, T.; Yoshikawa, Y.; Ogino,
M.; Yoshida, C. (2009). "Cognitive developmental robotics: a survey". IEEE
Transactions on Autonomous Mental Development. 1 (1): 12–34.
doi:10.1109/tamd.2009.2021702. S2CID 10168773.
Ashok83 (10 September 2019). "How AI Is Getting Groundbreaking Changes In Talent
Management And HR Tech". Hackernoon. Archived from the original on 11 September
2019. Retrieved 14 February 2020.
Berlinski, David (2000). The Advent of the Algorithm. Harcourt Books. ISBN 978-0-
15-601391-8. OCLC 46890682. Archived from the original on 26 July 2020. Retrieved
22 August 2020.
Brooks, Rodney (1990). "Elephants Don't Play Chess" (PDF). Robotics and Autonomous
Systems. 6 (1–2): 3–15. CiteSeerX 10.1.1.588.7539. doi:10.1016/S0921-8890(05)80025-
9. Archived (PDF) from the original on 9 August 2007.
Butler, Samuel (13 June 1863). "Darwin among the Machines". Letters to the Editor.
The Press. Christchurch, New Zealand. Archived from the original on 19 September
2008. Retrieved 16 October 2014 – via Victoria University of Wellington.
Clark, Jack (2015a). "Musk-Backed Group Probes Risks Behind Artificial
Intelligence". Bloomberg.com. Archived from the original on 30 October 2015.
Retrieved 30 October 2015.
Clark, Jack (2015b). "Why 2015 Was a Breakthrough Year in Artificial Intelligence".
Bloomberg.com. Archived from the original on 23 November 2016. Retrieved 23
November 2016.
Dennett, Daniel (1991). Consciousness Explained. The Penguin Press. ISBN 978-0-
7139-9037-9.
Dreyfus, Hubert (1972). What Computers Can't Do. New York: MIT Press. ISBN 978-0-
06-011082-6.
Dreyfus, Hubert; Dreyfus, Stuart (1986). Mind over Machine: The Power of Human
Intuition and Expertise in the Era of the Computer. Oxford, UK: Blackwell. ISBN
978-0-02-908060-3. Archived from the original on 26 July 2020. Retrieved 22 August
2020.
Dyson, George (1998). Darwin among the Machines. Allan Lane Science. ISBN 978-0-
7382-0030-9. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
Edelson, Edward (1991). The Nervous System. New York: Chelsea House. ISBN 978-0-
7910-0464-7. Archived from the original on 26 July 2020. Retrieved 18 November
2019.
Fearn, Nicholas (2007). The Latest Answers to the Oldest Questions: A Philosophical
Adventure with the World's Greatest Thinkers. New York: Grove Press. ISBN 978-0-
8021-1839-4.
Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, Mass.:
MIT Press. ISBN 978-0-262-08153-5.
Hawkins, Jeff; Blakeslee, Sandra (2005). On Intelligence. New York: Owl Books. ISBN
978-0-8050-7853-4.
Henderson, Mark (24 April 2007). "Human rights for robots? We're getting carried
away". The Times Online. London. Archived from the original on 31 May 2014.
Retrieved 31 May 2014.
Kahneman, Daniel; Slovic, D.; Tversky, Amos (1982). Judgment under uncertainty:
Heuristics and biases. Science. Vol. 185. New York: Cambridge University Press. pp.
1124–1131. Bibcode:1974Sci...185.1124T. doi:10.1126/science.185.4157.1124. ISBN
978-0-521-28414-1. PMID 17835457. S2CID 143452957.
Katz, Yarden (1 November 2012). "Noam Chomsky on Where Artificial Intelligence Went
Wrong". The Atlantic. Archived from the original on 28 February 2019. Retrieved 26
October 2014.
Kurzweil, Ray (2005). The Singularity is Near. Penguin Books. ISBN 978-0-670-03384-
3.
Langley, Pat (2011). "The changing science of machine learning". Machine Learning.
82 (3): 275–279. doi:10.1007/s10994-011-5242-y.
Legg, Shane; Hutter, Marcus (15 June 2007). "A Collection of Definitions of
Intelligence". arXiv:0706.3639 [cs.AI].
Lenat, Douglas; Guha, R. V. (1989). Building Large Knowledge-Based Systems.
Addison-Wesley. ISBN 978-0-201-51752-1.
Lighthill, James (1973). "Artificial Intelligence: A General Survey". Artificial
Intelligence: a paper symposium. Science Research Council.
Lombardo, P; Boehm, I; Nairz, K (2020). "RadioComics – Santa Claus and the future
of radiology". Eur J Radiol. 122 (1): 108771. doi:10.1016/j.ejrad.2019.108771. PMID
31835078.
Lungarella, M.; Metta, G.; Pfeifer, R.; Sandini, G. (2003). "Developmental
robotics: a survey". Connection Science. 15 (4): 151–190. CiteSeerX 10.1.1.83.7615.
doi:10.1080/09540090310001655110. S2CID 1452734.
Maker, Meg Houston (2006). "AI@50: AI Past, Present, Future". Dartmouth College.
Archived from the original on 3 January 2007. Retrieved 16 October 2008.
McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955). "A
Proposal for the Dartmouth Summer Research Project on Artificial Intelligence".
Archived from the original on 26 August 2007. Retrieved 30 August 2007.
Minsky, Marvin (1967). Computation: Finite and Infinite Machines. Englewood Cliffs,
N.J.: Prentice-Hall. ISBN 978-0-13-165449-5. Archived from the original on 26 July
2020. Retrieved 18 November 2019.
Moravec, Hans (1988). Mind Children. Harvard University Press. ISBN 978-0-674-
57616-2. Archived from the original on 26 July 2020. Retrieved 18 November 2019.
NRC (United States National Research Council) (1999). "Developments in Artificial
Intelligence". Funding a Revolution: Government Support for Computing Research.
National Academy Press.
Newell, Allen; Simon, H. A. (1976). "Computer Science as Empirical Inquiry: Symbols
and Search". Communications of the ACM. 19 (3): 113–126.
doi:10.1145/360018.360022.
Nilsson, Nils (1983). "Artificial Intelligence Prepares for 2001" (PDF). AI
Magazine. 1 (1). Archived (PDF) from the original on 17 August 2020. Retrieved 22
August 2020. Presidential Address to the Association for the Advancement of
Artificial Intelligence.
Oudeyer, P-Y. (2010). "On the impact of robotics in behavioral and cognitive
sciences: from insect navigation to human cognitive development" (PDF). IEEE
Transactions on Autonomous Mental Development. 2 (1): 2–16.
doi:10.1109/tamd.2009.2039057. S2CID 6362217. Archived (PDF) from the original on 3
October 2018. Retrieved 4 June 2013.
Schank, Roger C. (1991). "Where's the AI?". AI Magazine. Vol. 12, no. 4.
Searle, John (1980). "Minds, Brains and Programs" (PDF). Behavioral and Brain
Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756. S2CID 55303721. Archived
(PDF) from the original on 17 March 2019. Retrieved 22 August 2020.
Searle, John (1999). Mind, language and society. New York: Basic Books. ISBN 978-0-
465-04521-1. OCLC 231867665. Archived from the original on 26 July 2020. Retrieved
22 August 2020.
Simon, H. A. (1965). The Shape of Automation for Men and Management. New York:
Harper & Row. Archived from the original on 26 July 2020. Retrieved 18 November
2019.
Solomonoff, Ray (1956). An Inductive Inference Machine (PDF). Dartmouth Summer
Research Conference on Artificial Intelligence. Archived (PDF) from the original on
26 April 2011. Retrieved 22 March 2011 – via std.com, pdf scanned copy of the
original. Later published as
Solomonoff, Ray (1957). "An Inductive Inference Machine". IRE Convention Record.
Vol. Section on Information Theory, part 2. pp. 56–62.
Spadafora, Anthony (21 October 2016). "Stephen Hawking believes AI could be
mankind's last accomplishment". BetaNews. Archived from the original on 28 August
2017.
Tao, Jianhua; Tan, Tieniu (2005). Affective Computing and Intelligent Interaction.
Affective Computing: A Review. Lecture Notes in Computer Science. Vol. LNCS 3784.
Springer. pp. 981–995. doi:10.1007/11573548. ISBN 978-3-540-29621-8.
Tecuci, Gheorghe (March–April 2012). "Artificial Intelligence". Wiley
Interdisciplinary Reviews: Computational Statistics. 4 (2): 168–180.
doi:10.1002/wics.200. S2CID 196141190.
Thro, Ellen (1993). Robotics: The Marriage of Computers and Machines. New York:
Facts on File. ISBN 978-0-8160-2628-9. Archived from the original on 26 July 2020.
Retrieved 22 August 2020.
Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX
(236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423.
Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the
Post-Human Era". Vision 21: Interdisciplinary Science and Engineering in the Era of
Cyberspace: 11. Bibcode:1993vise.nasa...11V. Archived from the original on 1
January 2007. Retrieved 14 November 2011.
Wason, P. C.; Shapiro, D. (1966). "Reasoning". In Foss, B. M. (ed.). New horizons
in psychology. Harmondsworth: Penguin. Archived from the original on 26 July 2020.
Retrieved 18 November 2019.
Weng, J.; McClelland, J.; Pentland, A.; Sporns, O.; Stockman, I.; Sur, M.; Thelen, E.
(2001). "Autonomous mental development by robots and animals" (PDF). Science. 291
(5504): 599–600. doi:10.1126/science.291.5504.599. PMID 11229402. S2CID 54131797.
Archived (PDF) from the original on 4 September 2013. Retrieved 4 June 2013 – via
msu.edu.
Further reading
DH Author, "Why Are There Still So Many Jobs? The History and Future of Workplace
Automation" (2015) 29(3) Journal of Economic Perspectives 3.
Boden, Margaret, Mind As Machine, Oxford University Press, 2006.
Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign
Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of
computing, writes (in what might be called "Dyson's Law") that "Any system simple
enough to be understandable will not be complicated enough to behave intelligently,
while any system complicated enough to behave intelligently will be too complicated
to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI
machine-learning algorithms are, at their core, dead simple stupid. They work, but
they work by brute force." (p. 198.)
Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it",
Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93.
Gopnik, Alison, "Making AI More Human: Artificial intelligence has staged a revival
by starting to incorporate what we know about how children learn", Scientific
American, vol. 316, no. 6 (June 2017), pp. 60–65.
Halpern, Sue, "The Human Costs of AI" (review of Kate Crawford, Atlas of AI: Power,
Politics, and the Planetary Costs of Artificial Intelligence, Yale University
Press, 2021, 327 pp.; Simon Chesterman, We, the Robots?: Regulating Artificial
Intelligence and the Limits of the Law, Cambridge University Press, 2021, 289 pp.;
Kevin Roose, Futureproof: 9 Rules for Humans in the Age of Automation, Random
House, 217 pp.; Erik J. Larson, The Myth of Artificial Intelligence: Why Computers
Can't Think the Way We Do, Belknap Press / Harvard University Press, 312 pp.), The
New York Review of Books, vol. LXVIII, no. 16 (21 October 2021), pp. 29–31. "AI
training models can replicate entrenched social and cultural biases. [...] Machines
only know what they know from the data they have been given. [p. 30.] [A]rtificial
general intelligence–machine-based intelligence that matches our own–is beyond the
capacity of algorithmic machine learning... 'Your brain is one piece in a broader
system which includes your body, your environment, other humans, and culture as a
whole.' [E]ven machines that master the tasks they are trained to perform can't
jump domains. AIVA, for example, can't drive a car even though it can write music
(and wouldn't even be able to do that without Bach and Beethoven [and other
composers on which AIVA is trained])." (p. 31.)
Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life,
and the New AI, MIT Press.
Koch, Christof, "Proust among the Machines", Scientific American, vol. 321, no. 6
(December 2019), pp. 46–49. Christof Koch doubts the possibility of "intelligent"
machines attaining consciousness, because "[e]ven the most sophisticated brain
simulations are unlikely to produce conscious feelings." (p. 48.) According to
Koch, "Whether machines can become sentient [is important] for ethical reasons. If
computers experience life through their own senses, they cease to be purely a means
to an end determined by their usefulness to... humans. Per GNW [the Global Neuronal
Workspace theory], they turn from mere objects into subjects... with a point of
view.... Once computers' cognitive abilities rival those of humanity, their impulse
to push for legal and political rights will become irresistible—the right not to be
deleted, not to have their memories wiped clean, not to suffer pain and
degradation. The alternative, embodied by IIT [Integrated Information Theory], is
that computers will remain only supersophisticated machinery, ghostlike empty
shells, devoid of what we value most: the feeling of life itself." (p. 49.)
Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial
intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March
2017), pp. 58–63. A stumbling block to AI has been an incapacity for reliable
disambiguation. An example is the "pronoun disambiguation problem": a machine has
no way of determining to whom or what a pronoun in a sentence refers. (p. 61.)
E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income,
and Economic Democracy' (2018) SSRN, part 2(3) Archived 24 May 2018 at the Wayback
Machine.
George Musser, "Artificial Imagination: How machines could learn creativity and
common sense, among other human qualities", Scientific American, vol. 320, no. 5
(May 2019), pp. 58–63.
Myers, Courtney Boyd ed. (2009). "The AI Report" Archived 29 July 2017 at the
Wayback Machine. Forbes June 2009
Raphael, Bertram (1976). The Thinking Computer. W.H. Freeman and Co. ISBN 978-
0716707233. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs,
vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful
but unreliable. Rules-based systems cannot deal with circumstances their
programmers did not anticipate. Learning systems are limited by the data on which
they were trained. AI failures have already led to tragedy. Advanced autopilot
features in cars, although they perform well in some circumstances, have driven
cars without warning into trucks, concrete barriers, and parked cars. In the wrong
situation, AI systems go from supersmart to superdumb in an instant. When an enemy
is trying to manipulate and hack an AI system, the risks are even greater." (p.
140.)
Serenko, Alexander (2010). "The development of an AI journal ranking based on the
revealed preference approach" (PDF). Journal of Informetrics. 4 (4): 447–59.
doi:10.1016/j.joi.2010.04.001. Archived (PDF) from the original on 4 October 2013.
Retrieved 24 August 2013.
Serenko, Alexander; Michael Dohan (2011). "Comparing the expert survey and citation
impact journal ranking methods: Example from the field of Artificial Intelligence"
(PDF). Journal of Informetrics. 5 (4): 629–49. doi:10.1016/j.joi.2011.06.002.
Archived (PDF) from the original on 4 October 2013. Retrieved 12 September 2013.
Tom Simonite (29 December 2014). "2014 in Computing: Breakthroughs in Artificial
Intelligence". MIT Technology Review.
Sun, R. & Bookman, L. (eds.), Computational Architectures: Integrating Neural and
Symbolic Processes. Kluwer Academic Publishers, Needham, MA. 1994.
Taylor, Paul, "Insanely Complicated, Hopelessly Inadequate" (review of Brian
Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment,
MIT, 2019, ISBN 978-0262043045, 157 pp.; Gary Marcus and Ernest Davis, Rebooting
AI: Building Artificial Intelligence We Can Trust, Ballantine, 2019, ISBN 978-
1524748258, 304 pp.; Judea Pearl and Dana Mackenzie, The Book of Why: The New
Science of Cause and Effect, Penguin, 2019, ISBN 978-0141982410, 418 pp.), London
Review of Books, vol. 43, no. 2 (21 January 2021), pp. 37–39. Paul Taylor writes
(p. 39): "Perhaps there is a limit to what a computer can do without knowing that
it is manipulating imperfect representations of an external reality."
Tooze, Adam, "Democracy and Its Discontents", The New York Review of Books, vol.
LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. "Democracy has no clear answer for
the mindless operation of bureaucratic and technological power. We may indeed be
witnessing its extension in the form of artificial intelligence and robotics.
Likewise, after decades of dire warning, the environmental problem remains
fundamentally unaddressed.... Bureaucratic overreach and environmental catastrophe
are precisely the kinds of slow-moving existential challenges that democracies deal
with very badly.... Finally, there is the threat du jour: corporations and the
technologies they promote." (pp. 56–57.)
External links
"Artificial Intelligence". Internet Encyclopedia of Philosophy.
Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. (ed.).
Stanford Encyclopedia of Philosophy.
Artificial Intelligence. BBC Radio 4 discussion with John Agar, Alison Adam & Igor
Aleksander (In Our Time, 8 December 2005).