Artificial intelligence
Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed
to the natural intelligence displayed by animals including humans. AI research has
been defined as the field of study of intelligent agents, which refers to any
system that perceives its environment and takes actions that maximize its chance of
achieving its goals.[a]
The term "artificial intelligence" had previously been used to describe machines
that mimic and display "human" cognitive skills that are associated with the human
mind, such as "learning" and "problem-solving". This definition has since been
rejected by major AI researchers who now describe AI in terms of rationality and
acting rationally, which does not limit how intelligence can be articulated.[b]
The various sub-fields of AI research are centered around particular goals and the
use of particular tools. The traditional goals of AI research include reasoning,
knowledge representation, planning, learning, natural language processing,
perception, and the ability to move and manipulate objects.[c] General intelligence
(the ability to solve an arbitrary problem) is among the field's long-term goals.
[12] To solve these problems, AI researchers have adapted and integrated a wide
range of problem-solving techniques—including search and mathematical optimization,
formal logic, artificial neural networks, and methods based on statistics,
probability and economics. AI also draws upon computer science, psychology,
linguistics, philosophy, and many other fields.
The field was founded on the assumption that human intelligence "can be so
precisely described that a machine can be made to simulate it".[d] This raised
philosophical arguments about the mind and the ethical consequences of creating
artificial beings endowed with human-like intelligence; these issues have
previously been explored by myth, fiction and philosophy since antiquity.[14]
Computer scientists and philosophers have since suggested that AI may become an
existential risk to humanity if its rational capacities are not steered towards
beneficial goals.[e]
Contents
1 History
1.1 Fictions and early concepts
1.2 Early research
1.3 From expert systems to machine learning
2 Goals
2.1 Reasoning, problem-solving
2.2 Knowledge representation
2.3 Planning
2.4 Learning
2.5 Natural language processing
2.6 Perception
2.7 Motion and manipulation
2.8 Social intelligence
2.9 General intelligence
3 Tools
3.1 Search and optimization
3.2 Logic
3.3 Probabilistic methods for uncertain reasoning
3.4 Classifiers and statistical learning methods
3.5 Artificial neural networks
3.5.1 Deep learning
3.6 Specialized languages and hardware
4 Applications
4.1 Legal aspects
5 Philosophy
5.1 Defining artificial intelligence
5.1.1 Thinking vs. acting: the Turing test
5.1.2 Acting humanly vs. acting intelligently: intelligent agents
5.2 Evaluating approaches to AI
5.2.1 Symbolic AI and its limits
5.2.2 Neat vs. scruffy
5.2.3 Soft vs. hard computing
5.2.4 Narrow vs. general AI
5.3 Machine consciousness, sentience and mind
5.3.1 Consciousness
5.3.2 Computationalism and functionalism
5.3.3 Robot rights
6 Future
6.1 Superintelligence
6.2 Risks
6.2.1 Technological unemployment
6.2.2 Bad actors and weaponized AI
6.2.3 Algorithmic bias
6.2.4 Existential risk
6.3 Ethical machines
6.4 Regulation
7 In fiction
8 Scientific diplomacy
8.1 Warfare
8.1.1 Russo-Ukrainian War
8.1.2 Warfare regulations
8.2 Cybersecurity
8.2.1 Czech Republic's approach
8.2.2 Germany's approach
8.2.3 European Union's approach
8.2.4 Russo-Ukrainian War
8.3 Election security
8.4 Future of work
8.4.1 Facial recognition
8.4.2 AI and school
8.4.3 AI and medicine
8.4.4 AI in business
8.4.5 Business and diplomacy
8.5 AI and foreign policy
9 See also
10 Explanatory notes
11 Citations
12 References
12.1 AI textbooks
12.2 History of AI
12.3 Other sources
13 Further reading
14 External links
History
Main articles: History of artificial intelligence and Timeline of artificial
intelligence
Fictions and early concepts
Silver didrachma from Crete depicting Talos, an ancient mythical automaton with
artificial intelligence
Artificial beings with intelligence appeared as storytelling devices in antiquity,
[15] and have been common in fiction, as in Mary Shelley's Frankenstein or Karel
Čapek's R.U.R.[16] These characters and their fates raised many of the same issues
now discussed in the ethics of artificial intelligence.[17]
Early research
By the 1950s, two visions for how to achieve machine intelligence emerged. One
vision, known as Symbolic AI or GOFAI, was to use computers to create a symbolic
representation of the world and systems that could reason about the world.
Proponents included Allen Newell, Herbert A. Simon, and Marvin Minsky. Closely
associated with this approach was the "heuristic search" approach, which likened
intelligence to a problem of exploring a space of possibilities for answers. The
second vision, known as the connectionist approach, sought to achieve intelligence
through learning. Proponents of this approach, most prominently Frank Rosenblatt, sought to connect perceptrons in ways inspired by the connections of neurons.[21] James
Manyika and others have compared the two approaches to the mind (Symbolic AI) and
the brain (connectionist). Manyika argues that symbolic approaches dominated the push for artificial intelligence in this period, due in part to their connection to the intellectual traditions of Descartes, Boole, Gottlob Frege, Bertrand Russell, and
others. Connectionist approaches based on cybernetics or artificial neural networks
were pushed to the background but have gained new prominence in recent decades.[22]
Researchers in the 1960s and the 1970s were convinced that symbolic approaches
would eventually succeed in creating a machine with artificial general intelligence
and considered this the goal of their field.[30] Herbert Simon predicted, "machines
will be capable, within twenty years, of doing any work a man can do".[31] Marvin
Minsky agreed, writing, "within a generation ... the problem of creating
'artificial intelligence' will substantially be solved".[32]
They failed to recognize the difficulty of some of the remaining tasks. Progress
slowed and in 1974, in response to the criticism of Sir James Lighthill[33] and
ongoing pressure from the US Congress to fund more productive projects, both the
U.S. and British governments cut off exploratory research in AI. The next few years
would later be called an "AI winter", a period when obtaining funding for AI
projects was difficult.[8]
Many researchers began to doubt that the symbolic approach would be able to imitate
all the processes of human cognition, especially perception, robotics, learning and
pattern recognition. A number of researchers began to look into "sub-symbolic"
approaches to specific AI problems.[35] Robotics researchers, such as Rodney
Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move, survive, and learn about their environment.[j] Interest in
neural networks and "connectionism" was revived by Geoffrey Hinton, David Rumelhart
and others in the middle of the 1980s.[40] Soft computing tools were developed in
the 80s, such as neural networks, fuzzy systems, Grey system theory, evolutionary
computation and many tools drawn from statistics or mathematical optimization.
AI gradually restored its reputation in the late 1990s and early 21st century by
finding specific solutions to specific problems. The narrow focus allowed
researchers to produce verifiable results, exploit more mathematical methods, and
collaborate with other fields (such as statistics, economics and mathematics).[41]
By 2000, solutions developed by AI researchers were being widely used, although in
the 1990s they were rarely described as "artificial intelligence".[11]
Numerous academic researchers became concerned that AI was no longer pursuing the
original goal of creating versatile, fully intelligent machines. Much current research involves statistical AI, which is overwhelmingly used to solve specific problems, even with highly successful techniques such as deep learning. This concern has
led to the subfield of artificial general intelligence (or "AGI"), which had
several well-funded institutions by the 2010s.[12]
Goals
The general problem of simulating (or creating) intelligence has been broken down
into sub-problems. These consist of particular traits or capabilities that
researchers expect an intelligent system to display. The traits described below
have received the most attention.[c]
Reasoning, problem-solving
Early researchers developed algorithms that imitated step-by-step reasoning that
humans use when they solve puzzles or make logical deductions.[45] By the late
1980s and 1990s, AI research had developed methods for dealing with uncertain or
incomplete information, employing concepts from probability and economics.[46]
Knowledge representation
Main articles: Knowledge representation, Commonsense knowledge, Description logic,
and Ontology
Planning
Main article: Automated planning and scheduling
An intelligent agent that can plan makes a representation of the state of the world, makes predictions about how its actions will change it, and makes choices that maximize the utility (or "value") of the available choices.[62] In classical
planning problems, the agent can assume that it is the only system acting in the
world, allowing the agent to be certain of the consequences of its actions.[63]
However, if the agent is not the only actor, then it must reason under uncertainty, continuously reassessing its environment and adapting.[64] Multi-
agent planning uses the cooperation and competition of many agents to achieve a
given goal. Emergent behavior such as this is used by evolutionary algorithms and
swarm intelligence.[65]
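To make this concrete, here is a minimal sketch (not drawn from the cited sources) of an agent choosing the utility-maximizing action under an assumed deterministic world model; the states, actions, and utility function are illustrative assumptions:

```python
# Hypothetical one-step planner: pick the action whose predicted successor
# state has the highest utility. Everything here is an illustrative toy.

def plan_one_step(state, actions, transition, utility):
    """Return the action whose predicted next state maximizes utility."""
    return max(actions, key=lambda action: utility(transition(state, action)))

# Toy world: a robot on a line deciding how to reach a charger at position 5.
transition = lambda state, action: state + {"forward": 1, "back": -1}[action]
utility = lambda state: -abs(state - 5)  # closer to the charger is better

print(plan_one_step(0, ["forward", "back"], transition, utility))  # 'forward'
```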
Learning
Main article: Machine learning
Machine learning (ML), a fundamental concept of AI research since the field's
inception,[l] is the study of computer algorithms that improve automatically
through experience.[m]
Natural language processing
Main article: Natural language processing
Symbolic AI used formal syntax to translate the deep structure of sentences into
logic. This failed to produce useful applications, due to the intractability of
logic[47] and the breadth of commonsense knowledge.[56] Modern statistical
techniques include co-occurrence frequencies (how often one word appears near
another), "Keyword spotting" (searching for a particular word to retrieve
information), transformer-based deep learning (which finds patterns in text), and
others.[75] They have achieved acceptable accuracy at the page or paragraph level,
and, by 2019, could generate coherent text.[76]
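As an illustration of the co-occurrence-frequency technique mentioned above, the following sketch counts how often word pairs appear near each other; the toy corpus and window size are assumptions for demonstration:

```python
# Minimal co-occurrence counter: tally word pairs that appear within a small
# window of each other. The corpus and window size are illustrative.
from collections import Counter

def cooccurrences(sentences, window=2):
    counts = Counter()
    for sentence in sentences:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            for other in words[i + 1 : i + 1 + window]:
                counts[tuple(sorted((word, other)))] += 1
    return counts

corpus = ["the cat sat on the mat", "the cat ate the fish"]
print(cooccurrences(corpus).most_common(3))  # most frequent nearby pairs
```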
Perception
Main articles: Machine perception, Computer vision, and Speech recognition
Motion and manipulation
Main article: Robotics
Motion planning is the process of breaking down a movement task into "primitives"
such as individual joint movements. Such movement often involves compliant motion,
a process where movement requires maintaining physical contact with an object.
Robots can learn from experience how to move efficiently despite the presence of
friction and gear slippage.[83]
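The decomposition into primitives can be illustrated with a sketch (an assumption for demonstration, not a production motion planner) that interpolates each joint angle between a start and a goal configuration:

```python
# Hypothetical joint-level "primitive": linearly interpolate each joint angle
# from a start to a goal configuration. The two-joint arm is illustrative.

def joint_trajectory(start, goal, steps=5):
    """Yield intermediate joint configurations between start and goal."""
    for t in range(steps + 1):
        alpha = t / steps
        yield tuple(s + alpha * (g - s) for s, g in zip(start, goal))

for config in joint_trajectory((0.0, 90.0), (45.0, 0.0)):
    print(config)  # (shoulder angle, elbow angle) in degrees
```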
Social intelligence
Main article: Affective computing
General intelligence
Main article: Artificial general intelligence
A machine with general intelligence can solve a wide variety of problems with
breadth and versatility similar to human intelligence. There are several competing
ideas about how to develop artificial general intelligence. Hans Moravec and Marvin
Minsky argue that work in different individual domains can be incorporated into an
advanced multi-agent system or cognitive architecture with general intelligence.
[88] Pedro Domingos hopes that there is a conceptually straightforward, but
mathematically difficult, "master algorithm" that could lead to AGI.[89] Others
believe that anthropomorphic features like an artificial brain[90] or simulated
child development[n] will someday reach a critical point where general intelligence
emerges.
Tools
Search and optimization
Main articles: Search algorithm, Mathematical optimization, and Evolutionary
computation
Many problems in AI can be solved theoretically by intelligently searching through
many possible solutions:[91] Reasoning can be reduced to performing a search. For
example, logical proof can be viewed as searching for a path that leads from
premises to conclusions, where each step is the application of an inference rule.
[92] Planning algorithms search through trees of goals and subgoals, attempting to
find a path to a target goal, a process called means-ends analysis.[93] Robotics
algorithms for moving limbs and grasping objects use local searches in
configuration space.[94]
Simple exhaustive searches[95] are rarely sufficient for most real-world problems:
the search space (the number of places to search) quickly grows to astronomical
numbers. The result is a search that is too slow or never completes. The solution,
for many problems, is to use "heuristics" or "rules of thumb" that prioritize
choices in favor of those more likely to reach a goal in a smaller number of steps. In some search methodologies, heuristics can also serve to
eliminate some choices unlikely to lead to a goal (called "pruning the search
tree"). Heuristics supply the program with a "best guess" for the path on which the
solution lies.[96] Heuristics limit the search for solutions into a smaller sample
size.[97]
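The following sketch shows heuristic search in the A* style described above: the heuristic supplies a "best guess" of the remaining distance, and already-visited states are pruned. The graph and heuristic values are illustrative assumptions:

```python
# Minimal A* search: expand the node with the smallest path cost plus
# heuristic estimate. Graph and heuristic are illustrative toys.
import heapq

def a_star(start, goal, neighbors, heuristic):
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)                      # pruning: skip revisited states
        for nxt, step in neighbors(node):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step + heuristic(nxt),
                                          cost + step, nxt, path + [nxt]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
h = {"A": 2, "B": 1, "C": 0}  # admissible estimates of distance to C
print(a_star("A", "C", lambda n: graph[n], h.get))  # ['A', 'B', 'C']
```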
Logic
Main articles: Logic programming and Automated reasoning
Logic[101] is used for knowledge representation and problem-solving, but it can be
applied to other problems as well. For example, the satplan algorithm uses logic
for planning[102] and inductive logic programming is a method for learning.[103]
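As a small illustration of logic used for problem-solving (a general sketch, not the satplan algorithm itself), forward chaining derives new facts from Horn-clause rules until no more can be derived; the knowledge base below is an illustrative assumption:

```python
# Minimal forward chaining over Horn clauses ("if premises then conclusion").
# The facts and rules are illustrative toys.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["rainy"], "wet_ground"), (["wet_ground"], "slippery")]
print(forward_chain(["rainy"], rules))  # {'rainy', 'wet_ground', 'slippery'}
```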
Probabilistic methods for uncertain reasoning
A key concept from the science of economics is "utility", a measure of how valuable
something is to an intelligent agent. Precise mathematical tools have been
developed that analyze how an agent can make choices and plan, using decision
theory, decision analysis,[116] and information value theory.[117] These tools
include models such as Markov decision processes,[118] dynamic decision networks,
[115] game theory and mechanism design.[119]
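One of these tools, the Markov decision process, can be sketched with value iteration; the two-state model, rewards, and discount factor below are illustrative assumptions, not drawn from the cited sources:

```python
# Minimal value iteration for a toy two-state Markov decision process.
# Transition probabilities, rewards, and discount are illustrative.

states = ["low", "high"]
actions = ["wait", "work"]
# P[s][a] = list of (probability, next_state, reward)
P = {
    "low":  {"wait": [(1.0, "low", 0)],  "work": [(0.7, "high", 5), (0.3, "low", 0)]},
    "high": {"wait": [(1.0, "high", 2)], "work": [(1.0, "high", 4)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in states}
for _ in range(100):  # iterate until (approximately) converged
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in actions)
         for s in states}
print(V)  # estimated utility of acting optimally from each state
```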
Classifiers and statistical learning methods
A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree is the simplest and most widely used symbolic machine learning algorithm.[121] The k-nearest neighbor algorithm was the most
widely used analogical AI until the mid-1990s.[122] Kernel methods such as the
support vector machine (SVM) displaced k-nearest neighbor in the 1990s.[123] The
naive Bayes classifier is reportedly the "most widely used learner"[124] at Google,
due in part to its scalability.[125] Neural networks are also used for
classification.[126]
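As a concrete illustration, here is a minimal k-nearest-neighbor classifier of the kind mentioned above; the toy two-dimensional dataset and the choice of k are assumptions for demonstration:

```python
# Minimal k-nearest-neighbor classifier: label a point by majority vote
# among its k closest training examples. The dataset is an illustrative toy.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs; query: an (x, y) point."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

train = [((1, 1), "red"), ((1, 2), "red"), ((5, 5), "blue"), ((6, 5), "blue")]
print(knn_predict(train, (2, 1)))  # 'red'
```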
Artificial neural networks
Main article: Artificial neural network
Modern neural networks model complex relationships between inputs and outputs and
find patterns in data. They can learn continuous functions and even digital logical
operations. Neural networks can be viewed as a type of mathematical optimization—
they perform gradient descent on a multi-dimensional topology that was created by
training the network. The most common training technique is the backpropagation
algorithm.[128] Other learning techniques for neural networks are Hebbian learning
("fire together, wire together"), GMDH or competitive learning.[129]
The main categories of networks are acyclic or feedforward neural networks (where
the signal passes in only one direction) and recurrent neural networks (which allow
feedback and short-term memories of previous input events). Among the most popular
feedforward networks are perceptrons, multi-layer perceptrons and radial basis
networks.[130]
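A minimal sketch of a feedforward network trained by backpropagation follows; the XOR task, layer sizes, learning rate, and iteration count are illustrative assumptions:

```python
# Tiny feedforward network trained by backpropagation (gradient descent on
# the weights). Task, sizes, and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)                  # forward pass
    output = sigmoid(hidden @ W2 + b2)
    d_out = (output - y) * output * (1 - output)   # error signal at output
    d_hid = d_out @ W2.T * hidden * (1 - hidden)   # backpropagated to hidden
    W2 -= 0.5 * hidden.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_hid;      b1 -= 0.5 * d_hid.sum(0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```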
Deep learning
Representing images on multiple layers of abstraction in deep learning[131]
Deep learning[132] uses several layers of neurons between the network's inputs and
outputs. The multiple layers can progressively extract higher-level features from
the raw input. For example, in image processing, lower layers may identify edges,
while higher layers may identify the concepts relevant to a human such as digits or
letters or faces.[133] Deep learning has drastically improved the performance of
programs in many important subfields of artificial intelligence, including computer
vision, speech recognition, image classification[134] and others.
Deep learning often uses convolutional neural networks for many or all of its
layers. In a convolutional layer, each neuron receives input from only a restricted
area of the previous layer called the neuron's receptive field. This can
substantially reduce the number of weighted connections between neurons,[135] and
creates a hierarchy similar to the organization of the animal visual cortex.[136]
In a recurrent neural network the signal will propagate through a layer more than
once;[137] thus, an RNN is an example of deep learning.[138] RNNs can be trained by gradient descent;[139] however, long-term gradients that are back-propagated can "vanish" (that is, tend to zero) or "explode" (that is, tend to infinity), a phenomenon known as the vanishing gradient problem.[140] The long short-term memory (LSTM) technique can prevent this in most cases.[141]
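To illustrate the restricted receptive field described above, the following sketch implements a single convolutional filter in plain NumPy; the kernel and image are illustrative assumptions:

```python
# Minimal 2-D convolution: each output value depends only on a small patch
# (the receptive field) of the input, not the whole image. Toy example.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # the receptive field: a local patch of the input
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_detector = np.array([[-1, 0, 1]] * 3)    # responds to vertical edges
image = np.zeros((5, 5)); image[:, 2:] = 1.0  # dark half, bright half
print(conv2d(image, edge_detector))           # strong response at the edge
```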
Applications
Main article: Applications of artificial intelligence
See also: Embodied cognition and Legal informatics
For this project the AI had to learn the typical patterns in the colors and
brushstrokes of Renaissance painter Raphael. The portrait shows the face of the
actress Ornella Muti, "painted" by AI in the style of Raphael.
AI is relevant to any intellectual task.[142] Modern artificial intelligence
techniques are pervasive and are too numerous to list here.[143] Frequently, when a
technique reaches mainstream use, it is no longer considered artificial
intelligence; this phenomenon is described as the AI effect.[144]
In the 2010s, AI applications were at the heart of the most commercially successful
areas of computing, and have become a ubiquitous feature of daily life. AI is used
in search engines (such as Google Search), recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic,[146][147] targeted advertising (AdSense, Facebook),[145] virtual assistants
(such as Siri or Alexa),[148] autonomous vehicles (including drones and self-
driving cars), automatic language translation (Microsoft Translator, Google
Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace), image
labeling (used by Facebook, Apple's iPhoto and TikTok) and spam filtering.
There are also thousands of successful AI applications used to solve problems for
specific industries or institutions. A few examples are energy storage,[149]
deepfakes,[150] medical diagnosis, military logistics, or supply chain management.
Game playing has been a test of AI's strength since the 1950s. Deep Blue became the
first computer chess-playing system to beat a reigning world chess champion, Garry
Kasparov, on 11 May 1997.[151] In 2011, in a Jeopardy! quiz show exhibition match,
IBM's question answering system, Watson, defeated the two greatest Jeopardy!
champions, Brad Rutter and Ken Jennings, by a significant margin.[152] In March
2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol,
becoming the first computer Go-playing system to beat a professional Go player
without handicaps.[153] Other programs handle imperfect-information games, such as Pluribus[q] and Cepheus,[155] which play poker at a superhuman level. DeepMind in the
2010s developed a "generalized artificial intelligence" that could learn many
diverse Atari games on its own.[156]
By 2020, Natural Language Processing systems such as the enormous GPT-3 (then by
far the largest artificial neural network) were matching human performance on pre-
existing benchmarks, albeit without the system attaining a commonsense
understanding of the contents of the benchmarks.[157] DeepMind's AlphaFold 2 (2020)
demonstrated the ability to approximate, in hours rather than months, the 3D
structure of a protein.[158] Other applications predict the result of judicial
decisions,[159] create art (such as poetry or painting) and prove mathematical
theorems.
Legal aspects
AI's decision-making abilities raise questions of legal responsibility and the copyright status of created works. These issues are being refined in various jurisdictions.[162]
Philosophy
Main article: Philosophy of artificial intelligence
Defining artificial intelligence
Thinking vs. acting: the Turing test
Main articles: Turing test, Dartmouth Workshop, and Synthetic intelligence
Alan Turing wrote in 1950, "I propose to consider the question 'Can machines think?'"[163] He advised changing the question from whether a machine "thinks", to
"whether or not it is possible for machinery to show intelligent behaviour".[164]
The only thing visible is the behavior of the machine, so it does not matter if the
machine is conscious, or has a mind, or whether the intelligence is merely a
"simulation" and not "the real thing". He noted that we also don't know these
things about other people, but that we extend a "polite convention" that they are
actually "thinking". This idea forms the basis of the Turing test.[165][r]
Acting humanly vs. acting intelligently: intelligent agents
The paradigm has other advantages for AI. It provides a reliable and scientific way
to test programs; researchers can directly compare or even combine different
approaches to isolated problems, by asking which agent is best at maximizing a
given "goal function". It also gives them a common language to communicate with
other fields – such as mathematical optimization (which is defined in terms of
"goals") or economics (which uses the same definition of a "rational agent").[171]
Evaluating approaches to AI
No established unifying theory or paradigm has guided AI research for most of its
history.[t] The unprecedented success of statistical machine learning in the 2010s
eclipsed all other approaches (so much so that some sources, especially in the
business world, use the term "artificial intelligence" to mean "machine learning
with neural networks"). This approach is mostly sub-symbolic, neat, soft and narrow
(see below). Critics argue that these questions may have to be revisited by future
generations of AI researchers.
Symbolic AI and its limits
The symbolic approach failed dismally on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's
paradox is the discovery that high-level "intelligent" tasks were easy for AI, but
low level "instinctive" tasks were extremely difficult.[175] Philosopher Hubert
Dreyfus had argued since the 1960s that human expertise depends on unconscious
instinct rather than conscious symbol manipulation, and on having a "feel" for the
situation, rather than explicit symbolic knowledge.[176] Although his arguments had
been ridiculed and ignored when they were first presented, eventually, AI research
came to agree.[u][48]
The issue is not resolved: sub-symbolic reasoning can make many of the same
inscrutable mistakes that human intuition does, such as algorithmic bias. Critics
such as Noam Chomsky argue that continuing research into symbolic AI will still be necessary to attain general intelligence,[178][179] in part because sub-symbolic AI
is a move away from explainable AI: it can be difficult or impossible to understand
why a modern statistical AI program made a particular decision.
Consciousness
Main articles: Hard problem of consciousness and Theory of mind
David Chalmers identified two problems in understanding the mind, which he named
the "hard" and "easy" problems of consciousness.[185] The easy problem is
understanding how the brain processes signals, makes plans and controls behavior.
The hard problem is explaining how this feels or why it should feel like anything
at all. Human information processing is easy to explain, however, human subjective
experience is difficult to explain. For example, it is easy to imagine a color-
blind person who has learned to identify which objects in their field of view are
red, but it is not clear what would be required for the person to know what red
looks like.[186]
Robot rights
Main article: Robot rights
If a machine has a mind and subjective experience, then it may also have sentience
(the ability to feel), and if so, then it could also suffer, and thus it would be
entitled to certain rights.[191] Any hypothetical robot rights would lie on a
spectrum with animal rights and human rights.[192] This issue has been considered
in fiction for centuries,[193] and is now being considered by, for example, California's Institute for the Future; however, critics argue that the discussion is premature.[194]
Future
Superintelligence
Main articles: Superintelligence, Technological singularity, and Transhumanism
A superintelligence, hyperintelligence, or superhuman intelligence, is a
hypothetical agent that would possess intelligence far surpassing that of the
brightest and most gifted human mind. Superintelligence may also refer to the form
or degree of intelligence possessed by such an agent.[183]
Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil
have predicted that humans and machines will merge in the future into cyborgs that
are more capable and powerful than either. This idea, called transhumanism, has
roots in Aldous Huxley and Robert Ettinger.[198]
Risks
Technological unemployment
Main articles: Workplace impact of artificial intelligence and Technological
unemployment
In the past technology has tended to increase rather than reduce total employment,
but economists acknowledge that "we're in uncharted territory" with AI.[200] A
survey of economists showed disagreement about whether the increasing use of robots
and AI will cause a substantial increase in long-term unemployment, but they
generally agree that it could be a net benefit if productivity gains are
redistributed.[201] Subjective estimates of the risk vary widely; for example,
Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk"
of potential automation, while an OECD report classifies only 9% of U.S. jobs as
"high risk".[w][203]
Unlike previous waves of automation, many middle-class jobs may be eliminated by
artificial intelligence; The Economist states that "the worry that AI could do to
white-collar jobs what steam power did to blue-collar ones during the Industrial
Revolution" is "worth taking seriously".[204] Jobs at extreme risk range from
paralegals to fast food cooks, while job demand is likely to increase for care-
related professions ranging from personal healthcare to the clergy.[205]
Bad actors and weaponized AI
Main article: Lethal autonomous weapon
Terrorists, criminals and rogue states may use other forms of weaponized AI such as
advanced digital warfare and lethal autonomous weapons. By 2015, over fifty
countries were reported to be researching battlefield robots.[207]
Algorithmic bias
Main article: Algorithmic bias
AI programs can become biased after learning from real-world data. The bias is typically not introduced by the system designers but learned by the program, and thus the programmers are often unaware that it exists.[209] Bias can be
inadvertently introduced by the way training data is selected.[210] It can also
emerge from correlations: AI is used to classify individuals into groups and then
make predictions assuming that the individual will resemble other members of the
group. In some cases, this assumption may be unfair.[211] An example of this is
COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of
a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned
recidivism risk level of black defendants is far more likely to be overestimated
than that of white defendants, despite the fact that the program was not told the
races of the defendants.[212] Other examples where algorithmic bias can lead to
unfair outcomes are when AI is used for credit rating or hiring.
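The kind of disparity ProPublica reported can be made concrete by comparing false positive rates across groups; the records below are invented illustrative data, not COMPAS outputs:

```python
# Minimal bias audit: compare false positive rates (non-reoffenders flagged
# as high risk) across groups. The data is invented for illustration.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, actually_reoffended) booleans."""
    negatives = [pred for pred, actual in records if not actual]
    return sum(negatives) / len(negatives) if negatives else 0.0

by_group = {
    "group_a": [(True, False), (True, True), (False, False), (True, False)],
    "group_b": [(False, False), (True, True), (False, False), (False, False)],
}
for group, records in by_group.items():
    print(group, false_positive_rate(records))
# Unequal rates across groups signal bias even when group membership
# was never an input to the classifier.
```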
Existential risk
Main articles: Existential risk from artificial general intelligence and
Superintelligence
Superintelligent AI may be able to improve itself to the point that humans could
not control it. This could, as physicist Stephen Hawking puts it, "spell the end of
the human race".[214] Philosopher Nick Bostrom argues that sufficiently intelligent
AI if it chooses actions based on achieving some goal, will exhibit convergent
behavior such as acquiring resources or protecting itself from being shut down. If
this AI's goals do not fully reflect humanity's, it might need to harm humanity to
acquire more resources or prevent itself from being shut down, ultimately to better
achieve its goal. He concludes that AI poses a risk to mankind, however humble or
"friendly" its stated goals might be.[215] Political scientist Charles T. Rubin
argues that "any sufficiently advanced benevolence may be indistinguishable from
malevolence." Humans should not assume machines or robots would treat us favorably
because there is no a priori reason to believe that they would share our system of
morality.[216]
The opinion of experts and industry insiders is mixed, with sizable fractions both
concerned and unconcerned by risk from eventual superhumanly-capable AI.[217]
Stephen Hawking, Microsoft founder Bill Gates, history professor Yuval Noah Harari,
and SpaceX founder Elon Musk have all expressed serious misgivings about the future
of AI.[218] Prominent tech titans including Peter Thiel and Musk have committed more than $1 billion to nonprofit companies that champion
responsible AI development, such as OpenAI and the Future of Life Institute.[219]
Mark Zuckerberg (CEO, Facebook) has said that artificial intelligence is helpful in
its current form and will continue to assist humans.[220] Other experts argue that the risks are far enough in the future to not be worth researching, or that
humans will be valuable from the perspective of a superintelligent machine.[221]
Rodney Brooks, in particular, has said that "malevolent" AI is still centuries
away.[x]
Ethical machines
Main articles: Machine ethics, Friendly AI, Artificial moral agents, and Human
Compatible
Friendly AI are machines that have been designed from the beginning to minimize
risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the
term, argues that developing friendly AI should be a higher research priority: it
may require a large investment and it must be completed before AI becomes an
existential risk.[223]
Machines with intelligence have the potential to use their intelligence to make
ethical decisions. The field of machine ethics provides machines with ethical
principles and procedures for resolving ethical dilemmas.[224] Machine ethics is
also called machine morality, computational ethics or computational morality,[224]
and was founded at an AAAI symposium in 2005.[225]
Regulation
Main articles: Regulation of artificial intelligence, Regulation of algorithms, and
AI control problem
The regulation of artificial intelligence is the development of public sector
policies and laws for promoting and regulating artificial intelligence (AI); it is
therefore related to the broader regulation of algorithms.[228] The regulatory and
policy landscape for AI is an emerging issue in jurisdictions globally.[229]
Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.
[44] Most EU member states had released national AI strategies, as had Canada,
China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab
Emirates, USA and Vietnam. Others were in the process of elaborating their own AI
strategy, including Bangladesh, Malaysia and Tunisia.[44] The Global Partnership on
Artificial Intelligence was launched in June 2020, stating a need for AI to be
developed in accordance with human rights and democratic values, to ensure public
confidence and trust in the technology.[44] Henry Kissinger, Eric Schmidt, and
Daniel Huttenlocher published a joint statement in November 2021 calling for a
government commission to regulate AI.[230]
In fiction
Main article: Artificial intelligence in fiction
The word "robot" itself was coined by Karel Čapek in his 1921 play R.U.R., the
title standing for "Rossum's Universal Robots".
Thought-capable artificial beings have appeared as storytelling devices since
antiquity,[15] and have been a persistent theme in science fiction.[17]
A common trope in these works began with Mary Shelley's Frankenstein, where a human
creation becomes a threat to its masters. This includes such works as Arthur C.
Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000,
the murderous computer in charge of the Discovery One spaceship, as well as The
Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as
Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are
less prominent in popular culture.[231]
Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably in his stories about robots.
Asimov's laws are often brought up during lay discussions of machine ethics;[232]
while almost all artificial intelligence researchers are familiar with Asimov's
laws through popular culture, they generally consider the laws useless for many
reasons, one of which is their ambiguity.[233]
Transhumanism (the merging of humans and machines) is explored in the manga Ghost
in the Shell and the science-fiction series Dune.
Several works use AI to force us to confront the fundamental question of what makes
us human, showing us artificial beings that have the ability to feel, and thus to
suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial
Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric
Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human
subjectivity is altered by technology created with artificial intelligence.[234]
Scientific diplomacy
Warfare
As technology and research evolve and the world enters the third revolution of warfare following gunpowder and nuclear weapons, an artificial intelligence arms race has ensued among the United States, China, and Russia, three of the countries with the world's five highest military budgets.[235] China's leader Xi Jinping has declared the intention of being a world leader in AI research by 2030,[236] and President Putin of Russia has stated that "Whoever becomes the leader in this sphere will become the ruler of the world".[237] Putin has also stated that if Russia were to become the leader in AI research, it would share some of its research with the world so as not to monopolize the field,[237] similar to its current sharing of nuclear technologies, thereby maintaining science diplomacy relations. The United States, China, and Russia have taken stances toward military artificial intelligence since as early as 2014, having established military programs to develop cyber weapons, control lethal autonomous weapons, and deploy drones for surveillance.
Russo-Ukrainian War
President Putin has announced that artificial intelligence is the future for all mankind[237] and recognizes the power and opportunities that the development and deployment of lethal autonomous weapons can hold in warfare and homeland security, as well as its threats. President Putin's prediction that future wars will be fought using AI has started to come to fruition to an extent after Russia invaded Ukraine on 24 February 2022. The Ukrainian military is making use of Turkish Bayraktar TB2 drones,[238] which still require human operation to deploy laser-guided bombs but can take off, land, and cruise autonomously. Ukraine has also been using Switchblade drones supplied by the US, and has been receiving battlefield and national security intelligence about Russia gathered by the United States' own surveillance operations.[239] Similarly, Russia can use AI to help analyze battlefield data from surveillance footage taken by drones. Reports and images show that Russia's military has deployed KUB-BLA suicide drones[240] in Ukraine, amid speculation of intentions to assassinate Ukrainian President Volodymyr Zelenskyy.
Warfare regulations
As research in the AI realm progresses, there has been pushback against the use of AI from the Campaign to Stop Killer Robots, and in 2017 world technology leaders sent a petition[241] to the United Nations calling for new regulations on the development and use of AI technologies, including a ban on the use of lethal autonomous weapons due to ethical concerns for innocent civilian populations.
Cybersecurity
As cyber-attacks and devices continually evolve, AI can be used for threat detection and more effective response through risk prioritization. This tool also presents challenges such as privacy, informed consent, and responsible use.[242] According to CISA, cyberspace is difficult to secure due to the following factors: the ability of malicious actors to operate from anywhere in the world, the linkages between cyberspace and physical systems, and the difficulty of reducing vulnerabilities and consequences in complex cyber networks.[243] As the world's technology advances, the risk of wide-scale consequential events rises. Paradoxically, the ability to protect information and create a line of communication between the scientific and diplomatic communities thrives. The role of cybersecurity in diplomacy has become increasingly relevant, giving rise to the term cyber diplomacy, which is not uniformly defined and is not synonymous with cyber defence.[244] Many nations have developed unique approaches to scientific diplomacy in cyberspace.
Germany's approach
Cybersecurity is recognized as a governmental task, divided among three ministries of responsibility: the Federal Ministry of the Interior, the Federal Ministry of Defence, and the Federal Foreign Office.[248] These distinctions prompted the
creation of various institutions, such as The German National Office for
Information Security, The National Cyberdefence Centre, The German National Cyber
Security Council, and The Cyber and Information Domain Service.[246] In 2018, the German government established a new strategy for artificial intelligence, creating a German-French virtual research and innovation network[249] that holds opportunities for research expansion into cybersecurity.
Russo-Ukrainian War
With the 2022 invasion of Ukraine, there has been a rise in malicious cyber
activity against the United States,[254] Ukraine, and Russia. A prominent and rare
documented use of artificial intelligence in conflict is on behalf of Ukraine,
using facial recognition software to uncover Russian assailants and identify
Ukrainians killed in the ongoing war.[255] Though these governmental figures are
not primarily focused on scientific and cyber diplomacy, other institutions are
commenting on the use of artificial intelligence in cybersecurity with that focus.
For example, Georgetown University's Center for Security and Emerging Technology (CSET) runs the Cyber-AI Project, one goal of which is to attract policymakers' attention to the growing body of academic research that exposes the exploitive consequences of AI and machine-learning (ML) algorithms.[256] According to Andrew Lohn, a senior fellow at CSET, this vulnerability is a plausible explanation for why Russia is not engaging in the use of AI in the conflict. In addition to its use on the battlefield, AI is being used by the Pentagon to analyze data from the war in order to strengthen cybersecurity and warfare intelligence for the United States.[239][257]
Election security
As artificial intelligence grows and the amount of news portrayed through cyberspace expands, it is becoming overwhelming for a voter to know what to believe. Many intelligent programs, referred to as bots, are written to portray people on social media with the goal of spreading misinformation.[258] The 2016 U.S. election was a target of such actions. During the campaigns of Hillary Clinton and Donald Trump, artificially intelligent bots from Russia spread misinformation about the candidates in order to help the Trump campaign.[259] Analysts concluded that approximately 19% of tweets on Twitter centered on the 2016 election came from bots.[259] YouTube has in recent years been used to spread political information as well. Although there is no proof that the platform attempts to manipulate its viewers' opinions, YouTube's AI algorithm recommends videos of a similar variety.[260] If a person begins to watch right-wing political podcasts, YouTube's algorithm will recommend more right-wing videos.[261] The rise of deepfakes, software used to replicate someone's face and words, has also shown its potential as a threat. In 2018 a deepfake video of Barack Obama was released in which he appeared to say words he never said.[262] While a deepfake in a national election would quickly be debunked, the software has the capability to heavily sway a smaller local election. This tool holds a lot of potential for spreading misinformation and is monitored with great attention.[263] Although it may be seen as a tool used for harm, AI can help enhance election campaigns as well. AI bots can be programmed to target articles with known misinformation; the bots can then flag the misinformation to help shine a light on the truth. AI can also be used to inform a person where each party stands on a certain topic such as healthcare or climate change.[264] The political leaders of a nation have heavy sway on international affairs; thus, a political leader with a lack of interest in international collaborative scientific advancement can have a negative impact on the scientific diplomacy of that nation.[265]
Future of work
Facial recognition
The use of artificial intelligence (AI) has subtly grown to become part of everyday life. It is used every day in facial recognition software, which serves as a first measure of security for many companies in the form of biometric authentication. This means of authentication allows even the most official organizations, such as the United States Internal Revenue Service, to verify a person's identity[266] via a database generated from machine learning. As of 2022, the United States IRS requires those who do not undergo a live interview with an agent to complete a biometric verification of their identity via ID.me's facial recognition tool.[266]
AI and school
In Japan and South Korea, artificial intelligence software is used in the instruction of the English language by the company Riiid.[267] Riiid is a Korean education company working alongside Japan to give students the means to learn and use their English communication skills by engaging with artificial intelligence in a live chat.[267] Riiid is not the only company to do this. The American company Duolingo is well known for its automated teaching of 41 languages. Babbel, a German language-learning program, also uses artificial intelligence in its teaching automation, allowing European students to learn vital communication skills needed in social, economic, and diplomatic settings. Artificial intelligence will also automate the routine tasks that teachers need to do, such as grading, taking attendance, and handling routine student inquiries.[268] This enables the teacher to carry on with the complexities of teaching that an automated machine cannot handle, such as creating exams, explaining complex material in a way that benefits students individually, and handling unique questions from students.
AI and medicine
Unlike the human brain, which possesses generalized intelligence, the specialized intelligence of AI can serve as a means of support to physicians internationally. The medical field has a diverse and profound amount of data that AI can employ to generate a predictive diagnosis. Researchers at an Oxford hospital have developed artificial intelligence that can analyze heart scans to diagnose heart disease and cancer.[269] This artificial intelligence can pick up diminutive details in the scans that doctors may miss. As such, artificial intelligence in medicine will better the industry, giving doctors the means to precisely diagnose their patients using the tools available. The artificial intelligence algorithms will also be used to further improve diagnosis over time, via an application of machine learning called precision medicine.[270] Furthermore, the narrow application of artificial intelligence can use deep learning to improve medical image analysis. In radiology imaging, AI uses deep learning algorithms to identify potentially cancerous lesions, an important process assisting in early diagnosis.[271]
AI in business
Data analysis is a fundamental property of artificial intelligence that enables its use in every facet of life, from search results to the way people buy products. According to NewVantage Partners, over 90% of top businesses have ongoing investments in artificial intelligence.[272] According to IBM, one of the world's leaders in technology, 45% of respondents from companies with over 1,000 employees have adopted AI.[273] Recent data show that the business market for artificial intelligence was valued at $51.08 billion in 2020 and is projected to exceed $640.3 billion by 2028.[274] To prevent harm, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI,[275] and take accountability to mitigate the risks.[276]
Business and diplomacy
Over 200 applications of artificial intelligence are being used by over 46 United
Nations agencies, in sectors ranging from health care dealing with issues such as
combating COVID-19 to smart agriculture, to assist the UN in political and
diplomatic relations.[282] One example is the use of AI by the UN Global Pulse
program to model the effect of the spread of COVID-19 on internally displaced
people (IDP) and refugee settlements to assist them in creating an appropriate
global health policy.[283][284]
Novel AI tools such as remote sensing can also be employed by diplomats for
collecting and analyzing data and near-real-time tracking of objects such as troop
or refugee movements along borders in violent conflict zones.[283][285]