AI
Artificial intelligence defined
Artificial intelligence is a field of science concerned with building computers and
machines that can reason, learn, and act in such a way that would normally require
human intelligence or that involves data whose scale exceeds what humans can
analyze.
This learning process often involves algorithms, which are sets of rules or
instructions that guide the AI's analysis and decision-making. In machine learning,
a popular subset of AI, algorithms are trained on labeled or unlabeled data to make
predictions or categorize information.
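To make that concrete, here is a minimal sketch of supervised machine learning in Python using the scikit-learn library; the tiny dataset and the choice of a logistic-regression model are illustrative assumptions, not part of any particular AI system.

```python
# A minimal supervised-learning sketch: train on labeled data, then predict.
# The made-up dataset and logistic-regression model are illustrative choices.
from sklearn.linear_model import LogisticRegression

# Labeled training data: each example is [hours_studied, hours_slept],
# and the label is 1 (passed) or 0 (failed). Purely invented numbers.
X_train = [[8, 7], [1, 4], [6, 8], [2, 3], [7, 6], [0, 5]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)      # the algorithm "learns" from labeled data

print(model.predict([[5, 7]]))   # predict a category for unseen data
```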
Four types of artificial intelligence
Reactive machines: Limited AI that only reacts to different kinds of stimuli based on preprogrammed rules. Reactive machines do not use memory and thus cannot learn from new data. IBM’s Deep Blue, which beat chess champion Garry Kasparov in 1997, was an example of a reactive machine.
Limited memory: Most modern AI is considered to be limited memory. It can use
memory to improve over time by being trained with new data, typically through an
artificial neural network or other training model. Deep learning, a subset of
machine learning, is considered limited memory artificial intelligence.
Theory of mind: Theory of mind AI does not currently exist, but research is ongoing into its possibilities. It describes AI that can emulate the human mind and has decision-making capabilities equal to those of a human, including recognizing and remembering emotions and reacting in social situations as a human would.
Self-aware: A step above theory of mind AI, self-aware AI describes a mythical
machine that is aware of its own existence and has the intellectual and emotional
capabilities of a human. Like theory of mind AI, self-aware AI does not currently
exist.
A more useful way of broadly categorizing types of artificial intelligence is by
what the machine can do. All of what we currently call artificial intelligence is
considered artificial "narrow" intelligence, in that it can perform only narrow
sets of actions based on its programming and training. For instance, an AI
algorithm that is used for object classification won’t be able to perform natural
language processing. Google Search is a form of narrow AI, as are predictive analytics and virtual assistants.
Artificial general intelligence (AGI) would be the ability for a machine to "sense,
think, and act" just like a human. AGI does not currently exist. The next level
would be artificial superintelligence (ASI), in which the machine would surpass human abilities in every way.
In broad strokes, three kinds of learning models are often used in machine learning:
Supervised learning: A model is trained on labeled data, mapping known inputs to known outputs (for example, photos tagged "cat" or "dog"), so that it can predict labels for new data.
Unsupervised learning: A model looks for patterns and structure in unlabeled data, such as grouping similar customers into clusters.
Reinforcement learning: A model learns by trial and error, receiving rewards or penalties for its actions and adjusting its behavior to maximize reward.
Some of the most common types of artificial neural networks you may encounter
include:
Feedforward neural networks (FF) are one of the oldest forms of neural networks,
with data flowing one way through layers of artificial neurons until the output is
achieved. Today, most feedforward neural networks are considered "deep feedforward," with several layers (including more than one "hidden" layer). Feedforward
neural networks are typically paired with an error-correction algorithm called
"backpropagation" that, in simple terms, starts with the result of the neural
network and works back through to the beginning, finding errors to improve the
accuracy of the neural network. Many simple but powerful neural networks are deep
feedforward.
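As a concrete sketch, here is a small deep feedforward network trained with backpropagation. It uses Python with the PyTorch library; the layer sizes, synthetic data, and training settings are illustrative assumptions rather than a canonical recipe.

```python
# A minimal deep feedforward network trained with backpropagation (PyTorch).
# Layer sizes and the synthetic task are illustrative assumptions.
import torch
import torch.nn as nn

# Two hidden layers make this a "deep" feedforward network.
model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),   # input layer -> hidden layer 1
    nn.Linear(16, 16), nn.ReLU(),  # hidden layer 2
    nn.Linear(16, 2),              # output layer: 2 classes
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(64, 4)             # 64 synthetic examples, 4 features each
y = torch.randint(0, 2, (64,))     # random class labels

for step in range(100):
    logits = model(X)              # data flows one way: input -> output
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()                # backpropagation: errors flow backward
    optimizer.step()               # weights adjusted to reduce the error
```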
Recurrent neural networks (RNN) differ from feedforward neural networks in that they typically work with time series data or other data that involves sequences. Unlike feedforward neural networks, which only pass information forward through their layers, recurrent neural networks keep a "memory" of what happened at earlier steps in a sequence, and that memory feeds into the output at the current step. For instance, when performing natural language processing, RNNs can "keep in mind" the other words used in a sentence. RNNs are often used for speech recognition, translation, and captioning images.
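A minimal sketch of how that recurrence looks in code, again in PyTorch; the dimensions and the random input are illustrative assumptions.

```python
# A minimal recurrent network (PyTorch): the hidden state carries "memory"
# from one step of the sequence to the next. Dimensions are illustrative.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

# A batch of 1 sequence with 5 steps, each step an 8-dimensional vector
# (e.g., 5 word embeddings from one sentence).
sequence = torch.randn(1, 5, 8)

outputs, hidden = rnn(sequence)
print(outputs.shape)  # (1, 5, 16): one output per step, informed by prior steps
print(hidden.shape)   # (1, 1, 16): the final hidden state, the net's "memory"
```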
Long short-term memory (LSTM) is an advanced form of RNN that uses "memory cells" to remember information from earlier in a sequence. Where a basic RNN's memory of earlier steps fades quickly, an LSTM's memory cells allow it to retain information from many steps back. LSTM is often used in speech recognition and making predictions.
Convolutional neural networks (CNN) include some of the most common neural networks
in modern artificial intelligence. Most often used in image recognition, CNNs use several distinct layer types (convolutional layers followed by pooling layers) that scan and filter parts of an image before the results are combined in a fully connected layer. The earlier convolutional layers may look for simple features of an image,
such as colors and edges, before looking for more complex features in additional
layers.
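Here is a minimal CNN in PyTorch showing the convolutional, pooling, and fully connected stages in order; the channel counts and the 28x28 grayscale input size are illustrative assumptions.

```python
# A minimal CNN (PyTorch): convolution -> pooling -> fully connected.
# Channel counts and the 28x28 grayscale input size are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # early layer: edges, colors
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling: downsample to 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # later layer: complex features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample to 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # fully connected: 10 classes
)

image = torch.randn(1, 1, 28, 28)  # one 28x28 grayscale image
print(model(image).shape)          # (1, 10): one score per class
```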
Generative adversarial networks (GAN) involve two neural networks competing against
each other in a game that ultimately improves the accuracy of the output. One
network (the generator) creates examples that the other network (the discriminator) attempts to classify as real or fake. GANs have been used to create realistic images and
even make art.
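To make the two-player setup concrete, here is a compact GAN training loop in PyTorch on one-dimensional toy data; the network sizes, the toy data distribution, and the training settings are all illustrative assumptions.

```python
# A compact GAN sketch (PyTorch): a generator and a discriminator compete.
# Toy 1-D data and tiny networks; all sizes here are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(200):
    real = torch.randn(32, 1) * 0.5 + 2.0   # "real" data: a toy distribution
    noise = torch.randn(32, 4)
    fake = generator(noise)                 # generator creates examples

    # Discriminator: label real examples 1, generated examples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```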