AI

Artificial intelligence (AI) encompasses technologies that enable computers to perform tasks requiring human-like intelligence, such as data analysis and language processing. It can be categorized into various types based on its capabilities, including narrow AI, which performs specific tasks, and future concepts like artificial general intelligence (AGI) and artificial superintelligence (ASI). AI systems learn from data through techniques like machine learning and neural networks, with applications ranging from image recognition to natural language processing.

What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is a set of technologies that enable computers to
perform a variety of advanced functions, including the ability to see, understand
and translate spoken and written language, analyze data, make recommendations, and
more.

AI is the backbone of innovation in modern computing, unlocking value for
individuals and businesses. For example, optical character recognition (OCR) uses
AI to extract text and data from images and documents, turn unstructured content
into business-ready structured data, and unlock valuable insights.

Artificial intelligence defined
Artificial intelligence is a field of science concerned with building computers and
machines that can reason, learn, and act in ways that would normally require human
intelligence, or that involve data whose scale exceeds what humans can analyze.

AI is a broad field that encompasses many different disciplines, including computer
science, data analytics and statistics, hardware and software engineering,
linguistics, neuroscience, and even philosophy and psychology.

On an operational level for business use, AI is a set of technologies that are
based primarily on machine learning and deep learning, used for data analytics,
predictions and forecasting, object categorization, natural language processing,
recommendations, intelligent data retrieval, and more.

How does AI work?

While the specifics vary across different AI techniques, the core principle
revolves around data. AI systems learn and improve through exposure to vast amounts
of data, identifying patterns and relationships that humans may miss.

This learning process often involves algorithms, which are sets of rules or
instructions that guide the AI's analysis and decision-making. In machine learning,
a popular subset of AI, algorithms are trained on labeled or unlabeled data to make
predictions or categorize information.
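
As a toy illustration of training on labeled data, here is a minimal sketch of one
of the simplest such algorithms, a 1-nearest-neighbor classifier; the points and
labels below are invented for the example:

```python
# Minimal 1-nearest-neighbor classifier: the "training" step just stores
# labeled examples, and prediction finds the closest stored example.
def train(examples):
    # examples: list of ((x, y), label) pairs
    return list(examples)

def predict(model, point):
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # Pick the label of the stored example nearest to the query point.
    return min(model, key=lambda ex: dist2(ex[0], point))[1]

# Toy labeled data: two clusters with invented labels.
data = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((5, 6), "b")]
model = train(data)
print(predict(model, (1, 1)))  # "a" - close to the first cluster
print(predict(model, (4, 5)))  # "b" - close to the second cluster
```

Even this trivial algorithm shows the pattern the paragraph describes: the model is
built from data, and new inputs are categorized by their relationship to that data.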

Deep learning, a further specialization, utilizes artificial neural networks with
multiple layers to process information, mimicking the structure and function of the
human brain. Through continuous learning and adaptation, AI systems become
increasingly adept at performing specific tasks, from recognizing images to
translating languages and beyond.
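
The idea of a system improving as it sees data can be shown with a deliberately tiny
example: fitting a single parameter by gradient descent. The data, model, and
learning rate here are invented purely for illustration:

```python
# One-parameter model y = w * x trained by gradient descent on toy data
# generated from y = 2x. The point: the error shrinks with each pass
# over the data, so the learned w approaches 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs

def mse(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0
for _ in range(100):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # small step against the gradient

print(round(w, 3))  # converges to 2.0
```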


Types of artificial intelligence

Artificial intelligence can be organized in several ways, depending on stages of
development or actions being performed.

For instance, four stages of AI development are commonly recognized.

Reactive machines: Limited AI that only reacts to different kinds of stimuli based
on preprogrammed rules. Does not use memory and thus cannot learn with new data.
IBM’s Deep Blue that beat chess champion Garry Kasparov in 1997 was an example of a
reactive machine.
Limited memory: Most modern AI is considered to be limited memory. It can use
memory to improve over time by being trained with new data, typically through an
artificial neural network or other training model. Deep learning, a subset of
machine learning, is considered limited memory artificial intelligence.
Theory of mind: Theory of mind AI does not currently exist, but research is ongoing
into its possibilities. It describes AI that can emulate the human mind and has
decision-making capabilities equal to that of a human, including recognizing and
remembering emotions and reacting in social situations as a human would.
Self-aware: A step above theory of mind AI, self-aware AI describes a mythical
machine that is aware of its own existence and has the intellectual and emotional
capabilities of a human. Like theory of mind AI, self-aware AI does not currently
exist.
A more useful way of broadly categorizing types of artificial intelligence is by
what the machine can do. All of what we currently call artificial intelligence is
considered artificial "narrow" intelligence, in that it can perform only narrow
sets of actions based on its programming and training. For instance, an AI
algorithm that is used for object classification won’t be able to perform natural
language processing. Google Search is a form of narrow AI, as are predictive
analytics and virtual assistants.

Artificial general intelligence (AGI) would be the ability for a machine to "sense,
think, and act" just like a human. AGI does not currently exist. The next level
would be artificial superintelligence (ASI), in which the machine would be able to
function in all ways superior to a human.

Artificial intelligence training models

When businesses talk about AI, they often talk about "training data." But what does
that mean? Remember that limited-memory artificial intelligence is AI that improves
over time by being trained with new data. Machine learning is a subset of
artificial intelligence that uses algorithms trained on data to produce results.

In broad strokes, three kinds of learning models are often used in machine
learning:

Supervised learning is a machine learning model that maps a specific input to an
output using labeled training data (structured data). In simple terms, to train the
algorithm to recognize pictures of cats, feed it pictures labeled as cats.
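
A minimal sketch of that idea, with invented feature vectors standing in for
"pictures": learn the average of each labeled class, then classify new inputs by
the nearer average (a nearest-centroid classifier):

```python
# Supervised learning sketch: fit learns one centroid per label from
# labeled examples; classify assigns the label of the nearest centroid.
def fit(examples):
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def classify(centroids, features):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], features))

# Labeled training data: (feature vector, label). Values are invented.
train_set = [([1.0, 1.0], "cat"), ([1.2, 0.8], "cat"),
             ([4.0, 4.0], "dog"), ([3.8, 4.2], "dog")]
centroids = fit(train_set)
print(classify(centroids, [1.1, 0.9]))  # "cat"
```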

Unsupervised learning is a machine learning model that learns patterns based on
unlabeled data (unstructured data). Unlike supervised learning, the end result is
not known ahead of time. Rather, the algorithm learns from the data, categorizing
it into groups based on attributes. As a result, unsupervised learning is well
suited to pattern matching and descriptive modeling.
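
A minimal sketch of this grouping behavior, using the classic k-means algorithm
with two clusters on invented 1-D data; note that no labels are supplied, the
groups emerge from the data alone:

```python
# Minimal k-means (k = 2) on unlabeled 1-D data: the algorithm groups
# points by proximity without being told what the groups mean.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [0.0, 10.0]  # arbitrary initial guesses

for _ in range(10):
    # Assign each point to its nearest center.
    clusters = [[], []]
    for p in points:
        idx = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        clusters[idx].append(p)
    # Move each center to the mean of its assigned points.
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print([round(c, 2) for c in centers])  # roughly [1.0, 8.07]
```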

In addition to supervised and unsupervised learning, a mixed approach called
semi-supervised learning is often employed, where only some of the data is labeled.
In semi-supervised learning, an end result is known, but the algorithm must figure
out how to organize and structure the data to achieve the desired results.
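
One common way to realize this is self-training, sketched below with invented 1-D
values: a handful of labeled points supply pseudo-labels for the unlabeled ones,
growing the training set:

```python
# Semi-supervised sketch via self-training: each unlabeled point gets a
# pseudo-label from its nearest already-labeled neighbor.
labeled = [(1.0, "low"), (9.0, "high")]
unlabeled = [1.5, 2.0, 8.5, 8.0]

def nearest_label(x, examples):
    return min(examples, key=lambda ex: abs(ex[0] - x))[1]

# Pseudo-label each unlabeled point using the current labeled set,
# which grows as points are labeled.
for x in unlabeled:
    labeled.append((x, nearest_label(x, labeled)))

print(labeled[2:])  # the four pseudo-labeled points
```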

Reinforcement learning is a machine learning model that can be broadly described as
"learn by doing." An "agent" learns to perform a defined task by trial and error (a
feedback loop) until its performance is within a desirable range. The agent
receives positive reinforcement when it performs the task well and negative
reinforcement when it performs poorly. An example of reinforcement learning would
be teaching a robotic hand to pick up a ball.
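
The feedback loop can be sketched with a two-action toy task (the reward values and
exploration rate below are invented): the agent's value estimates shift toward the
action that pays off:

```python
import random

# "Learn by doing" sketch: an agent tries two actions, receives a reward
# for each, and updates its value estimates until it prefers the better
# action. Rewards are deterministic here to keep the loop transparent.
random.seed(0)
rewards = {0: 0.2, 1: 0.8}   # action -> reward (the environment)
values = {0: 0.0, 1: 0.0}    # the agent's running estimates
counts = {0: 0, 1: 0}

for step in range(200):
    # Epsilon-greedy: mostly exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice([0, 1])
    else:
        action = max(values, key=values.get)
    reward = rewards[action]              # feedback from the task
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # the agent settles on action 1
```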

Common types of artificial neural networks

A common type of training model in AI is an artificial neural network, a model
loosely based on the human brain.

A neural network is a system of artificial neurons—sometimes called perceptrons—
that are computational nodes used to classify and analyze data. The data is fed
into the first layer of a neural network, with each perceptron making a decision,
then passing that information on to multiple nodes in the next layer. Training
models with more than three layers are referred to as "deep neural networks" or
"deep learning." Some modern neural networks have hundreds or thousands of layers.
The output of the final perceptrons accomplishes the task set for the neural
network, such as classifying an object or finding patterns in data.
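
A forward pass through such a network can be sketched in a few lines; the weights
below are hand-picked rather than learned, purely to show data flowing from layer
to layer:

```python
import math

# Forward pass through a tiny fully connected network: each "perceptron"
# computes a weighted sum of its inputs plus a bias, applies a sigmoid
# activation, and passes its output to every node in the next layer.
def layer(inputs, weights, biases):
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid
    return outputs

x = [0.5, -0.2]                                      # input features
h = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])  # hidden layer (2 nodes)
y = layer(h, [[1.0, 1.0]], [-1.0])                   # output layer (1 node)
print(round(y[0], 3))
```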

Some of the most common types of artificial neural networks you may encounter
include:

Feedforward neural networks (FF) are one of the oldest forms of neural networks,
with data flowing one way through layers of artificial neurons until the output is
achieved. Today, most feedforward neural networks are considered "deep
feedforward" with several layers (and more than one "hidden" layer). Feedforward
neural networks are typically paired with an error-correction algorithm called
"backpropagation" that, in simple terms, starts with the result of the neural
network and works back through to the beginning, finding errors to improve the
accuracy of the neural network. Many simple but powerful neural networks are deep
feedforward.
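
Backpropagation can be demonstrated on the smallest possible case, a single sigmoid
neuron with one weight (all values below are invented): run the forward pass,
compute the error, push its gradient back to the weight, and check that the error
shrinks:

```python
import math

# One backpropagation step on a single sigmoid neuron with one weight.
x, target = 1.0, 1.0
w = -0.5

def forward(w):
    return 1.0 / (1.0 + math.exp(-w * x))  # sigmoid(w * x)

def loss(w):
    return (forward(w) - target) ** 2      # squared error

# Backward pass (chain rule): d(loss)/dw = 2*(out-target) * out*(1-out) * x
out = forward(w)
grad = 2 * (out - target) * out * (1 - out) * x
w_new = w - 0.5 * grad                     # gradient-descent update

print(loss(w) > loss(w_new))  # True: the step reduced the error
```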

Recurrent neural networks (RNN) differ from feedforward neural networks in that
they typically use time series data or data that involves sequences. Whereas
feedforward neural networks only pass information forward through the layers,
recurrent neural networks carry a "memory" of what happened at previous steps,
which feeds into the output of the current step. For instance, when performing
natural language processing, RNNs can "keep in mind" other words used in a
sentence. RNNs are often used for speech recognition, translation, and image
captioning.
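
A minimal recurrent step makes the "memory" concrete (the weights are hand-picked,
not learned): the same final input yields different outputs depending on what came
before it in the sequence:

```python
import math

# Minimal recurrent unit: the hidden state h is updated at every step
# from the current input AND the previous hidden state, so earlier
# inputs influence later outputs.
w_in, w_rec = 1.0, 0.5

def rnn(sequence):
    h = 0.0  # hidden state carried across steps
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)
    return h

# Both sequences end with the same input, but the outputs differ
# because the hidden state remembers the earlier input.
print(round(rnn([0.0, 1.0]), 3))
print(round(rnn([2.0, 1.0]), 3))
```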

Long short-term memory (LSTM) is an advanced form of RNN that can use memory to
"remember" what happened in previous steps. The difference between RNNs and LSTMs
is that LSTMs can remember what happened several steps ago, through the use of
"memory cells." LSTM is often used in speech recognition and making predictions.
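
A single step of a heavily simplified LSTM cell shows the memory-cell mechanics;
the gate weights are hand-picked (and deliberately shared across gates) just to
keep the sketch short:

```python
import math

# Simplified LSTM cell step: gates decide what to forget from the
# long-term cell state c, what new information to write, and what to
# expose as the hidden state h.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev):
    f = sigmoid(1.0 * x + 0.5 * h_prev)    # forget gate
    i = sigmoid(1.0 * x + 0.5 * h_prev)    # input gate
    g = math.tanh(1.0 * x + 0.5 * h_prev)  # candidate values
    o = sigmoid(1.0 * x + 0.5 * h_prev)    # output gate
    c = f * c_prev + i * g                 # updated memory cell
    h = o * math.tanh(c)                   # new hidden state
    return h, c

h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0]:
    h, c = lstm_step(x, h, c)

# Even after two zero inputs, the memory cell still carries a trace
# of the first input.
print(round(c, 3))
```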

Convolutional neural networks (CNN) are among the most common neural networks in
modern artificial intelligence. Most often used in image recognition, CNNs use
several distinct layers (a convolutional layer, then a pooling layer) that filter
different parts of an image before putting it back together (in the fully connected
layer). The earlier convolutional layers may look for simple features of an image,
such as colors and edges, before looking for more complex features in additional
layers.
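
These two layer types can be sketched on a tiny invented "image" with a hand-made
edge filter (real CNNs learn their filters; here the filter is fixed so the effect
is visible):

```python
# CNN building blocks in miniature: a convolutional layer slides a 2x2
# filter over the image to detect a vertical edge, then a pooling layer
# keeps only the strongest response in each patch.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]  # responds strongly where dark meets bright

def convolve(img, k):
    rows, cols = len(img) - 1, len(img[0]) - 1
    return [[sum(k[a][b] * img[r + a][c + b]
                 for a in range(2) for b in range(2))
             for c in range(cols)] for r in range(rows)]

def max_pool(fmap):
    # 2x2 max pooling with stride 2: keep the largest value per patch.
    return [[max(fmap[r][c], fmap[r][c + 1],
                 fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]

fmap = convolve(image, kernel)
print(max_pool(fmap))  # [[2]]: the strongest response sits on the edge
```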

Generative adversarial networks (GAN) involve two neural networks competing against
each other in a game that ultimately improves the accuracy of the output. One
network (the generator) creates examples that the other network (the discriminator)
attempts to classify as real or fake. GANs have been used to create realistic
images and even make art.
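
The adversarial loop can be sketched as a 1-D toy in which "real data" is just the
number 5.0; every value here (learning rate, iteration count, model forms) is
invented to keep the generator-versus-discriminator game visible in a few lines:

```python
import math

# Toy 1-D "GAN": the generator is a single parameter theta trying to
# produce a number the discriminator cannot tell apart from the real one.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

real = 5.0
theta = 0.0      # the generator's output
a, b = 0.0, 0.0  # the discriminator D(x) = sigmoid(a*x + b)
lr = 0.05

for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * theta + b)
    a += lr * ((1 - d_real) * real - d_fake * theta)
    b += lr * ((1 - d_real) - d_fake)
    # Generator step: move theta in the direction that raises D(theta).
    d_fake = sigmoid(a * theta + b)
    theta += lr * (1 - d_fake) * a

# After training, the generator's output has drifted toward the real value.
print(theta)
```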
