
ARTIFICIAL INTELLIGENCE AND ROBOTICS
Antonio Chella - Dipartimento di Ingegneria Informatica, Università degli Studi di Palermo
Luca Iocchi - Dipartimento di Informatica e Sistemistica, Università degli Studi di Roma “La Sapienza”
Irene Macaluso - Dipartimento di Ingegneria Informatica, Università degli Studi di Palermo
Daniele Nardi - Dipartimento di Informatica e Sistemistica, Università degli Studi di Roma “La Sapienza”

1 Introduction
Artificial Intelligence and Robotics have a common root and a (relatively) long history of interaction and scientific discussion. Artificial Intelligence and Robotics were born in the same period (the 1950s), and initially there was no clear distinction between the two disciplines. The reason is that the notion of “intelligent machine” naturally leads to robots and Robotics. One might argue that not every machine is a robot, and certainly Artificial Intelligence is also concerned with virtual agents (i.e., agents that are not embodied in a physical machine). On the other hand, many of the technical problems and solutions that are needed in order to design robots are not dealt with by Artificial Intelligence research.
A clear separation between the fields can be seen in the 1970s, when Robotics became more focused on industrial automation, while Artificial Intelligence used robots to demonstrate that machines can also act in everyday environments.
Later, the difficulties encountered in the design of robotic systems capable of acting in unconstrained environments led AI researchers to dismiss Robotics as a preferred testbed for Artificial Intelligence. Conversely, research in Robotics led to the development of more and more sophisticated industrial robots.
This state of affairs changed in the 1990s, when robots began to populate AI laboratories again and Robotics also specifically addressed less controlled environments. In particular, robot competitions started (see, for example, the AAAI robot competitions and challenges, www.aaai.org, and the RoboCup competitions, www.robocup.org): they played a major role in re-establishing a close relationship between AI and Robotics, which is nowadays one of the most promising lines of research both in the national context and at the European level.
Summarizing, the borderline between the work in Artificial Intelligence and Robotics is certainly very difficult to establish; however, the problems to be addressed in order to build intelligent robots are clearly identified by the research community, and the development of robots is again viewed as a prototypical case of an AI system.
2 Research issues
In this section we analyse recent work that can be characterised as AI Robotics, arranging it around the two basic issues in robot design: Action and Perception.

2.1 Action
While there is nowadays a general agreement on the basic structure of the
autonomous agent/robot, the question of how this structure can be
implemented has been subject to a long debate and is still under investigation.
Agents and, specifically, robots usually feature various kinds of sensing and acting devices. The flow of data from the sensors to the actuators is processed by several different modules, and the description of the interaction among these modules defines the agent's architecture.
The first, purely deliberative, architectures [12, 22] view the robot
as an agent embedding a high-level representation of the
environment and of the actions that it can perform. Perceptual data are
interpreted to create a model of the world, a planner generates the actions to be performed, and an execution module takes care of executing these plans. In practice, a sense-plan-act cycle is repeatedly executed. The problem is that building a high-level world model and generating a plan are time-consuming activities, and thus these systems have proven inadequate for agents embedded in dynamic worlds.
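As an illustration only, the sense-plan-act cycle can be sketched as a short loop (the function names and the toy one-dimensional world below are hypothetical, not any specific system): the robot senses, builds a model, plans a complete sequence of actions, and executes it before sensing again.

```python
# Minimal sketch of a deliberative sense-plan-act loop on a toy 1-D corridor.
# All names are hypothetical illustrations, not an actual robot API.

def sense(true_position):
    """Sense: return an observation of the robot's position."""
    return true_position

def plan(position, goal):
    """Plan: compute the whole sequence of moves from the world model."""
    step = 1 if goal > position else -1
    return [step] * abs(goal - position)

def act(position, action):
    """Act: apply one action and return the new position."""
    return position + action

def sense_plan_act(start, goal):
    position = start
    while position != goal:
        model = sense(position)              # sense: build the (here trivial) world model
        for action in plan(model, goal):     # plan: deliberate over a complete plan
            position = act(position, action) # act: execute the plan step by step
        # In a dynamic world the environment may change while the plan is
        # executing -- the inadequacy discussed in the text.
    return position

print(sense_plan_act(start=0, goal=5))       # -> 5
```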
Reactive architectures focus on the basic functionalities of the robot, such as
navigation or sensor interpretation, and propose a direct connection between
stimuli and response. Brooks's subsumption architecture [4] is composed of levels of competence, each containing a class of task-oriented behaviors. Each level is in charge of accomplishing a specific task (such as obstacle avoidance, wandering, etc.), and the perceptual data are interpreted only for that specific task. Reactive architectures, while suitably addressing the dynamics of the environment, do not generally allow the designer to consider general aspects of perception (not related to a specific behavior) or to identify complex situations. In fact, the use of a symbolic high-level language is not possible, since it would necessarily require building a world model, and thus reasoning is usually compiled into the structures of the executing program. The lack of predictions about the future limits these systems in terms of efficiency and goal achievement.
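The behaviour-arbitration idea behind the subsumption architecture can be illustrated with a minimal sketch (the behaviours, percept format, and thresholds are hypothetical, not Brooks's original code): each level proposes an action directly from the stimuli, and a higher level, when triggered, subsumes the levels below it.

```python
# Sketch of subsumption-style arbitration: higher levels override lower ones.
# Behaviours and the sensor format are hypothetical illustrations.

def wander(percept):
    """Level 0: always proposes a default forward motion."""
    return "move_forward"

def avoid_obstacle(percept):
    """Level 1: triggers only when an obstacle is sensed close by."""
    if percept["front_distance"] < 0.5:        # metres, hypothetical threshold
        return "turn_left"
    return None                                # not triggered: defer to lower levels

BEHAVIOURS = [avoid_obstacle, wander]          # ordered from highest to lowest level

def select_action(percept):
    """Return the action of the highest-priority behaviour that fires."""
    for behaviour in BEHAVIOURS:
        action = behaviour(percept)
        if action is not None:
            return action

print(select_action({"front_distance": 2.0}))  # -> move_forward
print(select_action({"front_distance": 0.3}))  # -> turn_left
```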
The above considerations led to a renewed effort to combine a logic-based view of the robot as an intelligent agent with its reactive functionalities. To this end, a new research field has been developing in recent years: Cognitive Robotics. The name was first introduced by the research group at the University of Toronto led by Ray Reiter [19]. The most recent view of cognitive robots, which has been adopted, for example, in the EU framework, certainly keeps the original goal of embedding a reasoning agent in a real robot, but also takes a more general perspective, by looking at the perception/action cycle in a broader sense, in bio-inspired systems, as well as in the work on recognition and generation of emotional behaviours (see next section). Cognitive Robotics
aims at designing and realizing actual agents (in particular mobile robots) that
are able to accomplish complex tasks in real, and hence dynamic,
unpredictable and incompletely known environments, without human
assistance. Cognitive robots can be controlled at a high level, by providing
them with a description of the world and expressing the tasks to be performed
in the form of goals to be achieved.
The characterizing feature of a cognitive robot is the presence of cognitive
capabilities for reasoning about the information sensed from the environment
and about the actions it can perform. The design and realization of cognitive robots have been addressed from different perspectives, which can be classified into two groups: action theories and system architectures.

Action theories
A number of theories of actions have been developed in order to represent the
agent's knowledge. They are characterized by their expressive power, that is, the ability to represent complex situations, by the deductive services they allow, and by the implementation of automatic reasoning procedures. Several
formalisms have been investigated starting from Reiter’s Situation Calculus
[27, 13]: A-Languages (e.g., [14]), Dynamic Logics (e.g., [11]), Fluent and Event
Calculi (e.g., [8]). The proposed formalisms address several aspects of action representation, including sensing, persistence, nondeterminism, and concurrency. Moreover, they have been further extended with probabilistic representations, representations of time, etc. However, much of the work carried out on action theories has been disconnected from applications on real robots, with some notable exceptions (see, for example, [5, 3, 7, 11]). A more popular approach to action representation on robots is based on decision-making techniques, which maximise the utility of the actions selected by the robot, depending on the operational context [29]. However, this approach does not provide an explicit representation of the properties that characterize the dynamic system, focusing instead on the action selection mechanism.
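As a toy illustration of the decision-theoretic approach (the actions, outcome probabilities, and utilities below are made up), the robot simply selects the action whose expected utility is highest in the current context:

```python
# Sketch of utility-based action selection: pick the action whose expected
# utility is highest. Outcome probabilities and utilities are toy values.

ACTIONS = {
    # action: list of (probability, utility) pairs over possible outcomes
    "push_door": [(0.7, 10.0), (0.3, -2.0)],   # likely succeeds, may jam
    "go_around": [(1.0, 4.0)],                 # slower but certain
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    return max(actions, key=lambda a: expected_utility(actions[a]))

print(best_action(ACTIONS))   # -> push_door (0.7*10 - 0.3*2 = 6.4 > 4.0)
```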

Architectures
There are many features that are considered important in the design of
agents’ architectures and each proposal describes a solution that provides for
some of these features. Approaches that try to combine symbolic and reactive reasoning are presented, for example, in [1, 26], as so-called Hybrid Architectures. We can roughly describe a layered hybrid
architecture of an agent with two levels: the deliberative level, in which a high-
level state of the agent is maintained and decisions on which actions are to be
performed are taken, and the operative level, in which conditions on the world
are verified and actions are actually executed.
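A rough sketch of such a two-level organisation follows (the plan, preconditions, and percepts are invented, and this is not the specific architecture of [1] or [26]): the deliberative level supplies a symbolic plan, while the operative level verifies conditions on the world and executes each step, handing control back when a precondition fails.

```python
# Sketch of a two-level hybrid architecture. The deliberative level works on a
# symbolic plan; the operative level grounds each step with a reactive check.
# The tiny domain and all names are hypothetical illustrations.

PLAN = ["goto_kitchen", "grasp_cup", "goto_table"]   # output of the deliberative level

def precondition_holds(step, percept):
    """Operative level: verify conditions on the world before executing a step."""
    if step == "grasp_cup":
        return percept.get("cup_visible", False)
    return True

def execute(step):
    print("executing", step)

def run(plan, percept_stream):
    for step, percept in zip(plan, percept_stream):
        if precondition_holds(step, percept):
            execute(step)                       # operative level carries out the step
        else:
            print("replan needed at", step)     # control returns to the deliberative level
            break

run(PLAN, [{}, {"cup_visible": True}, {}])
```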
The embodied intelligence approach generalizes Brooks’s ideas (see
e.g., [32], [25]). The robot is a real physical agent tightly interacting with the
environment: the robot's behavior is not generated by the robot controller alone, but emerges from the interactions between the robot, with its body, and the environment. Other contributions to the realization of robot
architectures come from evolutionary computing, where evolutionary robotics
is a research field aiming at developing robots through evolutionary processes
inspired by biological systems [23]. For example, neuro-fuzzy systems have
been successfully used in the design of robot architectures. Often, the work on
architectures is developed in the context of robot programming environments,
including ad-hoc specialised control languages. Most of this work is more
concerned with engineering aspects and will not be addressed here.

2.2 Perception
Robot perception is a prominent research field in AI and Robotics. Current
robotic systems have been limited by their visual perception capabilities. In fact, robots have to resort to other kinds of sensors, such as laser range finders, sonars, and so on, in order to bypass the difficulties of vision in dynamic and unstructured environments.
A robotic agent acting in the real world has to deal with rich and unstructured
environments that are populated by moving and interacting objects, by other
agents (either robots or people), and so on. To appropriately move and act, a
robot must be able to understand the perceptions of the environment.
Understanding, from an AI perspective, involves the generation of a high-level,
declarative description of the perceived world. Developing such a description
requires both bottom-up, data-driven processes that associate symbolic
knowledge representation structures with the data coming out of a vision
system, and top-down processes in which high-level, symbolic information is
employed to drive and further refine the interpretation of the scene.
To accomplish its tasks, a robot must be endowed with selective reasoning
capabilities, in order to interpret, classify, track and anticipate the behavior of
the surrounding objects and agents. Such capabilities require rich inner
representations of the environment firmly anchored to the input signals
coming from the sensors. In other words, the meaning of the symbols of the
robot reasoning system must be anchored in sensorimotor mechanisms.
On the one side, the robot vision community has approached the problem of the representation of scenes mainly in terms of 2D/3D reconstruction of shapes and recovery of their motion parameters, possibly in the presence of noise and occlusions, in order to control the motion of the robot. This approach is known as visual servoing of robot systems [10]. On the other side, the AI
community developed rich and expressive formalisms for image interpretation
and for representation of processes, events, actions and, in general, of
dynamic situations, as mentioned in the previous section.
However, the research on robot vision and on AI knowledge representation
evolved separately, and concentrated on different kinds of problems. On the
one hand, the robot vision researchers implicitly assumed that the problem of
visual representation ends with the 2D/3D reconstruction of moving scenes
and of their motion parameters. On the other hand, the AI community usually
did not face the problem of anchoring the representations on the data coming
from sensors.
Starting from the seminal paper of Reiter and Mackworth [28], some proposals have been made in this research field, a few of which are briefly described below.
The main steps toward an effective cognitive vision system for dynamic scene
interpretation have been recently discussed [20] by adopting a fuzzy metric
temporal Horn logic in order to provide an intermediate formalism that
represents schematic and instantiated knowledge about dynamic scenes. This
conceptual formalism mediates between the spatiotemporal geometric
descriptions extracted by video cameras and the high-level system for the
generation of natural language text.
A related system [6] is based on three levels of representation: the subconceptual, the conceptual, and the symbolic level. In particular, the main assumption is that an intermediate representation level is missing between the two classes of representations mentioned above. In order to fill this gap, the notion of conceptual space is adopted, a representation where information is characterized in terms of a metric space. A conceptual space acts as an intermediate representation between subconceptual (i.e., not yet conceptually categorized) information and symbolically organized knowledge.
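As a toy illustration of this idea (the features and prototypes below are invented, and this is not the system of [6]), categorization in a conceptual space can reduce to finding the nearest prototype in a metric feature space; the prototype's label is then available to the symbolic level.

```python
# Sketch of categorization in a conceptual space: points in a metric feature
# space are labelled by the nearest prototype. Features and prototypes are
# hypothetical toy values (e.g., normalized size and elongation of a blob).
import math

PROTOTYPES = {
    "ball":   (0.2, 0.1),
    "person": (0.6, 0.9),
}

def categorize(point):
    """Return the symbolic label of the nearest prototype in the space."""
    return min(PROTOTYPES, key=lambda label: math.dist(point, PROTOTYPES[label]))

print(categorize((0.25, 0.2)))   # -> ball
```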
Some basic primitives (Find, Track, Reacquire) that define the anchoring of symbols in sensory data as a problem per se, independent of any specific implementation, have been proposed and discussed [9].
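Purely as an illustration of how such primitives might be organised (the interface below is a guess, not the formalization given in [9]), Find, Track, and Reacquire can be read as operations that create and maintain an anchor, i.e., the link between a symbol and the percept currently matching it.

```python
# Sketch of the Find / Track / Reacquire anchoring primitives as an abstract
# interface. This is a hypothetical reading of the idea, not the API of [9].
from abc import ABC, abstractmethod

class Anchor:
    """Link between a symbol (e.g. 'cup-1') and the percept currently matching it."""
    def __init__(self, symbol, percept):
        self.symbol = symbol
        self.percept = percept      # latest sensor-level description, or None if lost

class AnchoringModule(ABC):
    @abstractmethod
    def find(self, symbol, description):
        """Create an anchor for a symbol that has never been observed before."""

    @abstractmethod
    def track(self, anchor, new_percepts):
        """Update an anchor while the object remains observed."""

    @abstractmethod
    def reacquire(self, anchor, new_percepts):
        """Re-establish an anchor after the object has been out of view."""
```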
In order to define a more general logical account of robot perception linking sensory data to high-level representations, an abductive theory of perception has recently been proposed [31]. In this theory, the task of robot perception is to find an explanation of the sensory data according to a background theory describing the robot's interactions with the environment.
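A toy sketch of this abductive reading of perception (the background theory and observations are invented): the selected interpretation is the hypothesis that explains the largest part of the sensory data.

```python
# Toy sketch of abductive perception: choose the hypothesis that explains the
# most observations according to a background theory. All facts are hypothetical.

BACKGROUND = {
    # hypothesis: observations it would explain
    "closed_door":  {"laser_short_readings", "bump_front"},
    "person_ahead": {"laser_short_readings", "motion_detected"},
}

def explain(observations):
    """Return the hypothesis covering the largest part of the observations."""
    return max(BACKGROUND, key=lambda h: len(BACKGROUND[h] & observations))

print(explain({"laser_short_readings", "motion_detected"}))   # -> person_ahead
```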

3 Interaction with other AI fields


As already mentioned, the research on AI Robotics intersects a number of
subfields of AI. Indeed, the robotic agent can be seen as a main target for the
grand goal of Artificial Intelligence, and thus for all the aspects of AI somewhat
related to Robotics. Below, we address the main connections with the other AI
research topics included in this collection.
Machine Learning
Learning approaches are being applied to many problems arising in the design
of robots. According to the structure adopted above, both action and
perception can be supported by learning. Moreover, several approaches that include a training step are pursued, ranging from machine learning techniques to genetic programming and neural networks.
From the standpoint of action, learning approaches can be used for basic action skills, specifically locomotion, but also for learning cooperative behaviours, adapting to the environment, and learning opponents' behavior, among others.
Obviously, the learning process must face the challenges of experimenting with real robots. Nevertheless, in several experimental settings (e.g., RoboCup), learning and adaptation of basic skills, such as walking and vision calibration, have proven to be much more effective than tuning parameters by hand.
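As a purely illustrative sketch of skill learning by parameter search (the "gait" parameters and their score are stand-ins for measurements on a real robot), random perturbations of the parameters are kept only when the measured performance improves, replacing hand tuning:

```python
# Sketch of hill-climbing over gait parameters: keep a random perturbation only
# if the (here simulated) walking speed improves. Everything is a toy stand-in
# for evaluation on a real robot.
import random

def walking_speed(params):
    """Stand-in for measuring speed on the robot: peaks at step=0.3, lift=0.5."""
    step, lift = params
    return -((step - 0.3) ** 2 + (lift - 0.5) ** 2)

def learn_gait(iterations=200):
    params = [0.0, 0.0]
    best = walking_speed(params)
    for _ in range(iterations):
        candidate = [p + random.gauss(0, 0.05) for p in params]
        score = walking_speed(candidate)
        if score > best:                 # keep the perturbation only if better
            params, best = candidate, score
    return params

print(learn_gait())   # parameters close to (0.3, 0.5)
```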
Edutainment
Toy robots are very promising for both research and education, because of their low cost and high appeal to students. Even though, at the moment, the available educational kits seem to provide rather limited capabilities, toy robots are certainly an interesting commercial market.
Consequently, the design of intelligent toy robots is an interesting opportunity
for AI researchers. The experience with Aibo robots [33] shows this potential:
they have been successfully used by many research groups in the world not
only in the RoboCup competitions (Four-Legged League), but also for
demonstrating other AI and Robotics research issues.
Multi-agent systems
A multi-robot system (MRS) can be considered as a multi-agent system (MAS),
but the techniques for achieving coordination and cooperation in MAS are
often not well suited to deal with the uncertainty and model incompleteness
that are typical of Robotics. Multiple robots may achieve more robust and
more effective behavior by accomplishing coordinated tasks that are not
possible for single robots. Groups of homogeneous and heterogeneous robots
have great potential for application in complex domains that may require the intelligent use and combination of diverse capabilities. The design, implementation,
and evaluation of robots organized as teams pose a variety of scientific and
technical challenges.
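One common coordination mechanism in multi-robot systems is auction-like task allocation. In the toy sketch below (robots, tasks, and positions are invented), each task is assigned to the robot that bids the lowest estimated cost for it:

```python
# Sketch of greedy auction-style task allocation: each task goes to the robot
# with the lowest bid (e.g., estimated travel cost). Positions are toy values.
import math

ROBOTS = {"r1": (0.0, 0.0), "r2": (5.0, 5.0)}
TASKS = {"victim_A": (1.0, 1.0), "victim_B": (6.0, 4.0)}

def allocate(robots, tasks):
    assignment = {}
    for task, task_pos in tasks.items():
        bids = {r: math.dist(pos, task_pos) for r, pos in robots.items()}
        assignment[task] = min(bids, key=bids.get)   # lowest-cost robot wins
    return assignment

print(allocate(ROBOTS, TASKS))   # -> {'victim_A': 'r1', 'victim_B': 'r2'}
```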
Natural Language Processing
The ability to interact with people in natural language is an obvious requirement for home and service robotics; therefore, natural language processing techniques find an interesting application domain on robots (see, for example, the RoboCare project below).
Logics for AI and Automated Reasoning
The connection to the Logics for AI and Automated Reasoning is central to the
work on Cognitive Robotics, but we do not further expand it here, as it is
discussed in the previous section.
Evolutionary Computation and Genetic Programming
Evolutionary Robotics is a new approach that looks at robots as autonomous
artificial organisms that develop their own skills in close interaction with the
environment without human intervention. Evolutionary robotics thus applies
techniques coming from evolutionary computation.

4 Interaction with other disciplines


Robotics is a multidisciplinary field: to make an operational robot, contributions from many disciplines are needed: physics, electrical engineering, electronic engineering, mechanical engineering, computer science, AI, and so on. It is therefore also difficult to have a common background of terms, notations, and methodologies. In this sense, the efforts to define a common ontology of terms for a robotics science [15] are noteworthy. In particular, AI
Robotics interacts with several research disciplines outside AI.

Industrial Robotics
Many contact points may be found between AI and Robotics, on one side, and Industrial Robotics, on the other. In the early days there was no clear-cut distinction between the fields, as already mentioned. Today, research in Industrial Robotics is oriented towards the safe and intelligent control of industrial manipulators and towards the field of service robotics. The methodologies in Industrial Robotics are
grounded in Automatic Control Theory [30]. The relationship between the
robot and the environment is generally modeled by means of several types of
feedback systems. Moreover, the methodologies are typically based on numerical methods and optimization theory.
Computer Vision
Robot Vision has specific characteristics with respect to Computer Vision: Robot Vision is intrinsically active, in the sense that the robot may actively seek out its information sources and can also reach the best viewing position to maximize the visual information. Moreover, Robot Vision must be performed in real time, because the robot must immediately react to visual stimuli. In general, the robot cannot process the same image for a long time, because the environmental conditions may vary, so the robot has to deal with approximate, but just-in-time, information. Several research topics and debates in this field have strong correlations with AI and Robotics, for example, whether a Computer Vision system should be based on an inner representation of the environment or should be purely reactive.

Mechatronics
Mechatronics encompasses competencies from electrical engineering, electronic engineering, and mechanical engineering. All of these competencies are
strictly related to AI and Robotics: the research field of electrical engineering
concerns motors and actuators, while electronic engineering mainly concerns
boards for robot control, for data acquisition and in general for the hardware
that makes the robot operational. Mechanical engineering concerns of course
the mechanical apparatus of the robot itself. From this point of view,
Mechatronics, AI and Robotics have tight relations: Mechatronics mainly
focuses on the robot hardware at all levels, while AI and Robotics take care of
the software that makes the robot operative and autonomous.

Embedded Systems
The AI software architecture of a robot is naturally embedded into the
physical apparatus of the robot. Therefore, the robot software system needs to
work in real time in order to guarantee that the robot correctly copes with the
changing environment; it must be fail-safe, with graceful degradation, in order to ensure that the robot may operate even in case of damage; the hardware system of the robot must be designed for low power consumption to optimize battery usage,
and so on. From this point of view, several of the typical challenges of
embedded systems are also challenges for robotics systems.

Human Robot Interface


The field of Human Robot Interface (HRI) is related to the interaction
modalities between the user and the robot. This field may be subdivided into
two subfields: the cognitive HRI (cHRI) and the physical HRI (pHRI) [2].
Cognitive HRI analyzes the flow of information between the user and the robot
and it mainly focuses on interaction modalities, which may span from textual
interfaces to voice and gestures. The interface may be more or less intelligent
in the sense that the robot may be constrained by a fixed set of commands or
it may interpret a string written in natural language or a sequence of gestures
performed by the operator. The interface may also be adaptive in the sense
that the robot may adapt to the operator through a suitable training phase.
Physical HRI instead concerns the design of intrinsically safe robots. The main
idea is to interpose compliant elements between the motors and the moving parts of the robot, in order to prevent damage in case of impact without loss of performance. Hence, cHRI research is closely related to research in AI
and Robotics, while pHRI research is more linked with research in Industrial
Robotics.
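The simplest cHRI modality mentioned above, a fixed set of commands, can be sketched as a small lookup from utterances to robot actions (all commands and actions below are hypothetical); interpreting free natural language or gestures would replace the exact-match lookup with a language or gesture understanding component.

```python
# Sketch of a fixed-command cHRI interface: the operator's utterance is matched
# against a closed set of commands. Commands and actions are hypothetical.

COMMANDS = {
    "go to the kitchen": ("navigate", "kitchen"),
    "stop":              ("halt", None),
    "follow me":         ("follow", "operator"),
}

def interpret(utterance):
    """Return the (action, argument) pair for a known command, else None."""
    return COMMANDS.get(utterance.strip().lower())

print(interpret("Go to the kitchen"))   # -> ('navigate', 'kitchen')
print(interpret("dance"))               # -> None (unknown command)
```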

5 Applications
In this section, we report on a few application scenarios where research on Artificial Intelligence and Robotics has been carried out in Italy.

5.1 Robotic Soccer


RoboCup started its activity about ten years ago, taking soccer (football for Europeans) as a scientific testbed for research in AI and Robotics. Italian researchers have given a significant contribution to RoboCup over the years, both at the organizational level and in terms of participating teams. RoboCup 2003 was held in Padova [24], and it attracted more than a thousand participants from all over the world. Below we focus on the leagues where the Italian participation has been most relevant. The Middle-Size league is played on a 5x9 meter field by four wheeled robots per team, and the body of each robot must fit within a cylinder of 50 cm diameter and 80 cm height. All sensing devices must be on board the robots; in particular, global vision and other external sensing devices are not available.
RoboCup was boosted by the creation of a national team, called ART (Azzurra
Robot Team) [21], formed by several universities and the Consorzio Padova
Ricerche. ART obtained 2nd place in 1999 and was subsequently split into several local teams: Golem, Artisti Veneti, and the Milan RoboCup Team.
The Four-Legged Robot league is played on a 4x6 meter field by four four-legged Aibo robots per team. The Aibos have an on-board color camera, and their mechanical structure provides 18 degrees of freedom. The availability of a standard platform has significantly contributed to the scientific evaluation of the proposed solutions. The SPQR team has participated in the competition since 2000, obtaining a 4th place and reaching the quarter finals several times.
Recently, a Humanoid Robot league was started, approaching the ultimate goal of RoboCup: building a humanoid team able to play with humans [17]. Humanoid Robotics is currently one of the main challenges for many researchers, mostly focusing on mechanics and locomotion. The Politecnico di Torino developed the humanoid robot Isaac, which has participated in the RoboCup Humanoid League since 2003. The IASLab of the University of Padova later joined the Humanoid League with a fully autonomous humanoid robot that uses an omnidirectional visor.
It is worth emphasizing that the ART national model led to scientific and
technical success: ART showed the ability to realize competitive robotic
football players, but foremost the ability to blend into a single national team the methodologies and implementation techniques individually developed by the participating research groups. In this respect, the work done on the issue of coordination, leading to the definition of the communication and coordination protocols used by the ART players [16], has been both very challenging and very successful. Finally, the collaboration/competition achieved in the project has been essential to the final results, since it allowed for project development with a tighter interaction and exchange of results than in conventional research projects.

5.2 Rescue Robotics


Besides soccer, RoboCup promotes other leagues aiming at the transfer of research results into socially and industrially relevant contexts. Specifically, RoboCup Rescue [18] aims at the design of search-and-rescue systems for large-scale disasters. Here we focus on the rescue robot league, which aims at the design of robots that search for victims in an unknown environment representing a disaster scenario. This kind of application brings in scientific challenges, related to the uncertainty about the environment, that are not present in the soccer leagues. The experimental setup, called the arena, is being developed in close cooperation with USAR. The arenas have already been used in various experiments (including RoboCup and AAAI rescue competitions) and nowadays represent a reference for the experimental evaluation of the
performance of rescue robots. The current aim of the competition is twofold:
mobility and autonomy. As for the former, the research is focused on mechanical designs that allow the robot to overcome the obstacles present in the environment; the latter is concerned with the design of robots that can autonomously explore the environment, possibly working in a team, build a map, find the victims, and locate them on the map.
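Autonomous exploration of an unknown arena is often framed as frontier-based exploration: the robot repeatedly moves towards free cells of its map that border unknown space. A minimal sketch on a toy occupancy grid (the grid values and layout are illustrative only):

```python
# Sketch of frontier detection on a toy occupancy grid: a frontier cell is a
# free cell adjacent to at least one unknown cell. Grid values: 0 = free,
# 1 = occupied, -1 = unknown. The map below is purely illustrative.

GRID = [
    [0,  0, -1],
    [0,  1, -1],
    [0,  0,  0],
]

def frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    cells = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue                      # only free cells can be frontiers
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1
                   for nr, nc in neighbours):
                cells.append((r, c))
    return cells

print(frontiers(GRID))   # -> [(0, 1), (2, 2)]
```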
Two Italian teams have participated in these competitions since 2004: the first one from the SIED Lab, within a collaboration between the “Istituto Superiore Antincendi” and the University of Rome “La Sapienza”; the second one from the ALCOR lab of the University of Rome “La Sapienza”, which developed a model-based approach to the executive control of a rescue rover, winning the third award in 2004. The RoboCup activity contributed to and benefitted from the results of the research project Simulation and Robotics Systems for Operations in Emergency Scenarios (SRSOES, 2003-2005), funded by the Italian MIUR.

5.3 Space Robotics


The aim of the project An Intelligent System for the Supervision of Autonomous Robots in Space, funded by the Italian Space Agency (ASI) during the years 1997-2000, was the application of AI techniques to the design and realization of space robotic systems for planetary exploration missions, which require increasing autonomy. In particular, the project focused on the design and realization of an effective and flexible system for the supervision of the ASI robotic arm SPIDER. The project was coordinated by the unit at the University of Palermo. Subproject units were the Universities of Roma “La Sapienza”, Torino, Genova, and Parma, and the research centers ISTC-CNR Roma and IRST-ITC Trento.
The scientific objective of the project was the design and development of an intelligent system able to supervise autonomous robots in space. The system is based on a multi-agent architecture in which each block is a software agent interfaced with the rest of the system. This design choice is motivated by high flexibility, agent interchangeability with consequent ease of improving the architecture, and the possibility of reusing all the agents, part of them, or the architecture itself. The architecture has been designed with the ASI missions in mind, but it is fully general, and both the single modules and the whole architecture may be easily reconfigured for the supervision of other robotic systems. The project aimed at realizing an innovative research product, complementary to ASI activities.
5.4 Robotics for Elderly and Impaired People
The goal of the RoboCare project, sponsored by the Italian Ministry of Education, University and Research (MIUR) from 2002 to 2006, is to build a multi-agent system which generates user services for human assistance. The system is implemented on a distributed and heterogeneous platform, consisting of a hardware and software prototype. The project, currently running, is coordinated by ISTC-CNR Roma; subproject units are at the Universities of Genova, Torino, Bologna, Parma, and Roma “La Sapienza”, and at the CNR research centers of Genova, Palermo, and Milano.
The use of autonomous robotics and distributed computing technologies
constitutes the basis for the implementation of a number of services in an
environment with elderly people, such as a health-care institution or a home
environment. The fact that robotic components, intelligent systems and
human beings are to act in a cooperative setting is what makes the study of such a system challenging, both from a research and from a technology integration point of view.
The project is organized into three tasks: the development of a HW/SW framework to support the system; the study and implementation of a supervisor agent; and the realization of robotic agents and technology integration. Alongside these research tasks, usability and acceptability issues are analyzed, contributing to the implementation of SW development, visualization, and simulation tools for multi-robot systems.
