Robot Programming - From Simple Moves To Complex Robot Tasks
Introduction
The development of robot programming concepts is almost as old as the development of robot manipulators itself. As the ultimate goal of industrial robotics has
been (and still is!) the development of sophisticated production machines with the
hope of reducing costs in manufacturing areas like material handling, welding, spray-painting and assembly, tremendous efforts have been undertaken by the international robotics community to design user-friendly and at the same time powerful
programming methods. The evolution reaches from early control concepts on the
hardware level via point-to-point and simple motion-level languages to motion-oriented structured robot programming languages. Comprehensive surveys on the
historical development of robot programming may be found in [1, 2]. A characteristic feature of robot programming is that it usually deals with two different
worlds (cf. Fig. 1): (1) the real physical world to be manipulated, and (2) abstract models representing this world in a functional or descriptive manner by programs and data. In the simplest case, these models exist purely in the
programmer's imagination; in high-level programming languages they may, e.g., consist of CAD
data. In any case, commands based on some model cause robots to change
the state of the real world as well as of the world model itself. During a sequence of
actions, both worlds have to be kept consistent with each other. This can be ensured
by integrating internal robot sensors as well as external sensors like force/torque
and vision sensors. Already in the early seventies, research groups started to focus
on so-called task-oriented robot programming languages. Among the first examples
are IBM's AUTOPASS system [3], the RAPT language developed at the University of Edinburgh [4], and the LAMA system proposed by MIT [5].
Figure 1: General robot programming paradigm. (Abstract models, i.e. computers with programs and data, issue commands causing robot actions in the physical world, i.e. the work space containing objects such as work pieces and tools; internal robot states and external sensor observations are fed back into the abstract models.)
The basic idea behind these approaches is to relieve the programmer from knowing all specific machine details and to free him from coding every tiny motion/action;
rather, he specifies his application at a high level of abstraction, telling the machine in an intuitive way what has to be done and not how this has to be done. This
implicit programming concept implies many complex modules leading to automated robot programming. E.g., there is a need for user-friendly human interfaces
for specifying robot applications; these may range from graphical specifications/annotations within a CAD environment to spoken commands or gestures,
interpreted by a speech understanding or vision system, respectively. These
commands have to be converted automatically into a sequence of actions/motions
by a task planning system; at the Technical University of Braunschweig we have developed HighLAP [6, 7], which will be outlined below.
Over the years it has turned out that the most difficult part of automatic
robot programming is the execution and control of automatically generated action/motion sequences, including the automated generation of collision-free paths and
force-controlled mating operations. Apart from some first simple industrial applications, like 4-DOF assembly of PCBs, this is still the subject of ongoing international
research. The necessary prerequisites from control theory have been developed
during the last decades; they are ready to be used. Already in the early eighties,
Mason showed how to control robots when they are in contact with their environment [8]. His concept is outlined below, where we describe the concept of
skill primitives as a versatile interface between programming and control. However,
before diving into advanced issues of implicit robot programming, we briefly
discuss the state of the art of explicit, i.e. motion-oriented, programming techniques
in the following section.
allow a simple notation of transform matrix equations, etc. For the application
programmer (who, at least in the industrial world, usually is not a robotics expert) the details of the robot hardware have to be hidden behind well-defined, easy-to-use software interfaces. Respecting this, from the application programmer's
point of view, even the difference between programming serial or parallel robots
should diminish!
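As an illustration of such a notation (a minimal sketch only; FRAME and Trans appear in the ZERO++ example below, whereas RotX and Inv are our assumptions and not necessarily part of the actual library), C++ operator overloading allows transform equations to be written almost in textbook form:

   // Solve the frame equation  base * objInBase = camera * camObj  for objInBase.
   FRAME base      = Trans(500.0, 0.0, 0.0);                  // robot base frame
   FRAME camera    = Trans(200.0, 300.0, 800.0) * RotX(M_PI); // camera pose
   FRAME camObj    = Trans(10.0, 20.0, 30.0);                 // object seen by camera
   FRAME objInBase = Inv(base) * camera * camObj;             // textbook-style solution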
Usually, advanced applications depend heavily on the integration of
internal as well as external sensors, like force/torque and vision sensors. The robot
programming implications of this led in the early eighties to the development of
the so-called monitor concept [12]. Monitors are small pieces of concurrent user-defined or built-in programs, communicating heavily with the user's applications
and the motion pipelines of robot control systems. I.e., monitors read a
specific sensor or set of sensors at specified time intervals; depending on the
sensor values, the monitors either modify the robot's path via the motion pipeline
(guided motion) or trigger some action, e.g. an immediate stop of motion
(guarded motion). Usually, sensor integration requires from the robot programmer
an in-depth understanding of the robot's functionality. Thus, it is very important to
supply programmers with powerful programming language constructs to ease such
difficult tasks. Fig. 2 shows a simple path-following example keeping a constant
distance between the robot's tool center point and the surface of a corrugated
iron sheet. As can be seen, this application can be coded with just a few lines of
ZERO++ code. The main section simply defines start and goal positions. After
moving the robot to the start position in joint interpolation mode, it is moved in
Cartesian interpolation mode to the goal position while a monitor is
active. The monitor function reads ultrasonic sensor values, which are
used to compute delta frames used by the ZERO++ kernel to continuously modify the interpolated frame values between start and goal. In a similar way, any functional dependency of path properties (speed, force, torque, distance, etc.)
can be specified in a textual programming manner, which, however, is cumbersome and error-prone. Unfortunately, up to now there has been a lack of off-line tools
supporting robot programmers in specifying robot path properties comfortably.
void GuidedMotion()
{
   RX60  robot;                                             // Staeubli RX60 robot object
   FRAME start = Trans(100.0, 20.0, 70.0);
   FRAME goal  = start * Trans(0.0, 400.0, 0.0);
   robot.Move(start, ROBOT::JointInterp);                   // approach in joint space
   robot.Move(goal, MonitorFunction, ROBOT::FrameInterp);   // Cartesian move, monitor active
}

int MonitorFunction(FRAME &delta)
{
   static USSENSOR sensor;                 // ultrasonic distance sensor
   double height = sensor.GetValue();
   if (height != DIST)                     // deviation from desired distance DIST?
      delta.TransZ(DIST - height);         // delta frame corrects path in z-direction
   else
      return IGNORE;                       // keep interpolated frame unchanged
   return CHANGE;                          // let the kernel apply the delta frame
}
means that the lower FACE2 OF BULB lies against FACE2 OF RACK, which is
inside the socket and is perpendicular to the hole axis. Thus, just one degree of
freedom is left, namely a rotation around the hole axis. To specify the assembly
group completely, every degree of freedom has to be eliminated in a similar way.
Fig. 3 shows some parts of an automotive headlight assembly with corresponding
SSRs. The easiest way to generate the spatial relations explicitly is to let the design engineer interactively click on the surfaces, e.g. shafts, holes and faces, in his
CAD environment in order to specify appropriate features (i.e. coordinate systems), and subsequently to allow him to select suitable relations between these features, e.g. fits, against, coplanar. Fig. 4 displays the user interface of the assembly
planner HighLAP, which has been embedded in the commercial robot simulation
system Robcad. The system supports the user while he is specifying the relations in
such a manner that it signals contradictory specifications and shows the user the remaining degrees of freedom between assembly objects.
The symbolic spatial relations specifying an assembly can also be used for the
automatic calculation of possible assembly plans as well as for the planning of appropriate sensors, which may guide the assembly process during execution. This kind
of specification provides an easy-to-use interactive graphical tool to define any
kind of assembly; the user has to deal with only a limited and manageable amount
of spatial information in a very comfortable manner.
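To make this concrete, the following minimal C++ sketch (our own illustration, not the actual HighLAP data model; all type and function names are hypothetical) represents features and symbolic spatial relations and naively estimates the remaining degrees of freedom:

   #include <string>
   #include <vector>

   // A feature is a named coordinate system on a part surface (shaft, hole, face).
   struct Feature { std::string part, name; };

   enum class RelationType { Against, Fits, Coplanar };

   // A symbolic spatial relation (SSR) between two features.
   struct SpatialRelation { RelationType type; Feature a, b; };

   // Rough count of DOFs removed per relation type; a real planner derives
   // this from the feature geometry and detects constraints shared by relations.
   int RemovedDofs(RelationType t)
   {
      switch (t) {
         case RelationType::Against:  return 3;  // plane on plane: 1 transl. + 2 rot.
         case RelationType::Fits:     return 4;  // shaft in hole: 2 transl. + 2 rot.
         case RelationType::Coplanar: return 3;
      }
      return 0;
   }

   // Naive estimate (ignores shared or contradictory constraints); HighLAP
   // instead signals contradictions and displays the exact remaining DOFs.
   int RemainingDofs(const std::vector<SpatialRelation> &ssrs)
   {
      int dofs = 6;
      for (const auto &r : ssrs) dofs -= RemovedDofs(r.type);
      return dofs > 0 ? dofs : 0;
   }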
shows a light bulb and its corresponding bayonet socket of an automotive indicator
light. Based on the configuration space representation (Fig. 6, left), a suitable mating
direction for the bulb can be computed [17].
For the sake of simplicity we assume the CoC at the tip of the peg. A simple control scheme for
an insertion (neglecting the possibility of jamming) could be to select the following desired values for robot control in the Cartesian task space:

diag(C) = (x force = 0 N, y force = 0 N, z velocity = 0.01 m/s, x torque = 0 Nm, y torque = 0 Nm, z orientation = 0.1 rad)^T

As no time limit for the motion is specified, the robot holding the peg would at some point collide with the object containing the hole. I.e., for proper operation, in
addition to the CoC and C, some stopping condition s has to be specified, which
may be defined as some Boolean expression. In our simplified example this could
be

s = (z position > 340 mm  OR  time_out > 5 s)
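A minimal sketch of such a skill primitive as a data structure (our own illustration; the type names as well as the helpers zPosition and elapsed are hypothetical, and the hybrid controller consuming the structure is assumed given):

   #include <array>
   #include <functional>
   #include <string>

   // One entry of the hybrid control vector: the controlled quantity
   // along an axis and its desired value.
   enum class CtrlMode { Force, Torque, Velocity, Position, Orientation };
   struct AxisCommand { CtrlMode mode; double setPoint; }; // N, Nm, m/s, m, rad

   // A skill primitive: compliance frame (CoC), the six axis commands
   // (the diagonal of C), and a stopping condition s evaluated cyclically.
   struct SkillPrimitive
   {
      std::string               cocFrame; // e.g. "peg_tip"
      std::array<AxisCommand,6> cmd;      // x,y,z translation; x,y,z rotation
      std::function<bool()>     stop;     // stopping condition s
   };

   double zPosition(); // hypothetical: current TCP z position [m]
   double elapsed();   // hypothetical: time since primitive start [s]

   // The peg insertion example from above:
   SkillPrimitive insert{
      "peg_tip",
      {{ {CtrlMode::Force,       0.0},     // x force = 0 N
         {CtrlMode::Force,       0.0},     // y force = 0 N
         {CtrlMode::Velocity,    0.01},    // z velocity = 0.01 m/s
         {CtrlMode::Torque,      0.0},     // x torque = 0 Nm
         {CtrlMode::Torque,      0.0},     // y torque = 0 Nm
         {CtrlMode::Orientation, 0.1} }},  // z orientation = 0.1 rad
      [] { return zPosition() > 0.340 || elapsed() > 5.0; } // condition s
   };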
In order to prove the suitability and the strength of skill primitive based robot task
implementation, we have chosen a bayonet securing task of an automotive light
bulb. In our experimental set-up, we have used a Stäubli 6-DOF robot, equipped
with an external JR3 force/torque sensor mounted on the robot's wrist. The robot's
control unit is connected via TCP/IP to a PC equipped with the JR3 interface card;
the PC is running the control process. The Stäubli robot control system receives
and executes a move operation every 16 ms.
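The control process can be pictured as a simple cyclic loop (a hedged sketch only; Controller, Wrench, ReadForceTorque, HybridControlStep and SendMove are hypothetical names, not the actual implementation):

   #include <chrono>
   #include <thread>

   // Every 16 ms: read the JR3 sensor, evaluate the active skill primitive's
   // hybrid control law, and send the next move command to the robot controller.
   void ControlLoop(Controller &ctrl, const SkillPrimitive &sp)
   {
      using namespace std::chrono;
      const auto cycle = milliseconds(16);           // Stäubli command cycle
      while (!sp.stop()) {
         auto t0 = steady_clock::now();
         Wrench w   = ctrl.ReadForceTorque();        // JR3 via interface card
         FRAME next = ctrl.HybridControlStep(sp, w); // hybrid force/velocity law
         ctrl.SendMove(next);                        // move command via TCP/IP
         std::this_thread::sleep_until(t0 + cycle);
      }
   }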
Fig. 7 shows the light bulb (4) to be secured by means of a bayonet socket (6).
Assuming that the rotational axis of the bulb is fairly well aligned with the axis of the
socket, the decomposition of the robot task into skill primitives leads to the following four states (Fig. 8): moving from free space into contact state (a) until the
first electrical contact (1) is pressed by the base (4) of the bulb (cf. Fig. 7), while
minimizing lateral forces arising from small displacements. The next skill primitive moves the bulb further down in z-direction until the central electrical contact
(2) is pressed with 15 N.
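Reusing the SkillPrimitive structure sketched above, the first two of these states could be chained as follows (again a hedged illustration; Controller, ForceZ and Execute are hypothetical names):

   void SecureBulbFirstStates(Controller &ctrl)
   {
      // (a) Move down until first contact, keeping lateral forces at zero.
      SkillPrimitive approach{
         "bulb_base",
         {{ {CtrlMode::Force,       0.0},     // x force = 0 N (minimize lateral forces)
            {CtrlMode::Force,       0.0},     // y force = 0 N
            {CtrlMode::Velocity,    0.005},   // move down in z at 5 mm/s
            {CtrlMode::Torque,      0.0},     // x torque = 0 Nm
            {CtrlMode::Torque,      0.0},     // y torque = 0 Nm
            {CtrlMode::Orientation, 0.0} }},
         [&] { return ctrl.ForceZ() > 1.0; }  // stop at first contact
      };

      // (b) Press the central electrical contact (2) with 15 N.
      SkillPrimitive press = approach;
      press.cmd[2] = {CtrlMode::Force, 15.0};              // z force = 15 N
      press.stop   = [&] { return ctrl.ForceZ() >= 15.0; };

      ctrl.Execute(approach);  // each primitive runs until its stop condition holds
      ctrl.Execute(press);
   }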
Conclusion
The purpose of this paper, subject to limited space, has been twofold: (1) revisiting
major developments and concepts in robot programming and discussing their
current state, and (2) indicating insufficiencies and further developments. As has
been pointed out, there is a tremendous gap between commercially available robot
programming languages and methods developed in research laboratories all over
the world. Although powerful motion-oriented programming concepts, like the
above-mentioned monitor concept, have been known for almost three decades, they
have rarely found their way into products. Most commercial systems still offer simple set-value commands for specifying properties of robot paths, like
set_speed(), set_force(), etc., specifying the properties of the next path segment(s) by some given (constant) parameters. To our knowledge, there is not a single
commercial robot programming language available that allows the flexible specification
of functional interdependencies of path properties, e.g. applied forces/torques as a
function of position/orientation, etc. Since the early work of Mason we have known how
such principles can be implemented in a task-oriented manner. In the meantime,
there also is a huge body of control concepts available, ready to support functional
sensor-guided motion applications. In our opinion, the skill primitive concept outlined above offers a versatile interface between programming and control. However, we recognize a big backlog in developing tools to support motion-oriented
sensor-guided robot programming by hand.
As has been pointed out, skill primitives also can serve as a powerful interface
between automated robot programming and robot control. Whereas the art of assembly planning has already reached a mature level, autonomous execution of
automatically generated assembly plans is still in its infancy. In this paper and
elsewhere we have shown some first examples of how to implement basic robot tasks
(placing objects, securing light bulbs into bayonet sockets, etc.); they are ready to
be used in automated assembly robot programming environments. Nevertheless,
there are yet many open issues to be solved in order to achieve fully automated
assembly planning/programming and sensor-guided execution of assembly processes. Certainly, one important research area in the future will concentrate on
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11] C. Pelich, F. M. Wahl: ZERO++: An OOP Environment for Multiprocessor Robot Control. International Journal of Robotics and Automation, Vol. 12, No. 2, 1997.
[12] M. A. Lavin, L. I. Lieberman: AML/V: An Industrial Machine Vision Programming System. International Journal of Robotics Research, Vol. 1, No. 3, 1982.
[13] A. P. Ambler, R. J. Popplestone: Inferring the Positions of Bodies from Specified Spatial Relationships. Artificial Intelligence, Vol. 6, 1975.
[14] L. Kavraki, J.-C. Latombe, R. H. Wilson: On the Complexity of Assembly Partitioning. Information Processing Letters, Vol. 48, 1993.
[15] L. S. Homem de Mello, S. Lee: Computer-Aided Mechanical Assembly Planning. Kluwer Academic Publishers, 1991.
[16] T. Lozano-Pérez: Spatial Planning: A Configuration Space Approach. IEEE Transactions on Computers, Vol. C-32, No. 2, 1983.
[17] U. Thomas, M. Barrenscheen, F. M. Wahl: Efficient Calculation of Mating Directions Based on Configuration Spaces. To be published elsewhere.
[18] T. Hasegawa, T. Suehiro, K. Takase: A Model-Based Manipulation System with Skill-Based Execution. IEEE Transactions on Robotics and Automation, Vol. 8, No. 5, 1992.
[19] B. Finkemeyer, T. Kröger, F. M. Wahl: Accurate Placing of Polyhedral Objects in Unknown Environments. To be published elsewhere.